This is the second blog in a series of 3 blogs about the Velocity Conference 2019 in Berlin. Read the first blog here!
Thursday: Even more keynotes, sessions and the inevitable trip back home
How to deploy infrastructure in just 13.8 billion years – Ingrid Burrington (Independent)
A talk about the history of the universe and how we got to computers. It was a very abstract talk, but in the end, I think the gist was that computing is still a very young field and that we should work on the future of the technology instead of maintaining the status quo of today's systems.
Bas Langenberg: Bas @ O'Reilly's Velocity Conference 2019 – Last Day
This is the first blog in a series of 3 blogs about the Velocity Conference 2019 in Berlin.
Within SynTouch, I try to stay on top of the next big thing from an infrastructure perspective. That is why I went to O'Reilly's Velocity conference, in my opinion one of the best conferences someone in my area of expertise can attend. This blog is my report on the experience: what I took away from the conference and, most importantly, my lessons learned. It is a mixture of my own opinions and those of the speakers.
Bas Langenberg: Bas @ O'Reilly's Velocity Conference 2019 – Berlin
Bernd is 27 years old and has been working as a consultant at SynTouch for 1.5 years. Bernd is eager to learn and has the ambition to build a career in the ICT sector. Just last week he obtained his "Pega Senior System Architect" certificate, and he is already busy with his next course. Alongside his work within SynTouch, he is occupied with research in healthcare. As part of his studies, Bernd is researching disruptions in the operating room. His research will even be published in the Journal for Surgical Endoscopy. We at SynTouch are of course very proud of that!
This month we warmly welcome Man Chaun Yuen to our team! Man Chaun is 26 years old and lives in Eindhoven! With his experience and knowledge he will strengthen our team as a Junior Consultant. Best of luck, Man Chaun!
After setting up the environment, it is now time to simulate the beer ratings flowing in. As explained, I will start several generators simultaneously. To generate some (intended) data skew away from the average, several generators will share the same structural event definition, but they will differ in their combination of users, beers, and upper and lower rating bounds. Of course, this is also based on my personal preference – who said my demonstration scenario should be fair?
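Launching such generators could look roughly like the commands below. This is a sketch only: the schema file names, topic name, key field, and interval are my own placeholders, not taken from the original setup.

```shell
# Start two ksql-datagen instances against the same topic, each with its own
# Avro schema file: same event structure, different users/beers/rating bounds.
ksql-datagen schema=./beer_ratings_average.avro format=avro \
  topic=beer_ratings key=username maxInterval=500 &

ksql-datagen schema=./beer_ratings_skewed.avro format=avro \
  topic=beer_ratings key=username maxInterval=500 &
```

Running them in the background with `&` lets several generators emit events concurrently onto the same topic, which is what produces the skewed mix of ratings.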
In this blog, I am going to zoom into KSQL and the opportunities it offers for manipulating streaming data in Kafka, by merely using SQL-like statements. One of the neat things about the Confluent Kafka platform, is that it provides additional utilities on top of the core Kafka tools. One of these utilities is the ksql-datagen, which allows users to generate random data based on a simple schema definition in Apache Avro.
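To illustrate, a minimal schema for such a generator could look like the following. The record and field names are my own assumptions for a beer-rating event; the `arg.properties` annotations (value options and numeric ranges) are how the Avro Random Generator underneath ksql-datagen constrains the random values it produces.

```json
{
  "namespace": "beers",
  "name": "beer_rating",
  "type": "record",
  "fields": [
    { "name": "username", "type": { "type": "string",
        "arg.properties": { "options": ["alice", "bob", "carol"] } } },
    { "name": "beer", "type": { "type": "string",
        "arg.properties": { "options": ["IPA", "Stout", "Tripel"] } } },
    { "name": "rating", "type": { "type": "int",
        "arg.properties": { "range": { "min": 1, "max": 10 } } } }
  ]
}
```

Varying the `options` lists and the rating `range` per schema file is what gives each generator its own bias.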
For a long time I have been interested in Apache Kafka and its applications. Unfortunately, forced by circumstances, work, and other personal endeavours, I was not able to really dive deeper into the matter until spring 2019. In April, I finally finished the Udemy course "Apache Kafka for Beginners".
At work, my exposure to Kafka had been limited, as we were (ultimately) publishing messages onto a Kafka topic using Oracle Service Bus. However, this was actually a Java-built integration: we were just pushing the messages onto a JMS queue, which had an MDB listening that propagated the messages to the Kafka cluster.
After completing the first training, my interest was piqued, especially in the role of Kafka in real-time event systems, and I decided to take another course, on Kafka Streams. I was a bit disappointed that this specific course focused quite heavily on Java development, and as an exception I decided to abandon it uncompleted. During one of the Kafka Meetups, I found out that Confluent was actually offering a very interesting alternative to programming the Kafka Streams API in Java, namely KSQL.
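To give a flavour of that alternative: where Kafka Streams requires writing and deploying a Java application, KSQL expresses the same streaming logic declaratively. A hedged sketch, where the stream, column, and topic names are mine and not from a real deployment:

```sql
-- Register an existing Kafka topic as a KSQL stream
CREATE STREAM beer_ratings (username VARCHAR, beer VARCHAR, rating INT)
  WITH (KAFKA_TOPIC='beer_ratings', VALUE_FORMAT='AVRO');

-- A continuously maintained aggregate per beer, with no Java code involved
CREATE TABLE ratings_per_beer AS
  SELECT beer, COUNT(*) AS rating_count, SUM(rating) AS rating_sum
  FROM beer_ratings
  GROUP BY beer;
```

The `CREATE TABLE ... AS SELECT` statement starts a persistent query on the KSQL server that keeps the aggregate up to date as new events arrive, which is essentially what a hand-written Kafka Streams topology would do.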