AWS re:Invent 2018 – new launches

The Event

In the first part of my blog I commented on Las Vegas itself and my experiences travelling there; in this part I will describe the actual reason for travelling to Las Vegas: attending the AWS re:Invent 2018 event.

As you may already know, re:Invent is the yearly conference from Amazon Web Services (AWS) for customers, partners and vendors from the AWS ecosystem, this year in its seventh edition and held in Las Vegas. The size of re:Invent is astonishing: during the conference (starting on Monday and ending around noon on Friday), it welcomes 53,000 participants, spread across 7 different venues, for a total of over 2,200 sessions of content. Initially, I did not let the travel distances between venues put me off and decided to walk between them, although I already had to walk for half an hour to get from my hotel to “The Strip”, where most of the venues are located:

Door-to-door travelling times between venues of up to 45 minutes

As it turned out, in most cases you first have to get out of the hotel (or usually the conference center behind the hotel) and exit through the casino to The Strip … as these convention centers are HUGE and the crowds are massive, it takes an additional 10 minutes just to get out. Even then, you still have to find your way through these enormous buildings to your session. As a re:Invent newbie, I took all the good advice of Annie Hancock and Jill Fagan from their “How to re:Invent” webinar series to heart and prepared and optimized my schedule largely before travelling to Las Vegas, as I described in a previous blog. I must confess that I have not been able to keep my promise to myself to go by foot between the different venues and have resorted a couple of times to taking the AWS-supplied transportation.

Refactoring the schedule

Unfortunately, even when reserving the sessions I was interested in within an hour of the opening of the reservation process, I was not able to get into a third of the sessions I had first selected. But with a catalogue of over 2,200 sessions, it was not difficult to find an alternative. During the weeks that followed, however, repeat sessions of the ones I was interested in started to pop up, driven by popularity. So, just as with writing code, I also learned that scheduling for re:Invent is a process of refactoring mercilessly and continuously.

As my travel plans did not allow me to attend any sessions on Friday, I had to cram all the sessions I was interested in into four days and still leave room for any receptions I wanted to attend and space for visiting both the Expo and the Quad expositions at the Venetian and the Aria (and of course the wonderful AWS Certified Lounges). Over the course of those four days, I attended some 18 sessions, took four buses, must have walked at least thirty miles and took an Uber only once, to return to the hotel after a session that ended at 8 pm.

The Announcements

The re:Invent programme is packed with sessions, but there are a few keynote sessions that are not to be missed – or should be rewatched if you could not make it. These are the sessions by the AWS leaders, like CEO Andy Jassy and CTO Werner Vogels, as they are usually packed with newly announced features and products. In this post I will highlight a few new launches and improvements; a more complete list can be found on the AWS website:

Managed Kafka Service (Public Preview)

One of the new announcements AWS made is in the realm in which they are the absolute king: serverless. That is to say, Amazon is now offering a Managed Streaming for Kafka service (Amazon MSK), highly available and scalable, without requiring software engineers to go through the process of provisioning the server infrastructure, and with pay-as-you-go usage.

Amazon QLDB (Preview)

Another new addition to the serverless family is the Quantum Ledger Database: again, nothing to install, no infrastructure to manage and you pay only for what you use! Just like a blockchain solution, the purpose of QLDB is to store an immutable history, in which each transaction or state can be cryptographically verified. It offers the advantage of an immutable journal, where deletions or modifications are made impossible, and it provides an SQL-like language for developers to query the database. Furthermore, Amazon claims it performs much better than regular blockchain frameworks, scales automatically and does not require any setting up of nodes or a distributed environment. It was argued during the presentation that there are certainly use cases for blockchain applications, but there are many more use cases for a datastore that is immutable by design and cryptographically verifiable.
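To get a feel for what “cryptographically verifiable” means here, the core idea can be sketched in a few lines of plain Python: each journal entry includes a hash of its predecessor, so any tampering with history changes every later digest. This is only an illustrative sketch of hash chaining, not QLDB’s actual implementation or API.

```python
import hashlib
import json

def entry_hash(prev_hash: str, data: dict) -> str:
    """Digest the previous entry's hash together with the new data."""
    payload = prev_hash + json.dumps(data, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

class Journal:
    """Append-only journal: every entry chains to its predecessor's digest."""
    def __init__(self):
        self.entries = []  # list of (digest, data) tuples

    def append(self, data: dict) -> str:
        prev = self.entries[-1][0] if self.entries else ""
        digest = entry_hash(prev, data)
        self.entries.append((digest, data))
        return digest

    def verify(self) -> bool:
        """Recompute the whole chain; tampering breaks it."""
        prev = ""
        for digest, data in self.entries:
            if entry_hash(prev, data) != digest:
                return False
            prev = digest
        return True

journal = Journal()
journal.append({"account": "A", "balance": 100})
journal.append({"account": "A", "balance": 75})
print(journal.verify())  # True

# Rewriting history invalidates the chain:
journal.entries[0] = (journal.entries[0][0], {"account": "A", "balance": 999})
print(journal.verify())  # False
```

In QLDB the same principle applies at scale, with the service maintaining the hash chain and exposing verification to the user.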


Amazon Managed Blockchain

If QLDB does not satisfy your needs, you can also opt for the Amazon Managed Blockchain service, again serverless and fully managed by AWS. Currently, the Hyperledger Fabric framework is already supported and support for Ethereum has been announced. For analysis purposes, Amazon Managed Blockchain can replicate to the Amazon Quantum Ledger Database, so you can perform analysis outside of your transaction processing environment!

Amazon DynamoDB

DynamoDB is Amazon’s flagship NoSQL database: a ‘simple’ key-value and document store used by a large number of big AWS customers around the world for very low latency data access. It already offered global tables, point-in-time recovery, automated backup and restore, etc.
Like a lot of the AWS services, Amazon DynamoDB was already ‘serverless’: no patches or other maintenance to be performed by the end user, no infrastructure or server provisioning, and you simply pay for your usage. One of the disadvantages DynamoDB had – a source of recurring questions on the certification exams – was that you had to provision a certain amount of read and write throughput based on your (estimated) peak usage. This made it difficult to adapt to usage peaks, but this has now been remedied by the introduction of Amazon DynamoDB On-Demand: in this mode, DynamoDB will automatically scale up to accommodate the requests flowing in and will scale down after the peak has subsided to normal levels. Again, this is pay-as-you-go.
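In practice, switching to on-demand is just a matter of the table’s billing mode: the `BillingMode` parameter replaces the previously mandatory `ProvisionedThroughput` block. A minimal sketch of such a table definition (the table and attribute names are hypothetical):

```python
# Table definition for DynamoDB on-demand capacity. Note that BillingMode
# set to PAY_PER_REQUEST replaces the ProvisionedThroughput block that
# used to be mandatory when creating a table.
table_spec = {
    "TableName": "Orders",  # hypothetical table name
    "AttributeDefinitions": [
        {"AttributeName": "OrderId", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "OrderId", "KeyType": "HASH"},
    ],
    "BillingMode": "PAY_PER_REQUEST",  # on-demand: no capacity planning
}

# With boto3 installed and AWS credentials configured, this spec would be
# passed straight to the DynamoDB API:
#   boto3.client("dynamodb").create_table(**table_spec)
print(table_spec["BillingMode"])
```

No read/write capacity estimates anywhere: DynamoDB meters the actual requests and bills accordingly.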

Coming from the relational world of business applications, I am a big fan of database transactions for their ACID guarantees. Transactions are:

  • atomic: all changes succeed or all changes are reverted
  • consistent: all data will be valid after the transaction has completed
  • isolated: transactions are processed independently and have no means of performing ‘dirty’ reads on data that has not yet been committed
  • durable: as soon as the commit is performed, all changes are persisted

One of the disadvantages of NoSQL databases is that they generally do not support transactions: if you had a scenario where you needed to update multiple records in different tables, there was no way to guarantee that all changes would share the same fate: succeed or fail. Hence, application developers would need to try and handle these situations with compensating modifications in their application code. For DynamoDB, this situation no longer exists – at re:Invent it was announced that DynamoDB now supports transactions!
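The classic all-or-nothing scenario is a transfer between two records. A sketch of what such a request could look like with the new transactional API, expressed as the request payload (table, key and attribute names are hypothetical):

```python
# Hypothetical transfer: debit one account, credit another. Both updates
# succeed or neither does. With boto3, this dict would be passed to
#   boto3.client("dynamodb").transact_write_items(**transfer)
transfer = {
    "TransactItems": [
        {
            "Update": {
                "TableName": "Accounts",
                "Key": {"AccountId": {"S": "alice"}},
                "UpdateExpression": "SET balance = balance - :amt",
                # Guard: the whole transaction fails if funds are insufficient,
                # so the credit below is rolled back too.
                "ConditionExpression": "balance >= :amt",
                "ExpressionAttributeValues": {":amt": {"N": "25"}},
            }
        },
        {
            "Update": {
                "TableName": "Accounts",
                "Key": {"AccountId": {"S": "bob"}},
                "UpdateExpression": "SET balance = balance + :amt",
                "ExpressionAttributeValues": {":amt": {"N": "25"}},
            }
        },
    ]
}
print(len(transfer["TransactItems"]))  # 2
```

This is exactly the compensating-modification logic that developers previously had to hand-roll, now pushed down into the database.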

Amazon Aurora

AWS does not only offer NoSQL databases, but also a wide range of other options, like Oracle, MySQL, SQL Server, PostgreSQL and the like. Amazon Aurora, which is a MySQL-compatible database, now offers a global database that spans multiple geographical regions.

Amazon Timestream

Another new and purpose-built database is Amazon Timestream. Again, this is a fully managed database solution, but this one is aimed at high-volume, sensor-like applications, e.g. collecting timestamped data from the telemetry devices and sensors commonly found in IoT devices. The database itself offers capabilities for all kinds of temporal analysis and predictions:

Amazon Timestream for temporal data and analysis

Amazon RoboMaker

For robotic applications, AWS has introduced AWS RoboMaker, for creating robotic solutions that can also leverage other functionalities from the AWS Cloud portfolio, like machine learning and speech recognition. On the Demo grounds I found a Dutch company that had already built a very interesting robotics application, offering support, stability and exercise for rehabilitation, but also for patients with Parkinson’s disease:

LEA – a robotics application built using AWS RoboMaker


When travelling between venues, I suddenly noticed a taxi sponsored by my previous employer, Oracle. I recalled from the Oracle OpenWorld 2016 keynote by Larry Ellison (‘but you have to be willing to pay less!’) that I attended, that even then the claim was that Oracle would charge half of what it would cost on AWS … Here in Las Vegas, during re:Invent 2018, when the city was full of AWS partners, customers and employees, there was just a single taxi on a back street … Nothing more to add to that!

But you have to be willing to pay less … revisited!


You can read the third and final part of my blog on re:Invent 2018 here.

Milco Numan
