AWS re:Invent 2020 – week 2 and 3 review

December 21st, 2020 Written by Adrian Hesketh

So that was re:Invent – what a busy few weeks!

The second week was marked by the infrastructure keynote. I enjoyed it, and while I didn't come away with any concrete actions, I did learn a lot more about concrete production techniques. The most impressive part for me was hearing about AWS's commitment to running on 100% renewables in just three years – five years ahead of its initial target. I also enjoyed the overview of the effort the AWS team puts into making their data centre operations slick, such as their custom UPS systems.

The biggest surprise for me was probably seeing that the Apple Mac EC2 instances were just Mac Minis in a tray, connected to AWS's Nitro system to manage networking, storage and security. I was imagining some sort of custom card in a rack. The fact that they aren't probably goes some way to explaining their relatively expensive monthly cost.

The infrastructure talk focussed a lot on AWS's investment in custom silicon based on ARM processors, like the Graviton2-based RDS instances. I was thinking that a launch of Lambda on ARM wouldn't be out of the question this year, but it wasn't to be. The focus for serverless seems to be on widening the use cases for AWS Lambda by gradually broadening its features, particularly in the data processing space.

On the other hand, I don't really understand who some of the new features are aimed at. For example, API Gateway REST APIs can now execute Express Step Functions workflows – a type of workflow that has a maximum execution time of five minutes, compared to one year for standard workflows. Step Functions allows for some parallel processing, but I'd prefer to bump up the Lambda vCPUs and use a programming language like Go rather than write complex logic in the Step Functions language. I'm sure someone somewhere is happy about it, but it's not me.
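For illustration, here's a sketch of what that Step Functions language (Amazon States Language) looks like – a hypothetical Express workflow running two Lambda tasks in parallel. The state names, function names and account ID are all placeholders:

```json
{
  "StartAt": "FanOut",
  "States": {
    "FanOut": {
      "Type": "Parallel",
      "Branches": [
        {
          "StartAt": "ResizeImage",
          "States": {
            "ResizeImage": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:resize-image",
              "End": true
            }
          }
        },
        {
          "StartAt": "ExtractMetadata",
          "States": {
            "ExtractMetadata": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:extract-metadata",
              "End": true
            }
          }
        }
      ],
      "End": true
    }
  }
}
```

Even this trivial fan-out takes a page of JSON, which is part of why I'd rather keep the logic in ordinary code.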

Lambda analytics support was also launched – tumbling window support for Kinesis and DynamoDB stream event sources, which allows Lambda to compute aggregations across windows of time by carrying state between invocations. I think this is a handy addition; I can imagine keeping a rolling total, or maintaining leaderboards. It could be used to roll your own time series database, but I'm not sure why you'd want to.
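The mechanism is that Lambda passes a state object from one invocation to the next within the window. A minimal TypeScript sketch of the rolling total idea, assuming a Kinesis event source and a made-up JSON payload shape of `{"amount": 5}` (the type definitions below cover only the fields used; the real ones live in `@types/aws-lambda`):

```typescript
// Minimal shapes for the tumbling-window parts of the event.
interface WindowedEvent {
  Records: { kinesis: { data: string } }[]; // data is base64-encoded JSON
  state: { total?: number };                // carried between invocations in the window
  isFinalInvokeForWindow: boolean;
}

export const handler = async (event: WindowedEvent) => {
  // Start from the total accumulated by earlier invocations in this window.
  let total = event.state.total ?? 0;
  for (const record of event.Records) {
    const payload = JSON.parse(
      Buffer.from(record.kinesis.data, "base64").toString("utf8"),
    );
    total += payload.amount;
  }
  if (event.isFinalInvokeForWindow) {
    // At the end of the window the aggregate could be written to
    // DynamoDB, published to a topic, etc.
    console.log(`window total: ${total}`);
  }
  // Returning state hands the running total to the next invocation.
  return { state: { total } };
};
```

The appeal is that there's no database or cache needed just to hold the intermediate totals – Lambda itself shuttles the state along.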

Checkpoints are another improvement in stream processing for Lambda. Without this new addition, if Lambda received a batch of 10 records from a Kinesis or DynamoDB stream but only managed to process five of them before throwing an error, the whole batch of 10 would be retried. With the new change, it's possible to report which records failed, so only those are retried, which reduces wasted processing.
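This works through a `ReportBatchItemFailures` response type enabled on the event source mapping: the function returns the identifiers of the records that failed. A TypeScript sketch, where `processRecord` and the `"bad"` payload are made up for illustration (again, the real type definitions are in `@types/aws-lambda`):

```typescript
// Minimal shapes for the parts of the Kinesis event we use.
interface KinesisRecord {
  kinesis: { sequenceNumber: string; data: string }; // data is base64-encoded
}
interface KinesisEvent {
  Records: KinesisRecord[];
}
interface StreamsBatchResponse {
  batchItemFailures: { itemIdentifier: string }[];
}

// Stand-in for real business logic; rejects payloads equal to "bad".
const processRecord = (payload: string): void => {
  if (payload === "bad") {
    throw new Error(`cannot process: ${payload}`);
  }
};

// With ReportBatchItemFailures enabled on the event source mapping,
// returning the failed sequence numbers tells Lambda to retry only
// those records rather than the whole batch.
export const handler = async (event: KinesisEvent): Promise<StreamsBatchResponse> => {
  const batchItemFailures: { itemIdentifier: string }[] = [];
  for (const record of event.Records) {
    try {
      const payload = Buffer.from(record.kinesis.data, "base64").toString("utf8");
      processRecord(payload);
    } catch {
      batchItemFailures.push({ itemIdentifier: record.kinesis.sequenceNumber });
    }
  }
  return { batchItemFailures };
};
```

Returning an empty `batchItemFailures` array marks the whole batch as successful.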


The AWS console got some improvements. A new search makes it easier to find services, and there's a new shell feature, AWS CloudShell, similar to the ones in GCP and Azure. I think this will help people who are trying out the AWS command line tools for the first time.

AWS SSO got an Active Directory synchronisation feature. This was something I thought already existed, but apparently not.

Infinity Works uses a lot of Prometheus and Grafana for monitoring container workloads and running visualisations, so there was a lot of interest in the new managed Prometheus and Grafana services. Grafana isn’t exactly difficult to run, but managing long-term metric storage in Prometheus is a bit trickier, so there’s potential there.

I hadn't noticed that there was a new JavaScript SDK coming along, so it was news to me when it became generally available. It does look like a big improvement over the V1 SDK: it loses the callback syntax from the earlier versions, putting async/await front-and-centre, and breaks up the big SDK into lots of smaller packages, which helps with reducing bundle sizes. It's not API compatible with the previous version in all cases – for example, the DynamoDB DocumentClient doesn't exist in the new SDK. Removing it is a reasonable decision, because having two APIs that look really similar but do very different things is confusing, but it makes migrating a bit of a pain.

The new Amazon Location service that was launched in preview immediately took my eye as something that I really need to investigate further. It’s not just about map tiles, there’s geospatial tracking, geofencing and address lookup. I can think of several projects that I’ve worked on that might have been able to use these capabilities.


The last week of re:Invent was a big week for edge compute and IoT announcements. One launch I found interesting was AWS IoT Core for LoRaWAN. LoRaWAN is a low-power radio communication system with much longer range than Bluetooth. It's relatively cheap to add to a project – you can buy an ESP32 with LoRaWAN built in for about £20 on Amazon – but what would you connect it to? And how would you secure it?

That's where IoT Core for LoRaWAN comes in: it lets you run your own LoRaWAN network connected to AWS IoT. There are a bunch of compatible gateway devices available for a few hundred dollars, including options for outdoor installations. This might be a good solution for projects that involve tracking equipment or assets within a building site, large warehouse or city centre.

One thing that made me look twice was Amazon Sidewalk. It enables Internet sharing on Amazon's edge devices like the Amazon Echo, allowing nearby devices to piggy-back on the device owner's home network. It seems like a good thing for Amazon, because it would allow them to create a valuable network based on their customers' devices. It made the news in the UK, with people concerned about the privacy implications, but it's not coming to the UK. Even though BT and Fon have done similar things, it seems the UK just isn't happy with that.

I took a look over the new AWS IoT EduKit too. I've recently run a workshop for students on an Infinity Works and Generation data engineering course, introducing them to using an ESP8266 and a temperature/humidity sensor with AWS Lambda. The EduKit is actually really similar – it uses a more powerful ESP32, from M5Stack, with a bunch of plug-in modules available. That makes it a bit more expensive, but the extra power lets it handle the encryption required for AWS IoT integration. There are also workshop training materials to go along with the kit. Based on my experience, I'd say they're maybe a little over-ambitious about what you can get done in a day, but they do look really comprehensive.


My first job in the new year is to work with the Infinity Works tech leads to work out which new AWS features offer real benefits to our existing and new customers, and to plan out a strategy for assessing them.

To say that re:Invent wasn't the same as it is in real life would be an understatement. It's not that the talks weren't as good; in fact, Rick Houlihan's annual DynamoDB talk was even better broken into two parts than squeezed into a single rapid-fire onslaught. But the thing I missed most was really immersing myself in the AWS world for a week. Maybe the best thing about re:Invent is spending a week in a community of people with nothing better to do than drink and talk about tech!

Fortunately, being on the pundit panel for the AWS Community Summit's coverage really filled the gap. So, on to the new year, with a backlog of new features and capabilities from AWS, and likely another 2,000 feature releases coming in the next year.
