New features, big ideas, and building better.
When Dr. Werner Vogels, the CTO of Amazon, takes the stage, builders listen.
Why? Because every year Dr. Vogels, or simply “Werner” as he’s better known, speaks directly to the core challenges that people building in the cloud face.
Yes, there are always some new features and service announcements, but at its heart a Werner keynote will fundamentally change how you view creating solutions.
But first, a worthwhile history lesson. This is the ninth AWS re:Invent, and it has certainly changed since the first one in 2012 (2020's pandemic dumpster fire aside).
In 2012, 6,000 of us gathered in Las Vegas and had a blast learning about the latest in cloud. Werner’s keynote was very similar to Andy Jassy’s. It was a one-two punch that fit the times.
But as the cloud evolved, so too did Werner’s keynotes.
In 2013, alongside the launch of Amazon Kinesis, he spoke about the aspects of cloud thinking around performance, security, reliability, cost, and scale. These themes are similar to those that appeared two years later in the original version of the AWS Well-Architected Framework.
The next year, Werner talked about containers and how they were changing how developers thought about their applications. He then took things a step further and introduced AWS Lambda, kicking off the serverless compute movement.
After another year of AWS growth, Werner returned to the stage and this time framed his talk using several laws and tenets of computer science, like the Law of Demeter and Reed’s Law. The key takeaways? Getting the most out of cloud requires adjusting how you think about resources and your solution.
2016 was the first year that Werner’s keynote showed its current form. This is the year he introduced The Twelve-Factor App and talked extensively about transformation both at AWS and as something that builders need to embrace.
Continuing to push further into how to think about applications, 2017 brought the idea of the “21st Century Application”: an application that is fully cloud-native and built in a way that allows the builder to focus on delivering value…not worrying about plumbing.
After years of advocating for a different way of building, 2018 brought a stronger path forward for builders. This keynote focused almost exclusively on the idea of the “blast radius” and how to plan for the inevitable failure of systems that Werner has so consistently warned us about.
Which brings us to last year’s keynote. In 2019, Werner continued to advocate for blast radius awareness and spent a significant amount of time explaining an AWS research paper about Physalia, the millions of tiny databases that make up the Elastic Block Store.
Every year, Werner’s keynote offers us invaluable insights into the biggest challenges in IT, how the cloud approaches them, and how we can adapt and frame our thinking to get the most out of our builds.
This year was no different.
The easiest part of a Werner keynote review is the announcements. Keeping in line with the past few years, there weren’t many, but the ones we got will change how we build in the cloud.
AWS CloudShell
AWS CloudShell is generally available today. The premise is very simple: a command line interface into the AWS Cloud. This makes it easy to securely manage and work with resources in your AWS account.
It’s one of those services that doesn’t sound like much…then a few days later you realize that you’re spending most of your time using it.
The biggest advantage is that this could get rid of all of the access/secret keys currently saved on your team’s laptops. Whether you realize it or not, those keys pose a big risk to your organization as cybercriminals move up the development chain to find easier ways into your builds.
AWS Fault Injection Simulator
A new service (coming 2021), AWS Fault Injection Simulator, makes chaos engineering accessible to all builders.
As part of Werner’s keynote in 2017, Nora Jones from Netflix introduced the wider AWS community to chaos engineering. The concept is simple: force failures to see what happens so you can either:
- Prevent it from happening in the future
- Learn how to respond to those failures if they do happen
This technique lets you implement a core principle of the AWS Well-Architected Framework: practice! It’s far better to fail in private than in public. This service will let you do that and focus on the results instead of all of the plumbing required to set up a chaos engineering practice.
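AWS Fault Injection Simulator isn’t available yet, but the core idea behind a chaos experiment can be sketched in a few lines. This is a minimal, hypothetical example of one common experiment, injecting artificial latency into a dependency call to see whether the caller copes; the function names are illustrative and not part of any AWS SDK:

```python
import random
import time


def inject_latency(func, probability=0.2, delay_seconds=0.05):
    """Wrap a callable so a fraction of calls are artificially slowed.

    This mimics one common chaos experiment: does the caller's timeout
    and retry logic survive a slow dependency?
    """
    def wrapper(*args, **kwargs):
        if random.random() < probability:
            time.sleep(delay_seconds)  # simulated network slowdown
        return func(*args, **kwargs)
    return wrapper


def fetch_user(user_id):
    # Stand-in for a real downstream call (database, HTTP service, etc.)
    return {"id": user_id, "name": "example"}


# Run the experiment: every call still succeeds, some are just slower.
chaotic_fetch = inject_latency(fetch_user, probability=0.5, delay_seconds=0.01)
results = [chaotic_fetch(i) for i in range(10)]
```

A real chaos experiment would, of course, run against live infrastructure with a defined steady state and an abort condition, which is exactly the plumbing the new service promises to handle for you.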
Amazon Managed Service for Grafana & for Prometheus
Two new services for Cloud Native Computing Foundation (CNCF) projects. Amazon Managed Service for Grafana and Amazon Managed Service for Prometheus remove the burden of running these popular open source projects and allow you to focus on applying them in your observability practice.
Taken as a group, it’s a nice showing of support by AWS for the open source work done at the CNCF, and it will make figuring out what’s going on with your applications much easier.
Mind Blown
That’s what today’s keynote was really about: figuring out what’s going on with your applications.
While those new services are interesting and will no doubt be a useful addition to the tool belt, the real substance of the keynote centred around the idea of dependability.
Werner introduced and expounded on the concept of dependability and how it is critical that builders focus on it. It’s really all about building well.
While the keynote covered some more complex topics, the high-level takeaway is that builders need to account for things that are expected, things that might happen, and things that come out of nowhere.
This means that your team needs to plan for what you think is going to happen, what you think might happen, and be ready to react when things come out of left field (2020, anyone?).
The rest of the keynote elaborated on these ideas and provided a lot to process and research for teams building in the cloud.
First up, and somewhat to the side, was the idea of sustainability. This is the third time during the show that a keynote has highlighted AWS’ efforts around sustainability. This time, Werner threw down the gauntlet for builders to step up.
Technology has the potential to help solve the climate crisis, and our choices have an impact. Letting resources idle, continuously processing data when it’s not required, and even your choice of processor all have impacts outside of the digital world.
Werner specifically called out the Graviton2 processor and how its improved performance per watt can help reduce the impact of your builds.
This messaging was really interesting because it runs counter to the typical “I don’t have to worry about resources anymore” attitude that quickly develops in the cloud. Calling this out in a major keynote is an important bellwether that we need to be thinking about sustainability more as a community of builders.
The next aspect of dependability that Dr. Vogels touched on was that of the growing complexity of our builds.
When you break a monolith apart, that simplistic microservices design only lasts for a short while. You’ll quickly find that your teams are deploying more and more microservices, increasing the complexity of your distributed system.
That complexity means that people can’t really wrap their head around what’s going on anymore.
Here, the Amazon CTO went on to explain *automated reasoning* and how these systems can help verify that our builds are doing only what we intend.
The first demonstrated uses of this approach have come from the AWS Automated Reasoning Group in the form of provable security. That’s delivered tools like the new VPC reachability analyzer, IAM access analysis, and Access Analyzer for S3.
All of these tools use automated reasoning to make sure that you haven’t accidentally misconfigured your AWS services and opened up your data to unnecessary exposure.
As a side note, if you’re not already regularly using those three analyzers, you should. They are well worth your time to learn and will help improve the security and dependability of your builds almost immediately.
The next topic on Werner’s agenda was observability. Most teams focus on monitoring, an important practice but one that can only be done well if you understand the system.
Observability is the ability to determine the behaviour of the entire system from the system’s outputs. That’s the common-usage definition from observability expert Charity Majors.
Now you might be wondering what the actual difference between monitoring and observability is. I’ll leave the detailed explanation to Charity, who has written extensively on the subject.
For the purposes of the keynote, it’s about understanding what Charity calls “the nexus of users, production, and code.”
In order to understand that nexus, things like tracing must be implemented. You and your team need to be able to understand how the system is working…not just whether its individual components are functioning.
To do that, Werner took the time to forcefully remind us of two things:
- We should log everything
- A trace ID needs to be added to log data
That trace ID allows your team to follow a specific request as it travels through your system. That can deliver some amazing insights into how your system is working.
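As a sketch of what that looks like in practice, here is a minimal example using only Python’s standard logging module (no particular tracing library assumed) that stamps every log line with a trace ID so a single request can be followed across components:

```python
import logging
import uuid
from io import StringIO


def get_request_logger(trace_id):
    """Return a logger adapter that stamps every record with a trace ID."""
    logger = logging.getLogger("app")
    return logging.LoggerAdapter(logger, {"trace_id": trace_id})


# Configure a handler whose format includes the trace ID field.
stream = StringIO()  # in production this would be stdout or a log shipper
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(trace_id)s %(levelname)s %(message)s"))
app_logger = logging.getLogger("app")
app_logger.addHandler(handler)
app_logger.setLevel(logging.INFO)

# One trace ID per request, reused by every component that logs.
trace_id = str(uuid.uuid4())
log = get_request_logger(trace_id)
log.info("request received")
log.info("calling payment service")
log.info("request complete")

lines = stream.getvalue().strip().splitlines()
```

Because every line carries the same ID, grepping your aggregated logs for that one value reconstructs the request’s entire journey, which is the whole point of the practice Werner described.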
One way of implementing this practice and ensuring you deliver a positive user experience is through canaries. Canaries are continuous tests of the user experience in your system.
These are especially helpful during times of change and—as Becky Weiss, Sr. Principal Engineer at AWS puts it—help highlight issues before they become problems.
The idea is that observing the behaviour of a canary shows you the norms of your systems on a regular basis, which allows you to spot anomalies when they happen and quickly determine whether they are predictors of a bigger issue.
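A canary can be as simple as a loop that exercises a key user path and records whether it behaved as expected. This hypothetical sketch checks a probe function against a latency budget; the probe here is a stand-in for a real HTTP request against your user-facing endpoint:

```python
import time


def run_canary(probe, max_latency_seconds=0.5):
    """Run one canary check: the probe must succeed and finish within budget.

    Returns a small result record suitable for emitting as a metric.
    """
    start = time.monotonic()
    try:
        probe()
        succeeded = True
    except Exception:
        succeeded = False
    latency = time.monotonic() - start
    return {
        "healthy": succeeded and latency <= max_latency_seconds,
        "latency_seconds": latency,
    }


# Stand-in probes; a real canary would issue an HTTP request to the
# endpoint and assert on the response body and status code.
def fast_probe():
    return "ok"


def broken_probe():
    raise RuntimeError("simulated outage")


good = run_canary(fast_probe)
bad = run_canary(broken_probe)
```

Run on a schedule and charted over time, the latency numbers give you the baseline; the failures give you the early warning.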
Every year, Dr. Vogels covers important issues that builders should be thinking about. He announces new features and services from AWS that help us do that.
This year was no different. Dependability is a critical aspect of any system.
The ideas around automated reasoning, monitoring, and observability will help you build more resilient and dependable systems.
No blog recap can do the full keynote justice. While I’ve tried to distil the essence of the core ideas Werner talked about, I strongly recommend that you watch the keynote in full either during a rebroadcast or when it’s made available on demand.