Amazon has seen its share of migration pain. Although re:Invent presented an opportunity to fire another shot across Oracle’s bow (boating reference!), it also showed that even Amazon takes multiple years to migrate production workloads. Werner’s keynote was a masterclass on database design at scale – he discussed the challenges that a growing Amazon.com faced early on and how it overcame them. There was also a recurring theme of durability and fault tolerance across major AWS services.
Werner covered a number of groundbreaking announcements, including Lambda Layers, expanded support for additional development tools, custom runtimes and native Ruby support for Lambda, Application Load Balancer support for Lambda, and WebSocket support for Amazon API Gateway. AWS is pushing ahead with the message that the future of cloud-native Modern Application Design is Serverless. You can see a full list of announcements here.
Despite the strong serverless messaging, there was still something for everyone at this year’s re:Invent. Enterprise saw a range of management, security and governance tools, as well as on-premises AWS hardware and better Windows file system support with Amazon FSx. Machine Learning saw the announcement of over 13 new services and tighter inter-service integration. There was the usual release of new EC2 instances aimed at optimising workloads across various use cases and niches. And Blockchain fans finally got their first AWS cloud services.
Andy Jassy’s deliberate emphasis on ‘builders’ rather than ‘developers’ was a sign AWS is looking to be more inclusive of those who don’t identify with the term ‘developer’. It’s now less about the nuts and bolts of development and more about combining services to achieve business outcomes.
So with the confetti from re:Play finally settling, it’s time to review some of A Cloud Guru’s favorite announcements from this year’s conference.
Ground Station to Major Bezos
AWS Ground Station: We were happy to see Ground Station announced, not because we have a use case for it, but because it’s a step towards commoditizing technology previously reserved for Fortune 500 companies and sovereign nations. AWS has a good track record here: you only have to look at how commonplace Machine Learning is now to see what kind of impact lowering the barrier to entry can have. Ground Station is designed to control satellite communications as well as download and process satellite data. While the service is quite niche, the likes of Blue Origin, SpaceX and Rocket Lab are making commercial satellites cheaper to deploy. Perhaps one day, satellites will become a basic commodity that more companies will be able to afford.
Verdict: This is a great boon to the research industry: if you can stretch your budget to a micro-satellite or two, you no longer need to build, staff and manage earth-based communications equipment. It also feels like Jeff has figured out a clever way to help fund the infrastructure needed for Blue Origin.
Outpost Steakhouse
AWS Outposts: This meaty announcement was quite unexpected! On the surface, running AWS hardware in your own data center flies in the face of the traditionally espoused benefits of cloud-native applications. There are, however, some immediate benefits to this kind of migration style. For example, apart from power and networking, the equipment is still managed by AWS. Any applications built on or migrated to this hardware should “just work” in the cloud due to common APIs and application stacks. AWS rarely builds anything without the feature request coming directly from a customer, so you can be assured there are businesses rejoicing that they can now run AWS hardware in their own data centers.
Verdict: Largely seen as an attempt to placate large enterprises who refuse to dive into cloud with both feet, and as an exercise in keeping up with the Joneses (Azure Stack), it remains to be seen what sort of impact this will have on the speed of enterprise migration, if any.
Stay up-to-date with AWS news by signing up for AWS This Week, an original series from ACG.
Functions are like onions, they have Layers.
AWS Lambda Layers: Lambda Layers is a place to share code and data between functions and frankly – it’s a game changer, resolving a lot of existing headaches for development teams. No longer does common code need to be included separately in every function! By removing that duplication, Lambda Layers helps developers focus more on custom business logic and less on the boilerplate needed to make it work. Layer packages can be private or shared, both across AWS accounts and publicly. Many third-party AWS vendors like Epsagon, Datadog, Thundra and Serverless Framework are already leveraging the service to simplify the delivery of their products to customers. Lambda Layers will also play a pivotal role in security, by reducing the included dependency footprint and simplifying the patching process.
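As a rough sketch of the workflow (the layer name, function name, module name and the account/region in the ARN below are all placeholders), publishing shared code as a layer and attaching it to an existing function with the AWS CLI looks something like this:

```shell
# Package the shared code; for Python, Lambda expects layer code under python/
mkdir -p layer/python
cp -r mylib layer/python/            # "mylib" is a placeholder shared module
(cd layer && zip -r ../common.zip python)

# Publish the layer; the response includes the new layer version ARN
aws lambda publish-layer-version \
    --layer-name common-utils \
    --zip-file fileb://common.zip \
    --compatible-runtimes python3.7

# Attach the layer to a function by its version ARN
aws lambda update-function-configuration \
    --function-name my-function \
    --layers arn:aws:lambda:us-east-1:123456789012:layer:common-utils:1
```

Once attached, the layer’s contents are unpacked under /opt in the function’s execution environment, so the shared module is importable without being bundled into every deployment package.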
Verdict: Less time managing and maintaining dependencies, more time delivering value, what’s not to like?
Have your Runtime, and Invoke it too!
AWS Lambda Runtime API: Hot on the heels of the Lambda Layers announcement, AWS rather quietly announced native support for Ruby in Lambda. We say rather quietly, because this announcement, whilst great news for Ruby fans, was overshadowed by the announcement of the Lambda Runtime API. The Runtime API allows just about any programming language to be bootstrapped into your Lambda environment. To go along with this announcement, AWS has released open source runtimes for both C++ and Rust, with Erlang, Elixir, COBOL and PHP to come. Coupling the Runtime API with Lambda Layers is a killer combo for teams who have been looking to make the jump to serverless but weren’t comfortable with the supported languages. We look forward to seeing what other languages the community creates support for.
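To make that concrete: a custom runtime is just an executable file named bootstrap, shipped in your deployment package or a layer, that loops on the Runtime API’s HTTP endpoints. A minimal sketch (the “handler” here simply echoes the event back) looks roughly like this:

```shell
#!/bin/sh
# Minimal custom-runtime bootstrap sketch: poll the Runtime API for the next
# invocation, process it, and POST the result back.
set -eu

while true; do
  HEADERS="$(mktemp)"

  # Long-poll for the next invocation event; headers carry the request ID
  EVENT_DATA=$(curl -sS -LD "$HEADERS" \
    "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
  REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" \
    | tr -d '[:space:]' | cut -d: -f2)

  # Your real handler logic goes here; this sketch just wraps the event
  RESPONSE="{\"echo\": $EVENT_DATA}"

  # Report the invocation result back to the Runtime API
  curl -sS -X POST -d "$RESPONSE" \
    "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/${REQUEST_ID}/response" \
    > /dev/null
done
```

Swap the echo line for a call into any interpreter or binary you can package, and you have a runtime for that language.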
Verdict: There really is no excuse not to fire up a Lambda function and see what this serverless stuff is all about… I mean COBOL… COBOL!? We can’t wait to see what developers build with COBOL functions.
Lighting the fuse with Firecracker
Firecracker: One of the more “whoa!” moments of this year’s re:Invent was the announcement of Firecracker. Firecracker is a virtual machine manager that provides a REST API to quickly launch and manage what AWS is calling microVMs. Firecracker is not a new AWS service but is actually the technology that underpins the AWS Lambda and Fargate services. The extra step of releasing Firecracker as an open source project is also a nice touch. You can get involved on GitHub (https://github.com/firecracker-microvm/firecracker). It’s incredibly interesting to finally get a look under the hood at how AWS has architected these technologies for security and speed in what is inherently a multi-tenant environment.
Verdict: Not directly exciting if you are a serverless fan, as running Firecracker yourself means taking on responsibility you previously avoided with the likes of Lambda. But you can guarantee that any progress made by the OSS community will be incorporated into AWS’s serverless offerings, and that is a most tantalising prospect.
Here he comes, here comes DeepRacer…
AWS DeepRacer: At first glance this is a fun, albeit pricey, entry point for builders to get started in the world of Machine Learning. After a closer look, however, DeepRacer shows itself to be a canny business decision from AWS wrapped in a fun autonomous robotic car. We already know ML is the future, but for many, the ‘when’ and ‘how’ of using machine learning is still a mystery. This confusion could be down to a lack of familiarity with the technology, and therefore of the types of problems ML can solve. Enter DeepRacer and the DeepRacer League, which not only gives builders a challenge to solve but also encourages hands-on experience with a range of AWS IoT and ML services and capabilities (RoboMaker and SageMaker RL, for example).
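To give a flavour of what competitors actually write, a DeepRacer model is trained against a Python reward function that the simulator calls with a params dict describing the car’s state. The tier thresholds in this sketch are our own invention; the idea is simply to reward the car for hugging the center line:

```python
# DeepRacer-style reward function sketch: reinforcement learning rewards the
# car for staying near the track's center line. The simulator supplies the
# params dict; only two of its fields are used here.
def reward_function(params):
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Reward tiers: generous near the center, tiny near the edges
    if distance_from_center <= 0.1 * track_width:
        reward = 1.0
    elif distance_from_center <= 0.25 * track_width:
        reward = 0.5
    elif distance_from_center <= 0.5 * track_width:
        reward = 0.1
    else:
        reward = 1e-3  # effectively off track

    return float(reward)
```

Training then consists of the car repeatedly attempting the track in simulation while the RL algorithm learns which actions maximise this reward.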
Verdict: Expect the competition to be fierce. We predict that the community that springs up around DeepRacer will be passionate, welcoming and helpful. DeepRacer seems like a logical choice for office hackathons and team-building exercises.
How to manage your Tachikomas
AWS RoboMaker: AWS RoboMaker is not necessarily about introducing new capabilities, but about drastically reducing the current overhead of developing, training and managing robots on AWS. At its core, RoboMaker provides four capabilities:
Cloud Extensions for the Robot Operating System (ROS) – allowing robots to more easily leverage cloud services like Rekognition or Polly.
Development Environment – based on Cloud9; your operating system, development software, and ROS are automatically downloaded, compiled, and configured for you.
Simulation – allows you to understand how your robotics application will behave before deploying it to hardware.
Fleet Management – track your robots in one place. Includes OTA update feature to help you update, patch and maintain your robotic fleet.
Verdict: By providing robots with access to cloud services, look to see RoboMaker empowering a helpful robot near you. DeepRacer enthusiasts will note that DeepRacer relies on RoboMaker’s simulator for training.
Back to school
SageMaker RL: If there was a theme to this year’s Machine Learning announcements, it was that AWS has your reinforcement learning needs covered. Until now, SageMaker has focused on supervised and unsupervised learning models, which are geared towards finding patterns in your data in order to classify it. Reinforcement learning is about experimenting with actions to achieve a certain outcome (rather than learning a dataset), and is the learning model typically used in autonomous vehicles. RL models work by rewarding correct actions, thereby reinforcing desired behaviour – for example, navigating a race course by staying on the track. SageMaker RL adds pre-packaged reinforcement learning toolkits to the existing SageMaker service and is already supported by AWS RoboMaker and Sumerian.
Verdict: It’s now easier than ever to explore reinforcement learning models for your robots. This service will be key to your success in the DeepRacer League!
DynamoDB goes Atomic
DynamoDB Transactions: If you are a fan of modern application design you are building systems that can deal with eventual consistency and failures. By and large, the odd failure to write data is no big deal and, at most, results in a small inconvenience for your customer. However, there are instances when it’s critical to be able to verifiably write data or fail immediately – think financial transactions or order fulfilment. With the addition of Transactions, DynamoDB can now handle these types of use cases. Transactions provide developers with atomicity, consistency, isolation, and durability (ACID) on reads and writes across one or more tables in a single region.
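As a sketch of what such a transaction might look like (the table, key and attribute names here are invented for illustration), a balance debit and an order record can be written atomically. With boto3 you would pass the result to `client.transact_write_items(**request)`:

```python
# DynamoDB TransactWriteItems sketch: debit an account and record an order
# atomically, so both writes succeed or neither does. Table and attribute
# names are illustrative only.
def build_order_transaction(account_id, order_id, amount):
    return {
        "TransactItems": [
            {
                # Debit the balance only if funds are sufficient
                "Update": {
                    "TableName": "Accounts",
                    "Key": {"AccountId": {"S": account_id}},
                    "UpdateExpression": "SET Balance = Balance - :amt",
                    "ConditionExpression": "Balance >= :amt",
                    "ExpressionAttributeValues": {":amt": {"N": str(amount)}},
                }
            },
            {
                # Create the order only if it does not already exist
                "Put": {
                    "TableName": "Orders",
                    "Item": {
                        "OrderId": {"S": order_id},
                        "AccountId": {"S": account_id},
                        "Amount": {"N": str(amount)},
                    },
                    "ConditionExpression": "attribute_not_exists(OrderId)",
                }
            },
        ]
    }
```

If either condition fails (insufficient funds, or a duplicate order ID), DynamoDB cancels the whole transaction and neither write is applied.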
Verdict: The power of DynamoDB as a service stands out here as Transactions is available to use right now on new and existing DynamoDB tables.
Amazon, now with more Zookeeper
Amazon MSK: Have you ever wanted to leverage real-time data streams to drive application features, only for the idea to fall over when you realized the effort required to configure, manage and secure the necessary infrastructure? With the announcement of Amazon Managed Streaming for Kafka (MSK), it might be time to revisit that decision. MSK is a fully managed Apache Kafka cluster service designed to help developers leverage real-time streaming data pipelines without spending a lot of time managing the complex infrastructure that comes with this type of application.
Verdict: More than ever, click-stream analytics, content-responsive applications and personalization are now within reach of more developers. For larger organizations already running and maintaining a Kafka cluster, this means more time creating business value and less time on undifferentiated heavy lifting.
Going Global
AWS Global Accelerator: One of the many selling points of AWS is the ability to “go global” in a matter of minutes. Typically this is achieved by leveraging infrastructure as code to replicate your application to the myriad of AWS regions available around the world. But how do you direct user traffic to your application? You could use Route 53, but this comes with its own set of limitations, and traffic still traverses the public internet. Global Accelerator provides static anycast IP addresses for your application. Traffic directed to these addresses enters AWS’s network at the nearest point of presence and travels over AWS’s global backbone instead of the public internet. Global Accelerator also features a range of tools for controlling how traffic is routed once it enters your endpoint. The end result is a service that improves the responsiveness of your application and simplifies deployment and failover processes.
Verdict: Move your application closer to your users with AWS Global Accelerator.
Where we’re going, we won’t need servers
Overall, re:Invent 2018 felt like the year Serverless reached a certain kind of mainstream maturity. There are now very few technical barriers to entry, especially when it comes to language support.
Every year the AWS toolkit gets larger and more exciting, but also more complex. As always, the ACG team will be there to help you build, keep your skills sharp, encourage you to explore, and get certified. See you on the DeepRacer race track, and keep being awesome Cloud Gurus!