
AWS Lambda is winning, but first it had to die

Forrest Brazeal

Major feature changes have successfully pushed Lambda workloads into the mainstream, even if FaaS purists feel betrayed

Is serverless winning?

If you ask AWS, they’ll say definitely. Nearly half of all new applications built inside Amazon this year are running on Lambda. Andy Jassy took time in his re:Invent keynote to call out the thousands of enterprises who now run Lambda workloads in production, but perhaps most interesting was that this year the new serverless feature announcements graduated to Jassy’s keynote — a spot usually reserved for the flashiest, most strategic reveals of the year. AWS, at least, clearly believes that serverless is now one of the primary selling points for its cloud.

(On the other hand, there’s this odd article, which spends most of its length demonstrating FaaS growth, but ran under the headline “Serverless Adoption Stalls” because … Kubernetes teams say they aren’t using as much Lambda? In other news, cats report eating fewer vegetables.)

What’s not in debate is that Lambda, still the 900-pound gorilla of FaaS, looks a lot different today than when it went GA in 2015. The loud, weird niche of engineers who bought into serverless early have always had very specific ideas about what good serverless systems look like … and lately, Lambda’s been rattling their assumptions.

The Decline and Fall of The Serverless Manifesto

Those original ideals of FaaS purity didn’t just materialize out of thin air. As recently as 2016, AWS serverless leadership was talking up something they called the Serverless Compute Manifesto. This was a radical vision for how Functions-as-a-Service should, uh, function. The manifesto showed up at all sorts of talks, including at ACG’s ServerlessConf. Here are its key tenets:

The Serverless Compute Manifesto (circa 2016)

  • Functions are the unit of deployment and scaling
  • No machines, VMs, or containers visible in the programming model
  • Permanent storage lives elsewhere
  • Scales per request; Users cannot over- or under-provision capacity
  • Never pay for idle (no cold servers/containers or their costs)
  • Implicitly fault-tolerant because functions can run anywhere

This manifesto formed a shocking and highly countercultural blueprint for how to build and deploy software. It attracted a small but vibrant community of true believers and a somewhat larger and much louder set of skeptics. 

But in retrospect, the serverless compute manifesto had trouble penetrating beyond that small, engaged nucleus. The bold vision just left out too many existing, legacy workloads and teams.

So, gradually, like the commandments in Orwell’s Animal Farm, AWS’s non-negotiables of serverless compute began to change. Let’s run through each point of the manifesto and see if Lambda still adheres to it.

Functions are the unit of deployment and scaling?

Sorta, but not strictly. You can deploy Lambda Layers (bundles of code and large binaries shared across many functions), or (now in preview) Lambda Extensions to plug in third-party agents that I’ve been told should definitely not be thought of as “sidecars for Lambda”.

No machines, VMs, or containers visible in the programming model?

No longer true. As of re:Invent 2020, Lambda allows you to package and deploy functions as container images (up to 10 GB) instead of just shipping a ZIP file of code.
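For teams coming from Docker, the workflow is mostly familiar plumbing; the handler contract itself is unchanged. A minimal sketch (file names and the base image tag are illustrative):

```python
# app.py: an ordinary Lambda handler; nothing container-specific here.
# To ship it as a container image, you'd pair it with a Dockerfile
# roughly like this (illustrative):
#   FROM public.ecr.aws/lambda/python:3.8
#   COPY app.py ${LAMBDA_TASK_ROOT}
#   CMD ["app.handler"]
import json

def handler(event, context):
    """Echo back a greeting; the programming model is the same either way."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

The point is that the container is packaging, not architecture: the function still receives events and returns responses exactly as a ZIP-deployed function would.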

Permanent storage lives elsewhere?

Not necessarily. Lambda now integrates with EFS, which I guess is technically “elsewhere”, but no more so than any other NFS share attached to a server is “elsewhere”. It’s a persistent filesystem mounted to your compute.

Scales per request?

Yes! Now that Lambda can run containers, I believe the request model is now the biggest remaining conceptual difference between Lambda and a managed container service like AWS Fargate. Lambda gives you a new execution environment per concurrent request, whereas a Fargate-like service will still process multiple concurrent requests on the same (long-lived) container.
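That difference is easy to see from inside a function. A sketch of the execution model, runnable locally: module-level state is created once per execution environment and reused across sequential invocations, but two concurrent requests always land in separate environments, so (unlike two threads in one long-lived container) they can never share it.

```python
# Module scope runs once per execution environment ("cold start") and
# survives across *sequential* invocations of that environment.
invocation_count = 0  # lives as long as this execution environment does

def handler(event, context):
    global invocation_count
    invocation_count += 1
    return {
        "cold_start": invocation_count == 1,  # True only on the first call
        "invocation": invocation_count,
    }
```

Calling this twice in a row in one environment reports a cold start, then a warm one; a second concurrent request would instead spin up a fresh environment with its own counter at zero.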

Users cannot over- or under-provision capacity?

No longer true. Meet Provisioned Concurrency, which trades your cold start problem for a hot cost problem. (I snark, but Provisioned Concurrency is way better than running your own Lambda pre-warming job, which a LOT of people used to do.)
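For the record, that DIY pre-warming job usually looked something like this: a scheduled rule pings the function every few minutes with a sentinel event, and the handler bails out early so the keep-alive call stays cheap. (The `warmup` field here is a made-up convention; everyone rolled their own.)

```python
# Sketch of the old DIY pre-warming pattern: a scheduled rule
# (e.g. a CloudWatch Events rule firing every ~5 minutes) invokes
# the function with a sentinel event to keep a sandbox warm.
def handler(event, context):
    if isinstance(event, dict) and event.get("warmup"):
        # Keep-alive ping: do no real work, just keep the sandbox warm.
        return {"warmed": True}
    # ... real request handling below ...
    return {"processed": event}
```

It mostly worked for one warm instance; keeping N instances warm meant even uglier tricks, which is exactly the gap the managed feature closes.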

Never pay for idle (no cold servers/containers or their costs)?

No longer true – not if you use Provisioned Concurrency or EFS, anyway.

Implicitly fault-tolerant because functions can run anywhere?

Ehhhh. This one is about abstracting away availability zones, which a lot of higher-level AWS services besides Lambda (including Fargate!) now do. That said, we’re starting to hear more guidance from AWS that really, the fault domain you need to be thinking about is regions. That’s right: regions are the new availability zones, and savvy Lambda developers are rolling their own (very un-managed) multi-region architectures as we speak. 

To reinforce the trend, let’s check in with two other defining early features of Lambda:

Intentional time and space constraints?

Less and less true. Function disk sizes, runtime limits, and CPU and memory sizes have steadily increased over the last 5 years. Today you can run a Lambda function for 15 minutes with 10 GB of memory and 6 vCPUs – a hefty enough blob of compute to make you think seriously about multi-threading, multi-purpose functions, and other things that are against the old ideals of FaaS.
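At those sizes, fanning CPU-bound work across cores inside a single invocation starts to make sense, which is exactly the kind of thing the original small Lambdas discouraged. A hedged sketch (the payloads are stand-ins; real code might pull shards from S3). Threads are used here because `hashlib` releases the GIL on large buffers, so they really can occupy multiple cores; classic `multiprocessing` pools have historically been awkward in Lambda's sandbox.

```python
# Hash several large payloads in parallel to use the extra vCPUs.
import hashlib
from concurrent.futures import ThreadPoolExecutor

def digest(blob: bytes) -> str:
    # hashlib releases the GIL for large inputs, so this is truly parallel.
    return hashlib.sha256(blob).hexdigest()

def handler(event, context):
    # Stand-in payloads; a real function might fetch these shards from S3.
    blobs = [bytes([i]) * 1_000_000 for i in range(6)]
    with ThreadPoolExecutor(max_workers=6) as pool:
        return {"digests": list(pool.map(digest, blobs))}
```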

Zero-trust networking?

Only if you want it. In the old days, Lambda architectures biased heavily toward IAM security rather than using VPCs, partly for ideological reasons but also because attaching functions to a customer VPC meant creating an ENI per concurrent execution, which added tremendous cold start overhead. Today, AWS provisions shared network interfaces when the function is configured rather than at invoke time, making customer-managed VPCs much more compatible with Lambda.

So what we have now is an updated manifesto that looks like this:

The Serverless Compute Manifesto (circa 2020)

  • ~~Functions are the unit of deployment and scaling~~
  • ~~No machines, VMs, or containers visible in the programming model~~
  • ~~Permanent storage lives elsewhere~~
  • Scales per request; ~~Users cannot over- or under-provision capacity~~
  • ~~Never pay for idle (no cold servers/containers or their costs)~~
  • ~~Implicitly fault-tolerant because functions can run anywhere~~
  • ~~Intentional time and space constraints~~
  • ~~Zero-trust networking~~

I feel like Willy Wonka at the end of the chocolate factory tour, looking around to see where all the children went. Of all those ideological principles, those lines in the sand, per-request scaling is the only thing left. 

You can still build according to the original serverless manifesto, of course. But the service doesn’t require you to.

If you had to distill the revised value prop of Lambda down into a single maxim, Animal Farm-style, what would you even say at this point? “All workloads are managed, but some are more managed than others”?

The rise of serverless for everyone

This seems like the right time to clarify that I mostly like the additions to Lambda. I think they are necessary and, in many cases, an unmixed good.

I wouldn’t have said that four years ago. I was much more of a FaaS purist at that time. What changed my mind was years of hearing a very particular type of statement from engineering teams, repeated over and over in different variations:

I would love to use Lambda, if only it … [connected back to my on-prem VPC, was big enough to run my workload, let me use the same language or developer tooling as my other systems, was always warm, supported some form of shared storage, etc, etc]

My first reaction to these statements was: Why would you want to use Lambda if you can’t or won’t embrace stateless, containerless, scale-to-zero functions? That’s the whole point of Lambda. It sounded to me like those Kubernetes cats from the New Stack survey, saying they’d really like to eat vegetables if they only contained more meat.

But I’ve since come to understand that the really powerful part of that statement is the beginning part. It’s a statement of longing.

I’d love to use Lambda, if only …

What is it about Lambda, and serverless compute in general, that inspires this reaction in so many people? Why does a brand-new baby computing paradigm spark such compulsive interest everywhere from tiny startups to big ol’ legacy enterprises?

Because more than any specific technical detail, serverless computing is an idea. An idea expressed in a simple phrase: Own less, build more. All the serverless doctrine, all the technical guidance boils down to this goal. Lambda is an aspirational lifestyle.

Lambda, circa 2016, was massively ahead of its time, and to some extent still is. But because of the new feature additions, up to and including container support, it’s making the own less, build more identity more accessible to more builders than ever before. 

What AWS is doing is, one by one, taking away the if onlys. They’re removing reflexive objections to serverless by providing reassuring options — shared storage, provisioned concurrency — for teams that need them, until the statement loses its qualifier entirely: I’d love to use Lambda!

Guiding the future

Will some teams use these features out of confusion or inertia, not realizing that they could build something more radical, more manifesto-like? Yes, and I think this is an area where AWS can do a lot more to help.

Look, the Lambda console is a mess – you know it, I know it, AWS knows it. It’s got more tacked-on feature creep and incoherent UX paths than a 2001 VCR. And say what you will about infrastructure-as-code, the console is still how people explore new services. It’s how they form feelings about what kind of builders they want to be.

A console redesign is needed for sure, but not just an arbitrary facelift. We need one that foregrounds the Lambda features that support the original manifesto: code-level programming abstractions, function-level deployments. And then a careful reveal of progressive complexity as you push up against the limits of the manifesto or your own organizational constraints. (Is there such a thing as “regressive complexity”? Because that’s the current console.)

What you want is for builders to be confronted with the simpler, more managed options every time they start a new project. Over time, that changes people’s perceptions. It constantly presents them with the “serverless way”: hey, I could build this using events! I don’t need to put this function in a VPC! Defaults matter. I look forward to seeing how AWS can continue to guide its customers toward success here.

In the meantime, Lambda is finally gaining serious adoption, finally winning, because it’s shedding the FaaS purism that marked its early days — without compromising on its real value prop, the progressive promise of own less, build more. The serverless compute manifesto had to die so the promise of serverless could finally live.
