
Serverless showdown: AWS Lambda vs Azure Functions vs Google Cloud Functions

What are the differences between AWS Lambda, Azure Functions, and Google Cloud Functions? Compare the FaaS services of AWS, Azure, and Google Cloud.

Jun 08, 2023 • 13 Minute Read


The TL;DR

When it comes to trendy buzzwords, “serverless” might be the most popular.

At the heart of the serverless paradigm is the Function as a Service (FaaS) model, a category of services that make it ridiculously easy to run code in the cloud without provisioning any compute infrastructure.

FaaS has truly been a game-changer: it considerably accelerated the ability to deploy complex backend services and democratized application development. Gone are the days when companies needed to invest capital and resources to take their ideas to market. Now, all you need is an idea, good code, and an account with a major cloud provider. Within minutes, your application can be running on managed infrastructure, serving millions of requests, and scaling (virtually) infinitely.

This article will compare the FaaS services of Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) and offer some insight into one of the most transformative technologies of the modern cloud age! 

Background

At the core of any application is the code that makes up its logic and functionality. Whether it is the latest mobile game, or your typical boring enterprise finance software, there are lines of code (sometimes thousands of lines) that need to run somewhere. This “somewhere” is typically a server, or groups of servers, where CPU cycles execute the logic that powers these applications.

But servers, even virtual cloud servers, are expensive and can be a pain to maintain, often requiring highly trained and experienced administrators to secure and manage them. Additionally, when no users are playing that game or using the finance software, these expensive servers will sit idly, virtually twiddling their thumbs, waiting for new “work” to come in. 

Traditional compute infrastructure can be very inefficient, and that is exactly what makes FaaS so appealing.

Instead of standing up complex infrastructure (servers, load balancers, etc), FaaS lets you run your code on a managed pool of compute resources, while paying only for the duration of execution. 

FaaS functions are event-driven, meaning they run in response to certain events. While not always the case, those functions are often short-lived, ephemeral, stateless and single-purpose.
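To make that model concrete, here is a minimal sketch of what such a function might look like, written in the AWS Lambda Python style and triggered by a hypothetical cloud storage upload event. The handler name and the event shape are illustrative; each provider delivers events in its own format.

```python
import json

# A minimal sketch of an event-driven, stateless, single-purpose function,
# written in the AWS Lambda Python handler style. The event shape (an S3
# "object created" notification) is assumed for illustration.
def handler(event, context):
    # Pull the bucket and object key out of the (assumed) storage event payload
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    print(f"New object uploaded: s3://{bucket}/{key}")

    # Do one small unit of work and return; no state is kept between invocations
    return {"statusCode": 200, "body": json.dumps({"processed": key})}
```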

The table below is a brief summary of the FaaS services offered by AWS, Azure, and GCP:

| Service | GA Since | Regional Availability |
| --- | --- | --- |
| AWS Lambda | November 2014 | Global |
| Azure Functions | March 2016 | Global |
| Google Cloud Functions | February 2016 | Global |

Pricing

One of the main reasons for the popularity of FaaS is cost: it is dirt-cheap, and in many cases, practically free. With that being said, like everything else in the cloud, that price tag could dramatically change as the scale gets larger. 

FaaS Cost

The two main contributors to FaaS cost are:

  1. Number of requests: typically billed for every million requests per month
  2. Compute time: the product of the function's execution duration and the amount of memory provisioned, measured in GB-seconds (a higher-memory execution will cost more than a lower-memory execution of the same duration)

Additionally, other charges like data transfer or storage costs might apply, but those depend on the specific use case.

Pricing Comparison

All three major providers offer a monthly free tier quota, and cost is incurred only after that quota is exceeded. If you are experimenting or building a proof of concept, you will most likely be covered by the free tier.

The table below compares what each provider offers in terms of free tier quota, and how much additional usage costs.

| Provider | Free Monthly Duration (GB-seconds) | Free Monthly Requests | Cost of Each Additional 1 Million Requests | Cost of Each Additional GB-second | Duration Rounded to the Nearest |
| --- | --- | --- | --- | --- | --- |
| AWS | 400,000 | 1 million | $0.20 | $0.000016 | 1 ms |
| Azure | 400,000 | 1 million | $0.20 | $0.000016 | 1 ms |
| GCP | 400,000 | 2 million | $0.40 | $0.0000125 | 100 ms |

Takeaway: The pricing structure is almost identical across all three providers.

AWS and Azure have identical pricing and free monthly quotas, while GCP offers an extra 1 million free requests per month, and has comparable pricing for additional requests. Both AWS and Azure round their duration to the nearest 1ms, while GCP rounds to the nearest 100ms increment, which could add to the overall cost at scale.
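To see how the two cost contributors and the rounding rules interact, here is a rough back-of-the-envelope sketch using the figures from the table above and a purely hypothetical workload of 5 million requests per month, each running for about 120 ms at 512 MB.

```python
import math

# Rough sketch of the FaaS billing model described above. Prices and free-tier
# numbers come from the comparison table; the workload is illustrative only.
def monthly_cost(requests, duration_ms, memory_gb,
                 price_per_million, price_per_gb_sec,
                 rounding_ms, free_requests=1_000_000, free_gb_sec=400_000):
    # Duration is rounded up to the provider's billing increment per invocation
    billed_ms = math.ceil(duration_ms / rounding_ms) * rounding_ms
    gb_seconds = requests * (billed_ms / 1000) * memory_gb

    request_cost = max(requests - free_requests, 0) / 1_000_000 * price_per_million
    compute_cost = max(gb_seconds - free_gb_sec, 0) * price_per_gb_sec
    return request_cost + compute_cost

# AWS/Azure: $0.20 per extra million requests, $0.000016 per GB-second, 1 ms rounding
print(monthly_cost(5_000_000, 120, 0.5, 0.20, 0.000016, 1))
# GCP: $0.40 per extra million requests, $0.0000125 per GB-second, 100 ms rounding,
# 2 million free requests (the 120 ms execution is billed as 200 ms)
print(monthly_cost(5_000_000, 120, 0.5, 0.40, 0.0000125, 100, free_requests=2_000_000))
```

In this particular (hypothetical) scenario, the 100 ms rounding nearly doubles the billed compute time per invocation, which illustrates how rounding granularity can matter at scale.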




Feature Comparisons

Language Support

There are more programming languages out there than there are Star Wars sequels, prequels, and spin-offs. And for obvious reasons, every programming language has strengths and weaknesses, so you need to pick the right tool for the job. Cloud providers, for the most part, support the most popular languages in their respective FaaS offerings.

Available Runtimes

The table below shows the currently supported FaaS runtimes for AWS, Azure, and GCP:

| Service | Supported Languages |
| --- | --- |
| AWS Lambda | C#, Go, Java, Node.js, PowerShell, Python, Ruby |
| Azure Functions | C#, F#, Java, Node.js, PowerShell, Python, TypeScript |
| GCP Cloud Functions | C#, F#, Go, Java, Node.js, Python, Ruby, Visual Basic |

Takeaway: Language support is very comparable across the three major providers, the most notable exceptions being that GCP lacks support for PowerShell and Azure lacks support for Go (an interesting observation, considering that Go was developed by Google and PowerShell by Microsoft).

Custom Runtimes

If your language of choice is not listed above, it is still possible to “bring your own runtime”, which allows you to implement a provider’s FaaS in any programming language. See the table below for current support for custom runtimes.

| Provider | Support for Custom Runtimes |
| --- | --- |
| AWS Lambda | Yes, using custom deployment packages or AWS Lambda Layers |
| Azure Functions | Yes, using Azure Functions custom handlers |
| GCP Cloud Functions | Yes, using custom Docker images |

Scalability, Concurrency, and Cold Starts

Early in the days of FaaS, the most common use case was to “kick the tires” or “mock something up” before moving to a more mature solution. Those days are gone, and today it is not uncommon to find global-scale production apps running fully on a FaaS backend.

To achieve this, it is important to understand how cloud providers scale FaaS workloads in response to increased demand, and how they handle concurrent requests.

Scalability

As mentioned earlier, FaaS functions are event-driven, so when an event is received, an instance of the function is spun up and the request is processed. The instance is kept alive to process subsequent events, and if none are received within a certain time frame, it is recycled.

All three providers advertise virtually unlimited and automatic scaling, although there are other factors that might be at play here, which are discussed below.

Concurrency

When a request is being processed by a FaaS instance and another request arrives, a second instance is spun up to handle it (instances process only one request at a time). Concurrency is the number of function instances that can run simultaneously at any given time.

| Service | Concurrency |
| --- | --- |
| AWS Lambda | Standard: 1,000 per region (soft limit); Reserved: varies; Provisioned: varies |
| Azure Functions | No advertised concurrency limit |
| GCP Cloud Functions | No advertised concurrency limit |

AWS is the only provider to offer highly customized concurrency management options, while Azure and GCP are a little vague on how concurrent executions are handled.
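On AWS, for example, a reserved concurrency limit can be applied per function. Here is a minimal sketch using the boto3 SDK; the function name and the limit are placeholders.

```python
import boto3

# Sketch: capping a single Lambda function's concurrency with boto3.
lambda_client = boto3.client("lambda")

# Reserve (and cap) 100 concurrent executions for this function,
# carving them out of the regional concurrency limit
lambda_client.put_function_concurrency(
    FunctionName="my-function",            # placeholder name
    ReservedConcurrentExecutions=100,
)
```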

Cold Starts

Given the relatively young age of FaaS as a technology, it has many detractors and naysayers. Their favorite critique, by far, is the notion of “cold starts”. 

So what is a cold start, anyway?

Imagine this familiar action movie scene: our hero is racing down the highway at 120 mph, probably on his way to save some lives, when he zooms past an unsuspecting highway trooper, on his break, reading a newspaper. By the time the trooper fumbles to start his engine and gets up to highway speed to give chase, our hero is miles away already, and the trooper will need some time to catch up. 

Similarly, a FaaS instance in an inactive state will require some additional time to respond to a request. This initial delay encountered is known as a cold start.

Contrast that with a FaaS instance that is already active when a request arrives: no initial delay is experienced, and the instance can start processing the request almost instantly. Using our example, this would be analogous to a highway trooper already driving down the highway at normal speed when a speeding driver passes him. In that case, the trooper can accelerate and catch up in mere seconds.

While cloud providers do not publish their cold start statistics, the following table shows estimated averages as observed by industry analysts:

| Service | Average Cold Start (in seconds) |
| --- | --- |
| AWS Lambda | < 1 |
| Azure Functions | > 5 |
| GCP Cloud Functions | 0.5 - 2 |

Cold starts can affect the performance of FaaS workloads that are very sensitive to delay, but there are ways to mitigate them, and they appear to affect the Azure platform more than its two competitors. Additionally, AWS now offers "provisioned concurrency" as an approach to eliminating cold starts with Lambda. We discuss this feature in more detail later in the article.
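As a preview, here is a minimal sketch of enabling provisioned concurrency on a Lambda alias with boto3, so that a set number of instances stay initialized and ready. The function name, alias, and instance count are placeholders, and the setting only applies to a published version or alias, not $LATEST.

```python
import boto3

# Sketch: keeping Lambda instances warm with provisioned concurrency.
lambda_client = boto3.client("lambda")

lambda_client.put_provisioned_concurrency_config(
    FunctionName="my-function",          # placeholder name
    Qualifier="prod",                    # alias or version number (placeholder)
    ProvisionedConcurrentExecutions=10,  # instances kept initialized and warm
)
```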




Configuration and Performance

Not all functions are created equal, so different workloads might require different settings to optimize performance.

Memory

Depending on how resource-hungry the code is, memory will have to be adjusted accordingly. If the memory allocated is too low, a function will take longer to execute and could potentially time out, but if the memory is set too high, you might end up over-paying for unused resources.

Cloud providers offer different maximum memory configurations, while CPU power is linearly and automatically configured in proportion to the amount of memory chosen.

| Service | Memory |
| --- | --- |
| AWS Lambda | 128 MB - 10,240 MB |
| Azure Functions | 128 MB - 1,500 MB (Consumption Plan); 128 MB - 14,000 MB (Premium and Dedicated Plans) |
| GCP Cloud Functions | 128 MB - 4,096 MB (in multiples of 128 MB) |

Of note is the maximum memory that can be configured on GCP Cloud Functions: at 4096 MB, that limit is considerably lower than what AWS and Azure offer.

Execution Time

The other configurable aspect of FaaS is the maximum execution time. While most functions in the wild take seconds (or less) to execute, some intensive workloads can potentially take much longer, on the order of minutes, or even hours (for example, intensive machine learning or data analysis workloads).

The table below shows the maximum timeouts that each cloud provider offers:

| Service | Maximum Timeout |
| --- | --- |
| AWS Lambda | 15 minutes |
| Azure Functions | 5 minutes (Consumption Plan); 30 minutes (Premium and Dedicated Plans) |
| GCP Cloud Functions | 9 minutes |

It is important to note that increasing the timeout is not always the solution, and it should be considered in conjunction with adjusting the memory.
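On AWS, for instance, both knobs can be adjusted together on an existing function. Here is a minimal sketch using boto3; the function name and values are illustrative, and CPU is allocated automatically in proportion to the memory chosen.

```python
import boto3

# Sketch: tuning memory and timeout together on an existing Lambda function.
lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="my-function",  # placeholder name
    MemorySize=1024,             # MB; more memory also means proportionally more CPU
    Timeout=60,                  # seconds; Lambda's maximum is 900 (15 minutes)
)
```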




Orchestration

As discussed earlier, functions deployed on a FaaS platform are, by nature, stateless. In other words, functions are not aware of other functions or of the execution results of other functions.

Even invocations of the same function are completely independent of each other. This stateless paradigm is what makes FaaS so scalable and easy to provision.

While the stateless approach is excellent for executing a large number of short-lived and single-purpose functions (for example, a contact form on a website), it makes it difficult to build any meaningful complex applications that often require some sort of state management. Realizing this, cloud providers built orchestration services that integrate with these functions as “steps” in a workflow, where the output of one step can be passed as input to another step. This enabled building fairly complex workflows in a completely serverless approach!

The following table lists what each provider offers for orchestration services. Those services are often very scalable as well and have many features that are beyond the scope of this article:

| Provider | Orchestration Service |
| --- | --- |
| AWS | AWS Step Functions |
| Azure | Azure Durable Functions |
| GCP | GCP Workflows |
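As an illustration of the "steps in a workflow" idea, here is a minimal sketch of an AWS Step Functions state machine that chains two Lambda functions, where the output of the first state becomes the input of the second. The function ARNs, role ARN, and state names are placeholders.

```python
import json
import boto3

# Sketch: a two-step workflow in Amazon States Language, created with boto3.
definition = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",  # placeholder
            "Next": "ChargeCustomer",   # output of this state is passed as input to the next
        },
        "ChargeCustomer": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge",    # placeholder
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="order-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsRole",  # placeholder role
)
```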

HTTP Integration

The power of FaaS services lies in the fact that they are event-driven, meaning certain “interesting events” can trigger an execution of the function. 

“What are interesting events?” you might ask.

Anything from a simple cron schedule (example: run this function every day at midnight) to other services within the cloud provider’s ecosystem (for example, run this function when a file is uploaded to cloud storage). But one of the most popular scenarios is integrating FaaS with an HTTP endpoint. 

A static web frontend (i.e. HTML/CSS/JavaScript) with an integrated FaaS backend is a very common and popular architectural pattern for building serverless web apps. 

While all three providers support HTTP integration, AWS requires provisioning and configuring a separate resource, API Gateway, which is billed separately as well. Azure and GCP have a much more streamlined HTTP integration.

| Service | HTTP Integration Support |
| --- | --- |
| AWS Lambda | Yes, requires API Gateway (billed separately) |
| Azure Functions | Yes, out of the box |
| GCP Cloud Functions | Yes, out of the box |
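For a sense of what an HTTP-triggered function looks like, here is a minimal sketch of a Lambda handler sitting behind an API Gateway proxy integration. The statusCode/headers/body response shape is what the proxy integration expects; the route and payload details are illustrative, and Azure and GCP expose a similar request/response model through their own HTTP bindings.

```python
import json

# Sketch: an HTTP-triggered function in the Lambda proxy-integration style.
def handler(event, context):
    # Query string parameters arrive on the (assumed) API Gateway event
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```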

Hosting Plans

Given the varying availability and latency requirements discussed earlier, cloud providers responded by baking different hosting tiers into their FaaS offerings.

AWS Lambda historically came as one basic hosting plan, but more recently AWS started offering Provisioned Concurrency, which ensures that functions are initialized and ready to respond to events, cutting down the dreaded cold-start time to mere milliseconds. Azure offers a more complex variety of hosting options, while GCP just offers a one-size-fits-all plan.

| Service | Hosting Plans |
| --- | --- |
| AWS Lambda | General; Provisioned Concurrency |
| Azure Functions | Consumption; Premium; Dedicated |
| GCP Cloud Functions | General |

The Bottom Line

When it comes to comparing the FaaS offerings of the three major cloud providers, one thing is very obvious: they are extremely similar and comparable, both in terms of features and cost. 

While AWS Lambda is the most mature and most popular of the three, Azure Functions offers very similar features and, in some ways, more options to accommodate edge cases. GCP Cloud Functions has fewer bells and whistles but is still fairly comparable to the other two.

The devil, as they say, is in the details. Most likely, there are other factors to consider when comparing these three FaaS services. But whatever the case is, the key takeaway here is that FaaS is here to stay, and it is a truly transformative technology that will only continue to gain adoption, so if you have not already, make sure that you are leveraging it!

