
Packaging AWS Lambda functions as container images

James Beswick

Container image support for AWS Lambda was announced during AWS re:Invent 2020. This is a major new addition to the AWS functions-as-a-service offering. Lambda provides many benefits to developers in managing scaling, high availability and fault tolerance, and also enabling a pay-per-value model. By supporting container packaging for functions, Lambda is now an option for a broader audience of developers. 

In this post, I explain what this new functionality offers and walk through a tutorial showing how to build a container image and run it as a Lambda function.

Why did AWS add support for container packaging?

Before this change, the Lambda deployment package was a zip file containing your code and any libraries and dependencies. You could upload this file manually or use automation tools like the AWS Serverless Application Model (AWS SAM), the AWS CDK, or the Serverless Framework.

However, many customers have existing investments in container-based deployment tools and workflows. These include Docker in addition to CI/CD, security, and governance tools. With this change, developers can benefit from a uniform development and deployment process.

The benefits of using container packaging for functions

Lambda treats container-based functions as immutable entities. When invoked, functions deployed as container images are run as-is. This means that the deployment package is immutable across your different environments, including desktop, the CI/CD process, and the Lambda execution environment.

For developers with larger workloads, a container-based deployment package can now be up to 10 GB in size. This unlocks many new workload possibilities, especially for data-heavy or dependency-heavy applications. For machine learning or data analytics, this allows developers to take advantage of the benefits of serverless compute. If you use PyTorch, NumPy, or similar libraries, the previous 250 MB deployment package limit prevented many workloads from using Lambda.

This new approach also offers increased portability across different AWS compute options. You choose a preferred base image for your code, which makes it simpler to move between services like AWS Fargate or Amazon EC2.

How it works

With container image support, Lambda supports the Docker image manifest schema and the Open Container Initiative (OCI) specification (version 1.0 onwards). It supports container image deployments from the Amazon Elastic Container Registry (ECR).

The Lambda service provides a variety of base image options with pre-installed runtimes. These base images are patched and maintained by AWS. Currently, the runtimes supported include dotnetcore2.1, dotnetcore3.1, go1.x, java8, java8.al2, java11, nodejs12.x, nodejs10.x, python3.8, python3.7, python3.6, python2.7, ruby2.5, ruby2.7, provided.al2, and provided. Developers can also provide their own Linux-based images.

One interesting new component here is the runtime interface client (RIC). RICs are wrappers that integrate your function code with the Lambda API at runtime. They are preinstalled on the AWS-provided base images. For images you build yourself, you must ensure that a RIC is present; an open source version is available for custom base images.
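For illustration, a custom image for the Node.js function in this post might install the open source RIC from npm. This is only a sketch under stated assumptions, not AWS's published recipe: the base image choice is arbitrary, and the aws-lambda-ric package compiles native code during installation, so additional build tools may be needed.

```dockerfile
# Hypothetical custom base image (not an AWS-provided one)
FROM node:12-buster

# Install the open source runtime interface client for Node.js
# (may require build tools such as g++, make, and cmake)
RUN npm install -g aws-lambda-ric

WORKDIR /function
COPY app.js package*.json ./
RUN npm install

# Start the RIC and point it at the Lambda handler
ENTRYPOINT ["aws-lambda-ric"]
CMD ["app.lambdaHandler"]
```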

There is also a runtime interface emulator (RIE) that enables you to test function code locally. It’s useful in development and other pre-production environments for testing the function by sending HTTP requests. The emulator listens for HTTP requests on a port inside the container, then wraps those requests and passes them to the function as events. This is also an open source project.
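For example, assuming an image tagged get-customer:latest built from one of the AWS base images (which bundle the emulator), a local test could look like this:

```shell
# Run the container locally; the emulator listens on port 8080 inside it
docker run -p 9000:8080 get-customer:latest

# In a second terminal, post a test event to the emulator's invocation endpoint
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
```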

The Lambda execution environment provides a read-only file system at runtime. To write files, you have access to 512 MB of /tmp storage space. The default user is the only supported user, which enables Lambda to enforce least privilege permissions during invocation.

Many things don’t change

The ability to package Lambda functions as container images brings new capabilities but many things don’t change. If you are an existing Lambda developer who is happy with your current method of building and deploying applications, you don’t need to change anything. If you want to use this new type of packaging, you can rely on many things continuing to work as they did before.

The resource and operational models for Lambda are exactly the same. This means that automatic scaling characteristics, high availability across multiple Availability Zones, and the security and isolation models are identical.

Performance profiles also still depend upon image size, runtime choice, and dependencies in your function. Many of the optimization tips I discuss in this YouTube video still work for container-based Lambda functions. Lambda pricing is also the same, regardless of which packaging method you use.

Major Lambda features are still available to container-based functions. You can continue to use Provisioned Concurrency, extensions, Amazon Elastic File System (EFS), and X-Ray integration. You can also provide these functions access to your VPC as before, use reserved concurrency, or route success and failure handling to Lambda destinations. 

How to package a Lambda function as a container image

To show how this works in practice, this walkthrough uses a Linux-based environment with the AWS CLI, Node.js, and Docker already installed.

1. Create an application directory, set up npm, install the Faker.js package for generating test data:

mkdir getCustomerFunction
cd getCustomerFunction
npm init -y
npm i faker --save

2. Create a file called app.js and paste the following code. This is the same Lambda handler code you would use in a regular zip-file deployment:

const faker = require('faker')
module.exports.lambdaHandler = async (event, context) => {
    return faker.helpers.createCard()
}

3. Create a file called Dockerfile and paste the following code. This instructs Docker how to build the container, installs any necessary packages, and shows where the Lambda handler is available.

FROM public.ecr.aws/lambda/nodejs:12
COPY app.js package*.json ./
RUN npm install
CMD [ "app.lambdaHandler" ]

After these steps, the project directory contains app.js, package.json, and the Dockerfile.

4. Use Docker to build an image using this function code:

docker build -t get-customer .

5. Create a new repository in ECR and push the Docker image to the repo. Replace <accountID> with your AWS account ID and <region> with your preferred AWS Region:

aws ecr create-repository --repository-name get-customer --image-scanning-configuration scanOnPush=true

docker tag get-customer:latest <accountID>.dkr.ecr.<region>.amazonaws.com/get-customer:latest

aws ecr get-login-password | docker login --username AWS --password-stdin <accountID>.dkr.ecr.<region>.amazonaws.com

docker push <accountID>.dkr.ecr.<region>.amazonaws.com/get-customer:latest

Invoking container image as a Lambda function

Once the image is pushed to ECR, you can use it in a new Lambda function. In the Lambda console, choose Create function, and then select the new container image in the Basic information panel. Choose Create function to finish the process.

On the next page, a notification appears when the function is successfully created from the container image. You can test this function in the same way as any regular Lambda function. After choosing Test, you see the random test data returned by the function code.
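You can also invoke the function from the AWS CLI. Assuming the function was named get-customer during creation (the name is an assumption; use whatever name you chose):

```shell
# Invoke the function and write the response payload to a file
aws lambda invoke --function-name get-customer response.json

# Inspect the random test data returned by the handler
cat response.json
```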

In the Lambda console, you can set the timeout (1–900 seconds) and the memory allocation (128 MB to 10,240 MB). The 10 GB limit is a new feature, raising the previous memory maximum of 3 GB.

Using AWS SAM to automate the process

AWS SAM can automate the build and deployment of container-based Lambda functions. To do this, you need the AWS SAM CLI installed. You also need the ECR repo URI: to find this, navigate to the ECR console and copy the URI from the get-customer repo.

This walkthrough deploys exactly the same function, using the ECR repo created earlier. First, you use AWS SAM to initialize a project, which generates a sample function and Dockerfile. Next, you use the build and deploy commands to build the image, push it to the ECR repo, and create the Lambda function.

From a terminal:

1. Enter sam init to start the AWS SAM wizard.

2. Choose ‘1 – AWS Quick Start Templates’.

3. You have a choice of zip or image deployment. Choose ‘2 – Image’.

4. You can select your preferred runtime base image. In this case, choose ‘1 – amazon/nodejs12.x-base’.

5. For Project name, enter ‘my-sam-function’. This creates a sample project complete with an AWS SAM template, README file, and unit tests.
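The generated template declares the function with image packaging. A trimmed sketch of what it typically looks like is below; the resource name, API event, and DockerTag value come from the wizard defaults and may differ in your generated project:

```yaml
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      PackageType: Image        # image-based deployment instead of a zip file
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello
            Method: get
    Metadata:
      Dockerfile: Dockerfile
      DockerContext: ./hello-world
      DockerTag: nodejs12.x-v1
```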

6. Navigate into the hello-world function directory in the my-sam-function project, and open the app.js. Paste the following function code and save the changes:

const faker = require('faker')
module.exports.lambdaHandler = async (event, context) => {
    return faker.helpers.createCard()
}

7. With the terminal in the my-sam-function project directory, build the project:

sam build

8. Deploy the container-based function using the guided mode of AWS SAM deployment:

sam deploy --guided

For Stack Name, enter ‘my-sam-project’, enter a preferred Region, then enter the ECR repository URI copied from earlier.

After the deployment is complete, the new function appears in the Lambda console. You can invoke it by using the Test option used earlier.

Conclusion

With the new container image support for Lambda, you can use Docker to package your custom code and dependencies for Lambda functions. The 10 GB deployment package limit makes it possible to deploy larger workloads that do not fit into the existing 250 MB quota for zip files. 

In this post, I show how to build a Docker image and deploy the image in the Lambda service. I also show how to use AWS SAM to simplify the generation of a boilerplate project, build the image, and deploy the function.

For more tips and tricks to help you get the most from your Lambda-based applications, visit Serverless Land.
