Containers and serverless offer many of the same benefits. But what are the differences between containers and serverless? In this post, we’ll talk about the pros, cons, and uses of each.
Serverless was nothing new when AWS first introduced Lambda in 2014, but in the years since, it has grown exponentially. Today, you may find yourself curious about whether a move to serverless would be the right one for you, your company, and your applications. And while this heavily depends on how your applications are architected, we can make some generalizations based on setup.
Let’s take a high-level look at some of the benefits and concerns when choosing between staying on containers or moving to serverless.
Containers vs Serverless
If we don’t look too deeply, containers and serverless seem to share many of the same benefits: our code is reusable and reproducible, and, assuming we’re using some kind of container orchestration service alongside our containers, highly scalable.
In fact, if we were to get the opportunity to look at the servers that host our serverless code (because there is a server under there, even if we can’t access it), we would discover our code is running in a container. But that doesn’t mean containers and serverless are one and the same.
When we’re developing a serverless application, the only thing we need to supply to our serverless platform is our code, generally through some kind of version control repository. This eliminates any up-front setup to actually launch your code; we don’t even need to pick a container image! That said, any integrations outside of our code will need to find a new home, which, if you’re running a containerized monolith, really just means serverless will force you to drop some bad habits (like keeping that PHP monolith around).
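To make "just supply the code" concrete, here’s a minimal sketch of what a function might look like on a Lambda-style platform. The handler name and event shape are illustrative assumptions, not a prescription from any specific provider:

```python
import json

# A minimal AWS Lambda-style handler: this one function is essentially all
# we deploy. No container image to choose, no server to provision.
def lambda_handler(event, context):
    # `event` carries the trigger payload; here we assume a simple
    # request containing a "name" field (illustrative only).
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Locally, you can exercise the same function by calling it directly, e.g. `lambda_handler({"name": "dev"}, None)`, which is also how most serverless frameworks structure their unit tests.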
Serverless applications only run when triggered. While we can set things up so the parts of our application that need to remain “warm” do so, there’s no need for other, less commonly used functions to stick around when not needed. (Compare this to containers, which must be left running for our code to work at all.) This means we’re not paying for capacity we’re not using, which saves money.
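A rough back-of-the-envelope comparison shows why pay-per-invocation matters. All of the rates below are made-up round numbers for illustration, not quotes from any provider:

```python
# Illustrative cost sketch: an always-on container vs. a pay-per-invocation
# function. Every price here is an assumed placeholder, not a real rate.
HOURS_PER_MONTH = 730

def container_cost(hourly_rate=0.04):
    # A container (or its host) bills for every hour it's running,
    # whether or not it serves any traffic.
    return hourly_rate * HOURS_PER_MONTH

def serverless_cost(invocations, price_per_million=0.20,
                    gb_seconds_per_call=0.1, price_per_gb_second=0.0000167):
    # A function bills only per invocation plus the compute actually used.
    request_charge = (invocations / 1_000_000) * price_per_million
    compute_charge = invocations * gb_seconds_per_call * price_per_gb_second
    return request_charge + compute_charge

if __name__ == "__main__":
    print(f"Container, idle or busy: ${container_cost():.2f}/month")
    print(f"Serverless, 1M calls:    ${serverless_cost(1_000_000):.2f}/month")
```

Under these assumed rates, even a million invocations a month comes in well under the cost of leaving one small container running continuously; the gap grows the idler the workload is.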
We noted that both serverless and containers can scale, but how they scale makes a difference. With serverless, we scale at the function level instead of the application or service level, so the components we’re scaling are smaller and more modular, which keeps scaling costs down.
When working with serverless, we don’t have access to the underlying infrastructure — container, server, or otherwise. This means how we troubleshoot, debug, view logs, and generally interact with our application as developers or DevOps engineers will change, leaving us entirely at the mercy of our chosen serverless platform. This may be a good thing, but for many it will mean giving up valuable tools for working with our apps.
When working with serverless, we’re limited in how we architect our application. Most function-as-a-service platforms are built around the idea of event-driven architectures, with limited flexibility for other options. We’re also restricted by whatever limits our platform imposes on us, such as function length, code upload size, and runtime.
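The event-driven constraint shapes how code is organized: every entry point becomes a handler reacting to some event source, rather than a long-lived server loop. A sketch of that structure (the event shapes here are simplified assumptions for illustration):

```python
# Sketch of event-driven structure: each invocation reacts to one event.
# Event shapes below are simplified assumptions, not a real platform schema.
def handle_event(event):
    source = event.get("source")
    if source == "http":
        # e.g. an HTTP request routed through an API gateway
        return {"status": 200, "body": f"handled {event['path']}"}
    elif source == "queue":
        # e.g. a message arriving on a queue triggers this branch
        return {"status": "processed", "message_id": event["id"]}
    else:
        # Platforms impose hard limits (timeout, payload size, runtime), so
        # unknown or unsupported events should fail fast rather than hang.
        raise ValueError(f"unsupported event source: {source!r}")
```

Long-running background work, in-process cron loops, or anything that doesn’t map cleanly onto “event in, response out” has to be restructured or moved to a managed service.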
Finally, for those with intense security requirements that may involve things like data being held in certain regions or on-prem, containers may well still be the best choice. That said, remember that most cloud platforms follow stringent security regulations, and may have better security than smaller companies can manage. Ultimately, it depends on what standards you need to meet, but you may still find that serverless is prohibited in some instances.
At a Glance: Containers vs Serverless
| | Containers | Serverless |
|---|---|---|
| Setup | Container infrastructure must be set up; choose container image, size, etc. | Just provide code; you will also most likely leverage managed services for integrations |
| Architecture | Containers can support any kind of code architecture — even your PHP monoliths | Serverless platforms rely on event-driven architectures |
| Scaling | Scalable at the container level | Scalable at the function level |
| Cost | Pay for as long as the container runs | Pay only when the function runs; functions only run when called |
| Access | Access to the underlying container (and often infrastructure) aids in troubleshooting and debugging | We only have access to our code and other associated services we may be using; new troubleshooting and debugging techniques must be learned |
| Flexibility | Architect however you want, run functions as long as you want, and use your own runtimes | Restricted to the limitations of the platform, including function length, upload size, and runtime |
| Security | Full control of your own security choices — for better or for worse | Backed by the security practices of your chosen cloud platform |
While serverless is undoubtedly exciting and isn’t going anywhere anytime soon, whether it’s the right choice is ultimately unique to each application and company. Serverless can certainly offer cost savings and minimal overhead, but in exchange you give up the underlying access many developers, engineers, and administrators have relied on, along with the flexibility that containers offer.
To learn more about serverless and containers, check out our course Changing Architectures from Containers to Serverless. In it, we’ll explore the why, if, and how to move from containers to serverless.