
Cloud Container Services Compared – AWS vs Azure vs GCP

Janani Ravi

This installment of our Cloud Provider Comparisons series focuses on containers. We’ll take a look at the tools each of the main cloud platforms (AWS, Azure and GCP) offer for deploying and running containerized applications. And we’ll assess the advantages and disadvantages of each so you can determine what’s best for your particular application. We’ll also cover container registries, standalone container limitations, Kubernetes, PaaS, serverless containers, and hybrid multi-cloud offerings.

Let’s dive in!



What is a container?

The word ‘container’ might make you immediately think ‘shipping containers’, and that’s actually not a bad analogy. A container brings together everything you need to run your application in a single lightweight package.

Unlike old-school virtual machines, containers don't each carry a full guest operating system. Instead, they share the host's kernel and run on top of a container runtime (such as containerd), the software responsible for starting, stopping, and isolating containers. Because everything the app needs ships inside the image, your apps become highly portable, and that portability greatly speeds up development and release cycles.
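To make this concrete, a container image is usually described by a Dockerfile. This is a hypothetical sketch that packages a small Python web app (the `app.py` and `requirements.txt` files are placeholders, not from a real project):

```dockerfile
# Hypothetical example: package a small Python web app and its dependencies
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY . .

# The command the container runs on startup
CMD ["python", "app.py"]
```

Building this file produces an image that runs identically on a laptop or on any of the three clouds, which is the portability benefit described above.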




What are container registries?

Azure, AWS, and GCP all support container registries. First, you build a container image, which is a file containing everything needed to create a container. Then, you push this image to a container registry, which lets you secure access to your container images, manage different versions of images, and more.

  • On AWS, it’s called Amazon ECR (Elastic Container Registry).
  • On Azure, it’s Azure Container Registry (ACR).
  • In GCP, given that teams need to manage more than just containers, you’ll find a next-gen registry called Artifact Registry. It stores not only container images, but also language packages like Maven and npm, and operating system packages like Debian.
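As a sketch, pushing a locally built image to the AWS and GCP registries looks roughly like this (the account ID, region, project, and repository names are all placeholders):

```shell
# Build the image locally (myapp is a placeholder name)
docker build -t myapp:latest .

# AWS: authenticate Docker to ECR, then tag and push
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag myapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest

# GCP: configure Docker for Artifact Registry, then tag and push
gcloud auth configure-docker us-central1-docker.pkg.dev
docker tag myapp:latest us-central1-docker.pkg.dev/my-project/my-repo/myapp:latest
docker push us-central1-docker.pkg.dev/my-project/my-repo/myapp:latest
```

The pattern is the same on every platform: authenticate Docker against the registry, tag the image with the registry's hostname, and push.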

What are the limitations of standalone containers?

On all three cloud platforms, you can deploy your container images directly on virtual machine instances. This is known as Infrastructure-as-a-Service deployment. The downside is that this carries very high administrative overhead and forces you to deal with individual standalone containers, which is not ideal.

Standalone containers are not able to provide replication, auto-healing, auto-scaling, or load balancing, which are all must-haves in modern applications.

These drawbacks highlight exactly why you need a container orchestrator like Kubernetes, which automates deployment, replication, scaling, and load balancing across container clusters.
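A minimal hypothetical Kubernetes Deployment manifest shows how declarative this is: the `replicas` field drives replication, and because a Deployment is a declared desired state, Kubernetes automatically replaces (auto-heals) any pod that dies. The image name and port below are placeholders:

```yaml
# Hypothetical Deployment: image name and port are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3            # Kubernetes keeps three copies running at all times
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:latest
          ports:
            - containerPort: 8080
```

A Service placed in front of these pods then provides the load balancing, and a HorizontalPodAutoscaler can add or remove replicas as load changes.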


All about Kubernetes

All three cloud platforms offer their own managed Kubernetes offerings, so if most of your applications run on one of the cloud platforms, stick to running Kubernetes on that same platform. This will offer you great integrations with the other services you’re already using on that particular cloud platform. 

The pros and cons of Kubernetes on each cloud platform

In terms of naming conventions, Azure’s version of managed Kubernetes is called Azure Kubernetes Service (AKS). AWS calls theirs Elastic Kubernetes Service (EKS), while GCP – the birthplace of Kubernetes – has Google Kubernetes Engine (GKE).

Each of the cloud providers’ managed Kubernetes service offers its own distinct advantage.

  • Amazon’s EKS is the most widely used.
  • Azure’s AKS is arguably the most cost-effective option.
  • And then there’s Google’s GKE, which has the most features and automated capabilities of the three providers.

Upgrades

AKS and GKE are more automated than EKS: they will automatically apply security patches to the control plane and upgrade the nodes that make up your Kubernetes cluster. Upgrades to components and node health repair on EKS require some manual steps.
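For example, triggering a control-plane upgrade is a one-line operation on each platform (the cluster names, resource group, and versions below are placeholders; note that on EKS, worker node groups must then be updated as a separate step):

```shell
# Azure AKS: upgrade control plane and nodes together
az aks upgrade --resource-group my-rg --name my-cluster --kubernetes-version 1.29.0

# Google GKE: upgrade the cluster control plane
gcloud container clusters upgrade my-cluster --master --cluster-version 1.29

# AWS EKS: upgrades the control plane only; node groups are a separate step
aws eks update-cluster-version --name my-cluster --kubernetes-version 1.29
```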

Cluster Nodes

AKS, EKS, and GKE all support virtual machine nodes with GPUs enabled, but only EKS also supports bare metal machines as your cluster nodes. 

Command line support

AKS and GKE have complete command line support, but EKS’ command line support is much more limited.

Service mesh

EKS and GKE both offer an integrated service mesh that works with Kubernetes (App Mesh on EKS, Istio on GKE), but AKS does not yet offer an integrated service mesh for working with microservices.

Maximum cluster size

AKS can support 500 nodes in a Kubernetes cluster, EKS can support 100, and GKE can support up to 5,000.


For a continued in-depth comparison of each managed Kubernetes service, check out this post by our very own Training Architect Alexander Potasnick.


Turnkey Kubernetes

If you’re looking for a more turnkey Kubernetes solution, Red Hat OpenShift is a Platform-as-a-Service offering built with Kubernetes at its core. Beyond its build and deployment tools, it also offers IDEs, runtimes, CI/CD services, a service mesh, and more. Red Hat OpenShift is available as a service on both AWS and Azure, but not on GCP.




Serverless containers

What if you want to be able to deploy and run containerized applications without managing infrastructure and creating clusters?

Well, you can do this using serverless containers. 

  • Microsoft was the first in the industry to offer serverless containers in the public cloud via Azure Container Instances, which run containers without using a Kubernetes cluster.
  • On GCP, you can run containerized workloads in a serverless manner using Cloud Run, without needing an underlying Kubernetes cluster.
  • AWS Fargate is AWS’s serverless offering, removing the overhead of scaling, patching, and managing the servers behind your container clusters. It differs from Azure Container Instances and Cloud Run in an important way: Fargate abstracts away the node management of an orchestrated cluster (either EKS or ECS), and it is not used without an underlying cluster.
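As a sketch, deploying the same container image serverlessly on each platform looks roughly like this (all resource names, images, and IDs are placeholders):

```shell
# Azure Container Instances: run a single container, no cluster involved
az container create --resource-group my-rg --name myapp \
  --image myregistry.azurecr.io/myapp:latest --ports 80

# GCP Cloud Run: deploy a container as an autoscaling HTTPS service
gcloud run deploy myapp \
  --image us-central1-docker.pkg.dev/my-project/my-repo/myapp:latest \
  --region us-central1

# AWS Fargate: not standalone; it is a launch type for an ECS (or EKS) service
aws ecs create-service --cluster my-cluster --service-name myapp \
  --task-definition myapp-task --desired-count 2 --launch-type FARGATE \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123],securityGroups=[sg-0123],assignPublicIp=ENABLED}'
```

Note how the Fargate command still references a cluster and a task definition, illustrating the distinction drawn above: ACI and Cloud Run run containers with no cluster at all, while Fargate only removes server management from an existing one.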

Hybrid multi-cloud offerings

Since Kubernetes can be deployed in on-premises data centers as well as on all the cloud platforms, it offers a middle ground between the IaaS and PaaS options, letting you work effectively in today’s hybrid, multi-cloud world.

Each cloud platform has its own offering to support hybrid, multi-cloud environments. 

  • On Azure, there’s Azure Arc, which goes beyond managing hybrid Kubernetes deployments. Azure Arc allows you to manage servers, Kubernetes clusters, Azure data services, and SQL servers hosted on resources outside the Azure platform.
  • Amazon EKS Anywhere is a deployment option that allows customers to create and operate Kubernetes clusters on their own infrastructure, supported by AWS.
  • GCP offers Anthos, which is built around a Kubernetes core where you run GKE on both your cloud machines and machines at your on-premises data center. Google then uses a single control plane to manage your applications consistently in this hybrid environment.

We can’t contain our excitement about our awesome range of Docker and Kubernetes courses. You can also subscribe to A Cloud Guru on YouTube for weekly cloud news, like us on Facebook, follow us on Twitter, and join the conversation on Discord.
