If part of your job involves developing or deploying applications, chances are you’ve played with Docker containers.
You may have even had an “aha!” moment when you realized that containers are a great way to deploy and update apps, and they solve a lot of historical problems (config management systems, I’m looking at you!).
But you may also be worried about the next step: actually using containers in production. After all, that usually means running Kubernetes, and online sysadmin communities may be telling you:
- Kubernetes is too hard!
- Kubernetes is too complex!
- No one runs containers in production anyway!
Of course, these statements are all completely false.
Kubernetes has become the industry leader for running containers at scale, and is relied upon by huge organizations like Box, Ocado, The New York Times, Spotify, Squarespace, and many more. This is why demand for Kubernetes skills — and salaries for roles that require Kubernetes — continue to soar.
But how do you break into Kubernetes? Isn’t the learning curve for building your own K8s cluster pretty steep?
Thankfully, there’s a great way to start learning Kubernetes without having to ingest hundreds of pages of documentation just to get a single working node up and running.
Google Kubernetes Engine provides fully managed Kubernetes clusters with just a few clicks. Let’s count the clicks to deploy NGINX on GKE!
You’ll need a GCP project to follow along with these instructions. From the GCP Console, just go to Kubernetes Engine and click Create Cluster.
GKE pre-populates all sorts of configuration information to give you a reliable, automatically updated, 3-node Kubernetes cluster. But if you’re just starting out and want to keep costs down, you can also choose the My First Cluster template.
Once you’ve clicked Create, GKE provisions the Kubernetes control plane behind the scenes, which should only take a few minutes. The control plane holds the components that run the cluster, such as the scheduler and various controllers. But you never have to worry about these, as they are completely managed for you.
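If you prefer the command line, the same kind of cluster can be created with the gcloud CLI. This is just a sketch; the cluster name and zone below are placeholders, not values from the console walkthrough:

```
# Create a 3-node GKE cluster. "my-first-cluster" and the zone are placeholders.
gcloud container clusters create my-first-cluster \
  --zone us-central1-a \
  --num-nodes 3
```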
Now that your cluster is up and running, deploying applications is again just a matter of a few clicks. Just select Deploy from the cluster information page.
GKE prompts us to choose a container to deploy. You can use the suggested demo container, which deploys the latest version of the NGINX web server. Then click Continue. On the next page you can specify some optional configuration. Take a look at this section:
One of the simplest but most powerful concepts in Kubernetes is the notion of declarative configuration. Everything that runs on your cluster is created from a declarative statement in a YAML file. Click the View YAML button to take a look at what GKE is creating for you.
It may seem confusing at first, but these YAML files are a convenient way to declare what applications and services you want to run on your cluster. One of the most important jobs Kubernetes has is to make sure that the state defined in our configuration files remains true. So when we click Deploy, Kubernetes will create these objects to make sure that this is the case.
GKE has now created 2 things for us:
- A Deployment — This is a configuration object that lets us manage revisions of deployed applications. It defines a template for our containers, including the container image and version, and states that we’d like 3 copies of this container (3 replicas) running.
- A HorizontalPodAutoscaler — This is a clever little object that scales the number of running replicas up and down based on demand. Here it will run a minimum of 1 replica and a maximum of 5.
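The YAML behind these two objects looks roughly like the sketch below. The names, labels, and CPU target here are illustrative placeholders, not GKE’s exact generated output:

```yaml
# Illustrative Deployment: 3 replicas of the NGINX demo container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1            # placeholder name
spec:
  replicas: 3              # we'd like 3 copies of this container running
  selector:
    matchLabels:
      app: nginx-1
  template:
    metadata:
      labels:
        app: nginx-1
    spec:
      containers:
      - name: nginx
        image: nginx:latest   # the latest NGINX web server image
---
# Illustrative HorizontalPodAutoscaler: scale between 1 and 5 replicas.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-1-hpa        # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-1
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80   # example scaling trigger
```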
GKE shows us an overview of the deployment:
Notice the 3 managed pods? Those are our replicas. Using Pods instead of individual containers allows us to optionally pair up containers to share an IP address and other resources.
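To make the Pod idea concrete, here’s a minimal sketch of a Pod pairing two containers. The names and the sidecar are hypothetical, not something GKE created for us; the point is that both containers share the Pod’s IP address and can talk over localhost:

```yaml
# Hypothetical example: two containers packaged in one Pod.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar   # placeholder name
spec:
  containers:
  - name: nginx
    image: nginx:latest
  - name: log-agent        # hypothetical sidecar sharing the Pod's IP
    image: busybox:latest
    command: ["sh", "-c", "sleep infinity"]
```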
So right now our NGINX Pods are running, but we can’t access them. Thankfully, GKE is prompting us to create a new object — a Service — by clicking Expose at the top of this page. Service objects route traffic into our cluster to groups of Pods. It’s like built-in traffic direction and load balancing!
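A Service that exposes our Pods externally looks roughly like this sketch. The name is a placeholder, and Expose generates its own labels, but the shape is the same:

```yaml
# Illustrative Service: routes external traffic to our NGINX Pods.
apiVersion: v1
kind: Service
metadata:
  name: nginx-1-service    # placeholder name
spec:
  type: LoadBalancer       # provisions an external load balancer with a public IP
  selector:
    app: nginx-1           # routes to Pods carrying this label
  ports:
  - port: 80               # port exposed by the load balancer
    targetPort: 80         # port the NGINX containers listen on
```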
Now we just click Expose again, and let GKE work its magic. When the service is set up, you can click the External Endpoints link to view your NGINX service.
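Since the Service is a regular Kubernetes object, you could also look up that endpoint from the command line, assuming kubectl is configured for your cluster (the service name below is a placeholder):

```
# Placeholder service name; the EXTERNAL-IP column shows the public endpoint.
kubectl get service nginx-1-service
```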
In just 8 clicks we’ve created an auto-scaling NGINX web server on Kubernetes, exposed to the Internet with a global load balancer. 9 clicks if you include clicking on the URL at the end!
Turns out, it’s really not that hard or complex. Who knew sysadmins could be so snarky? 😉
Want to learn GKE Basics?
Learning the basics of Kubernetes through GKE lets you understand the core Kubernetes building blocks without being overwhelmed by cluster management right out of the gate.
In this short course, we’ll explore Pods and other Kubernetes objects like Deployments and Services. We’ll learn how to combine these to reliably deploy applications and maintain uptime during updates and changes. And we’ll get real experience deploying applications to GKE with hands-on labs.
I’ve been working with infrastructure for over 20 years, and I’ve been embracing containers and Kubernetes for nearly 5 years. So I’m excited to help you overcome your fear of Kubernetes and enjoy the benefits of running containers in production!