
Autoscaling in Kubernetes

One of the most powerful features of orchestration tools such as Kubernetes is the ability to automatically scale resource allocation in response to real-time changes in resource usage. In the context of continuous deployment, this provides a great deal of stability with less need for human intervention. In this lesson, you will learn the basics of autoscaling in Kubernetes by creating a simple Horizontal Pod Autoscaler that creates and destroys pod replicas in response to CPU utilization.
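For a quick sense of what a Horizontal Pod Autoscaler does before diving into the declarative YAML used in this lab, note that an equivalent autoscaler can also be created imperatively. A minimal sketch, assuming a deployment named train-schedule-deployment already exists in the cluster:

    # Keep average CPU utilization near 50%, scaling between 1 and 4 replicas.
    kubectl autoscale deployment train-schedule-deployment --cpu-percent=50 --min=1 --max=4

    # Inspect the autoscaler's current state and targets.
    kubectl get hpa train-schedule-deployment

The lab itself defines the autoscaler declaratively in train-schedule-kube.yml, which is a better fit for a CD pipeline since the configuration lives in version control.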


Path Info

Level: Intermediate
Duration: 1h 30m
Published: Jul 08, 2018


Table of Contents

  1. Challenge

    Install the Kubernetes metrics API in the cluster.

    To accomplish this, do the following:

    • Clone the Kubernetes metrics repo.
    git clone https://github.com/kubernetes-incubator/metrics-server.git
    
    • Apply the standard configurations to install the Metrics API.
    cd metrics-server/
    git checkout ed0663b3b4ddbfab5afea166dfd68c677930d22e
    kubectl create -f deploy/1.8+/
    
    • Wait a few seconds for the metrics server pods to start. You can check their status with kubectl get pods -n kube-system; a quick way to verify that the Metrics API is serving data is shown below.
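
    Once the pods are running, you can confirm that the Metrics API is actually returning data. A quick check (it may take a minute or two for the first samples to be collected):

    # Prints CPU and memory usage per node once the Metrics API is up.
    kubectl top nodes

    # The metrics server itself should appear here with a Running status.
    kubectl top pods -n kube-system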
  2. Challenge

    Configure a Horizontal Pod Autoscaler to autoscale the train schedule app.

    Check the example-solution branch of the source code repo for an example of the code changes needed in the train-schedule-kube.yml file: https://github.com/linuxacademy/cicd-pipeline-train-schedule-autoscaling/blob/example-solution/train-schedule-kube.yml.

    To complete this task, you will need to do the following:

    • Create a fork of the source code at https://github.com/linuxacademy/cicd-pipeline-train-schedule-autoscaling.
    • Add a CPU resource request in train-schedule-kube.yml for the pods created by the train-schedule deployment. Check the example solution if you need to know where to add this; a sketch of where the request sits is also included after this list.
    resources:
      requests:
        cpu: 200m
    
    • Define a HorizontalPodAutoscaler in train-schedule-kube.yml to autoscale in response to CPU load.
    ---
    
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    metadata:
      name: train-schedule
      namespace: default
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: train-schedule-deployment
      minReplicas: 1
      maxReplicas: 4
      metrics:
      - type: Resource
        resource:
          name: cpu
          targetAverageUtilization: 50
    
    • Generate some load on the app to see the autoscaler in action! One way to do this is sketched after this list.
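
    For reference, the CPU request from the first bullet belongs under the container definition inside the deployment's pod template. A sketch of the relevant fragment, with the container name and image shown as placeholders rather than taken from the repo (the example-solution branch is authoritative):

    spec:
      template:
        spec:
          containers:
          - name: train-schedule      # placeholder name for illustration
            image: <your-image>       # placeholder
            resources:
              requests:
                cpu: 200m             # the HPA measures utilization as a percentage of this request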
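
    The autoscaler computes its desired replica count as roughly ceil(currentReplicas × currentUtilization / targetUtilization), so sustained load above the 50% target will add pods up to the maximum of 4. One way to generate that load (a sketch, not the lab's prescribed method; replace the placeholders with your service's actual address) is to hammer the app from a throwaway pod while watching the autoscaler react:

    # In one terminal: request the app in a tight loop from a temporary busybox pod.
    kubectl run load-generator --rm -i --tty --image=busybox --restart=Never \
      -- /bin/sh -c "while true; do wget -q -O- http://<node-ip>:<node-port>; done"

    # In another terminal: watch the HPA scale up and, once the load stops, back down.
    kubectl get hpa train-schedule -w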

The Cloud Content team comprises subject matter experts hyper-focused on services offered by the leading cloud vendors (AWS, GCP, and Azure), as well as cloud-related technologies such as Linux and DevOps. The team is thrilled to share their knowledge to help you build modern tech solutions from the ground up, secure and optimize your environments, and so much more!

What's a lab?

Hands-on Labs are real environments created by industry experts to help you learn. These environments help you gain knowledge and experience, practice without compromising your system, test without risk, destroy without fear, and learn from your mistakes. Hands-on Labs: practice your skills before delivering in the real world.

Provided environment for hands-on practice

We will provide the credentials and environment necessary for you to practice right within your browser.

Guided walkthrough

Follow along with the author’s guided walkthrough and build something new in your provided environment!

Did you know?

On average, you retain 75% more of your learning if you get time for practice.
