A Cloud Guru
Google Cloud Platform Labs

Creating a Load Balancer

In this learning activity, you will install and configure a load balancer to be the front end for two pre-built Apache nodes. The load balancer should be configured to run in the best-effort stickiness mode.


Path Info

Level: Intermediate
Duration: 1h 30m
Published: Nov 14, 2018


Table of Contents

  1. Challenge

    Install HAProxy on `Server1`

    Now, we'll start off by installing HAProxy:

    [root@Server1]# yum -y install haproxy
    

    Enable and start the service:

    [root@Server1]# systemctl enable haproxy
    [root@Server1]# systemctl start haproxy
    

    Then verify that incoming port 80 traffic is permitted through the firewall: running `firewall-cmd --list-all` should show `http` in the list of allowed services.
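
    If `http` is not in that list, we can allow it and reload the firewall (a sketch, assuming firewalld is the active firewall on Server1):

    [root@Server1]# firewall-cmd --permanent --add-service=http
    [root@Server1]# firewall-cmd --reload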

  2. Challenge

    Configure HAProxy on `Server1`

    We need to configure HAProxy's frontend and backend. HAProxy should listen on port 80 and use node1 and node2 as the backend nodes. Let's edit `/etc/haproxy/haproxy.cfg` and add the configuration:

    
        timeout check            10s
        maxconn                  3000

    # Our code starts here:

    frontend app1
            bind *:80
            mode http
            default_backend apache_nodes

    backend apache_nodes
            mode http
            # hash each client's source IP so the client keeps going
            # to the same backend node (best-effort stickiness)
            balance source
            server node1 10.0.1.20:8080 check
            server node2 10.0.1.30:8080 check

    # End of our code
    #-------------------------------------------------------
    # main frontend which proxys to the backends
    #-------------------------------------------------------
    
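    The `balance source` line is what gives us the lab's best-effort stickiness: HAProxy hashes each client's source IP, so requests from one client keep landing on the same backend while both nodes are healthy. As a rough standalone sketch of the idea (illustrative only; the IP address and the `cksum`-modulo hash below are our stand-ins, not HAProxy's actual algorithm):

```shell
# Illustrative only: map a client IP to one of two backends the way
# a source-hash balancer conceptually does. 203.0.113.7 is just an
# example address; HAProxy uses its own internal hash function.
ip="203.0.113.7"
sum=$(printf '%s' "$ip" | cksum | cut -d' ' -f1)
idx=$((sum % 2))
echo "client $ip -> node$((idx + 1))"
```

    The same IP always hashes to the same index, which is why a given client keeps reaching the same node; changing the set of live backends changes the mapping, which is why the stickiness is only best-effort.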

    We'll have to restart the daemon so that our changes take effect, then confirm with ss that haproxy is listening on port 80:

    [root@Server1]# systemctl restart haproxy
    [root@Server1]# ss -lntp
    
  3. Challenge

    Configure the `node1` and `node2` Firewalls

    We need to permit incoming port 8080/TCP traffic on `node1` and `node2`. On each node (after logging in, then becoming root), perform the following:

    [root@node]# firewall-cmd --permanent --add-port=8080/tcp
    

    Then reload the firewall configuration:

    [root@node]# firewall-cmd --reload
    
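    We can then confirm the port is open; the output should include 8080/tcp:

    [root@node]# firewall-cmd --list-ports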
  4. Challenge

    From `Client1`, Validate That Settings Are Correct

    On Client1, we can run curl against the Server1 private IP address:

    [cloud_user@Client1]$ curl <Server1_IP_ADDRESS>
    
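
    Because the load balancer uses `balance source`, repeated requests from the same client should keep hitting the same backend node. A quick way to eyeball that (a sketch; which node answers depends on how the hash maps Client1's IP):

    [cloud_user@Client1]$ for i in 1 2 3; do curl -s <Server1_IP_ADDRESS>; done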

The Cloud Content team comprises subject matter experts hyper-focused on services offered by the leading cloud vendors (AWS, GCP, and Azure), as well as cloud-related technologies such as Linux and DevOps. The team is thrilled to share their knowledge to help you build modern tech solutions from the ground up, secure and optimize your environments, and so much more!
