Configuring HTTP Load Balancing Using HAProxy

30 minutes

About this Hands-on Lab

HAProxy is well known for its ability to load balance HTTP traffic. In this lab, we’re going to get hands-on with HAProxy, using it to load balance traffic for a number of nginx web server containers. We’ll configure a 2-site installation with round-robin load balancing for each site. Upon completion of the lab, you will be able to configure HAProxy to load balance HTTP connections.

Learning Objectives

Successfully complete this lab by achieving the following learning objectives:

Configure HTTP Load Balancing

For this objective, we’re going to configure HTTP load balancing for our 2 sites. A sample configuration sketch follows the checklist below.

Before we start, check the status of our nginx containers to make sure they’re up and running.
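
One quick way to do that, using the container names listed under Additional Resources, is:

podman ps -a

All 6 containers should show an Up status before you continue.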

Perform the following:

  • Configure 2 HTTP frontends, 1 for each site.
    • Make the first site available on port 8000.
    • Make the second site available on port 8100.
  • Configure 2 HTTP backends, 1 for each site.
    • Use roundrobin load balancing for each site.
    • Add all 3 nginx containers for each site to the backend.
  • Add a configuration block for the stats web page.
    • Use port 8050 for the stats page.
    • Set the mode to http.
  • Enable and start the haproxy service. Check your work.
    • Make sure the haproxy service is enabled and started.
    • Use curl to confirm that you can connect to the following and that load balancing is working:
      • http://127.0.0.1:8000/test.txt
      • http://127.0.0.1:8100/test.txt
    • Confirm you can connect to the stats web page on port 8050.
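
If you get stuck, here is a minimal sketch of the kind of configuration these steps describe. The frontend and backend names (site1, site1_servers, and so on) and the stats URI are placeholders of our own choosing; the ports and backend addresses come from the lab description. Append it to the HAProxy configuration file (typically /etc/haproxy/haproxy.cfg), leaving the existing global and defaults sections in place:

frontend site1
    bind *:8000
    default_backend site1_servers

frontend site2
    bind *:8100
    default_backend site2_servers

backend site1_servers
    balance roundrobin
    server site1_server1 127.0.0.1:8081 check
    server site1_server2 127.0.0.1:8082 check
    server site1_server3 127.0.0.1:8083 check

backend site2_servers
    balance roundrobin
    server site2_server1 127.0.0.1:8084 check
    server site2_server2 127.0.0.1:8085 check
    server site2_server3 127.0.0.1:8086 check

listen stats
    bind *:8050
    mode http
    stats enable
    stats uri /

Then enable and start the service and check the endpoints (assuming cloud_user has sudo rights):

sudo systemctl enable --now haproxy
curl http://127.0.0.1:8000/test.txt
curl http://127.0.0.1:8100/test.txt
curl http://127.0.0.1:8050/

Repeating the first two curl commands a few times should rotate through the 3 servers for each site.
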
Test HTTP Load Balancing

For this objective, we’re going to test round-robin load balancing on both sites and confirm it is working as intended. A sketch of the commands involved follows the checklist below.

Perform the following:

  • We’re going to see how HAProxy handles down servers.
    • Use the podman command to stop the site1_server3 container.
    • Use the podman command to stop the site2_server2 container.
    • Use the podman command to show the status of all containers.
    • Confirm the 2 containers you stopped are exited.
    • Reload the sites using a browser or curl, and confirm things still work and that traffic is routed around the down servers.
    • See the change in the stats web page. Refresh and see if HAProxy has picked up the down servers.
    • Use the podman command to stop all the remaining containers.
    • See the change in the stats web page. Refresh and see if HAProxy has picked up the down servers.
    • Try to connect to each site. See how HAProxy responds.
  • Use the podman command to start all the containers.
    • Use the podman command to show the status of all containers.
    • See the change in the stats web page. Refresh and see if HAProxy has picked up that the servers are back up.
    • Try to connect to each site. See how HAProxy responds.
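
A minimal sketch of the commands involved, using the container names from the lab description; the exact order and any extra checks are up to you:

# Stop one server in each site and confirm they are exited
podman stop site1_server3
podman stop site2_server2
podman ps -a

# Both sites should still answer; HAProxy routes around the down servers
curl http://127.0.0.1:8000/test.txt
curl http://127.0.0.1:8100/test.txt

# Stop the rest, then try the sites again; with no backends left,
# HAProxy answers with an error (typically 503 Service Unavailable)
podman stop site1_server1 site1_server2 site2_server1 site2_server3
curl -i http://127.0.0.1:8000/test.txt
curl -i http://127.0.0.1:8100/test.txt

# Bring everything back and recheck
podman start site1_server1 site1_server2 site1_server3 site2_server1 site2_server2 site2_server3
podman ps

After each step, refresh the stats page on port 8050 to watch HAProxy mark the servers down and back up.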

Additional Resources

It's time to build a web farm!

We're building a web development environment using containerized nginx web servers, and we need to add load balancing using HAProxy. We're going to configure 2 separate HTTP frontends for testing, 1 for each site. We have the containers set up, and now all we need to do is configure HAProxy to load balance our 2 sites.

Let's give it a go!

When the lab starts, you will want to open an SSH connection to your lab instance(s):

ssh cloud_user@PUBLIC_IP_ADDRESS

Replace PUBLIC_IP_ADDRESS with either the public IP or DNS of the instance(s). The cloud_user password has been provided with the instance information.

Entries for www.site1.com and www.site2.com have been created in /etc/hosts that point to 127.0.0.1. Additionally, SSL certificates for HAProxy have been generated in /etc/haproxy/certs/. The HAProxy package has also been installed but is not running.
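
If you want to confirm that groundwork before you start, the following checks use only the paths mentioned above:

grep site /etc/hosts
ls /etc/haproxy/certs/
systemctl status haproxy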

On our system, we have 2 sites configured, site1 and site2, each with 3 web server containers running rootless under the cloud_user account. They’ve been prepopulated with a test text file at /test.txt that identifies which site and server we’re accessing.

The nginx containers are configured as follows (a quick way to check each one directly is shown after the list):

  • site1_server1: web server accessible on port 8081
  • site1_server2: web server accessible on port 8082
  • site1_server3: web server accessible on port 8083
  • site2_server1: web server accessible on port 8084
  • site2_server2: web server accessible on port 8085
  • site2_server3: web server accessible on port 8086
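
Before putting HAProxy in front of them, you can hit any container directly, and the test file should tell you which site and server answered. For example:

for port in 8081 8082 8083 8084 8085 8086; do
    curl -s http://127.0.0.1:${port}/test.txt
done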

Good luck and enjoy!
