Protecting an HTTP Service Using HAProxy

1 hour
2 Learning Objectives

About this Hands-on Lab

Our websites and web-based applications are a common attack vector. HAProxy can help us fend off HTTP floods, Slowloris attacks, and more. In this lab, we’re going to get hands-on with HAProxy, using it to protect our HTTP services: we’ll rate-limit HTTP floods, block `curl` user agents, and block clients by IP address. Upon completion of the lab, you will be able to configure an HAProxy installation to protect HTTP-based services.

Learning Objectives

Successfully complete this lab by achieving the following learning objectives:

Protect an HTTP Service with HAProxy
Running Basic HAProxy Server Tests

First, we need to add two entries to the /etc/hosts file on each of our client machines. These entries need to point to the private IP address of the HAProxy server (one way to add them is shown below):

<HAPROXY SERVER PRIVATE IP>    www.site1.com
<HAPROXY SERVER PRIVATE IP>    www.site2.com
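
One way to add these entries is to append them with tee, run on each client instance and substituting the HAProxy server's private IP (editing the file directly with vi or nano works just as well):

sudo tee -a /etc/hosts <<'EOF'
<HAPROXY SERVER PRIVATE IP>    www.site1.com
<HAPROXY SERVER PRIVATE IP>    www.site2.com
EOF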

Before we get started with protecting our sites with HAProxy, let’s take a look at what a stock HAProxy configuration looks like when presented with a large number of requests.

  1. Open the web browser and connect to port 8050 on our public IP/DNS.
  2. Get the stats information for our HAProxy installation.
  3. Use ApacheBench (ab) to send a total of 100000 requests, with 100 concurrent requests, to both www.site1.com and www.site2.com over HTTPS on each of our server instances (example commands follow this list).

    Checking our stats web interface, we can see the traffic coming in from both local instances of ApacheBench.
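
A possible pair of ApacheBench invocations, assuming ab was built with SSL support and using the /test.txt file the lab pre-populates (-n is the total number of requests, -c the concurrency; the trailing & backgrounds each run so both sites can be hit at once):

ab -n 100000 -c 100 https://www.site1.com/test.txt &
ab -n 100000 -c 100 https://www.site2.com/test.txt &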

Defending Against HTTP Flood Attacks

We can set request rate limits to block abusive clients by tracking the request rate of each client IP in a stick-table.

  1. Edit the /etc/haproxy/haproxy.cfg file (a configuration sketch follows this list):
    • Add a new backend to put our stick-table in.
      • Create a stick-table to track connections:
        • type ip
        • size 1m
        • expire 10m
        • store http_req_rate(10s)
      • Add a line to our frontend block to feed the connection data to the per_ip_rates backend.
      • After the line you just added to your frontend, add a line to start denying requests with an HTTP 429 Too Many Requests response when the counters in the stick-table are over 10 requests per second.
      • Add a line to deny requests with an HTTP 500 Internal Server Error response for agents reporting as curl.
      • Add a line to deny requests with an HTTP 503 Service Unavailable response for clients whose IP addresses are listed in the /etc/haproxy/blocked.acl file.
  2. Modify the /etc/haproxy/blocked.acl file.
    • Add a line with the private IP address of the Client 2 host.
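
A sketch of the combined haproxy.cfg changes is below. The per_ip_rates backend name and the ACL file path come from the steps above; the frontend name (fe_main here) and the exact rate threshold are placeholders, so adapt them to your existing configuration:

# Dummy backend whose only job is to hold the stick-table
backend per_ip_rates
    stick-table type ip size 1m expire 10m store http_req_rate(10s)

frontend fe_main
    # ...existing bind/default_backend lines stay as they are...
    # Track each source IP in the stick-table above
    http-request track-sc0 src table per_ip_rates
    # Deny with 429 when a client exceeds roughly 10 requests per second
    # (http_req_rate(10s) counts requests over a 10-second window)
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
    # Deny with 500 when the User-Agent reports itself as curl
    http-request deny deny_status 500 if { req.hdr(User-Agent) -m sub curl }
    # Deny with 503 when the source IP is listed in the ACL file
    http-request deny deny_status 503 if { src -f /etc/haproxy/blocked.acl }
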
Test HTTP Attack Protection

Before we see what the effects of our changes are, let’s reset our stats and restart our ApacheBench instances.

  1. Kill ApacheBench on all 3 hosts.
  2. Restart the haproxy service to reset its statistics (example commands below).
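
One way to do both, assuming ab was started in the background and haproxy runs under systemd:

# On all three hosts: stop any running ApacheBench processes
pkill ab
# On the HAProxy host: restarting the service also resets its stats counters
sudo systemctl restart haproxy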

Test Blocking Requests Based On Rate Limits

  1. To test the rate limits, generate a new batch of traffic on the Client 1 instance only. Use ApacheBench (ab) to send a total of 100000 requests, with 100 concurrent requests, to both www.site1.com and www.site2.com over HTTPS.
  2. Check the stats web interface again to see the effects of our changes (an optional stick-table check is sketched below).
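
If your haproxy.cfg also exposes a runtime stats socket (not part of the steps above, so treat the socket path, and socat being installed, as assumptions), you can inspect the stick-table directly and see which client IPs are being tracked:

# Lists tracked source IPs and their current http_req_rate counters
echo "show table per_ip_rates" | sudo socat stdio /var/run/haproxy.sock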

Test Blocking Requests Using ACLs

We blocked the curl user agent and all requests from the Client 2 instance. Let’s check our work! (Example commands follow the list below.)

  1. Try using curl on the HAProxy host to load our sites.
  2. Try using wget on the HAProxy host to load our sites.
  3. Try using wget on the Client 1 host to load our sites.
  4. Try using wget on the Client 2 host to load our sites.
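
Possible test commands and the outcomes we expect from the rules above. The certificate-check flags assume the lab's generated certificates are self-signed; wget identifies itself with its own User-Agent, so it is not caught by the curl rule:

# On the HAProxy host: curl should be denied (HTTP 500), wget should succeed
curl -k https://www.site1.com/test.txt
wget --no-check-certificate -O - https://www.site1.com/test.txt
# On Client 1: wget should still succeed
wget --no-check-certificate -O - https://www.site2.com/test.txt
# On Client 2 (its IP is in blocked.acl): wget should be denied (HTTP 503)
wget --no-check-certificate -O - https://www.site1.com/test.txt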

Additional Resources

We're under attack!

We have a lot of traffic on our websites, but not all of it is good news. HAProxy is doing a great job of handling the load, but it would be great if we could discard the bad traffic and keep only the good. Fortunately, HAProxy is up to the task!

Let's see how it's done!

You have been provided with three RHEL instances: one HAProxy/web server and two client instances for testing. When the lab launches, you will be provided with login information for your three servers. I would open an SSH connection to each one for ease of switching, as we will be working with all three lab servers directly.

When the lab starts, you will want to open an SSH connection to your three lab instances:

ssh cloud_user@PUBLIC_IP_ADDRESS

Replace PUBLIC_IP_ADDRESS with either the public IP or DNS of the instance(s). The cloud_user password has been provided with the instance information.

Entries for www.site1.com and www.site2.com have been created in /etc/hosts that point to 127.0.0.1. Additionally, SSL certificates for HAProxy have been generated in /etc/haproxy/certs/.

On our system, we have two sites, site1 and site2, configured with three web server containers each, running rootless under the cloud_user account. They've been pre-populated with a test text file at /test.txt that identifies which site and server we're accessing.

The nginx containers are configured as follows (a quick direct check is shown after the list):

  • site1_server1: web server accessible on port 8081
  • site1_server2: web server accessible on port 8082
  • site1_server3: web server accessible on port 8083
  • site2_server1: web server accessible on port 8084
  • site2_server2: web server accessible on port 8085
  • site2_server3: web server accessible on port 8086
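
Since the containers publish plain HTTP ports, you can confirm they are up from the HAProxy host without going through HAProxy at all (these direct checks also bypass the frontend rules you will add later), for example:

curl http://127.0.0.1:8081/test.txt    # site1_server1
curl http://127.0.0.1:8084/test.txt    # site2_server1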

Good luck and enjoy!
