
How to Set Up Nginx as a Load Balancer

In the last article, we learnt how to install the Nginx web server on CentOS/Red Hat. In this article we will learn how to configure it for load balancing. Setting up Nginx as a load balancer is not difficult.

First of all, we will create a new configuration file for the load balancer in the Nginx configuration directory. I will use my favorite editor, nano, to accomplish this.

sudo nano /etc/nginx/conf.d/load-balancer.conf

In load-balancer.conf you'll need to define the following two segments, upstream and server; see the examples below.

# Define which servers to include in the load balancing scheme.
# It’s best to use the servers’ private IPs for better performance and security.

upstream backend {
    server 10.1.0.101;
    server 10.1.0.102;
    server 10.1.0.103;
}

# This server accepts all traffic to port 80 and passes it to the upstream.
# Notice that the upstream name and the proxy_pass need to match.

server {
    listen 80;

    location / {
        proxy_pass http://backend;
    }
}

Then save the file and exit the editor.

Next you'll need to disable the default server configuration, which you tested earlier after the installation. Rename the default.conf in the conf.d/ directory to something that doesn't end with .conf, for example:

sudo mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.disabled

Restart Nginx using the following command:

sudo service nginx restart

Check that nginx starts successfully. If the restart fails, take a look at the /etc/nginx/conf.d/load-balancer.conf you just created to make sure there are no typos or missing semicolons.
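A quick way to catch such mistakes is nginx's built-in configuration test, which checks the syntax of all loaded configuration files and reports the first error it finds:

sudo nginx -t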

You should now be passed to one of your back-end servers when you enter the load balancer's public IP address in your web browser.
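You can run the same check from the command line with curl; here 203.0.113.10 is a stand-in for your load balancer's public IP address:

curl -I http://203.0.113.10

With the default round-robin method, repeating the request should cycle through responses served by your back-end servers in turn.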

Load balancing methods

Load balancing with nginx uses the round-robin algorithm by default if no other method is defined, as in the first example above. With the round-robin scheme, each server is selected in turn according to the order in which you list them in the load-balancer.conf file. This balances the number of requests equally across the servers for short operations.

Least-connections-based load balancing is another straightforward method. As the name suggests, this method directs requests to the server with the fewest active connections at that time. It works more fairly than round-robin for applications where requests might sometimes take longer to complete.

To enable the least-connections balancing method, add the least_conn parameter to your upstream section as shown in the example below.

upstream backend {
    least_conn;
    server 10.1.0.101;
    server 10.1.0.102;
    server 10.1.0.103;
}

While the round-robin and least-connections balancing schemes are fair and have their uses, they cannot provide session persistence. If your web application requires that users be directed to the same back-end server as during their previous connection, use the IP hashing method instead. IP hashing uses the visitor's IP address as a key to determine which host should serve the request. This directs each visitor to the same server every time, provided that the server is available and the visitor's IP address hasn't changed.

To use this method, add the ip_hash parameter to your upstream segment as in the example below.

upstream backend {
    ip_hash;
    server 10.1.0.101;
    server 10.1.0.102;
    server 10.1.0.103;
}
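Note that for IPv4 addresses, ip_hash uses the first three octets of the client's address as the hashing key, so all visitors from the same /24 network are directed to the same server.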

In a server setup where the available resources between different hosts are not equal, it might be desirable to favour some servers over others. Defining server weights allows you to fine-tune load balancing with nginx further. The server with the highest weight in the load balancer is selected most often.

upstream backend {
    server 10.1.0.101 weight=4;
    server 10.1.0.102 weight=2;
    server 10.1.0.103;
}

For example, in the configuration shown above the first server is selected twice as often as the second, which in turn receives twice as many requests as the third.
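In other words, out of every seven requests, four go to the first server, two to the second, and one to the third (a server's weight defaults to 1 when not specified).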

Load balancing with HTTPS enabled

Enabling HTTPS for your site is a great way to protect your visitors and their data.

Using encryption with a load balancer is easier than you might think. All you need to do is add another server section to your load balancer configuration file that listens for HTTPS traffic on port 443 with SSL, and set up a proxy_pass to your upstream segment just like with HTTP in the example above.

Open your configuration file again for editing.

sudo nano /etc/nginx/conf.d/load-balancer.conf

Then add the following server segment to the end of the file.

server {
    listen 443 ssl;
    server_name <domain name>;
    ssl_certificate /etc/ssl/live/<domain name>/cert.pem;
    ssl_certificate_key /etc/ssl/live/<domain name>/privkey.pem;

    location / {
        proxy_pass http://backend;
    }
}

Then save the file, exit the editor, and restart nginx again with:

sudo service nginx restart

Setting up encryption at your load balancer while using private network connections to your back-end has some great advantages.

As only your back-end servers have access to your private network, it allows you to terminate the SSL at the load balancer and pass only HTTP connections onward. It also greatly simplifies your certificate management, as you can obtain and renew the certificates from a single host.

With HTTPS enabled, you also have the option to enforce encryption on all connections to your load balancer. Simply update your server segment listening on port 80 with a server name and a redirection to your HTTPS port, then remove or comment out the location portion, as it's no longer needed. See the example below.

server {
    listen 80;
    server_name <domain name>;
    return 301 https://$server_name$request_uri;

    #location / {
    #    proxy_pass http://backend;
    #}
}

Save the file again after making the changes and then restart nginx.

sudo service nginx restart

Now all connections to your load balancer will be served over an encrypted HTTPS connection, and requests to the unencrypted HTTP port will be redirected to use HTTPS as well. This provides a seamless transition into encryption with nothing required from your visitors.
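You can verify the redirect with curl, using the same <domain name> you configured above; the response should be a 301 with a Location header pointing to the https:// version of the address.

curl -I http://<domain name>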

Health checks

In order to know which servers are available, nginx's reverse proxy implementation includes passive server health checks. If a server fails to respond to a request or replies with an error, nginx will mark the server as failed and will try to avoid directing connections to that server for a time.

The number of consecutive unsuccessful connection attempts within a certain time period can be defined in the load balancer configuration file by setting the max_fails parameter on the server lines. By default, when no max_fails is specified, this value is set to 1. Setting max_fails to 0 disables health checks for that server.

If max_fails is set to a value greater than 1, the subsequent failures must happen within a specific time frame for them to count. This time frame is specified by the fail_timeout parameter, which also defines how long the server should be considered failed. By default, fail_timeout is set to 10 seconds.

After a server is marked failed and the time set by fail_timeout has passed, nginx will begin to gracefully probe the server with client requests. If the probes succeed, the server is again marked live and included in the load balancing as normal.
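For example, in the configuration below the second server is considered failed after three unsuccessful attempts within 30 seconds, and is then kept out of rotation for 30 seconds at a time: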

upstream backend {
    server 10.1.0.101 weight=5;
    server 10.1.0.102 max_fails=3 fail_timeout=30s;
    server 10.1.0.103;
}

Using health checks allows you to adapt your server back-end to the current demand by powering hosts up or down as required. Starting up additional servers during high traffic can easily increase your application's performance, as the new resources become automatically available to your load balancer.
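The upstream module also lets you adjust the rotation by hand. The sketch below, assuming a fourth host at 10.1.0.104, marks one server as down and another as a backup that only receives traffic when the primary servers are unavailable:

upstream backend {
    server 10.1.0.101;
    server 10.1.0.102;
    server 10.1.0.103 down;    # taken out of rotation, e.g. for maintenance
    server 10.1.0.104 backup;  # used only when the other servers are unavailable
}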
