City Cloud advanced guide: having fun with horizontal scaling. Part 2.


Welcome back to our series of articles about horizontal scaling. In the first part we gave you an overview of what this is all about, starting with a single website on one server and ending up with it split between two.

Now it’s time to take it up a notch. We will go over a simple load balancing scheme using Varnish, a popular caching proxy server that can also act as a load balancer.

Above and beyond

Since we already have the database and webserver separated, what’s the next step? There are several avenues, but we’ve decided to go down the load balancing one.

The motivation behind this is being able to handle more connections at the same time. A saturated, slow site is often the difference between a visitor browsing your site and coming back, and one closing the browser window and never returning.

This time we’ll add a new server, which fortunately doesn’t need to be that powerful, also add a new webserver, and then configure the whole thing so our users transparently access the load balancer, which in turn distributes the load. There are many benefits to this approach. For one, the load balancer acts as a proxy between the user and the webservers, which lets us cache static pages and even dynamic ones; the caching can be set up to meet our needs. Let’s also not forget the automatic fail-over if one of the webservers were to fail, or the added security.

Cloning the webserver

Let’s get our hands dirty. Our first order of business is to clone our webserver, which, if you followed the previous article, should have both a public and a private IP address. Go ahead: turn off your VM and clone it (see below).

Cloning a VM

This will take a few minutes, mostly because it copies the entire virtual hard disk. Once the server has been copied, we need to take care of a few things.

For starters, the clone is left with only one network card, which in this case is all we need. You might have to reset the persistent network device names; just edit the following file:

# Use your own preferred text editor
vim /etc/udev/rules.d/70-persistent-net.rules

Once inside, delete all the lines in the file. They are written every time the operating system finds a network card with a new MAC address; clearing them ensures the cloned card is detected as new and assigned correctly. In this case we only need one card with an internal IP. You should end up with a card on the correct private VLAN, with a local address in the same subnetwork as the other two servers. In our previous example those had 192.168.1.1 and 192.168.1.2, so you can safely assign 192.168.1.3 to this one.
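On a Debian-based image, that static address usually lives in /etc/network/interfaces. A minimal sketch could look like this (eth0 as the interface name and the /24 netmask are assumptions — adjust to your own setup):

```
# /etc/network/interfaces — static private address for the cloned webserver
auto eth0
iface eth0 inet static
    address 192.168.1.3
    netmask 255.255.255.0
```

Restart networking (or simply reboot) and confirm with ifconfig that the address took effect.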

One last additional step we must perform is to tell the MySQL server that we want to allow this new webserver to access the database. So, log into the database server one more time. Here is a friendly reminder of the command you need to use:

# Remember to change the IP address to the new server's private one.
mysql> grant all privileges on wordpress.* to 'username_here'@'192.168.1.3' identified by 'password_here';
# Let's make sure privileges are reloaded
mysql> flush privileges;
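If you want to double-check that the grant took effect, you can list the privileges for that user and host from the same MySQL prompt:

```
mysql> show grants for 'username_here'@'192.168.1.3';
```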

Up to this point, we have two webservers and one database server. Let’s go ahead and install the load balancer.

Installing the load balancer

In our example, we’ve decided to use Varnish Cache, a software-based load balancer and caching solution, among other things. You can read all about it on the official site. Other noteworthy balancers are nginx, HAProxy and relayd (under OpenBSD).

Create a new VM using the image named “Debian 6.0.2.1 64-bit” and a small profile; we don’t need more processing power than that. Wait a couple of minutes, that’s all it takes.

Once the server has been created, proceed with the usual: set a secure password and add a new network card on our private VLAN. Give this new card a local IP address; it can be anything as long as it’s in the same subnetwork as the other three servers. We have chosen 192.168.1.4, although the entry point is usually .1 (192.168.1.1 in this case).

Now, let’s install Varnish, shall we? Log into the server as root and execute the following commands:

wget http://repo.varnish-cache.org/debian/GPG-key.txt
apt-key add GPG-key.txt
echo "deb http://repo.varnish-cache.org/debian/ squeeze varnish-3.0" >> /etc/apt/sources.list
apt-get update
apt-get install varnish

Those commands come from the official guide, since we want an up-to-date version installed. Now we are going to set up Varnish with a simple load balancing scheme. This is only the tip of the iceberg, so make sure you read the official documentation for further options.

To be perfectly clear, what we are going to do is tell Varnish that there are two servers and that they will be accessed sequentially: the first request goes to the first server, the second to the second one, the third back to the first one, and so on. The benefits of this approach are that you can have as many servers as you’d like, actually improving performance, and that if one of those servers breaks, the system will automatically remove it from the pool until it’s fixed.
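The round-robin idea itself is simple enough to sketch in a few lines of shell. This toy loop does not talk to Varnish at all — it just shows how requests cycle through a pool of backends, wrapping around when the list is exhausted:

```shell
# Toy illustration of round-robin scheduling (not Varnish itself).
backends="website1 website2"
count=$(echo $backends | wc -w)
i=0
for request in req1 req2 req3 req4; do
  # Pick backend number (i mod count), 1-indexed for `cut`.
  n=$(( (i % count) + 1 ))
  target=$(echo $backends | cut -d' ' -f$n)
  echo "$request -> $target"
  i=$((i + 1))
done
```

Running it prints req1 and req3 going to website1, and req2 and req4 going to website2 — exactly the alternation we will see in the access logs later.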

Well, go ahead and edit the main configuration file:

vim /etc/varnish/default.vcl

In there, add the following lines:

backend website1 {
        .host = "192.168.1.1";
        .port = "80";
        .probe = {
                .url = "/";
                .interval = 20s;
                .timeout = 3s;
                .window = 5;
                .threshold = 3;
        }
}

backend website2 {
        .host = "192.168.1.3";
        .port = "80";
        .probe = {
                .url = "/";
                .interval = 20s;
                .timeout = 3s;
                .window = 5;
                .threshold = 3;
        }
}

director test_round_robin round-robin {
        # Main website
        {
                .backend = website1;
        }
        # First mirror
        {
                .backend = website2;
        }
}

sub vcl_recv {

        set req.backend = test_round_robin;
}

Time for some clarifications. Varnish uses a flexible configuration format in which you can set up many different rules depending on your site’s requirements. The configuration is organized into blocks, starting with { and ending with } (if you are a programmer, you are probably familiar with this syntax).

The first block tells Varnish there is a backend called “website1” whose host is 192.168.1.1. Remember that IP address? That was our main webserver. There are other parameters too, such as the port number (here we use the default one, port 80) and how frequently we want to probe the host to check that it’s alive, the probe timeout, etc.

The second block, named “website2”, is similar to the first one but with a different target IP address (192.168.1.3, our cloned webserver). We then use another predefined construct, called a “director”, to create a round-robin scheme; this is where we tell Varnish that requests should go sequentially through all the servers we define.

Last but not least, we override the subroutine called “vcl_recv” with a single instruction: “set req.backend = test_round_robin;”. This points every incoming request at the scheme we defined earlier.
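As a small taste of what vcl_recv can do beyond picking a backend, a common tweak for WordPress is to skip the cache for logged-in users, so they always see fresh pages. This fragment is an optional extra, not required for the setup above; it extends the same vcl_recv subroutine, and the cookie name is the standard one WordPress sets on login:

```
sub vcl_recv {

        # Skip the cache for logged-in WordPress users
        if (req.http.Cookie ~ "wordpress_logged_in") {
                return (pass);
        }

        set req.backend = test_round_robin;
}
```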

And now, let’s make Varnish read the new configuration.

# Kill the running Varnish process
pkill varnishd
# Start Varnish with a 392-megabyte in-memory cache. We can also point to a different configuration file here if we want to.
varnishd -f /etc/varnish/default.vcl -s malloc,392M -T 127.0.0.1:2000
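To make these options survive a reboot on Debian, the packaged init script reads them from /etc/default/varnish instead of the command line. A sketch of the relevant variable follows; the -a :80 listen address and the -S secret file are assumptions based on the standard Debian packaging, so adjust them to your installation:

```shell
# /etc/default/varnish — options picked up by the Varnish init script
DAEMON_OPTS="-a :80 \
             -T 127.0.0.1:2000 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,392M"
```

After editing it, restart the service with /etc/init.d/varnish restart to apply the change.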

And that’s about it. There are definitely more complex examples and rules, but this basic configuration will give you a reference to build upon. Let’s test our setup.

Our first test

Fortunately, testing this is a lot faster than setting it all up. If you have a domain and virtual host set up, use them just as if you were accessing the site directly. If you have been following our example to the letter, edit your hosts file and change the line with the wordpresssite.com dummy domain so that it points at the load balancer’s public IP address.
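For reference, the hosts entry would look like this (203.0.113.10 is just a placeholder — use your load balancer’s actual public IP):

```
# /etc/hosts on the machine you browse from
203.0.113.10    wordpresssite.com
```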

Then it’s all a matter of opening your browser and using the domain. What’s the expected result? Basically the same site we set up in part 1, our basic WordPress installation. But how do we know Varnish is doing its magic backstage? There are plenty of ways, but to be sure, let’s log into our two webservers on two separate terminals and locate the webserver access logs.

If you are using our predefined image, the log file should be located under /www/wordpresssite.com/logs/access.log. You can watch lines being added with the following commands:

cd /www/wordpresssite.com/logs
tail -f access.log

Do this on both servers and then refresh the page repeatedly. You will notice the requests alternating between servers on each refresh. Congratulations, you just installed your first load-balanced server farm. Worth mentioning is that the accesses will vary and often repeat, since loading a complex site like WordPress triggers many internal requests (images, JavaScript, CSS files, etc.), and many of them are also cached by Varnish. That’s great, since they will be delivered a lot faster, offloading both the webservers and the database.

If you want to be really thorough about it, you can (carefully) disable the public network interfaces on the webservers; everything should still work, since Varnish uses the internal IP addresses. You can do this permanently, but from then on you will always have to go through the load balancer, which is something we want anyway when trying to reduce the vulnerabilities exposed to the outside world.

Conclusions

This is in no way a comprehensive guide, but it should give you an overview of the possibilities. From now on, you can add rules to Varnish to do whatever you wish. Some examples? Sure: you can serve part of your site from a specific server, cache more aggressively, add more servers, and much more. The drive behind all of this is usually the same: security and increased performance.

In our next installment we’ll show you how to test this setup, break it on purpose and put it back together, so you understand how to react when things go awry. We’ll also tell you a bit more about Varnish’s options.

Until next time!