In our previous Apica article, we talked about the control panel and how easy it is to start firing tests against any given server.
Today we are going to walk through a simple procedure for testing three web servers under different situations. Although this is not an exhaustive test, it can serve as a useful starting point for making your own real tests.
The objective of this test is to show how you can run organized tests against different servers under similar conditions.
The results should give us a rough idea of how three well-known web servers perform in two different and very simple situations.
The variables we picked are just a sample; you could use any others that are important to your specific site.
Cloud computing benefits
A few years ago, testing against different hardware was usually time consuming. Not everybody was lucky enough to have a variety of servers on hand, so you had to carefully select physical parts, assemble the servers, set them up on a private network or in a datacenter, and deal with all the hassle involved.
Testing against different hardware is actually a requirement if you are, well, a hardware company. But if you are a software-focused one, you want to concentrate on your own system as much as possible and rule out external problems. Of course, software runs on hardware, so that was never really a choice. Until recently.
Cloud computing has become so mainstream that it would be a shame not to take advantage of its benefits. Nowadays, with a service like City Cloud, you can fire up a few servers, set them up quickly, and then discard them at will. This is great for all kinds of tests on the software side: you can scale vertically or horizontally, and whatever the scheme, the choice is yours. Best of all, once you are done you simply delete the servers and the cost drops to zero, whereas in the past you would probably have ended up with idle servers.
Our test environment
Okay then, in order to have a point of reference, we need a set of standard parameters and some servers to measure against.
In this case we took web servers, since they are among the most common applications out there today. We chose Apache 2 (with the prefork multi-process module), Nginx, and Lighttpd. Apache 2 is well known and full of modules for all kinds of functionality. Lighttpd is more lightweight; it supports PHP and many other features, but it is usually used for static content. And last but not least, Nginx is both a proxy and a web server, also touted as lightweight and robust while handling thousands of connections at the same time.
We also need a “victim” to test against: usually a website, an application, or a combination of both. To keep it simple, we went for the most basic website possible, a pair of simple PHP scripts. One contains a single call to “phpinfo” (which gathers information about the web server and installed modules), and the other has a random sleep that gives us a 1 to 5 second processing time.
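The scripts themselves are not shown in this article, so here is a minimal sketch of what they might look like; the filenames and exact PHP are our own assumptions, written as a shell snippet that creates both files in the web root:

```shell
# Hypothetical versions of the two "victim" scripts described above.
# Treat the filenames (info.php, sleep.php) as illustrative choices.

# info.php: dumps web server and PHP module information
cat > info.php <<'EOF'
<?php
phpinfo();
EOF

# sleep.php: simulates 1-5 seconds of server-side processing time
cat > sleep.php <<'EOF'
<?php
sleep(rand(1, 5));
echo "done";
EOF
```

Both scripts would then be placed in the document root of each of the three web servers so every server serves identical content.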
Now remember that you would probably want to replace these scripts with your own, more complex website. The results will most likely be different, but they will be relevant to your needs.
With the servers already set up on City Cloud (remember, we made that possible in minutes! Here is a handy guide for you), we go ahead and, as described earlier, create three Apica LoadTest classes, one for each case.
Now it is time to start pounding the servers. We use the following parameters in the corresponding Apica classes:
- 10, 50 and 100 concurrent connections
- 5 minutes test duration
- Other parameters as default
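The actual runs are configured from the Apica control panel, but if you want to reproduce similar parameters locally, ApacheBench (`ab`) is a common stand-in; the host and path below are placeholders, not the servers used in this article. This snippet just generates the three commands (three concurrency levels, 300-second duration) without running them:

```shell
# Generate a runner script mirroring the test parameters above.
# "ab" is ApacheBench; -c sets concurrency, -t sets duration in seconds.
# http://test-server/info.php is a placeholder URL.
{
  echo '#!/bin/sh'
  for users in 10 50 100; do
    echo "ab -c $users -t 300 http://test-server/info.php"
  done
} > run_tests.sh
cat run_tests.sh
```

Running the generated script against your own server would give you local numbers to compare with the Apica results.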
While the tests are running, we can log into the servers and watch how they are faring using top. This is not a precise measurement, but it can always give you an idea.
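For example, on a typical Linux server a one-shot, non-interactive snapshot looks like this (Apica's own graphs remain the precise source; this is just a sanity check from inside the box):

```shell
# Quick mid-test health check on the server itself (Linux).
uptime                    # load averages over 1, 5 and 15 minutes
top -b -n 1 | head -n 12  # one batch-mode snapshot of the busiest processes
```

Batch mode (`-b`) lets you redirect the output to a log file and correlate it with the test timeline afterwards.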
Once the tests have finished, we are ready to gather our fresh data on Apica.
| Variable | Apache 2: phpinfo | Lighttpd: phpinfo | Nginx: phpinfo | Apache 2: sleep | Lighttpd: sleep | Nginx: sleep |
|---|---|---|---|---|---|---|
| Avg network throughput, 10 users | 1.31 MBit/s | 1.2 MBit/s | 1.39 MBit/s | 110.54 KBit/s | 106.86 KBit/s | 75.24 KBit/s |
| Avg network throughput, 50 users | 6.47 MBit/s | 6.19 MBit/s | 7.34 MBit/s | 544.02 KBit/s | 536.06 KBit/s | 89.07 KBit/s |
| Avg network throughput, 100 users | 12.6 MBit/s | 12.09 MBit/s | 14.27 MBit/s | 1.06 MBit/s | 1.01 MBit/s | 87.13 KBit/s |
Once we have the results, we can paste them into a spreadsheet and have a nice comparative table. In this case we chose the “Average Network throughput” variable for 10, 50 and 100 concurrent users. We could have just as easily taken any other variable that is relevant to us.
With this data we can already draw a few conclusions. For instance, we know the phpinfo script should give a constant throughput per request, so 50 users should produce roughly 5 times the throughput of 10 users. This tells us that the network between the servers and the testing facility (one of the Apica testing locations) is fine, and since the numbers match expectations, we can safely assume that all three web servers can handle at least 100 concurrent connections to that script.
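To make that reasoning concrete, here is a quick check of the scaling ratios using the Apache 2 phpinfo column from the table:

```shell
# Sanity check: throughput should scale roughly linearly with users.
awk 'BEGIN {
  t10 = 1.31; t50 = 6.47; t100 = 12.6   # MBit/s, Apache 2 phpinfo column
  printf "50/10 ratio: %.2f (expect ~5)\n",  t50  / t10
  printf "100/10 ratio: %.2f (expect ~10)\n", t100 / t10
}'
# prints:
# 50/10 ratio: 4.94 (expect ~5)
# 100/10 ratio: 9.62 (expect ~10)
```

Both ratios land close to the ideal linear values, which is what lets us rule out the network as a bottleneck at these loads.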
On the other hand, the script with the random sleep gives us expected results, or should we say, expectedly unexpected results: erratic behavior. Of course, in this case throughput was probably not the best choice of variable. Nonetheless, erratic behavior sometimes mimics a real-life situation better, and if we had increased the duration of the test here, the results would probably have been more stable.
These particular numbers will not do you much good, since these two situations are not the ones you will find in a production environment, but they should give you an overall idea of how to proceed.
Let us give you a couple of situations where this could be applied:
- You have a WordPress blog installed, and you need to test several simultaneous users.
- You have a website and a database on the same server, and you would like to know how much your site would improve if you moved the database to a new server, but you are unsure whether the additional cost would be worth it.
- You have a new web game that scales horizontally, but you are not exactly sure how many users each farm can handle.
- Your site works flawlessly most of the time, but certain users are experiencing random problems. You can run continuous, long tests to catch them.
- You just changed a part of your website or switched to new database software, and you do not really know how it will fare in a real environment.
- And a slew of other situations you can think of.
As you can see, having access to a tool like this can save you a lot of headaches. Testing has really become a standard for systems that need to scale, and if you have been following the growth of the Internet in recent years, you already know that sudden growth can happen to any site, at any moment. It is better to be prepared.