Some interesting bits about latency

In a recent article we talked about server uptime and how we can anticipate related problems. This time we are going to cover another common networking concept: latency.

Are you connected to the Internet? Do you access faraway websites? Do you play online games? Do you use video conferencing? If so, network latency is something you should care about.

Read on to find out a couple of interesting bits about it.

What is latency exactly?

Let’s go back to basics. Latency is the measure of delay in any given system. In our case we specifically care about “network latency”: the delay you experience when connecting to a remote system over a network.
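
To make this concrete, here is a minimal Python sketch that estimates network latency by timing how long a TCP connection takes to establish, which costs roughly one round trip (the host and port are placeholders; substitute any reachable server):

```python
# Minimal latency estimate: time the TCP handshake, which costs
# roughly one round trip between you and the server.
import socket
import time

def tcp_connect_latency(host: str, port: int = 443) -> float:
    """Return the TCP connection setup time in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # the handshake is done; we only wanted the timing
    return (time.perf_counter() - start) * 1000

# "example.com" is a placeholder destination
print(f"latency: {tcp_connect_latency('example.com'):.1f} ms")
```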

Every time you connect to a website, you send and receive packets over the Internet, which is nothing more than a big network of networks. Your packets automatically and transparently travel to their destination, and while there are a lot of operations going on backstage, we generally don’t have to worry about them. The end result? A frame of video, a web page with images in our browser, the position of a monster in a game, or our latest e-mails.

Saying that this level of abstraction is important is an understatement. Thanks to this clever way of hiding what’s going on internally, we can focus on what we do best, whether in our line of work, our spare time or our education.

What we do worry about is how much latency we have, since it affects us directly. Its sibling, bandwidth, is just as important (or more so in some cases). Bigger bandwidth lets more packets arrive at the same time, and once the initial connection has been established, latency is usually not a problem.
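
A rough way to see the interplay: total transfer time is approximately latency plus size divided by bandwidth. For one big download the bandwidth term dominates; for many small sequential requests the latency term does. A quick sketch with made-up but plausible numbers:

```python
# Transfer time ≈ latency + size / bandwidth (a very rough model,
# ignoring handshakes, slow start and protocol overhead).
latency_s = 0.100                    # 100 ms round trip
bandwidth_bps = 100_000_000 / 8      # 100 Mbit/s, in bytes per second

big_file = 10_000_000                # one 10 MB download
print(latency_s + big_file / bandwidth_bps)   # ~0.9 s: bandwidth-bound

small_files = 100                    # 100 sequential 10 kB requests
print(small_files * (latency_s + 10_000 / bandwidth_bps))  # ~10 s: latency-bound
```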

But for some types of services, it is an issue. And that’s due to the way the underlying language of the Internet works (a suite of protocols called TCP/IP). You see, when you send a packet using TCP, you have to receive a confirmation that the server got it before sending the next one. There is another, less reliable protocol called UDP, but let’s not wander into that territory.
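
To illustrate why that matters, here is a rough Python sketch (the host and request count are placeholders, and the response handling is simplified) showing how sequential request/response exchanges over TCP each cost at least one round trip, so the delays add up:

```python
# Each exchange waits for the previous reply, so n sequential
# requests cost roughly n round trips. Real clients mitigate this
# with parallel connections, pipelining and keep-alive tricks.
import socket
import time

HOST, PORT, N = "example.com", 80, 5  # placeholder host and count

start = time.perf_counter()
with socket.create_connection((HOST, PORT), timeout=5) as sock:
    for _ in range(N):
        sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
        sock.recv(4096)  # block until the reply starts arriving (simplified)
elapsed = (time.perf_counter() - start) * 1000
print(f"{N} exchanges took {elapsed:.0f} ms (~{elapsed / N:.0f} ms each)")
```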

Latency sources

So, what specifically affects network latency? Even with the world’s complex network topology, we can pinpoint how and where delays come from.

First of all, each time you connect to a server, your request travels through a series of nodes all over the world. It goes to your ISP first, which then finds the shortest available route onward. As you can imagine, the farther away the server, the more nodes the packet has to travel through. To add to the complexity, these nodes are controlled by different companies, in different countries, with different policies and hardware. Each one of these nodes adds to the final latency.
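
You can actually watch this happen with the traceroute tool, which lists every hop between you and a destination along with the round-trip time each hop reports. A small convenience wrapper, assuming a Unix-like system with traceroute installed (on Windows the equivalent command is tracert):

```python
# Print every hop between this machine and a destination,
# with the round-trip times each hop reports.
import subprocess
import sys

host = sys.argv[1] if len(sys.argv) > 1 else "example.com"  # placeholder
subprocess.run(["traceroute", host], check=True)
```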

Sometimes one cannot help but wonder how all of this works continuously for millions of people, every day.

[Image: submarine cables map]

Above you can see the topology of submarine cables, from the site Submarine Cable Map. There are other types of connections too, via satellite for instance, or through terrestrial cables.

Fortunately, most of the time (unless you are very unlucky with your ISP and location) this doesn’t represent a problem. But it’s not the only source of latency; we also have the endpoint hardware. That’s right: you can have the best connection to a server, you can even be sitting around the corner from the datacenter, but if that specific server is misbehaving, you can get added latency or, worse, packet loss. Hardware problems are usually related to a faulty cable, a saturated router, power issues of all kinds or the many other scary things that give system administrators nightmares.

Another source of latency is software. Again, you can have the best hardware and the best location relative to the datacenter, but if the software cannot process your requests quickly enough, you will see added latency that can vary greatly.

Last but definitely not least, light itself is a limitation. Yes, you heard that right. Part of the problem lies in converting digital signals to light for fiber optics and back again. And even without that overhead, going halfway around the globe costs noticeable milliseconds for services that require the lowest latency possible. Wikipedia has an interesting read on the subject.
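
A quick back-of-the-envelope calculation shows the scale of this limit. Light in optical fiber travels at roughly two thirds of its speed in a vacuum, and real cable routes are longer than the straight-line distance, so these figures are a best case:

```python
# Physical lower bound on latency for a trip halfway around the globe.
C_KM_S = 299_792        # speed of light in vacuum, km/s
FIBER_FACTOR = 0.67     # approximate speed in fiber relative to c
distance_km = 20_000    # roughly half the Earth's circumference

one_way_ms = distance_km / (C_KM_S * FIBER_FACTOR) * 1000
print(f"one way: {one_way_ms:.0f} ms, round trip: {2 * one_way_ms:.0f} ms")
# prints roughly: one way: 100 ms, round trip: 199 ms
```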

Worth mentioning is that all of this yields an average network latency under normal conditions. One should expect spikes related to unforeseen circumstances: a natural disaster, a denial-of-service attack, human error, a cat tripping over a cable somewhere or faulty software. Overwhelming, isn’t it?

Some examples

Let’s look at some concrete examples of real-world latency. Below is a graph showing different situations:

[Image: average latencies graph]

It’s interesting to note that latency can vary greatly, from a few milliseconds to a few hundred. Of course, latency to Mars is something that, unless we are NASA, we don’t have to worry about.

Talking about concrete usage, here is a rough guide to which latency ranges work for several common situations.

Regular websites

A regular website is your run-of-the-mill everyday site that you load up in the morning or check regularly. In this case, latency is usually not an issue. The site could be located really far away, with latency higher than 500 ms, and you would probably still not care. What matters more here is ensuring that demand is met with enough servers. Extreme delays in this case are usually related to underwhelming hardware or faulty software.

Acceptable latency range: 100-800 ms.

Heavy websites

Heavy websites are those with a great deal of assets: JavaScript files, images, etc. Latency here can affect load times because many connections are being established to different servers. Heavy sites alleviate this by mirroring content across several regions, through their own datacenters or through third-party services (commonly referred to as CDNs, or Content Delivery Networks).

Acceptable latency range: 50-400 ms.
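
One way to peek at those regional mirrors in action: resolving a CDN-hosted name typically returns an edge server near you, so the same hostname maps to different addresses in different regions (cdn.example.com below is a placeholder; try any CDN-hosted site):

```python
# List the IP addresses a hostname currently resolves to; with a CDN,
# these usually point at an edge location near your resolver.
import socket

host = "cdn.example.com"  # placeholder hostname
addresses = {info[4][0] for info in socket.getaddrinfo(host, 443)}
print(addresses)
```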

Web-based remote systems

In this increasingly web-connected world, it’s not unheard of for your company to use a remote system via the web. Chances are you moved from a standalone application and expect responsiveness; latency is tied directly to that. You will notice high latency if you are used to getting instant feedback from an on-screen action. In this situation, the location of the datacenter matters.

Acceptable latency range: 30-300 ms.

Casual online games (Facebook, web)

Games are a big part of the web now, and Facebook and web games are played by millions of people all over the world. These games tend to be downloaded completely before playing (transparently, as a Flash application for instance) or designed with soft latency in mind. This means the actions you perform happen on the order of seconds, and developers have been clever enough to design the game experience around that, faking some actions in your browser and using the server as the final authority when deciding whether an action was legal (there is a small sketch of this pattern below).

Acceptable latency range: 200-1000 ms.
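
A toy Python sketch of that pattern, often called client-side prediction with an authoritative server (the class and its fields are purely hypothetical, not any particular game’s API):

```python
# The client applies an action immediately so the game feels instant,
# then rolls it back if the server later rejects it.
class Client:
    def __init__(self):
        self.coins = 100
        self.pending = []  # actions shown locally, not yet confirmed

    def buy_item(self, price: int):
        self.coins -= price          # optimistic update: UI reacts instantly
        self.pending.append(price)   # remember it until the server answers

    def on_server_reply(self, price: int, accepted: bool):
        self.pending.remove(price)
        if not accepted:             # the server is the final authority
            self.coins += price      # roll the optimistic update back

client = Client()
client.buy_item(30)                  # feels instant to the player
client.on_server_reply(30, accepted=True)
print(client.coins)                  # 70
```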

Action games (first-person shooters or games like Dota 2 or League of Legends)

Action games are those where you need quick reflexes, fast aiming and split-second decisions. Latency here is of the utmost importance, since a few milliseconds can be the difference between winning and losing. You can imagine the rage this provokes when you trained hard, only to realize you lost because you were farther from the server.

Companies try to solve this by setting up servers distributed all over the world, in order to give all players a fair chance.

Acceptable latency range: 10-150 ms.

Stock exchange

Imagine placing a sell transaction where thousands of dollars (or even more) are at stake. Now imagine that a competitor bought the same as you, 10 ms earlier. One can see how extremely important latency is in this case. Naturally, it’s not that simple, and the reliability of the transactions is just as important. Many companies even have their own private networks directly connected to the exchange’s servers, to make sure the information has not been tampered with and can travel as quickly as possible.

Acceptable latency range: 5-100 ms.

Remote administration: Linux

Managing remote servers (such as those in City Cloud) requires a shell connection, most of the time through SSH. Since this is a text-based system and the commands are short and simple, we don’t need really low latency. Of course, getting characters printed on the screen as fast as possible will make your life easier if you make a lot of mistakes, but even so, it isn’t a big deal.

Acceptable latency range: 50-500 ms.

Remote administration: Windows

Administering Windows usually means using Terminal Services, which is graphics-based. Here, sending frames of screen information back to you quickly is key to smooth operation. Unfortunately, the higher the latency, the worse it gets, and while you can still operate a Windows machine with over one second of latency, it’s not recommended.

Acceptable latency range: 50-250 ms.

Conclusions

As you can see, we have reached a point where there is no place on Earth our networks cannot reach. We can manage a server in Antarctica and then play an online game with our friends; we can view photos on a faraway website and then download a big file from halfway around the globe.

The challenges now are not about reaching farther places but about connecting as many people as possible. Even bigger challenges have to do with keeping that information neutral and accessible from everywhere; in other words, decentralization and democratization.

What all of this means in terms of latency is that it’s going to get lower and lower, eventually.