You may have heard motivational speakers and bloggers tell you that the only limitations are those in your mind. While that’s a nice sentiment, these people obviously aren’t familiar with some of the hindrances you can experience maintaining a business IT network. Say hello to ‘latency’ – a limitation that we all need to contend with.
We can’t eliminate latency altogether – but we can reduce it to ensure it won’t detrimentally impact your business – so you can continue to push those boundless goals with minimal drag.
What actually is latency?
Although the underlying processes are complex, the idea of latency is fairly simple: it’s the delay between a request for a data transfer and the data actually being transferred.
Commonly, it’s the reason our favourite TV shows freeze and spin when we’re trying to watch them. A low latency connection has a small delay (usually not noticeable for 99.9% of programs and applications) and a high latency connection has a larger delay (hindering most applications).
Why does latency occur?
To understand why latency occurs, it’s important to get to grips with a handful of other terms. The first of those is bandwidth.
Bandwidth measures the volume of data that can be carried over a network – and throughput is the measure of exactly how much data is actually travelling across that potential bandwidth. To understand why those two factors matter, it’s worth having a quick and dirty understanding of how devices speak to each other on a network – and that’s generally using a language referred to as TCP/IP (Transmission Control Protocol/Internet Protocol).
TCP/IP is a way of establishing a connection between two devices – and, to test that connection, a tiny packet of data is sent from the device that’s requesting data and then pinged back. The time that round trip takes sets the pace at which the ongoing connection will operate – and, as such, it’s hindered by the amount of throughput (or traffic) currently hogging the connections it’s relying on.
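That round trip is easy to see for yourself. As a rough sketch, Python’s standard library can time how long establishing a TCP connection (the initial handshake) takes – here against a loopback listener so the example is self-contained, though in practice you’d point it at a real host:

```python
import socket
import time

def tcp_round_trip_ms(host, port, timeout=5):
    """Time how long establishing a TCP connection (the handshake) takes, in ms."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection made; we only need the elapsed time
    return (time.perf_counter() - start) * 1000

# Demonstrate against a local listener; for a real measurement you'd call
# something like tcp_round_trip_ms("example.com", 443) instead.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
listener.listen(1)
rtt = tcp_round_trip_ms("127.0.0.1", listener.getsockname()[1])
print(f"loopback handshake took {rtt:.3f} ms")
listener.close()
```

On a loopback interface the figure will be a fraction of a millisecond; across the open internet, tens or hundreds of milliseconds is typical – and that gap is exactly the latency this article is about.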
Now, slow-moving traffic on your network starts to cause an ongoing problem – as, when it reaches choke points (such as firewalls or routers), it’ll start to back up. As more and more data backs up in these ‘traffic jams’, the only way a smooth connection can be maintained is if some data is dropped – good for traffic volume, but bad for the application that’s trying to make sense of the now-partial information it’s receiving.
What impact can latency have?
So, latency will slow your traffic down, set a low speed across your network – and even see to it that some of your data is completely discarded – but what does all this mean in real, business terms?
To get a full grasp of what latency can mean, it’s worth considering the worst-case scenario for each of these situations:
Slow data connections
At the very best, a slow data connection represents very poor value for money when you’re probably paying a lot for your internet connection. When slowed, real-time applications will not run, and your end-users may struggle to access SaaS and central applications. The knock-on effect can be damaging – you’ll see enormous declines in customer satisfaction when people don’t receive prompt service.
Lost data
When data is lost, even in small volumes, it can quickly lead to services dropping – or being ‘down’. This downtime can cost businesses huge amounts – especially when it’s a mission-critical application that you use. Could you function without your core applications? And what would be the impact if you simply couldn’t?
There’s no single issue that comes up when you’re struggling with network latency; instead, how latency impacts you will depend on how important your systems are to the service you provide. If, like most businesses, IT forms the backbone of your company, then latency could take its toll.
What can you do about latency?
We know latency can cause problems – but what can you do about it? Well, the first thing to consider is how you’ll get a handle on whether or not it’s causing you any issues. It’s worth using a latency calculator to see what impact the problem may be having on your business – the good news is, most are absolutely free.
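Latency calculators generally lean on one relationship: a TCP connection can move at most one window of data per round trip, so throughput is capped at window size divided by round-trip time – no matter how much bandwidth you’ve paid for. A minimal sketch of that calculation (the window and round-trip figures below are illustrative, not measurements):

```python
def max_tcp_throughput_mbps(window_bytes, rtt_ms):
    """Upper bound on TCP throughput: one window of data per round trip."""
    bits_per_round_trip = window_bytes * 8
    round_trips_per_second = 1000 / rtt_ms
    return bits_per_round_trip * round_trips_per_second / 1_000_000

# A classic 64 KB window over a 40 ms round trip caps out around 13 Mbps --
# far below what a fast leased line can carry. Halving the round-trip time
# doubles the ceiling, which is why reducing latency matters.
print(f"{max_tcp_throughput_mbps(65_536, 40):.1f} Mbps")
```

The useful takeaway is that past a certain point, buying more bandwidth does nothing – only cutting the round-trip time (or widening the TCP window) raises the ceiling.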
When you’ve got an idea of the speed of data transfer that you’re experiencing across your network, you should think about creating a list of applications and processes that you simply could not run your business without. What kind of applications are they? Do you rely on any ‘real-time’ applications – i.e. applications which see users interacting with one-another in real time?
You might also want to think about the real financial implications of losing important systems. Sure, there’s no pressing need to do this while they’re consistently up – but when they’re down unexpectedly and people are asking you what the impact on the business is, it’s useful to know the answer in advance.
When you’ve identified the systems that are vulnerable, you should think about why the delay is occurring. If this is something that’s going to prove problematic, you might want to think about bringing a managed service provider into your business so they can cast expert eyes over your infrastructure. Many managed service providers will work with your business for a set fee each month, allowing you an amount of consultation and maintenance time.
It might feel like there’s little you or a managed network provider can do about latency – after all, your information is likely to go around the world at least once before it reaches its destination – but, in reality, there’s a lot to be done, as most latency issues fall within your four walls, rather than out there in a generally well-optimized world that handles trillions upon trillions of gigabytes of information every day. Start by putting a latency assessment at the top of your ‘things to do’ list – and, when you do, you’re taking a big step toward making sure your applications and end-users can get the job done – and get it done without delay.