Over at the CACM website, there’s an interesting and entertaining explainer about the complexities of buffering, bandwidth, latency, and delay. Written by David Collier-Brown and titled ‘You Don’t Know Jack about Bandwidth’, it’s well worth a read.
The issue is that customers (consumers and users) often interpret delay as ‘my Internet is slow’, even when their access link speed and their ISP’s bandwidth are fine. The real problem may be that the content they’re accessing is hosted in an unexpectedly distant location, creating a long delay path; if the distance is great enough, the delay becomes noticeable. Latency shows up in various ways and is closely tied to bufferbloat: the additional delay introduced when queues in the devices between the user and the destination fill up, which is particularly damaging to two-way protocols that wait for acknowledgements.
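To make the bufferbloat point concrete, here’s a minimal back-of-the-envelope sketch (not from the article; the buffer size and link rate are assumptions chosen for illustration) of how a deep, full queue translates directly into added delay:

```python
# Minimal sketch (not from the article): how a deep, full buffer becomes
# added latency. The buffer size and link rate below are illustrative only.

def queuing_delay_ms(buffer_bytes: int, link_rate_bps: float) -> float:
    """Time to drain a full buffer at the link rate, in milliseconds."""
    return (buffer_bytes * 8) / link_rate_bps * 1000

# A 1 MB buffer in front of a 10 Mbit/s link adds ~800 ms before a newly
# arriving packet even reaches the wire -- classic bufferbloat territory.
print(queuing_delay_ms(buffer_bytes=1_000_000, link_rate_bps=10_000_000))  # ~800.0
```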
You can’t ignore the laws of physics, especially the speed of light. While people often know that light travels at 3 × 10^8 metres per second, that’s only true in a vacuum. Signals in electrical cables propagate at about 2.7 × 10^8 metres per second, roughly 90% of that speed. Through optical fibre it’s slower still, around 2 × 10^8 metres per second, so a signal travelling at two-thirds the speed of light is actually the expected case.
The key point here is that distant locations are often much farther away than people realize. For example, if your data packets are being served from Japan while you’re in Europe, the one-way propagation delay, even without any routers or switches along the way, is around 60 milliseconds. That’s noticeably longer than the delay you’d experience fetching the same data from a nearby Content Delivery Network (CDN).
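A quick worked example, assuming a fibre path of roughly 12,000 km between Europe and Japan (a figure chosen for illustration, not taken from the article), shows where the 60 ms comes from, and why a nearby CDN node is so much better placed:

```python
# Back-of-the-envelope propagation delay, using the speeds quoted above.
# The ~12,000 km Europe-Japan cable-path length is an assumption for
# illustration; real routes vary.

C_VACUUM = 3.0e8   # m/s, speed of light in a vacuum
C_COPPER = 2.7e8   # m/s, typical electrical cable (~90% of c)
C_FIBRE  = 2.0e8   # m/s, optical fibre (~two-thirds of c)

def one_way_delay_ms(distance_m: float, speed_mps: float) -> float:
    """Propagation delay over a given path, in milliseconds."""
    return distance_m / speed_mps * 1000

path_m = 12_000_000  # assumed Europe-Japan fibre path, ~12,000 km
print(one_way_delay_ms(path_m, C_FIBRE))    # ~60 ms one way
print(one_way_delay_ms(200_000, C_FIBRE))   # ~1 ms to a CDN node 200 km away
```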
Sixty milliseconds might not sound like much, but it compounds once round-trip time comes into play: the time it takes to send data, receive an acknowledgement, and then send more. Over a long delay path, the cumulative effect of those round trips, along with additional delays from electrical-to-optical conversion, switching fabric, and routing, makes the delay far more noticeable than it would be on a shorter, nearby path.
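One way to see why the round trips dominate is the bandwidth-delay relationship: a sender that waits for acknowledgements can never push more than one window of data per round trip, no matter how fast the access link is. A minimal sketch (the 64 KB window and the RTT values are assumptions for illustration, not figures from the article):

```python
# Why latency, not link speed, often sets the ceiling: a single flow with a
# fixed window in flight is limited to window / RTT. Window and RTT values
# here are illustrative assumptions.

def max_throughput_mbps(window_bytes: int, rtt_s: float) -> float:
    """Upper bound on throughput for one flow with a fixed in-flight window."""
    return (window_bytes * 8) / rtt_s / 1e6

WINDOW = 65_535  # bytes, the classic TCP window without window scaling

print(max_throughput_mbps(WINDOW, rtt_s=0.010))  # ~52 Mbit/s at 10 ms RTT
print(max_throughput_mbps(WINDOW, rtt_s=0.120))  # ~4.4 Mbit/s at 120 ms RTT
```

The same link looks an order of magnitude ‘slower’ at 120 ms than at 10 ms, even though its bandwidth hasn’t changed.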
The article suggests that poor Internet performance is more likely due to latency issues than bandwidth limitations. For ISPs dealing with customer dissatisfaction over these delays, it highlights LibreQoS — a tool recently covered on the APNIC Blog — as a potential solution.
People used to say ‘time is money’, but on the modern Internet, time also means delay. Is this a problem we can now solve?