Measurement challenges in the gigabit era

By Jason Livingood on 21 Jun 2018

Category: Tech matters


Now is an opportune time for those of us in the technical and research communities to explore what the expanding deployment of gigabit per second (Gbps) end-user connection speeds (that is, the throughput capacity of an end-user’s connection) means for large-scale measurement systems.

The subject is particularly important now that measurement systems, such as the excellent SamKnows system, are widely used to validate ISP speed claims. Such systems prompted a shift in some economies from ISPs loosely advertising ‘up to’ speeds, based on the highest theoretical speed obtainable, to a model in which ISPs measure and report their delivery as a percentage of advertised speeds at both peak and off-peak times.

The impact of effective measurement cannot be overstated. It is not uncommon for network operators to make operational and design decisions based on their understanding of: 1) How speeds are measured; and 2) The limitations and flaws of those measurement methodologies.

And of course, consumers also use web-based measurement tools to see for themselves how their service is performing.

As Internet speeds get faster, and certain customers use their home connections to support an ever-widening range of simultaneous connections, evolving the technical community’s approach to measurement may become a key component in helping to ensure that ISPs are delivering the best possible Internet experience.


Key points:

  • Many of today’s popular measurement systems were developed in the era of single-digit megabit per second access technologies and designed to measure at the lowest capacity link.
  • To be truly valuable to users, future testing should also reflect end-user behaviour.
  • An example of a next-generation speed test might be a set of roughly simultaneous tests to multiple, disparate destinations.

The future of measurement: not just what, but where

Many of the widely deployed measurement systems we have today were developed in the era of single-digit megabit per second (Mbps) access technologies, and they’ve been evolved to adapt to now double and triple-digit Mbps technologies through the use of larger file sizes, multiple TCP connections, expanded test server capacity and performance, and so on.

Traditionally, these systems tended to be designed based on an assumption that the lowest capacity link in any network environment was always the last-mile access network link, which in turn, was the primary focus of many measurement systems.

Systems were designed to measure at that lowest capacity link — where speeds were most likely to be closest to what customers would experience — in order to deliver the most accurate result. But as access capacity rises to several hundred Mbps to 1 Gbps, that assumption makes less sense.

Now, the lowest capacity link between the measurement servers and the end-user’s system can easily shift to some combination of interconnection points, the measurement servers themselves (see footnote 27 of the 2016 FCC Measuring Broadband America Fixed Broadband Report and footnote 18 of the 2015 report), and associated data centre infrastructure, as well as in-home and end-user systems.

The last point may seem obvious — that in-home and user systems can be limiters. However, it is sometimes overlooked that these systems may come with, at best, a 1 Gbps Ethernet interface, so actual throughput will be somewhat below 1 Gbps in the real world, while some older systems may only be capable of a theoretical 100 Mbps. In other cases, such systems may be limited by storage read/write performance, memory or storage capacity, CPU performance, or other factors. This is exacerbated with Wi-Fi, where devices often lack sufficient radio performance to operate at higher speeds, or are located far from the access point, among other factors. All of these factors can significantly affect the speeds that measurement systems report.
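To make that concrete, here is a minimal sketch, assuming a Linux host, of a sanity check a measurement client could perform before interpreting a gigabit result; the interface name ‘eth0’ and the output messages are illustrative placeholders. If the local NIC has only negotiated 100 Mbps, the bottleneck is inside the home, not in the ISP network.

```python
# A minimal sketch, assuming a Linux host; "eth0" is an illustrative placeholder.
# It reads the NIC's negotiated link speed so a gigabit speed test result can be
# interpreted in context.
from pathlib import Path
from typing import Optional


def link_speed_mbps(interface: str = "eth0") -> Optional[int]:
    """Return the negotiated link speed in Mbps, or None if unknown
    (Wi-Fi interfaces, for example, often report no wired link speed)."""
    try:
        speed = int(Path(f"/sys/class/net/{interface}/speed").read_text().strip())
    except (OSError, ValueError):
        return None
    return speed if speed > 0 else None


if __name__ == "__main__":
    speed = link_speed_mbps("eth0")
    if speed is None:
        print("Local link speed unknown; treat gigabit test results with caution.")
    elif speed < 1000:
        print(f"NIC negotiated only {speed} Mbps; the local interface, "
              "not the ISP network, will cap a 1 Gbps test.")
    else:
        print(f"NIC negotiated {speed} Mbps; local Ethernet is unlikely "
              "to be the bottleneck.")
```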

Looking beyond the home and the ISP network to relatively lower capacity links, few measurement systems maintain sufficient infrastructure to enable accurate gigabit speed measurements. And, while a few measurement system operators may have the resources to dedicate sufficient global server and data centre capacity to handle hundreds or more simultaneous 1 Gbps tests, the interconnection link between the measurement servers and the ISP network can also be a critical factor.

Such links can be a factor because a given measurement system operator is rarely a transit or backbone operator; it relies on a third party for transit. That third party has a normal economic incentive to maximize the use of its interconnection capacity, based on statistical multiplexing, and so has little incentive to maintain substantial excess capacity for just one of its many customers — in this case, the measurement system operator — to use for several 1 Gbps test connections (absent some guaranteed or dedicated transit capacity agreement, which would likely be uneconomical for a measurement system operator).

This isn’t to say that the interconnects are particularly overburdened, just that the interconnection capacity requirement for hundreds or more 1 Gbps tests is significant and that if such excess capacity existed on an interconnection link, it would likely be consumed by traffic from one of the many other users of that transit link.
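As a rough illustration of the scale involved, the arithmetic below (with illustrative concurrent-test counts) shows how much spare interconnection capacity would be needed just to carry full-rate gigabit tests:

```python
# Rough arithmetic only; the concurrent test counts are illustrative.
PER_TEST_GBPS = 1.0  # each full-rate gigabit test

for concurrent_tests in (100, 250, 500):
    headroom_gbps = concurrent_tests * PER_TEST_GBPS
    print(f"{concurrent_tests} simultaneous 1 Gbps tests need roughly "
          f"{headroom_gbps:.0f} Gbps of spare interconnect capacity")
```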

So, we can see that the lowest capacity link can shift from the last-mile access network to interconnections, measurement servers, and measurement server data centres. If a key objective of a measurement system remains to gauge an ISP’s ability to deliver advertised speeds, then measurement systems will need to evolve in order to continue to measure aspects of the network over which the ISP has direct or indirect control.

Measuring based on end-user behaviour can improve network design

Modernizing measurement technologies also has the potential to deliver more relevant, useful information to end users, in a world where peak speed may no longer be the most important statistic. To the extent that we can make network measurements reflect actual end-user behaviour, measurement technologies will be eminently more useful.

It is easy to see how a test to determine if 5 Mbps or 25 Mbps is reliably delivered by an ISP is applicable to end users, as this reflects the speed of HD and 4K video streaming, for example. But as of today, the real-world value of a 1 Gbps connection lies less in the ability to deliver 1 Gbps speed to a single application (since there are no current applications outside of speed tests themselves that use a full gig) and more in the ability to distribute that capacity over numerous connected devices and applications for those customers who actually use 1 Gbps.
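As a quick back-of-the-envelope illustration, the snippet below shows how that capacity translates into concurrent application flows at the HD and 4K rates mentioned above:

```python
# Back-of-the-envelope only; stream rates are the illustrative HD and 4K
# figures used in this article.
LINK_MBPS = 1000  # a 1 Gbps connection

for label, rate_mbps in [("HD video (~5 Mbps)", 5), ("4K video (~25 Mbps)", 25)]:
    print(f"{label}: roughly {LINK_MBPS // rate_mbps} concurrent streams")
# -> HD video (~5 Mbps): roughly 200 concurrent streams
# -> 4K video (~25 Mbps): roughly 40 concurrent streams
```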

So, while there is certainly value for a user in being able to confirm that they are receiving the 1 Gbps service they pay for, knowing the speed of a 1 Gbps connection from one single point to another has limited functional value.

By comparison, when your car mechanic checks how your car performs, they test drive it on the street rather than on a race track or drag strip; measurements such as speed tests ought to reflect real-world uses.

To be truly valuable to users, testing of 1 Gbps should also reflect end-user behaviour, particularly if we want to arm consumers with performance statistics to which they can relate and upon which they can make purchasing and other decisions.

This is especially important because networks, devices, and other systems will naturally be optimized to perform as well as possible on such measurement tests, given the strong disincentives for poor performance.

If measurements do not truly reflect end-user behaviour, and networks and related systems are optimized to perform well in that framework, then the result may be a network that performs in ways that may not actually benefit users to the extent that it otherwise might.

If, in addition to point-to-point tests, which will always have value, we are also able to develop tests that reflect how people might actually use the Internet with a multitude of devices and applications, then everyone in the ecosystem, from ISPs to end users, will reap the benefits. This is because the measurement system’s test results will focus on the things that directly correlate to a better end-user quality of experience for everyday usage of the Internet.

What might a next-generation speed test look like?

One approach might be a set of roughly simultaneous tests to multiple, disparate destinations. This might take the form of a supplemental test alongside the single-destination testing that prevails today: for example, a 25 Mbps test to 40 different destinations, or a 10 Mbps test to 100 destinations. In addition, tests from each end-user location can be randomly distributed over time so that they do not place undue load on any server at a given moment.
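As a rough sketch only, and not the design of any existing platform, the example below runs several rate-capped downloads in parallel and reports the aggregate throughput; the destination URLs, the 25 Mbps per-flow cap, and the test duration are hypothetical placeholders.

```python
# Illustrative sketch of a multi-destination speed test: several rate-capped
# downloads run in parallel and the aggregate throughput is reported.
# Destinations, per-flow rate, and duration are hypothetical placeholders.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

PER_FLOW_MBPS = 25          # per-destination target rate
CHUNK_BYTES = 64 * 1024     # read size per iteration
TEST_SECONDS = 10           # duration of each sub-test

# Hypothetical, geographically disparate test endpoints.
DESTINATIONS = [f"https://test-server-{i}.example.net/1GB.bin" for i in range(40)]


def rate_capped_download(url: str) -> int:
    """Download from `url` for TEST_SECONDS, pacing reads to ~PER_FLOW_MBPS.
    Returns the number of bytes received."""
    bytes_per_sec = PER_FLOW_MBPS * 1_000_000 / 8
    received = 0
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            while time.monotonic() - start < TEST_SECONDS:
                chunk = resp.read(CHUNK_BYTES)
                if not chunk:
                    break
                received += len(chunk)
                # Simple pacing: sleep if we are ahead of the target rate.
                expected = bytes_per_sec * (time.monotonic() - start)
                if received > expected:
                    time.sleep((received - expected) / bytes_per_sec)
    except OSError:
        pass  # an unreachable destination counts as zero throughput
    return received


def run_multi_destination_test() -> float:
    """Return aggregate throughput in Mbps across all destinations."""
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=len(DESTINATIONS)) as pool:
        totals = list(pool.map(rate_capped_download, DESTINATIONS))
    elapsed = time.monotonic() - start
    return sum(totals) * 8 / 1_000_000 / elapsed


if __name__ == "__main__":
    print(f"Aggregate throughput: {run_multi_destination_test():.1f} Mbps")
```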

This approach has several key benefits:

  1. The multitude of competing connections is a better reflection of how some users actually use the Internet today. Those users have many devices, from PCs to gaming consoles and IoT devices, that are simultaneously using the network; the era of one PC placing all the demand on a connection over Ethernet is long past, and tests should reflect that.
  2. The size of each test more accurately reflects the typical demands placed on the network by applications, ranging from game and web page downloads to HD and 4K video streams, at 10 Mbps to 25 Mbps.
  3. There is no need for elaborate engineering and operation of super high capacity test server clusters that can handle hundreds of simultaneous 1 Gbps tests since the tests are distributed at more common workload levels that cloud services and other platforms can readily and routinely handle. This likely also helps to significantly reduce the cost of operating a measurement platform, as well as reduces operational and monitoring work.
  4. The need to maintain significant and unrealistic excess capacity at specific interconnection points goes away since the tests are distributed across many points of exchange.
  5. By spreading the test load across multiple destinations with varying round-trip times, ISPs whose subscribers are far from key Internet exchanges (such as rural ISPs and ISPs with subscribers far from the coasts) are put on a more equal footing with more centrally located ISPs.

Where to from here?

It seems the first step for the technical and research communities that focus on Internet performance measurement is to consider these points and think ahead a few years about how measurement platforms might evolve to remain a meaningful tool in the gigabit era.

In particular, operators of measurement platforms used by national regulators, and the ISPs that are the subjects of such tests, should evaluate the design of their throughput tests and the expectations created for end users resulting from the design of those tests. They’ve collectively done a good job to date, and they now have the opportunity to devise effective measurements for 1 Gbps speed tiers.

Now is the time to ask questions about the measurement of gigabit connections, while there is ample time to evolve the design of measurement platforms and systems so that next-generation tests become available as more users around the world subscribe to gigabit access network speeds.

Let me know what you think in the comments below.

The above is a summary of a lightning talk presentation I gave at the Internet Research Task Force (IRTF) meeting, which coincided with IETF 101.

Jason Livingood is Vice President of Technology Policy & Standards at Comcast.


The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.

One Comment

  1. Barry Greene

    “The last point may seem obvious — that in-home and user systems can be limiters.”

    This is an important overlooked point. I just upgraded connections in my home, using a 5 GHz Wi-Fi interconnect to bypass the really old Ethernet switches in the home, and making sure I minimize the 2.4 GHz “community interference” overload from all the other Wi-Fi units in the neighborhood.

    One item I didn’t see pointed out in the article is ‘measurements for results.’ I see this in the mobile space. There are performance measurements which, when KPIs are not met, don’t lead to any meaningful action.

    We get too many researchers building performance measurement tools which show interesting information. When Operations & Engineering teams look at the same data, they ask: “OK – now what? What is broken? How do we fix the problem?”

    As we get to Gbps speeds, focusing on measurements that lead to action is (IMHO) more important than the “this is cool data.”

