LACNIC looks under the browser’s bonnet

28 Sep 2015

Category: Tech matters



Some of the fine vehicles seen in Montevideo during my residency at LACNIC

I have been lucky enough to spend three months at the LACNIC office, in residency with their Research and Development team. I am very grateful to Oscar Robles, LACNIC's CEO, for the opportunity to work with his R&D staff and explore mutual interests.

For some time now, Agustin Formoso, from LACNIC's research and development team, has been looking at end-user measurements for the LACNIC region, using experiments in the browser recruited from website visits.

At APNIC 40's Lightning Talk series he presented his investigations, which have some interesting and surprising implications for everyone. The SlideShare below is a brief summary of his presentation. More expanded results will be available from LACNIC in the near future.

Agustin was interested in the variance in reported fetch times he was seeing in the region, and has been testing how different operating systems and browsers perform when doing fetches in JavaScript.

To do this, he set up a test rig on the Selenium web test environment, which many of us know for testing browser access to software systems. For instance, it has been incorporated into the tests the APNIC Software Development Group runs against new code for the MyAPNIC portal.

What Agustin has done is run a cloud-deployed instance of Selenium with different OS and browser combinations, and then test repeated fetches against his website to see how the timings vary. The answer is that while the timings vary quite a bit on each run, when the individual tests are plotted there is a systematic difference between different OS and browser combinations, even running on the same testbed in the same location.
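The shape of that analysis can be sketched with synthetic numbers: repeated timings for each OS/browser combination vary run to run, but their medians separate systematically. The combinations and figures below are invented for illustration, not Agustin's data.

```javascript
// Synthetic fetch timings (ms) standing in for repeated Selenium runs.
const timings = {
  "OSX/Firefox":   [120, 135, 118, 140, 125],
  "OSX/Chrome":    [160, 175, 158, 181, 166],
  "Linux/Firefox": [190, 210, 185, 220, 198],
};

// Median of a sample set; robust against the run-to-run outliers
// that individual fetch timings show.
function median(samples) {
  const s = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

for (const [combo, samples] of Object.entries(timings)) {
  const spread = Math.max(...samples) - Math.min(...samples);
  console.log(`${combo}: median=${median(samples)}ms spread=${spread}ms`);
}
```

Even with a wide spread within each combination, the medians sit clearly apart, which is the systematic difference the plots reveal.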

The timing variance for different OS and Browsers can be large

Although JavaScript itself is single-threaded, some of the variance in the fetch times may be a function of the basic threading models available on different platforms.

OS X systems are truly threaded and have core OS support for multi-threaded runtime execution. This means that some execution calls can happen asynchronously, outside the JavaScript execution cycle.

Since JavaScript is running inside the browser, the overheads of other browser function calls are likely to look different if this has been explicitly coded for.

Other systems run portable threads in userspace, but essentially emulate threading behaviour inside a single process's execution time. Clearly, the impact of browser-level system overheads will be different in these cases.

There is a stark speed advantage to using Firefox on OS X for JavaScript!

There are also differences in the implementation of JavaScript inside the browser, and these may also account for systematic variances in execution speed, depending on which browser and OS combination you run. Not surprisingly, there is also a huge amount of variance in run time caused by the difference between the wall clock and the execution clock: if your eyes aren't on the browser tab in question, it's quite likely the CPU allocates it less effective runtime to save on overheads.

Agustin was able to show that by adjusting for these variances, he could make a much better estimation of end user delivery time to fetch web assets from a range of sources worldwide, and so improve his model of delays in the LACNIC membership region.
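One way such an adjustment could work, sketched here with invented baseline figures rather than Agustin's actual model, is to subtract an estimated per-platform overhead from each raw fetch time, leaving a better approximation of the network component:

```javascript
// Hypothetical per-platform overhead baselines (ms), as might be
// estimated from controlled testbed runs. Numbers are invented.
const baselineMs = {
  "OSX/Firefox": 40,
  "Windows/Chrome": 95,
};

// Remove a platform's systematic overhead from a raw in-browser
// fetch time; unknown combinations get no adjustment.
function adjustedDelay(rawMs, combo) {
  const overhead = baselineMs[combo] ?? 0;
  return Math.max(rawMs - overhead, 0);
}

console.log(adjustedDelay(240, "OSX/Firefox"));    // 200
console.log(adjustedDelay(295, "Windows/Chrome")); // 200
```

Under this sketch, two very different raw timings can imply the same underlying network delay once the platform overhead is removed.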

The matrix of delays seen in the LAC region after adjusting for browser and OS

APNIC Labs, by comparison, collects the RTT (Round Trip Time) information from packet captures of the 1×1 measurements we see. These are a low-level (usually directly at the kernel/network boundary) indication of end-to-end delay, based on the timing gaps between the three critical initial packets establishing a TCP session: the initial SYN, the subsequent SYN-ACK, and the final ACK.

These three packets form the TCP three-way handshake: the final ACK confirms that an open TCP session has been established, and the SYN-ACK/ACK exchange spans one full round trip. Because these packets carry no user data and don't wait for any user-process interaction, they incur almost no delay from the operating system. The round-trip time can be judged from the microsecond clock differences between the receipt of the first SYN, the sending of the SYN-ACK, and the receipt of the ACK.
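The arithmetic is simple enough to sketch. The timestamps below are invented microsecond values standing in for server-side capture data:

```javascript
// RTT as seen from the server side of the handshake: the gap between
// sending the SYN-ACK and receiving the client's ACK spans one full
// round trip, with no user data or user-process waits in between.
function handshakeRttUs(tSynackSent, tAckReceived) {
  return tAckReceived - tSynackSent;
}

// Invented microsecond timestamps for illustration.
const rtt = handshakeRttUs(1000050, 1035050);
console.log(`RTT = ${rtt} microseconds`); // RTT = 35000 microseconds
```

Because everything happens at the kernel/network boundary, this figure is close to the pure network delay, with none of the browser or OS overheads discussed above.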

Agustin's model of delay is taken higher up, in the browser, and reflects typical end-user delays inside a browser, including HTTP session establishment, TCP overheads, and OS and JavaScript runtime costs. However, it's a mechanism for measurement that anyone can run on their own website.

It's also useful to understand how these times can vary, and what proportion of the time delay can be ascribed to the OS and browser, leading to more certainty about the estimated network delay component.

This is an ongoing body of work and I am sure Agustin is going to get a lot more information out of this activity.
