Internet protocols are changing

By Mark Nottingham on 12 Dec 2017

Category: Tech matters


When the Internet started to become widely used in the 1990s, most traffic used just a few protocols: IPv4 routed packets, TCP turned those packets into connections, SSL (later TLS) encrypted those connections, DNS named hosts to connect to, and HTTP was often the application protocol using it all.

For many years, there were negligible changes to these core Internet protocols; HTTP added a few new headers and methods, TLS slowly went through minor revisions, TCP adapted congestion control, and DNS introduced features like DNSSEC. The protocols themselves looked about the same ‘on the wire’ for a very long time (excepting IPv6, which already gets its fair share of attention in the network operator community).

As a result, network operators, vendors, and policymakers that want to understand (and sometimes, control) the Internet have adopted a number of practices based upon these protocols’ wire ‘footprint’ — whether intended to debug issues, improve quality of service, or impose policy.

Now, significant changes to the core Internet protocols are underway. While they are intended to be compatible with the Internet at large (since they won’t get adoption otherwise), they might be disruptive to those who have taken liberties with undocumented aspects of protocols or made an assumption that things won’t change.

Why we need to change the Internet

There are a number of factors driving these changes.

First, the limits of the core Internet protocols have become apparent, especially regarding performance. Because of structural problems in the application and transport protocols, the network was not being used as efficiently as it could be, hurting end-user-perceived performance (in particular, latency).

This translates into a strong motivation to evolve or replace those protocols because there is a large body of experience showing the impact of even small performance gains.

Second, the ability to evolve Internet protocols — at any layer — has become more difficult over time, largely thanks to the unintended uses by networks discussed above. For example, HTTP proxies that tried to compress responses made it more difficult to deploy new compression techniques; TCP optimization in middleboxes made it more difficult to deploy improvements to TCP.

Finally, we are in the midst of a shift towards more use of encryption on the Internet, first spurred by Edward Snowden’s revelations in 2013. That’s really a separate discussion, but it is relevant here in that encryption is one of the best tools we have to ensure that protocols can evolve.

Let’s have a look at what’s happened, what’s coming next, how it might impact networks, and how networks impact protocol design.

HTTP/2

HTTP/2 (based on Google’s SPDY) was the first notable change — standardized in 2015, it multiplexes multiple requests onto one TCP connection, so requests no longer need to be queued on the client or block one another. It is now widely deployed, and supported by all major browsers and web servers.
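
To make the multiplexing concrete, here is a minimal sketch using the third-party Python httpx library (my choice of client, purely for illustration): with http2=True, requests to the same origin reuse a single TCP connection as separate streams instead of queueing behind one another.

```python
# pip install "httpx[http2]"   (third-party library, not part of the standard library)
import httpx

# One Client holds one connection pool; with http2=True, requests to the same
# origin share a single TCP+TLS connection. With an async client, concurrent
# requests would be multiplexed as parallel streams on that connection.
with httpx.Client(http2=True) as client:
    urls = [
        "https://www.example.com/",            # substitute any HTTP/2-capable origin
        "https://www.example.com/robots.txt",
        "https://www.example.com/favicon.ico",
    ]
    for url in urls:
        resp = client.get(url)
        # http_version reports "HTTP/2" when the server negotiated it
        print(url, resp.status_code, resp.http_version)
```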

From a network’s viewpoint, HTTP/2 made a few notable changes. First, it’s a binary protocol, so any device that assumes it’s HTTP/1.1 is going to break.

That breakage was one of the primary reasons for another big change in HTTP/2; it effectively requires encryption. This gives it a better chance of avoiding interference from intermediaries that assume it’s HTTP/1.1, or do more subtle things like strip headers or block new protocol extensions — both things that had been seen by some of the engineers working on the protocol, causing significant support problems for them.

HTTP/2 also requires TLS/1.2 to be used when it is encrypted, and blacklists cipher suites that were judged to be insecure — with the effect of only allowing ephemeral keys. See the TLS 1.3 section for potential impacts here.
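
Both requirements are visible at the TLS layer: HTTP/2 is negotiated via the ALPN extension (token ‘h2’), and the negotiated cipher suite has to fall outside that blacklist. A small standard-library sketch that peeks at what a given server negotiates (the hostname is just a placeholder):

```python
import socket
import ssl

host = "www.example.com"  # placeholder; any HTTPS server will do

ctx = ssl.create_default_context()
# Offer HTTP/2 ("h2") with a fallback to HTTP/1.1 via ALPN.
ctx.set_alpn_protocols(["h2", "http/1.1"])

with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print("ALPN protocol:", tls.selected_alpn_protocol())  # 'h2' if HTTP/2 was chosen
        print("TLS version:  ", tls.version())                 # e.g. 'TLSv1.2' or 'TLSv1.3'
        print("Cipher suite: ", tls.cipher())                  # (name, protocol, secret bits)
```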

Finally, HTTP/2 allows more than one host’s requests to be coalesced onto a connection, to improve performance by reducing the number of connections (and thereby, congestion control contexts) used for a page load.

For example, you could have a connection for www.example.com, but also use it for requests for images.example.com. Future protocol extensions might also allow additional hosts to be added to the connection, even if they weren’t listed in the original TLS certificate used for it. As a result, the assumption that the traffic on a connection is limited to the purpose it was initiated for no longer holds.
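
As a rough illustration, a client deciding whether to coalesce might check that the candidate host resolves to an address the existing origin also uses, and that the origin's certificate covers the candidate name. The helper below is purely illustrative (and ignores wildcards); real browsers apply stricter rules.

```python
import socket
import ssl

def cert_dns_names(cert):
    """DNS names a certificate claims to cover (its subjectAltName entries)."""
    return {value for (kind, value) in cert.get("subjectAltName", ()) if kind == "DNS"}

def can_coalesce(connected_host, candidate_host, port=443):
    """Very rough coalescing check: shared address plus certificate coverage."""
    def addrs(host):
        return {info[4][0] for info in socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)}
    if not addrs(connected_host) & addrs(candidate_host):
        return False

    # Fetch the origin's certificate; a real client would already have it from
    # the existing connection rather than opening a new one.
    ctx = ssl.create_default_context()
    with socket.create_connection((connected_host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=connected_host) as tls:
            cert = tls.getpeercert()
    return candidate_host in cert_dns_names(cert)

# Example (both names must exist and actually share hosting for this to be True):
# print(can_coalesce("www.example.com", "images.example.com"))
```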

Despite these changes, it’s worth noting that HTTP/2 doesn’t appear to suffer from significant interoperability problems or interference from networks.

TLS 1.3

TLS 1.3 is just going through the final processes of standardization and is already supported by some implementations.

Don’t be fooled by its incremental name; this is effectively a new version of TLS, with a much-revamped handshake that allows application data to flow from the start (often called ‘0RTT’). The new design relies upon ephemeral key exchange, thereby ruling out static keys.
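
Where both ends support it, the new version can already be exercised from Python's standard ssl module (Python 3.7+ built against OpenSSL 1.1.1 or later); a minimal sketch, with the hostname as a placeholder:

```python
import socket
import ssl

host = "www.example.com"  # placeholder; needs a TLS 1.3-capable server

ctx = ssl.create_default_context()
# Refuse anything older than TLS 1.3; the handshake simply fails against
# servers that haven't been upgraded yet.
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.version())  # 'TLSv1.3'
        print(tls.cipher())   # e.g. ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256); ephemeral key exchange only
```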

This has caused concern among some network operators and vendors — in particular those who need visibility into what’s happening inside those connections.

For example, consider the datacentre for a bank that has regulatory requirements for visibility. By sniffing traffic in the network and decrypting it with the static keys of their servers, they can log legitimate traffic and identify harmful traffic, whether it be attackers from the outside or employees trying to leak data from the inside.

TLS 1.3 doesn’t support that particular technique for intercepting traffic, since it’s also a form of attack that ephemeral keys protect against. However, since they have regulatory requirements to both use modern encryption protocols and to monitor their networks, this puts those network operators in an awkward spot.

There’s been much debate about whether regulations require static keys, whether alternative approaches could be just as effective, and whether weakening security for the entire Internet for the benefit of relatively few networks is the right solution. Indeed, it’s still possible to decrypt traffic in TLS 1.3, but you need access to the ephemeral keys to do so, and by design, they aren’t long-lived.
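
To make that concrete: an endpoint can export its own per-session secrets, for example in the SSLKEYLOGFILE format that Wireshark understands, and only captures of those particular sessions can then be decrypted. A sketch using Python 3.8+'s ssl module; the file path and hostname are placeholders:

```python
import socket
import ssl

host = "www.example.com"       # placeholder
keylog = "/tmp/tls-keys.log"   # placeholder path

ctx = ssl.create_default_context()
# Write the per-connection (ephemeral) traffic secrets in SSLKEYLOGFILE format.
# A tool like Wireshark can use this file to decrypt a capture of exactly these
# sessions, and nothing else, because the secrets are never reused.
ctx.keylog_filename = keylog

with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        tls.sendall(b"HEAD / HTTP/1.1\r\nHost: " + host.encode() + b"\r\nConnection: close\r\n\r\n")
        tls.recv(4096)

with open(keylog) as f:
    secrets = [line for line in f if line.strip() and not line.startswith("#")]
print(f"{len(secrets)} session secrets logged")
```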

At this point it doesn’t look like TLS 1.3 will change to accommodate these networks, but there are rumblings about creating another protocol that allows a third party to observe what’s going on — and perhaps more — for these use cases. Whether that gets traction remains to be seen.

QUIC

During work on HTTP/2, it became evident that TCP has similar inefficiencies. Because TCP is an in-order delivery protocol, the loss of one packet can prevent those in the buffers behind it from being delivered to the application. For a multiplexed protocol, this can make a big difference in performance.
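
The effect is easy to see in a toy model of an in-order receive buffer: one missing packet holds back everything that arrives after it, even when the later data belongs to unrelated requests. (This is a simulation for illustration, not TCP itself.)

```python
def deliverable(received, next_expected):
    """In-order (TCP-like) delivery: hand data to the application only in
    sequence; a gap blocks everything queued behind it."""
    delivered = []
    while next_expected in received:
        delivered.append(received.pop(next_expected))
        next_expected += 1
    return delivered, next_expected

# Packets 0-5 are sent; packet 2 is lost in transit.
buffer = {seq: f"packet-{seq}" for seq in [0, 1, 3, 4, 5]}
out, nxt = deliverable(buffer, 0)
print(out)  # ['packet-0', 'packet-1']: 3, 4 and 5 wait behind the hole
# Once the retransmission of packet 2 arrives, everything flushes at once:
buffer[2] = "packet-2"
out, nxt = deliverable(buffer, nxt)
print(out)  # ['packet-2', 'packet-3', 'packet-4', 'packet-5']
```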

QUIC is an attempt to address that by effectively rebuilding TCP semantics (along with some of HTTP/2’s stream model) on top of UDP. Like HTTP/2, it started as a Google effort and is now in the IETF, with an initial use case of HTTP-over-UDP and a goal of becoming a standard in late 2018. However, since Google has already deployed QUIC in the Chrome browser and on its sites, it already accounts for more than 7% of Internet traffic.

Read: Your questions answered about QUIC

Besides the shift from TCP to UDP for such a sizable amount of traffic (and all of the adjustments in networks that might imply), both Google QUIC (gQUIC) and IETF QUIC (iQUIC) require encryption to operate at all; there is no unencrypted QUIC.

iQUIC uses TLS 1.3 to establish keys for a session and then uses them to encrypt each packet. However, since it’s UDP-based, a lot of the session information and metadata that’s exposed in TCP gets encrypted in QUIC.

In fact, iQUIC’s current ‘short header’ — used for all packets except the handshake — only exposes a packet number, an optional connection identifier, and a byte of state for things like the encryption key rotation schedule and the packet type (which might end up encrypted as well).

Everything else is encrypted — including ACKs, to raise the bar for traffic analysis attacks.
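
To get a feel for how little this leaves an on-path observer, here is a toy parser for a short header laid out as described above: one flags byte, an optional connection ID, and a packet number. The byte layout is a simplified stand-in of my own, not the draft's actual wire encoding, which was still changing at the time.

```python
import struct
from collections import namedtuple

ShortHeader = namedtuple("ShortHeader", "flags connection_id packet_number payload")

# Toy layout (NOT the real draft encoding): one flags byte, an optional 8-byte
# connection ID if bit 0x40 is set, then a 4-byte packet number. Everything
# after that is encrypted payload and therefore opaque to an observer.
def parse_short_header(datagram: bytes) -> ShortHeader:
    flags = datagram[0]
    offset = 1
    conn_id = None
    if flags & 0x40:  # "connection ID present" in this toy layout
        conn_id = datagram[offset:offset + 8].hex()
        offset += 8
    (packet_number,) = struct.unpack_from("!I", datagram, offset)
    offset += 4
    return ShortHeader(flags, conn_id, packet_number, datagram[offset:])

sample = bytes([0x41]) + bytes(range(8)) + struct.pack("!I", 7) + b"\x99" * 20
header = parse_short_header(sample)
print(header.flags, header.connection_id, header.packet_number, len(header.payload))
# The observer sees only these few fields; ACKs, stream data and everything
# else live inside the encrypted payload.
```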

However, this means that passively estimating RTT and packet loss by observing connections is no longer possible; there isn’t enough information. This lack of observability has caused a significant amount of concern among some in the operator community, who say that passive measurements like this are critical for debugging and understanding their networks.

One proposal to meet this need is the ‘Spin Bit’ — a bit in the header that flips once a round trip, so that observers can estimate RTT. Since it’s decoupled from the application’s state, it doesn’t appear to leak any information about the endpoints, beyond a rough estimate of location on the network.
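
The observer's side of that is simple enough to sketch: watch the spin bit on packets flowing in one direction, and each time it flips, the time since the previous flip is roughly one round trip. A toy simulation over synthetic timestamps:

```python
def rtt_samples(packets):
    """packets: iterable of (timestamp_seconds, spin_bit) observed in one direction.
    Yields an RTT estimate each time the spin bit flips."""
    last_value = None
    last_flip_time = None
    for ts, spin in packets:
        if last_value is None:
            last_value, last_flip_time = spin, ts
            continue
        if spin != last_value:  # the bit flipped: roughly one round trip has elapsed
            yield ts - last_flip_time
            last_value, last_flip_time = spin, ts

# Synthetic trace with an RTT of about 50 ms and several packets per round trip.
trace = [(0.000, 0), (0.010, 0), (0.051, 1), (0.060, 1), (0.099, 0), (0.130, 0), (0.151, 1)]
print([round(rtt, 3) for rtt in rtt_samples(trace)])  # [0.051, 0.048, 0.052]
```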

DOH

The newest change on the horizon is DOH — DNS over HTTP. A significant amount of research has shown that networks commonly use DNS as a means of imposing policy (whether on behalf of the network operator or a greater authority).

Circumventing this kind of control with encryption has been discussed for a while, but it has a disadvantage (at least from some standpoints) — it is possible to discriminate it from other traffic; for example, by using its port number to block access.

DOH addresses that by piggybacking DNS traffic onto an existing HTTP connection, thereby removing any discriminators. A network that wishes to block access to that DNS resolver can only do so by blocking access to the website as well.

For example, if Google were to deploy its public DNS service over DOH on www.google.com and a user configures their browser to use it, a network that wants (or is required) to stop it would have to effectively block all of Google (thanks to how they host their services).
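
For a sense of what a DOH-style lookup looks like to the network (just another HTTPS request), here is a hedged sketch using the third-party requests library against the JSON front end that Google's public resolver exposes. The URL and parameters belong to that service; the DOH specification itself carries binary DNS messages using the application/dns-message media type.

```python
# pip install requests   (third-party library)
import requests

# Ask Google's public resolver for an A record over HTTPS. To a network
# observer this is indistinguishable from any other HTTPS request to Google.
resp = requests.get(
    "https://dns.google/resolve",
    params={"name": "www.example.com", "type": "A"},
    timeout=5,
)
resp.raise_for_status()
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["type"], answer["data"])
```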

DOH has just started its work, but there’s already a fair amount of interest in it, and some rumblings of deployment. How the networks (and governments) that use DNS to impose policy will react remains to be seen.

Read: IETF 100, Singapore: DNS over HTTP (DOH!)

Ossification and grease

To return to motivations, one theme throughout this work is how protocol designers are increasingly encountering problems where networks make assumptions about traffic.

For example, TLS 1.3 has had a number of last-minute issues with middleboxes that assume it’s an older version of the protocol. gQUIC blacklists several networks that throttle UDP traffic, because they think that it’s harmful or low-priority traffic.

When a protocol can’t evolve because deployments ‘freeze’ its extensibility points, we say it has ossified. TCP itself is a severe example of ossification; so many middleboxes do so many things to TCP — whether it’s blocking packets with TCP options that aren’t recognized, or ‘optimizing’ congestion control — that it has become very difficult to change the protocol itself.

It’s necessary to prevent ossification, to ensure that protocols can evolve to meet the needs of the Internet in the future; otherwise, it would be a ‘tragedy of the commons’ where the actions of some individual networks — although well-intended — would affect the health of the Internet overall.

There are many ways to prevent ossification; if the data in question is encrypted, it cannot be accessed by any party but those that hold the keys, preventing interference. If an extension point is unencrypted but commonly used in a way that would break applications visibly (for example, HTTP headers), it’s less likely to be interfered with.

Where protocol designers can’t use encryption and an extension point isn’t used often, artificially exercising the extension point can help; we call this greasing it.

For example, QUIC encourages endpoints to use a range of decoy values in its version negotiation, to avoid implementations assuming that it will never change (as was often encountered in TLS implementations, leading to significant problems).
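
A sketch of the greasing idea as it applies to version negotiation: advertise a decoy version alongside the real ones, so that a peer which hard-codes today's list breaks early, in testing, rather than when a genuine new version ships. QUIC's drafts reserve versions matching a particular bit pattern for this purpose; the constants and pattern below are illustrative rather than normative.

```python
import random

REAL_VERSIONS = [0x00000001]  # stand-in for the versions an endpoint really supports

def grease_version():
    """Make up a reserved decoy version of the form 0x?a?a?a?a (the pattern QUIC
    sets aside for greasing); peers must ignore versions they don't recognize."""
    n = [random.randrange(16) for _ in range(4)]
    return (n[0] << 28 | 0x0a << 24 |
            n[1] << 20 | 0x0a << 16 |
            n[2] << 12 | 0x0a << 8 |
            n[3] << 4  | 0x0a)

def advertised_versions():
    # Sprinkle a decoy among the real versions so that any implementation that
    # assumes "the list only ever contains versions I know" fails fast.
    versions = REAL_VERSIONS + [grease_version()]
    random.shuffle(versions)
    return versions

print([hex(v) for v in advertised_versions()])
```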

The network and the user

Beyond the desire to avoid ossification, these changes also reflect the evolving relationship between networks and their users. While for a long time people assumed that networks were always benevolent — or at least disinterested — parties, this is no longer the case, thanks not only to pervasive monitoring but also to attacks like Firesheep.

As a result, there is growing tension between the needs of Internet users overall and those of the networks who want to have access to some amount of the data flowing over them. Particularly affected will be networks that want to impose policy upon those users; for example, enterprise networks.

In some cases, they might be able to meet their goals by installing software (or a CA certificate, or a browser extension) on their users’ machines. However, this isn’t as easy in cases where the network doesn’t own or have access to the computer; for example, BYOD has become common, and IoT devices seldom have the appropriate control interfaces.

As a result, a lot of discussion surrounding protocol development in the IETF is touching on the sometimes competing needs of enterprises and other ‘leaf’ networks and the good of the Internet overall.

Get involved

For the Internet to work well in the long run, it needs to provide value to end users, avoid ossification, and allow networks to operate. The changes taking place now need to meet all three goals, but we need more input from network operators.

If these changes affect your network — or won’t — please leave comments below, or better yet, get involved in the IETF by attending a meeting, joining a mailing list, or providing feedback on a draft.

Thanks to Martin Thomson and Brian Trammell for their review.

Mark Nottingham is a member of the Internet Architecture Board and co-chairs the IETF’s HTTP and QUIC Working Groups.


The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.

43 Comments

  1. Bill

    Wow, maybe there is hope for an encrypted and unencumbered internet for our future. This was a refreshing read, thanks for the post.

    And I suppose thanks to all the engineers at the IETF (et al) that are working their hardest to preserve a free-as-in-freedom and open internet.

    My only comment is that, although enterprises and banks etc all have a legitimate usecase for crippling the security of their employees and other client software, despite that legit use, it is still a crippling of security. Bending to such needs is how we got into the situation where all modern x86 chips around the world are necessarily remotely owned with no workaround or way to disable it. I ask that you do not acquiesce to these requests, that you do not allow a general way in the various protocols for networks to, as you describe it, “impose policies” on their users. That is nothing less than the destruction of liberty and openness on the internet, and it’s in everyone’s best interest to preserve that to the utter maximum extent possible. I’m not sure how to solve the problem where business legitimately need to control their own computers remotely, but it should not be by the crippling of public and global protocols.

    Thanks again for the years of hard, unrecognized service on these topics, and thanks for taking the time to share this post.

  2. Kyle

    Thank you for looking at ways to engineer a robust open platform. Experience has shown we cannot rely on political action nor network benevolence to guarantee it. I echo Bill’s concerns about not acquiescing to the needs of “leaf” networks — if enterprise networks absolutely must “enforce policy” in certain ways, then they need to be responsible for deploying the intentionally-crippled hardware to their employees, rather than demanding the network itself be crippled so they can foist the cost of hardware onto their workers.

  3. Jack Smith

    Think one of the most interesting is the Google network stack they developed for Spanner.

    It is a determinate network stack and kind of like a virtual circuit switch. Lowers cost for network gear with the smarts on the edges.

    Google has a paper that covers the protocol and highly recommend.

  4. Sam

    “Finally, we are in the midst of a shift towards more use of encryption on the Internet, first spurred by Edward Snowden’s revelations in 2015.”

    Citation, please? The linked paper shows that SSL usage has been increasing steadily since at least 2013 (the earliest year it measured), and there’s no noticeable spurt that would correspond to Snowden. The paper doesn’t mention him at all.

    We’ve known about Room 641A for a decade. I don’t think Snowden’s reveal was a revelation for anyone who knew what HTTPS was, or had the means to implement it.

    1. Evan

      I think it might be that Snowden et. al. pushing the idea of vulnerability on the mainstream has more end users desiring encryption/etc., motivating companies like Google that make a living collecting/storing user data to try and make their users feel safe. Like yeah a journalist in Iran has always been conscious of data security but mass surveillance pushed into the public eye more and more makes average Joe want to encrypt his email too.

  5. Kathleen

    Several of the efforts to make it easier to encrypt, like “Let’s Encrypt”, got buy-in for IETF standardization after Snowden and the publication of RFC7258. Other similar efforts had come before it, but timing and running code helped this one, and it has had a substantial impact on the increase in encrypted web traffic. There was an EFF report and other statistics right after Snowden that showed a clear connection, and the efforts after that which were motivated by Snowden can be seen in various IETF presentations.

  6. Chip Sharp

    Mark,
    It would be very useful to include the status of each effort and whether it is standards-track or informational.

  7. Jeroen

    The real reason for “moar TLS” is not Snowden (though it definitely helped and did good to release all that data: more paranoia at TLAs) but availability of common fast crypto hardware (read AES support in standard cpus and loadbalancer gear) and more importantly: that the large advertising networks did not want their ads to be changed/removed by inline filtering devices….

  8. Toerless Eckert

    “Google Browser support protocols” would have been a more appropriate title to describe the majority of protocols discussed here.

    Of course Google never said “L’Internet c’est moi!”, like Louis XIV never said “L’état c’est moi!”, but it’s easy to assume they did.

  9. Joe Mele

    In addition to this, why cant we move to a system where domain names are like crypto currency on a blockchain. so registrar and their fees are not needed. we all know the right owner of domain name would be in the block chain. a wallet would have public ip address for the domain to point to. and the wallet in the blockchain is the current owner of the domain. etc

  10. Richard Kay

    @Joe Mele: “why cant we move to a system where domain names are like crypto currency on a blockchain”

    1. See Metcalfe’s Law – objects already have a popular globally unique naming system and it will be difficult for a new protocol to compete with the existing DNS network. Properties for a new protocol to be accepted have to add sufficient value to be used, e.g. see Tor/Onion routed hidden services.

    2. So who pays for the mining, and you think this is cheaper than the current system ? Also because protocols where 1KWh == 1 vote are bad for the planet, so won’t be used or accepted and will be actively blocked by those who care. Blockchain consensus systems which avoid wasted electricity involve central authority (e.g. a trusted CA signing node keys accepted by user community as contributing to network consensus). Having a central authority will have to be funded, hence no advantage over the current system.

  11. wtarreau

    Hi Mark!

    We still need to find the right spot between policy enforcement and end-user security, otherwise we’ll end up preventing new protocols from being adopted. For example, networks *need* protections such as firewalls to protect against remote attacks. By making protocols harder to inspect, we’re effectively making them harder to follow, and stateful firewalls will be harder to build, if at all (e.g. QUIC doesn’t even allow an observer to detect an ACK). This will cause leaf nodes to be permanently connected to the net without any form of network protection, thus reachable from outside on UDP from port 443. Nowadays most leaf nodes run behind NAT due to the shortage of IPv4 address space.

    Those of us who have IPv6 at home know that our IPv6 nodes tend to be a bit less protected by being permanently visible from outside and absolutely need to be protected by a firewall. I fear that by making it harder to protect leaf nodes against attacks on future protocols, we’ll make IPv6 perceived as much more dangerous for leaf nodes, which itself will become a real pain as it will still not allow transparent end-to-end communications. I already predict that we’ll see a new class of malware deployed on leaf nodes using UDP because it’s impossible to differentiate their traffic from outgoing QUIC traffic, and that many more leaf nodes will permanently be part of botnets. For example, many firewalls only allow one or a few packets from port 53 in response to an outgoing packet, to prevent unauthorized traffic from claiming to be DNS traffic and allowing intruders to access an end point. Here it will not be possible from source port 443, since we’ll never know when a connection is terminated, if it was started at all.

    Thus my point is that we need to be very careful that building devices to protect infrastructure and end points is still possible without making it easy to build devices to inspect contents and communications.

  12. Shawn Moore

    This a good read, and will be interesting on how things pan out. As mentioned ease of use will be important as I feel that’s reason most of us are still on IPv4.
    People are scared to even understand IPv6 address formatting because of the 128 bit addressing let alone implement it, even in a test environment.

  13. Dan Foote

    I find it interesting that none of the comments that favor “unrestricted, unfettered flow of traffic that’s encrypted” because “it breaks the internet and security” acknowledge that, with upwards of 70% of all traffic now encrypted, so is the malware that infects machines. Expecting reliance on anti-virus alone makes little sense, and not decrypting packets at the perimeter means that any sort of gateway anti-virus is also ineffective.
    Businesses must have the ability to examine & inspect traffic onto their networks or else they’ll be sitting ducks for future waves of malware, ransomware, et al.

  14. Etienne

    All the changes listed still assume a top/down relationship, a provider/consumer model. Would be nice to have a “I want this file (UUID)” from consumer to be served by a computer as close as possible, to reduce network duplicate transmission. Allows mesh network, cryptographic signature might be served by initial server, download only once software updates, fonts, popular videos, …

  15. Baldeep

    Coping with larger amounts of data to keep end users happy is key.

    User experience will help drive repeat visits and enable website owners to be rewarded when they offer services relevant to the customer.

  16. Bongoman

    One thing that is not stressed enough IMO is that protocols are, at least in part, a common good: we must not only be able to implement them, but also observe, measure, analyse their behaviours, in order to improve them, and make various kinds of prediction.

    The opacification trend seems to be creating a fundamental shift from public to private of what started as a common good. ISTM that as a byproduct of the demand for more privacy we get an increase in privatisation. Not surprisingly, Google & the likes – whose core business is based on being able to creating VPNs from their datacentre to your data – are very content and supportive of this recent (western world) social trend.

    Are we building protocols in the interest of the current incumbents? Or are we building for the common good?

  17. online college help

    It’s really interesting article! At this time I’m working as a writer, but I’d like to become a pro in telecommunications. Always I were interested in different protocols and web-clients. And I think that I’ll get really amazing perspectives in this field.
    So could someone advice me some useful topics like this?
    I’ll be grateful to any answer.
    Thank you that you are sharing such useful end rare content.

  18. JoAnn Chateau

    As a WordPress blogger and general Internet user, I’ve suspected Internet protocol changes. I experience them as user-UN-friendly (needing to click-twice for everything, right-click works like left-click, difficulties in placing the cursor, etc.) This article is the only one I’ve found, so far, that comes near to addressing the issues I experience. However, it’s too technical for me to fully grasp, and I’m unsure that it relates to my personal Internet experience concern.

    Never-the-less, and for the record, from my viewpoint, I wonder if the changes are about touch-screen technology, prioritizing advertisers, or the result of pressure from Homeland Security.

    Whatever the reason, if using the Internet gets any more annoying, I’m going back to reading by firelight… for information AND entertainment.

  19. TCP/IP protocol

    IP switching is one of the most used technique which routes data packets faster than the traditional method by using layer 3 switches.

  20. Niyi

    Thanks for that, wtarreau. I believe that the greatest challenge facing the internet now is security. Designing protocols that are uninspectable makes me cringe.

  21. Jasmine Mishra

    Hello,
    Thank you very much for this informative blog.You explain all the protocol’s very clear and understandable languages.This information is very helpful for all. Thank you so much…Keep sharing

  22. nsoumyaranjan

    I’d love to be a part of group where I can get advice from other experienced people that share the same interest. If you have any recommendations, please let me know. Thank you.

  23. Cognex

    The article was absolutely fantastic! Lot of great information which can be helpful in some or the other way. Keep updating the blog, looking forward for more contents.

  24. CCNA COURSE IN DELHI

    Great! I read your blog first time and It is very informative post. Keep sharing more this type of valuable blog.

  25. Prestige Park Grove

    Great article! The information presented here is well-researched and provides valuable insights on the subject.

