Why HTTP/3 is eating the world

By Robin Marx on 25 Sep 2023

Category: Tech matters




Adapted from Sven Mieke's original at Unsplash.

The HyperText Transfer Protocol (HTTP) is a cornerstone of the Internet, helping to load web pages, stream videos, and fetch data for your favourite apps.

Last year a new version of the protocol, HTTP/3, was standardized by the Internet Engineering Task Force (IETF), the organization in charge of defining Internet technologies. Since then, HTTP/3 and the related QUIC protocol have seen a rapid uptake on the public web. The exact numbers depend on the source and measurement methodology, with HTTP/3 support ranging from 19% to 50+% of web servers and networks worldwide.

Because these new protocols are heavily used by large companies such as Google and Meta, we can safely say that a large chunk of current Internet traffic already uses HTTP/3 today. In fact, the blog post you’re reading right now was probably loaded over HTTP/3!

In this series, I’ll provide some context on what problems HTTP/3 solves, how it performs, why it’s seen such swift adoption, and what limitations it is still working to overcome.

Why do we need HTTP/3?

A network protocol describes how data is communicated between two entities on a network, typically the user's device and a web server. As many different companies build software for the web, the protocol needs to be standardized so that all this software is 'interoperable'; that is, different implementations can understand each other because they follow the same rules.

In practice, we don't use a single protocol but a combination of several at the same time, each with its own responsibilities and rules (Figure 1). This makes things flexible and reusable — you can use the exact same HTTP logic regardless of whether you're on Wi-Fi, cable, or 4G/5G.

Figure 1 — The protocol stack for HTTP/2 and HTTP/3, showing how multiple protocols are combined to deliver the full Internet functionality.

Many of the original protocols for the Internet were standardized in the 80s and 90s, meaning they were built with the goals and restrictions of those decades in mind. While some of these protocols have stood the test of time, others have started to show their age. Most problems have been solved with workarounds and clever tricks, but it was clear something would eventually have to change. This is especially true for the Transmission Control Protocol (TCP), which ensures your data reliably gets across the Internet.

Why TCP is not optimal for today’s web

HTTP/1.1 and HTTP/2 rely on TCP to successfully do their job. Before a client and server can exchange an HTTP request/response, they must establish a TCP connection.
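That ordering — first the TCP connection, then the HTTP exchange — can be sketched with plain sockets. In this toy illustration, a tiny loopback server stands in for a real web server (this is a sketch, not a production HTTP implementation):

```python
import socket
import threading

def run_toy_server(server_sock):
    """Hypothetical minimal server standing in for a real web server."""
    conn, _ = server_sock.accept()       # TCP three-way handshake completes here
    conn.recv(4096)                      # the HTTP request arrives over the byte stream
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello")
    conn.close()

server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
server_sock.listen(1)
port = server_sock.getsockname()[1]
threading.Thread(target=run_toy_server, args=(server_sock,), daemon=True).start()

# Client side: the TCP connection must exist before any HTTP bytes can flow.
client = socket.create_connection(("127.0.0.1", port))   # blocks until handshake is done
client.sendall(b"GET / HTTP/1.1\r\nHost: 127.0.0.1\r\nConnection: close\r\n\r\n")

chunks = []
while True:                              # read until the server closes the connection
    data = client.recv(4096)
    if not data:
        break
    chunks.append(data)
client.close()
response = b"".join(chunks)

print(response.decode().split("\r\n")[0])   # status line: HTTP/1.1 200 OK
```

The `create_connection` call alone costs a full network round trip before the first HTTP byte is sent, which is one of the inefficiencies QUIC later attacks.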

Over time, there have been many efforts to update TCP and resolve some of its inefficiencies — TCP still loads webpages as if they were single files instead of a collection of hundreds of individual files. Some of these updates have been successful, but most of the more impactful ones (for example, TCP multipath and TCP Fast Open) took nearly a decade to be practically usable on the public Internet.

The main challenge with implementing changes to TCP is that thousands of devices on the Internet all have their own implementation of the TCP protocol. These include phones, laptops, and servers, as well as routers, firewalls, load balancers, and other types of 'middleboxes'. As such, if we want to update TCP, we have to wait for a significant portion of all these devices to update their implementation, which in practice can take years.

The QUIC solution

These deployment problems grew to the point that the most practical way forward was to replace TCP with something entirely new. That replacement is the QUIC protocol, though many still (jokingly) refer to it as TCP 2.0. The nickname is apt, as QUIC includes many of the same high-level features as TCP, but with a couple of crucial changes.

The main change is that QUIC heavily integrates with the Transport Layer Security (TLS) protocol. TLS is responsible for encrypting sensitive data on the web — it’s the thing that provides the S (secure) in HTTPS. With TCP, TLS only encrypts the actual HTTP data (Figure 2). With QUIC, TLS also encrypts large parts of the QUIC protocol itself. This means that metadata, such as packet numbers and connection-close signals, which were visible to (and changeable by) all middleboxes in TCP, are now only available to the client and server in QUIC.
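As a rough illustration of the difference, here is a simplified inventory of which fields a middlebox can read in each case. The field names are a summary of the packet formats, not a byte-accurate or exhaustive layout:

```python
# Simplified, non-exhaustive sketch of what is readable on the wire.
tcp_tls_packet = {
    "cleartext": [
        "source/destination ports", "sequence number", "ACK number",
        "flags (SYN, FIN, RST, ...)", "window size",
    ],
    "encrypted": ["HTTP request/response data (inside TLS records)"],
}

quic_packet = {
    "cleartext": [
        "source/destination UDP ports", "flags byte", "destination connection ID",
    ],
    "encrypted": [
        "packet number (via header protection)", "ACK frames",
        "connection-close signals", "stream data carrying HTTP/3",
    ],
}

# Metadata a middlebox could read (or rewrite) with TCP but not with QUIC:
middlebox_lost = set(tcp_tls_packet["cleartext"]) - set(quic_packet["cleartext"])
for field in sorted(middlebox_lost):
    print(field)
```

The destination connection ID stays cleartext on purpose: load balancers need it to route packets to the right server without decrypting anything else.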

Figure 2 — Encryption differences between TCP+TLS and QUIC. QUIC encrypts much more than just the HTTP data.

Furthermore, because QUIC is so extensively encrypted, it will be much easier to change and extend than TCP ever was — we only need to update the clients and servers, since the middleboxes can't read the encrypted metadata anyway. This makes QUIC a future-proof protocol that will allow us to solve new challenges more quickly.

Of course, this extra encryption is good for the general security and privacy of the new protocol too. While TCP + TLS work well for securing sensitive personal data, such as credit card numbers or email content, they can still be vulnerable to complex (privacy) attacks, which have become ever more practical to execute due to recent advances in AI. By also encrypting this type of metadata, QUIC is more resilient against sophisticated threat actors.

QUIC also has many other security-related features, including defences against Distributed Denial of Service (DDoS) attacks such as amplification prevention and RETRY packets.
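The amplification prevention works like this: until the client has proven it really owns its claimed address, the QUIC specification (RFC 9000) forbids the server from sending more than three times the bytes it has received, so an attacker spoofing a victim's address gets very little amplification out of the server. A minimal sketch of that bookkeeping (a hypothetical helper class, not code from any real QUIC stack):

```python
class AmplificationGuard:
    """Tracks QUIC's 3x anti-amplification limit for one unvalidated client address."""
    LIMIT_FACTOR = 3

    def __init__(self):
        self.bytes_received = 0
        self.bytes_sent = 0
        self.address_validated = False   # flips once the client proves address ownership

    def on_receive(self, n):
        self.bytes_received += n

    def can_send(self, n):
        if self.address_validated:
            return True                  # no limit after validation (e.g. via a RETRY token)
        return self.bytes_sent + n <= self.LIMIT_FACTOR * self.bytes_received

    def on_send(self, n):
        assert self.can_send(n)
        self.bytes_sent += n

guard = AmplificationGuard()
guard.on_receive(1200)                   # client's initial packet (roughly 1,200 bytes)
print(guard.can_send(3600))              # True: within the 3x budget
print(guard.can_send(3601))              # False: would exceed it
```

A RETRY packet lets the server validate the address cheaply: it sends a small token and only commits real resources once the client echoes it back.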

Finally, QUIC also includes a large amount of efficiency and performance improvements compared to TCP, including a faster connection handshake (see Figure 3), the removal of the ‘head-of-line blocking’ problem, better packet loss detection/recovery, and ways to deal with users switching networks (I’ll go into more detail on this in my next post).
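The head-of-line blocking difference can be shown with a toy delivery model: five packets carrying two resources ('streams' A and B) arrive, but one packet is lost and retransmitted last. Over TCP's single ordered byte stream, everything behind the hole waits; with QUIC's independent streams, only the stream that actually lost data waits. This is a hypothetical simulation, not real protocol code:

```python
# Each packet: (global_seq, stream_id, seq_within_stream, arrival_time).
# The packet with global_seq 1 is lost and only arrives at t=5 as a retransmission.
packets = [
    (0, "A", 0, 0),
    (1, "A", 1, 5),   # lost, retransmitted late
    (2, "B", 0, 1),
    (3, "B", 1, 2),
    (4, "A", 2, 3),
]

def tcp_delivery(packets):
    # One ordered byte stream: a packet reaches the application only after every
    # packet with a smaller global sequence number has arrived, regardless of
    # which resource (stream) those packets belong to.
    return {(st, ss): max(t2 for g2, _, _, t2 in packets if g2 <= g)
            for g, st, ss, _ in packets}

def quic_delivery(packets):
    # Independent streams: a packet only waits for earlier packets on ITS stream.
    return {(st, ss): max(t2 for _, st2, ss2, t2 in packets
                          if st2 == st and ss2 <= ss)
            for _, st, ss, _ in packets}

tcp = tcp_delivery(packets)
quic = quic_delivery(packets)
print("TCP:  stream B seq 1 ready at t =", tcp[("B", 1)])    # 5: stuck behind the loss
print("QUIC: stream B seq 1 ready at t =", quic[("B", 1)])   # 2: unaffected
```

Stream A still stalls under QUIC (it owns the lost packet), but resource B is delivered three time units earlier — exactly the benefit of per-stream rather than per-connection ordering.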

Figure 3 — QUIC has a faster connection setup, as it combines the 'transport' three-way handshake with the TLS cryptographic session establishment, which in TCP+TLS are two separate processes.

We didn’t need HTTP/3; what we needed was QUIC

Initially, there were attempts to keep HTTP/2 and make minimal adjustments so we could also use QUIC in the lower layers (after all, that’s the whole point of having these different cooperating and reusable protocols). However, it became clear QUIC was just different enough from TCP to make it HTTP/2-incompatible. As such, the decision was made to make a new version of HTTP, just for QUIC, which eventually became HTTP/3.

HTTP/3 is almost identical to HTTP/2; they mainly differ in how the same features are technically implemented on top of QUIC versus TCP. However, because HTTP/3 can use all of QUIC's new features, it is expected to be more performant when loading web pages and streaming videos. In practice, it is largely this performance benefit that has driven HTTP/3's rapid adoption.
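One common way a browser discovers that HTTP/3 is available is the Alt-Svc response header (RFC 7838): a server answering over HTTP/1.1 or HTTP/2 advertises that the same origin is also reachable over `h3`. Below is a deliberately simplified parser for the usual form of that header, not a full RFC-compliant implementation:

```python
def parse_alt_svc(header):
    """Parse a simple Alt-Svc value like 'h3=":443"; ma=86400, h2=":443"'
    into {protocol_id: authority}. Simplified: parameters such as ma are ignored."""
    services = {}
    for entry in header.split(","):
        first = entry.split(";")[0].strip()      # drop parameters like ma=86400
        if "=" not in first:
            continue
        proto, authority = first.split("=", 1)
        services[proto.strip()] = authority.strip().strip('"')
    return services

advertised = parse_alt_svc('h3=":443"; ma=86400, h2=":443"')
print(advertised)                      # {'h3': ':443', 'h2': ':443'}
print("h3" in advertised)              # True: this origin offers HTTP/3
```

On seeing `h3`, the browser can attempt a QUIC connection for subsequent requests and fall back to TCP if it fails, which is why adoption could ramp up without breaking anything.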

In my next post, I'll go into more detail on a common connectivity problem you've most likely experienced, and how QUIC can help keep calls and videos from cutting out when your mobile device switches from Wi-Fi to cellular connectivity.

Robin Marx is a Web Protocol and Performance Expert at Akamai.

This post was originally published on the Internet Society’s Pulse Blog.


The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.

5 Comments

  1. Tim

    “TCP still loads webpages as if they were single files instead of a collection of hundreds of individual files”

    In other words, the web that we wanted, versus the web that we got: tracking and ads. HTTP was built by users for users. HTTP/3 was built by companies for companies.

    Reply
  2. Mr Mr

    Sadly, Tim, you seem to be right. These protocols are optimized for ad delivery. One should be able to flag websites and companies that use them, so we can block them on uBlock Origin and Pi-Hole.

    Reply
  3. Bayar

    I enjoyed reading this article; really informative. I am looking forward to your next one, which discusses QUIC's features against DDoS attacks. Thank you!

    Reply
  4. Dirk

    Tim, sites in the 90s were mainly built from a single HTML file, with just a few image files and often with embedded CSS.

    As transfer speeds have risen, everyone has added more data: more image files, videos, sounds, music… CSS files became much more complex, as did JavaScript and the dynamically loaded chunks of HTML content used for flexibility.

    You do not want everything transferred at every click, so devs started to split sites into small segments, which load only when they are needed.

    Imagine transferring a whole movie in one file like it's 2003. You have to transfer incredible amounts of data just to decide that you don't wanna watch this movie today. So media like videos were split up into tiny segmented streams, transferred fragment by fragment, cached, and later deleted once you have moved on to another sequence.

    Just to watch a YouTube video, you have thousands of tiny lazy-loaded files. Every spared byte saves so much energy and processing power in the long run, if you multiply it by billions of users.

    If you host your own web applications, your users and your own budget will appreciate it. Not a single line of tracking included.

    Also, it is an awesome feature that almost the whole UDP payload is encrypted, especially in countries where your traffic is monitored and restricted.

    Reply
