The HyperText Transfer Protocol (HTTP) is a cornerstone of the Internet, helping to load web pages, stream videos, and fetch data for your favourite apps.
Last year a new version of the protocol, HTTP/3, was standardized by the Internet Engineering Task Force (IETF), the organization in charge of defining Internet technologies. Since then, HTTP/3 and the related QUIC protocol have seen a rapid uptake on the public web. The exact numbers depend on the source and measurement methodology, with HTTP/3 support ranging from 19% to 50+% of web servers and networks worldwide.
Because these new protocols are heavily used by large companies such as Google and Meta, we can safely say that a large chunk of current Internet traffic already uses HTTP/3 today. In fact, the blog post you’re reading right now was probably loaded over HTTP/3!
In this series, I’ll provide some context on what problems HTTP/3 solves, how it performs, why it’s seen such swift adoption, and what limitations it is still working to overcome.
Why do we need HTTP/3?
A network protocol describes how data is communicated between two entities on a network, typically the user’s device and a web server. Because many different companies build software for the web, the protocol needs to be standardized so that all of this software is ‘interoperable’: implementations can understand each other because they all follow the same rules.
In practice, we don’t use a single protocol but a combination of several at the same time, each with its own responsibilities and rules (Figure 1). This keeps things flexible and reusable: you can use the exact same HTTP logic regardless of whether you’re on Wi-Fi, cable, or 4G/5G.
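The reuse described above can be sketched as a simple interface: if the HTTP logic depends only on an abstract transport, then swapping out what actually carries the bytes never touches it. This is a minimal illustration, not real protocol code; the names `Transport`, `LoopbackTransport`, and `http_get` are all hypothetical.

```python
from typing import Protocol

class Transport(Protocol):
    """Abstract 'lower layer': anything that can carry bytes."""
    def send(self, data: bytes) -> None: ...
    def receive(self) -> bytes: ...

def http_get(transport: Transport, path: str) -> bytes:
    # The HTTP layer only knows about the interface, not what is
    # underneath it (Wi-Fi, cable, 4G/5G, or a test double).
    transport.send(f"GET {path} HTTP/1.1\r\n\r\n".encode())
    return transport.receive()

class LoopbackTransport:
    """A toy stand-in for a real network link, used for illustration."""
    def __init__(self) -> None:
        self._buf = b""

    def send(self, data: bytes) -> None:
        # Pretend a server answered our request.
        self._buf = b"HTTP/1.1 200 OK\r\n\r\n" if data.startswith(b"GET") else b""

    def receive(self) -> bytes:
        return self._buf

print(http_get(LoopbackTransport(), "/"))
```

The same `http_get` would work unchanged with any other object that fits the `Transport` interface, which is the essence of the layered design the paragraph describes.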
Many of the original protocols for the Internet were standardized in the 80s and 90s, meaning they were built with the goals and restrictions of those decades in mind. While some of these protocols have stood the test of time, others have started to show their age. Many of the resulting problems have been patched with workarounds and clever tricks, but it was clear that something more fundamental would have to change. This is especially true for the Transmission Control Protocol (TCP), which ensures your data reliably gets across the Internet.
Why TCP is not optimal for today’s web
HTTP/1.1 and HTTP/2 rely on TCP to successfully do their job. Before a client and server can exchange an HTTP request/response, they must establish a TCP connection.
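The two steps above (first a TCP connection, then HTTP on top of it) can be seen in a few lines of Python using only the standard library. This is a self-contained sketch: it spins up a toy local server rather than contacting a real website, and the toy server's fixed response is purely illustrative.

```python
import socket
import threading

def serve_once(server_sock: socket.socket) -> None:
    """Toy server: accept one TCP connection, answer any request."""
    conn, _ = server_sock.accept()
    conn.recv(4096)  # read the request (this toy server ignores its contents)
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nhi")
    conn.close()
    server_sock.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,)).start()

# Step 1: the TCP three-way handshake happens inside connect();
# no HTTP data can flow until it completes.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))

# Step 2: only now can the HTTP request/response be exchanged.
client.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n")
response = client.recv(4096)
client.close()

print(response.decode().split("\r\n")[0])
```

The key point is the ordering: `connect()` must finish (one full round trip) before `sendall()` can carry any HTTP bytes, which is exactly the dependency on TCP that the text describes.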
Over time, there have been many efforts to update TCP and resolve some of its inefficiencies (for example, TCP still delivers a webpage as if it were a single file, rather than the collection of hundreds of individual files it really is). Some of these updates have been successful, but most of the more impactful ones, such as Multipath TCP and TCP Fast Open, took nearly a decade to become practically usable on the public Internet.
The main challenge with implementing changes to TCP is that countless devices on the Internet each have their own implementation of the TCP protocol. These include phones, laptops, and servers, as well as routers, firewalls, load balancers, and other types of ‘middleboxes’. As such, if we want to update TCP, we have to wait for a significant portion of all these devices to update their implementations, which in practice can take years.
The QUIC solution
These problems grew to the point that the most practical way forward was to replace TCP with something entirely new. That replacement is the QUIC protocol, though many still (jokingly) refer to it as TCP 2.0. The nickname is apt: QUIC provides many of the same high-level features as TCP, but with a couple of crucial changes.
The main change is that QUIC deeply integrates with the Transport Layer Security (TLS) protocol. TLS is responsible for encrypting sensitive data on the web; it’s what provides the S (secure) in HTTPS. With TCP, TLS only encrypts the actual HTTP data (Figure 2). With QUIC, TLS also encrypts large parts of the QUIC protocol itself. This means that metadata such as packet numbers and connection-close signals, which was visible to (and changeable by) any middlebox with TCP, is now only available to the client and server with QUIC.
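To make “visible metadata” concrete: a TCP header always travels unencrypted, so every field in it can be read (and rewritten) by any middlebox on the path. Below is a sketch that builds and parses a TCP header with Python’s `struct` module; the field values are made up for illustration.

```python
import struct

# Layout of a 20-byte TCP header (no options), network byte order:
# src port, dst port, sequence no., ack no., data offset, flags,
# window, checksum, urgent pointer.
TCP_HEADER = "!HHIIBBHHH"

# With TCP + TLS, all of these fields are plaintext on the wire.
tcp_header = struct.pack(
    TCP_HEADER,
    443,      # source port
    52000,    # destination port
    1000,     # sequence number
    2000,     # acknowledgement number
    5 << 4,   # data offset (5 x 32-bit words), upper 4 bits
    0x11,     # flags: FIN + ACK -- a visible connection-close signal
    65535,    # receive window
    0, 0,     # checksum, urgent pointer (left zero in this sketch)
)

src, dst, seq, ack, _, flags, *_ = struct.unpack(TCP_HEADER, tcp_header)
print(f"middlebox sees: seq={seq} ack={ack} fin={bool(flags & 0x01)}")
```

In QUIC, the equivalent fields (packet numbers, the connection-close signal, and more) sit inside the TLS-encrypted portion of the packet, so a middlebox performing the same parse would see only ciphertext.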
Furthermore, because QUIC is more extensively encrypted, it will be much easier to change or extend than TCP ever was: we only need to update the clients and servers, since middleboxes can’t read (let alone alter) the encrypted metadata anyway. This makes QUIC a future-proof protocol that will allow us to solve new challenges more quickly.
Of course, this extra encryption is good for the general security and privacy of the new protocol too. While TCP + TLS are effective at securing sensitive personal data, such as credit card numbers or email content, they can still be vulnerable to sophisticated (privacy) attacks, which have become ever more practical to execute due to recent advances in AI. By encrypting this type of metadata as well, QUIC is more resilient against such attackers.
QUIC also has many other security-related features, including defences against Distributed Denial of Service (DDoS) attacks, such as amplification prevention and RETRY packets.
Finally, QUIC also brings a number of efficiency and performance improvements over TCP, including a faster connection handshake (see Figure 3), the removal of the ‘head-of-line blocking’ problem, better packet loss detection and recovery, and ways to deal with users switching networks (I’ll go into more detail on this in my next post).
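The handshake saving is easy to quantify. Over TCP, a new HTTPS connection needs the TCP handshake (one round trip) plus the TLS 1.3 handshake (one more round trip) before the first HTTP request can leave the client; QUIC folds the transport and TLS 1.3 handshakes into a single round trip. A quick back-of-the-envelope sketch, with an assumed 50 ms round-trip time:

```python
RTT_MS = 50  # assumed round-trip time to the server, in milliseconds

# Delay before the first HTTP request can be sent on a new connection:
tcp_tls_setup = 1 * RTT_MS + 1 * RTT_MS  # TCP handshake + TLS 1.3 handshake
quic_setup = 1 * RTT_MS                  # combined QUIC + TLS 1.3 handshake

print(f"TCP + TLS 1.3: {tcp_tls_setup} ms, QUIC: {quic_setup} ms")
print(f"saved per new connection: {tcp_tls_setup - quic_setup} ms")
```

The saving grows with the round-trip time, so it matters most on high-latency links such as mobile networks. (When reconnecting to a known server, QUIC can even use 0-RTT resumption and send the first request immediately, though that comes with its own security trade-offs.)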
We didn’t need HTTP/3; what we needed was QUIC
Initially, there were attempts to keep HTTP/2 and make minimal adjustments so we could also use QUIC in the lower layers (after all, that’s the whole point of having these different cooperating and reusable protocols). However, it became clear QUIC was just different enough from TCP to make it HTTP/2-incompatible. As such, the decision was made to make a new version of HTTP, just for QUIC, which eventually became HTTP/3.
HTTP/3 is almost identical to HTTP/2; they mainly differ in how the same features are technically implemented on top of QUIC versus TCP. However, because HTTP/3 can use all of QUIC’s new features, it is expected to be more performant when loading web pages and streaming videos. In practice, it is mainly this performance benefit that has driven HTTP/3’s rapid adoption.
In my next post, I’ll go into more detail on a common connectivity problem you’ve most likely experienced, and how QUIC can help keep calls and videos from cutting out when your mobile device switches from Wi-Fi to cellular connectivity.
This post was originally published on the Internet Society’s Pulse Blog.
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.