The challenges ahead for HTTP/3

By Robin Marx on 23 Oct 2023

Category: Tech matters

If you’ve followed my series of posts on the benefits of HTTP/3 (QUIC) compared to HTTP/2 (TCP), by now you might think that these new protocols seem too good to be true. They improve performance, boost security/privacy, and are the perfect bedrock for future-proof experimentation and improvements in the years to come. Still, the protocols have also gotten plenty of pushback, for several reasons, which I’ll discuss in this final post.

It only helps the big

A common critique of HTTP/3 and QUIC is that they primarily benefit the big players (for example, Google and Meta), who often control one or even both of the endpoints (Google, for instance, controls popular services such as YouTube and Search, as well as Chrome, the most-used web browser). For these companies, encrypting traffic so it's no longer visible or usable to intermediate parties isn't a problem, since they operate the endpoints anyway. However, entities such as Internet Service Providers (ISPs), hosting companies, and governments, who operate in the 'middle' of the network, understandably feel left out by this.

In some cases, the discussions are relatively innocent. For example, network providers complain that it's difficult to measure QUIC's performance or troubleshoot issues with it on their networks. They lobbied long and hard at the Internet Engineering Task Force (IETF) to finally get a single bit assigned in the QUIC packet header (called the 'spin bit') that gives them some visibility into the protocol's behaviour. Sadly, for the spin bit to be usable in the network, it needs to be supported and set by the endpoints, and Google, for example, has (so far) refused to implement it in Chrome or on its servers, citing (in my opinion dubious) privacy reasons.
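
To make this concrete: in a QUIC v1 short-header packet, the spin bit is the third most-significant bit of the first byte (RFC 9000). Below is a minimal Python sketch, my own illustration rather than code from any real measurement tool, of how an on-path observer could read it from a captured UDP payload:

```python
def quic_spin_bit(udp_payload: bytes):
    """Extract the latency spin bit from a QUIC v1 short-header packet.

    Returns 0 or 1, or None if the datagram doesn't look like a
    short-header packet (layout per RFC 9000).
    """
    if not udp_payload:
        return None
    first_byte = udp_payload[0]
    if first_byte & 0x80:        # header form = 1: long header, no spin bit
        return None
    if not (first_byte & 0x40):  # fixed bit must be 1 in QUIC v1
        return None
    return (first_byte & 0x20) >> 5  # the spin bit itself
```

An observer that tracks how often this bit flips on a connection can estimate the round-trip time from the interval between flips, but, as noted above, if the endpoints never set the bit there is simply nothing to measure.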

In other cases, the problems are somewhat more impactful. QUIC's heavy encryption also makes it more difficult to firewall: it's harder to discern real connections from fake ones, and important signals such as connection closures can no longer be directly observed. While firewalls can (in my opinion) still do a significant part of their job for QUIC in practice (even things like deep packet inspection (DPI) and Transport Layer Security (TLS) interception remain possible), this requires significant changes to their implementations, something many firewall vendors have been slow to make. As such, many vendors recommend disabling QUIC entirely for now, since browsers will then automatically fall back to HTTP/2 over TCP.
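
To illustrate what a firewall is left with, here's a small Python sketch (a simplified illustration of my own, not vendor code) of roughly what can still be cheaply observed: the version-independent invariants of a long-header packet (RFC 8999) and the QUIC v1 packet-type bits (RFC 9000). Nearly everything else in the packet is encrypted:

```python
import struct

QUIC_V1 = 0x00000001
LONG_TYPES = {0: "Initial", 1: "0-RTT", 2: "Handshake", 3: "Retry"}

def classify_quic_long_header(payload: bytes):
    """Classify a UDP payload as a QUIC long-header packet, if possible."""
    if len(payload) < 5 or not (payload[0] & 0x80):
        return None                          # not a long-header packet
    (version,) = struct.unpack("!I", payload[1:5])
    if version == 0:
        return ("Version Negotiation", None)
    if version != QUIC_V1:
        return ("unknown version", version)  # could also be a greased version
    packet_type = (payload[0] & 0x30) >> 4   # v1-specific type bits
    return (LONG_TYPES[packet_type], version)
```

Notably, QUIC v1 Initial packets are protected with keys derived from a value that is visible on the wire (the destination connection ID, per RFC 9001), which is why middleboxes that do put in the implementation work can still inspect the TLS handshake carried inside them.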

In a few cases, however, the discussions touch on quite fundamental topics. For example, many parties claim that the new protocols will make it easier for criminals to evade specific law enforcement measures. Or, in a similar yet somewhat less egregious vein, that they will make it easier for children to evade setups such as parental controls and content filtering. However, these discussions mostly centre not on QUIC or HTTP/3 directly but on the so-called 'Encrypted DNS' and 'Encrypted Client Hello' efforts, two technologies that allow further obfuscation of the websites users are trying to visit.

To clarify, the Domain Name System (DNS) translates a website name into an IP address. The Server Name Indication (SNI) in the TLS Client Hello lets the client indicate which specific website it wants when a single server hosts many of them, which is common. Both are typically sent unencrypted, so network intermediaries can read them and manipulate or block websites they deem undesirable (they are also often used to provide features such as zero-rating for popular services, a thorn in the side of net neutrality proponents).
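
Both signals are easy to demonstrate in ordinary code. The short Python sketch below (with example.com as a stand-in name) shows the two cleartext steps described above: the DNS lookup and the SNI sent in the TLS Client Hello:

```python
import socket
import ssl

host = "example.com"  # stand-in name; substitute any site

# 1. Classic DNS: this lookup typically travels unencrypted over UDP
#    port 53, so any on-path observer learns which name was resolved.
addr = socket.getaddrinfo(host, 443)[0][4][0]

# 2. Classic TLS: server_hostname becomes the SNI field, which is sent
#    in cleartext in the Client Hello unless Encrypted Client Hello is used.
ctx = ssl.create_default_context()
with socket.create_connection((addr, 443)) as tcp:
    with ctx.wrap_socket(tcp, server_hostname=host) as tls:
        print("negotiated", tls.version(), "after announcing SNI =", host)
```

Any device between the client and the server can read the name in both steps; that is exactly the visibility Encrypted DNS and Encrypted Client Hello take away.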

However, neither Encrypted DNS nor Encrypted Client Hello is QUIC- or HTTP/3-specific, as both can (and will) be used with TCP and HTTP/2. That distinction mainly matters to protocol purists, though, and is not a convincing argument for those worried about their increasing loss of control over 'their' networks. The discussions here are mostly of an ethical nature, with protocol designers generally pushing for maximum individual freedom for end users, and governments and other parties wanting to keep some control (see also encrypted messaging apps and potential backdoors for law enforcement). Still, participants in the IETF are not completely deaf to these concerns, and there have been proposals that would allow these features to be disabled on certain networks.
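
As a concrete example of the first of these, an Encrypted DNS lookup over DNS-over-HTTPS (RFC 8484) is just an HTTPS request, so intermediaries see only a connection to the resolver, not the name being resolved. Here's a minimal sketch using Cloudflare's public resolver and its JSON variant of the API as one real-world example (other resolvers expose similar endpoints):

```python
import requests  # third-party: pip install requests

resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=5,
)
resp.raise_for_status()
for answer in resp.json().get("Answer", []):
    print(answer["name"], "->", answer["data"])
```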

All this is just the tip of the iceberg

Many opponents of these efforts claim that they serve some nefarious intent by Google and others (since QUIC was originally developed at Google) to somehow 'take control' of the Internet. For these claims, at least, I can say they're completely false.

While Google started QUIC, it was transferred to the IETF, where many other engineers helped drive its design to what it is today. Sure, many of those engineers work at other big companies such as Apple or Meta. However, others come from companies such as Mozilla (known for its end-user advocacy) or from academia (like myself).

Furthermore, all IETF work is done in the open, and all documents and artifacts are publicly available. This does not mean there was absolutely no corporate agenda behind some decisions or proposals, but it's a far cry from collusion.

Still, while there is no evil masterminded plot here, it cannot be denied that the Internet is becoming more and more centralized, with much of its critical infrastructure and services controlled and hosted by a small number of large companies. This is also why HTTP/3 has seen such large relative uptake: these dozen or so companies account for an outsized share of Internet traffic.

I believe this evolution has pros and cons, as there are certainly performance and privacy benefits for end users. On the other hand, these privacy benefits end as soon as the data arrives at these companies’ servers, and maybe in the long term, those privacy risks will turn out to be larger than those we might suffer from network intermediaries.

HTTP/3 and QUIC are here to stay

Weighing the benefits and downsides, I believe HTTP/3 and QUIC are here to stay. They have been designed to be future-proof and easily evolvable in the coming years and decades. Even today, they are used as the basis for more specialized protocols such as WebTransport, MASQUE, and Media-over-QUIC, which offer further benefits for specific use cases.

Whether or not the push towards more end-user control and/or additional centralization of the web continues will, I think, largely depend on the actions governments decide to take. We’ve previously seen impactful legislation and measures for these topics (for example, the GDPR and the Great Firewall of China), and more are being discussed.

Whether the IETF contributors turn out to be freedom fighters or corporate tyrants will be for history to decide. But in the meantime, we have some cool new technology to play with in HTTP/3 and QUIC. Take that as you will ;).

Robin Marx is a Web Protocol and Performance Expert at Akamai. 

This post was originally published on the Internet Society’s Pulse Blog.

The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.

3 Comments

  1. superkuh

    “It is difficult to get a man to understand something, when his salary depends on his not understanding it.” – Upton Sinclair

    And this is the TL;DR for every article like yours I’ve read on why HTTP/3 is okay despite its ecosystem of deficiencies. You literally cannot host a visitable website that some person you haven’t met can access over HTTP/3 without getting the continued third-party approval of an incorporated entity. HTTP/1.1 allows this easily and regularly. And that’s ignoring that any site whose cert lifetime expires becomes inaccessible. HTTP/3 sites will be very fragile and short-lived unless constantly monitored (no problem for corps, big problem for human people).

    You can say, “Oh, that’s the HTTP/3 implementation’s fault, the spec allows self-signed certs”. But when I talk to the HTTP/3 implementation devs they say, “Oh, that’s QUIC’s fault; it requires encryption, we don’t implement that on our level.”

    Both sides are enabling the other. And pretty soon, when the big browsers drop HTTP/1.1 support for “our safety”, it’ll be impossible to host a visitable website as a human person. You’ll always have to get approval from some corporation. And this, yes, will break LAN stuff. But even worse, it will make the web a corporate-only place. No more human persons’ websites.

    And you guys are just fine with this because your salary depends on it. You don’t care if you break the web and make it a hub and spoke model. Just as long as the money keeps flowing and commercial transactions are secure and the DRM protected multi-media keeps flowing.

  2. superkuh

    I’ve run my own mailserver for the last 11 years. I have a little bit of an understanding. Please don’t use personal attacks when you lack an argument based on reality. Mailservers use more self-signed certs than CA-based certs. It’s a totally normal, standard practice. If you’d ever touched a mailserver you’d know that.
