DNS privacy has become an increasingly important topic for online security and privacy. As the use of the Internet continues to grow, so does the amount of personal data being transmitted and collected through various online services.
However, implementing DNS privacy measures can have unintended consequences that impact various aspects of the Internet, such as impeding Internet measurement, consolidating service provision, making abuse mitigation harder, and even increasing the risk of censorship at scale.
Privacy is a fundamental human right. It is difficult to define, but because it is important for network engineers to understand and consider, several RFCs have attempted to address the topic, such as RFC 6973 and RFC 8280.
There have been recent gains for user privacy at the DNS level using DNS-over-HTTPS (DoH) and DNS-over-TLS (DoT), as well as ongoing work on emerging protocols such as DNS-over-QUIC (DoQ) and Oblivious DoH (ODoH) that are already seeing deployment. Yet more work is needed to protect user privacy with practices such as padding the size and timing of requests (RFC 8467).
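As a concrete illustration of the size-padding recommendation in RFC 8467, the sketch below computes the padded message length using the RFC's recommended block sizes (128 octets for queries, 468 for responses). It ignores the small fixed overhead of the EDNS(0) Padding option itself (RFC 7830), so it is a simplification rather than a complete implementation.

```python
def padded_length(msg_len: int, block: int) -> int:
    """Round a DNS message length up to the next multiple of `block`.

    RFC 8467 recommends padding queries to a multiple of 128 octets
    and responses to a multiple of 468 octets, so that messages of
    different lengths become indistinguishable on the wire.
    """
    return -(-msg_len // block) * block  # ceiling division

# A 61-octet query pads out to 128 octets; a 130-octet query to 256.
assert padded_length(61, 128) == 128
assert padded_length(130, 128) == 256
# Responses use the larger 468-octet block.
assert padded_length(500, 468) == 936
```

The effect is that an observer of encrypted DNS traffic sees only a handful of possible message sizes instead of a size that leaks which name was queried.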
This post will look at how Internet protocols impact privacy, and the inherent tensions that arise when solving the problem of user privacy in the DNS.
Internet research via measurement is important to the public interest because:
- Knowledge of network performance and operation is critical for empowering users as consumers; and
- Monitoring how the Internet behaves helps detect censorship and other events that might impact human rights.
In an unencrypted world, researchers can also analyse data to see if ISPs are living up to their privacy promises. Web privacy measurement studies have arguably played an important role in highlighting privacy abuses on the Internet; examples include OpenWPM, a web privacy measurement framework, and PageGraph, an instrumented browser for more in-depth measurements of website behaviour.
However, there are two ways in which the right to privacy and Internet research through measurement are at odds. The first is that measurements themselves can compromise user privacy (draft-learmonth-pearg-safe-Internet-measurement): even when measurement plays a democratic role in performance monitoring, collecting the data raises known privacy concerns.
The second clash is the opposite: Internet measurement becomes more difficult when user data is more private.
For example, DoH makes research into DNS-based censorship using a method for measuring DNS manipulation much more difficult. However, presumably, the subjects’ traffic in such a study wouldn’t be subjected to censorship if they were using DoH, which is a win overall.
Another positive for Internet research is that DoH/DoT provision might actually be an improvement: OONI, for example, used DoH as a trusted source against which to compare DNS poisoning techniques. There are also client-side measurement tools that incorporate DoH/DoT resolution, such as NetBlocks's hackathon experiment, which preserve measurement utility while keeping user DNS lookup data private.
Consolidation of Internet service provision is a major source of public debate regarding DNS privacy. It is widely considered not in the public interest for Internet traffic to be consolidated into only a few providers at any level of the stack: transport, intermediate services, or applications.
Many applications and services that run on the Internet are increasingly becoming centralized. This includes browsers, which, aside from a device's operating system, are the software most users rely on to use the Internet. This is an issue of special importance for DNS privacy measures because an overwhelming majority of Internet users access content on the Internet by first querying domain information through their browsers. Browser-based DNS privacy measures such as DoH leverage this universal behaviour, but doing so shifts protocol preferences into applications at the cost of resolver traffic diversity.
The primary concerns with centralized DNS are data mining, law enforcement and intelligence agencies gaining access to information, conflicts in jurisdictional privacy laws, and creating single points of failure and targets. Interestingly, new DNS privacy techniques complicate the consolidation question even further. Oblivious DNS/DoH is designed to prevent any single party from learning everything about the user (both who is accessing the content and what the content is). But a side effect is that there is now even less incentive for a DNS resolver to offer this service, thus increasing consolidation. The oblivious relay is essentially a third party for hire, which relies on a 'too big to block' tactic for privacy. This is unlike DoH/DoT, which incentivized DNS resolvers to offer their own encrypted resolvers in order to receive browsers' DNS queries, thus improving the state of privacy on the web; ISPs ended up offering their own encrypted DNS resolvers as a result.
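To make the 'no single party learns everything' property concrete, here is a toy simulation of the oblivious split. This is not real ODoH, which uses HPKE public-key encryption between client and target; the XOR 'sealing', the key, and the addresses below are purely illustrative stand-ins.

```python
# Toy model of the Oblivious DoH split of knowledge:
# the relay learns WHO is asking but not WHAT;
# the target learns WHAT is asked but not WHO.

TARGET_KEY = 0x42  # illustrative stand-in for the target's public key

def seal(msg: bytes) -> bytes:
    """Toy 'encryption' to the target (real ODoH uses HPKE)."""
    return bytes(b ^ TARGET_KEY for b in msg)

def unseal(blob: bytes) -> bytes:
    return bytes(b ^ TARGET_KEY for b in blob)

def target(blob: bytes) -> bytes:
    # The target decrypts the query but never sees the client's address.
    _query = unseal(blob)
    return seal(b"192.0.2.1")  # a sealed, illustrative answer

def relay(client_ip: str, blob: bytes, relay_log: list) -> bytes:
    # The relay sees the client's address and an opaque blob only.
    relay_log.append((client_ip, blob))
    return target(blob)

log = []
answer = relay("203.0.113.7", seal(b"example.com A?"), log)
assert unseal(answer) == b"192.0.2.1"
assert log[0][1] != b"example.com A?"  # relay never saw the plaintext
```

The sketch shows why the relay is a pure forwarder with little to monetize, which is the economic-incentive problem the paragraph above describes.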
Civil society has wrestled directly with the tension between DNS privacy and decentralization. Reports by Open Rights Group (ORG) and the Electronic Frontier Foundation detail the ways in which implementations of DoH/DoT trade the public interest in a decentralized Internet for user privacy, and yet still come out in favour of these DNS privacy tools.
Ideally, we would want the user to decide whom they trust for resolving the DNS, but many do not have the technical expertise to make an informed decision. This leads us to the importance of default settings and meaningful user agency in user agents like browsers and other applications. How do we empower users to change that default and help them make a meaningful decision respecting their own privacy threat model?
Governments and corporate competitors have all cited centralization as a primary concern with DNS privacy measures, but perhaps there are roles for consolidators when centralization provides useful functions. One such function is easily deploying privacy enhancements via software to as many end users as possible in all corners of the globe, such as DoH provision in major browsers and DoT provision in dominant operating systems.
Handling abuse on the network is important for the health of the network and in the public interest. Although privacy-enhancing technologies can have the unintended consequence of making abuse mitigation harder, this may be an acceptable trade-off in some cases.
Losing the ability to mitigate abuse on a network is a loss in the public interest. This has knock-on effects through other layers as well, such as when dealing with moderation of abusive content, not just abusive network behaviour. When DNS traffic is encrypted, the DNS can no longer be used to block or filter that content.
Another consideration is the ways an overly complicated and interdependent grouping of private DNS protocols might lead an operator or implementer to make mistakes. To that end we note that both Google and the IETF have published best privacy practices for DoH.
Similar to the concerns with Internet research and measurement using the DNS, tools that mitigate abusive behaviour, and that place legitimate limits even at the content layer, may need to implement and serve the DNS in a privacy-preserving way that is compatible with user choice.
For our purposes, an Internet shutdown can be defined as an intentional disruption of access to, and usability of, the Internet at scale for an entire region. Internet shutdowns violate the rights to free expression, access to information, and assembly. Whether temporary or longer-term, they can negatively affect the economy and other aspects of social and cultural life.
The threat of ubiquitous DNS privacy raises the stakes significantly for censorship at-scale. Whereas before some content might be blocked or filtered for some users, now protocol-based shutdowns drastically affect vastly more people and more information.
The question for researchers and civil society advocates alike is: what happens when the DNS is no longer the easiest way to censor content and spy on users? And what happens when taking those capabilities away leads to blanket shutdowns, whether or not end users have chosen to use DNS privacy tools?
Should users be aware of what the DNS settings are for each app they use on their phone? As an opaque service required to allow a user to access the Internet, the DNS has traditionally been a concern for operating systems, and applications have typically relied on the system for DNS lookups. Microsoft and Apple both recently announced that their operating systems will ship with encrypted DNS APIs, so app developers on these platforms can now enable encrypted DNS for app-specific lookups even if the system resolver doesn’t use encrypted DNS or uses a different resolver.
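To illustrate what an app-specific encrypted DNS lookup involves, the sketch below builds a minimal RFC 1035 wire-format query. An app could POST such a message to a DoH resolver with Content-Type `application/dns-message` (RFC 8484) instead of calling the system resolver; the resolver endpoint is left as a placeholder, and the network step is shown only as a comment.

```python
import struct

def build_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal RFC 1035 wire-format DNS query.

    ID is 0 (as RFC 8484 suggests for cache-friendly DoH), the RD
    (recursion desired) flag is set, and there is one question.
    qtype 1 = A record; QCLASS 1 = IN.
    """
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)

wire = build_query("example.com")
# 12-octet header + 13-octet QNAME + 4 octets of QTYPE/QCLASS = 29.
assert len(wire) == 29

# The app would then send this over HTTPS, bypassing the system resolver:
#   POST https://<doh-resolver>/dns-query
#   Content-Type: application/dns-message
```

The point of the sketch is how little an app needs in order to take DNS resolution into its own hands, which is exactly what makes per-app DNS settings both possible and hard to audit.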
How comfortable are users faced with this choice? As discussed in the Consolidation section, there is an issue of all the DNS traffic now going to the preconfigured DNS resolver in a browser. The counterargument is that this setting is exposed to the user so they could change their resolver. But is this true user consent if the user does not understand the role of DNS and the complicated threat model around it?
In the case of an Internet shutdown that relies on the DNS as the blocking mechanism, it is fairly easy to ask users to switch their DNS in the system settings: it is a one-time action taken in a centralized place. But if every app uses its own DNS settings, this becomes very complicated. Major browsers currently expose the encrypted DNS resolver option to users; what if other apps don't? The counterargument here is that most users never understood the DNS anyway and won't be able to change the system DNS settings either. For savvy users, nothing changes, other than the difficulty of having to troubleshoot every app's DNS lookups.
There are four main tensions faced by advocates for DNS privacy.
For measurement and abuse, it becomes harder to conduct Internet research and abuse mitigation, respectively, when metadata such as the DNS is made more privacy-preserving. But there are net gains when trusted, highly available, and reliable DNS privacy provision becomes ubiquitous. Ubiquity also appears to solve the problem of consolidation, but the hidden and knock-on effects of early leadership by a small number of DNS privacy providers are not yet well understood.
To gain a better understanding, further research is required to delve deeper into these four trade-offs and to explore potential tensions that have not yet been considered.
Mallory Knodel is the Center for Democracy & Technology’s Chief Technology Officer. She is a member of the Internet Architecture Board and the co-chair of the Human Rights Protocol Considerations research group of the Internet Research Task Force. Mallory is also on the advisory committee for the Open Technology Fund.
Shivan Sahib, who co-authored this post, is a privacy engineer with an interest in privacy-respectful standards and is active in the IETF and W3C, co-chairing groups responsible for Oblivious HTTP and privacy research. Shivan is currently on the Advisory Council of Open Tech Fund’s Information Controls Fellowship.
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.