The introduction of encrypted DNS is a natural step in the process of securing the Internet, but it has brought a considerable amount of controversy, because it removes a means of control for network operators — including not only enterprises, but also schools and parents. The solution is to move control of these services to the endpoints of communication — for example, the users’ computers — but doing so has its own challenges.
Secure communication: The new default
In the past half decade, a tremendous amount of effort has been put into securing Internet communications. TLS has evolved to version 1.3, and various parts of the Web platform now require a secure context. Let’s Encrypt was established to lower the barrier to getting a certificate, and work continues to make secure communication easy to deploy, easy to use, and eventually the only option. For example, QUIC, a new transport protocol, only offers encrypted communication.
These changes were not solely motivated by the Snowden revelations; efforts already underway were accelerated by the realization that insecure communications were being exploited at a large scale, and that without action they would continue to be vulnerable, whether to a government (benevolent or otherwise), a criminal, or another attacker.
This effort also reinforces one of the foundations of the Internet’s architecture, the end-to-end principle. As Wikipedia puts it:
In networks designed according to this principle, application-specific features reside in the communicating end nodes of the network, rather than in intermediary nodes, such as gateways and routers, that exist to establish the network.
Encrypting as much communication as possible helps to keep the Internet flexible. Networks often make assumptions about the protocols that run across them based upon current traffic, with the side effect of making changes to those protocols very difficult to deploy (referred to as ‘ossification’ in the industry).
Network interventions
These changes have not come without friction. As communications become more opaque, a number of parties that have previously interposed various policies using the network have found it increasingly difficult to do so.
- Commercial enterprises often impose various policies using network interventions. For example, they scan for viruses, perform data loss prevention, and monitor user activity.
- Schools, prisons and parents sometimes use network interventions to control which parts of the Internet those in their charge can access, and to monitor their activity.
- Some governments use network interventions to impose policy for access to network resources as well, often to prevent access to services or content that is illegal in their jurisdiction.
In each of these cases, the intervention is taking advantage of some latent aspect of the Internet’s protocols. While such monitoring or filtering was not an intended use, the protocols’ designs allowed it by nature. Effectively, these interventions exploit a loophole — one that the protocols neither sanction nor guarantee will always be available.
A fairly smooth start
Recent efforts to secure communication initially focused upon application protocols such as HTTP. Because HTTPS was already defined and used on the Internet, this was an evolution rather than a revolution; whereas before, a typical website using HTTPS was a bank or an Internet retailer, the changes discussed above spread its use to what is now the vast majority of the Web.
As a result, although some expressed concern about the lack of visibility into these data flows (especially networks worried about a loss of caching efficiency), the security goals were well understood, and this is now how the Internet works.
Subsequently, some network operators have expressed concerns about the standardization of the new transport protocol QUIC, because it encrypts signalling that TCP previously sent in the clear. They argued that this information helps them operate their networks, in particular by estimating the latency experienced by connections traversing them.
After much discussion, the spin bit was added to QUIC to address these concerns by allowing endpoints to choose whether to expose this information. Although still somewhat controversial, this represents a successful consultation with an affected community, and the development of an appropriate response — one that satisfies their needs while preserving the security properties desired.
Enter encrypted DNS
More recently, people have become concerned about the security of the Domain Name System (DNS). While the Domain Name System Security Extensions (DNSSEC) provide integrity and authentication for DNS results, they do not provide confidentiality, and those protections typically don’t extend all the way to the client.
The DNS is often used as a way to control and monitor users in the network interventions mentioned above. By recording the names that a client requests, the network can build a profile of their activity; by returning false results, it can make it harder for them to find and contact Internet resources.
The problem is that there’s no way to allow only appropriate interventions in the DNS. While some might be legitimate and even desired by the user, a web browser (or mail client, or app) can’t distinguish between a DNS response generated by a safety filter that a child’s parents have installed, one forged by a criminal sitting in a coffee shop, and one injected by an authoritarian government looking for dissidents.
In other words, if a service such as the DNS is going to be used to impose policy, it needs to be authenticated: so that you can verify who is providing the service, and so that, if the service isn’t available, you can take further action (for example, disabling access until a content filter is available again) without exposing yourself to attacks (for example, denial of service).
This led to the design of DNS over TLS (DoT) and then DNS over HTTPS (DoH). DoT encrypts DNS traffic, and DoH goes one step further by effectively hiding it inside of encrypted web traffic, thereby making it difficult to observe, block or modify.
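To make the mechanics concrete, here is a minimal sketch of a DoH query as defined in RFC 8484: the client builds an ordinary DNS message and POSTs it to the resolver over HTTPS. It uses only Python’s standard library; the choice of Cloudflare’s public endpoint is purely illustrative, and any RFC 8484 resolver would do.

```python
# A minimal sketch of a DoH query (RFC 8484) using only the standard library.
# It encodes a DNS query for an A record in wire format (RFC 1035) and sends
# it over HTTPS. The resolver URL is an illustrative choice of public endpoint.
import struct
import urllib.request

DOH_URL = "https://cloudflare-dns.com/dns-query"

def build_query(name: str, qtype: int = 1) -> bytes:
    """Encode a DNS query message for `name` (qtype 1 = A record)."""
    # Header: ID=0, flags=0x0100 (recursion desired), one question, no other records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii") for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

def doh_query(name: str) -> bytes:
    """POST the query to the DoH resolver and return the raw DNS response."""
    request = urllib.request.Request(
        DOH_URL,
        data=build_query(name),
        headers={
            "Content-Type": "application/dns-message",
            "Accept": "application/dns-message",
        },
    )
    with urllib.request.urlopen(request) as response:
        return response.read()

if __name__ == "__main__":
    answer = doh_query("example.com")
    print(f"received {len(answer)} bytes of DNS response over HTTPS")
```

Because the query and response travel inside TLS, an on-path observer sees only an HTTPS connection to the resolver, not the names being looked up.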
Importantly, DoT and DoH were not developed on the assumption that the interventions listed above are illegitimate. Rather, they were designed with the knowledge that it’s not possible to distinguish between a DNS resolver providing a service wanted by the user (or the owner of the computer) and one that’s attacking them, because of the way that DNS is configured (typically, from the ambient network environment).
They also don’t provide a new way of circumventing these controls; that has existed for some time using Virtual Private Networks (VPNs) of various flavours, as well as some forms of DNS proxies. Some of these workarounds are even available as browser extensions. What they do is make sure that average, non-technical users have the same protections available.
The shortcomings highlighted by DoH
DoH, in particular, has turned out to be more controversial than other efforts to increase the use of TLS.
- Some security products use DNS as a way to identify requests from viruses and malware. DoH circumvents such services if they’re interposed by the local network’s chosen DNS server.
- Some economies use the DNS to block various kinds of content. DoH can be used to circumvent such filters.
- Network operators are also alarmed by the lack of visibility and control brought by DoH.
In each of these cases, the DNS was assumed to be a viable and appropriate way to achieve these goals, even though it was not designed to support such interventions securely, and even though they can be circumvented (often easily). Because they don’t require configuring every application and device, network interventions such as DNS filters are an attractive (if problematic) way to impose policy.
What, then, would be a secure and appropriate way to meet these goals? Through all of this, the mantra of those who advocate securing the Internet has been to move control to the endpoints, so that there is some assurance that the user is aware of any interventions, and that whoever owns the computer controls them.
However, this assumes that it’s easy to do so. While it may be technically possible to perform such functions on a user’s laptop or mobile device (in the sense that they’re general-purpose computers), the facilities that modern operating systems offer for such mechanisms are currently lacking.
Commercial enterprises are the best served by current operating systems, using a thicket of mechanisms that fall under the umbrella term ‘device management’; for example, Mobile Device Management (MDM) for laptops and mobile phones.
Likewise, some consumer-facing operating systems are offering improved controls for parents. For example, Apple’s iOS offers fine-grained remote control and monitoring of application use on a managed iPhone or iPad. Microsoft, likewise, has greatly improved the parental controls in Windows over time.
In time, these mechanisms will accommodate the need for DoH configuration, with appropriate controls to assure that it is respected by well-behaved applications.
The path forward
Like VPNs and similar circumventions, DoH is a genie that cannot be put back in its bottle. Even if the browser vendors can be convinced that it is a bad idea (and there is little evidence that they will be swayed), it’s trivial for any technically competent programmer to write a DoH proxy to run on their machine, and a DoH service to co-locate with a website. DoH exists; it was possible even before it was specified and can’t be wished away.
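To illustrate how low that bar is, here is a rough sketch of such a proxy, again using only Python’s standard library: it accepts plaintext DNS queries on the loopback interface and relays each one, unmodified, to an upstream DoH resolver. The local port and the choice of upstream resolver are illustrative, not recommendations.

```python
# A rough sketch of a local DoH forwarding proxy: plaintext DNS in on the
# loopback interface, DNS over HTTPS (RFC 8484 POST) out to the upstream
# resolver. The upstream URL and the local port (5353) are illustrative.
import socket
import urllib.request

UPSTREAM_DOH = "https://cloudflare-dns.com/dns-query"
LISTEN_ADDR = ("127.0.0.1", 5353)

def forward(query: bytes) -> bytes:
    """Relay one raw DNS message to the upstream DoH resolver and return its reply."""
    request = urllib.request.Request(
        UPSTREAM_DOH,
        data=query,
        headers={
            "Content-Type": "application/dns-message",
            "Accept": "application/dns-message",
        },
    )
    with urllib.request.urlopen(request) as response:
        return response.read()

def main() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN_ADDR)
    print(f"DoH proxy listening on {LISTEN_ADDR[0]}:{LISTEN_ADDR[1]}")
    while True:
        query, client = sock.recvfrom(4096)  # one UDP datagram = one DNS query
        try:
            sock.sendto(forward(query), client)
        except OSError:
            pass  # drop the query if the upstream resolver is unreachable

if __name__ == "__main__":
    main()
```

Pointing a machine’s stub resolver at a proxy like this is enough to move all of its DNS traffic inside ordinary HTTPS.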
More to the point, encrypted DNS addresses critical security issues in the current Internet architecture. Any weakness left will be exploited by attackers, making everything we do online vulnerable.
Instead, there needs to be a renewed focus on the ability of whoever is responsible for a computer to manage it, whether that’s a corporate IT department, a school’s administrator, a parent, or just someone who wants more control over what they’re exposed to online.
That means a lot of careful thought and discussion with operating system vendors; they are the custodians of the computer’s view of the world, and can provide the appropriate abstractions to applications, users and administrators.
For example, a computer might be configured to disallow certain kinds of content. Several already do, thanks to their ‘software store’ model of installing applications. Those constraints should apply not only to the applications installed, but also to the content viewed with the web browsers on the system — even if they were not provided by the operating system vendor. That implies a significant amount of coordination between these parties.
What that coordination looks like is still unclear, but it could mean a system-wide DoH service, configured by the system’s administrator, that imposes the desired policy. In other words, the implementation of these services need not move to the endpoints, just the control. This empowers users and helps assure that networks focus on what they do best: unmodified packet delivery. It also assures that they don’t unintentionally cause ossification in the Internet.
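As a purely hypothetical sketch of what that control surface might look like from an application’s point of view, consider something along these lines. The file path, file format, and field name are invented for illustration and do not correspond to any existing operating system interface.

```python
# A purely hypothetical illustration of the kind of abstraction an operating
# system could expose: applications ask the system which DoH resolver policy
# applies, rather than hard-coding their own. The path, format, and field
# names below are invented for illustration only.
import json
from pathlib import Path

# Hypothetical location for an administrator-managed resolver policy.
POLICY_PATH = Path("/etc/resolver-policy.json")

APPLICATION_DEFAULT = "https://doh.example.net/dns-query"  # placeholder URL

def choose_resolver() -> str:
    """Prefer the administrator's configured DoH resolver, if one is set."""
    if POLICY_PATH.exists():
        policy = json.loads(POLICY_PATH.read_text())
        # 'resolver_url' is an invented field name, standing in for whatever
        # configuration surface a real operating system might provide.
        configured = policy.get("resolver_url")
        if configured:
            return configured
    return APPLICATION_DEFAULT

if __name__ == "__main__":
    print(f"a well-behaved application would use: {choose_resolver()}")
```

The point is not the mechanism but the division of labour: whoever administers the device sets the policy once, and well-behaved applications consult it rather than each shipping their own.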
The technically inclined will note that it’s impossible to completely disallow all ‘offensive’ content, even if you can identify it; a user (for example, a precocious child, or just one with technically inclined friends) can find a workaround on a system that has even a sliver of capability. The point, though, is to put the tradeoff between capability and risk where it belongs: with the person who owns the computer.
It will also be important to assure that the endpoints themselves don’t become vulnerable to new kinds of attacks once they have these more powerful capabilities. Furthermore, the relationship between ownership of a computer and control over it deserves more careful thought, since many people use personal computers on behalf of their employers or another party that might want control over them, and vice versa. This extends to the ‘software store’ model of computing; if that becomes a control point, care needs to be taken to assure that it does not concentrate power inappropriately.
All of this requires collaboration across the industry, including operating system vendors, network operators, protocol designers, application developers, and end users and their advocates. Fostering that collaboration will require trust and goodwill between these parties, both of which seem to be in short supply in the discussions to date.
Thanks to Martin Thomson, Joseph Lorenzo Hall, Patrick McManus and Daniel Kahn Gillmor for reviewing this article.
Mark Nottingham is a member of the Internet Architecture Board and co-chairs the IETF’s HTTP and QUIC Working Groups.
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.