2019 has been the year of encrypted DNS. After a long time in which the architectures and functioning principles of mass DNS resolver platforms seemed well established, the deployment of the DNS-over-HTTPS (DoH) protocol in browsers has introduced new ways of providing DNS to consumers and challenged many long-standing practices.
Part of the DoH-related conflict, however, seems to come from different understandings of what the DNS is and what it is meant to do. Mozilla, for example, openly claimed that the DNS is not an appropriate network monitoring and control surface for the Internet, a view that ISPs and governments, for different reasons, loudly challenged. Often, people have been talking past each other, struggling to understand each other's views.
In my opinion, this is the effect of people having in their minds very different use cases, requirements and even conceptualizations of the DNS, which leads to misunderstandings. So, do we really know what the DNS is today?
Defining the DNS depends on who you ask
No RFC holds a clear and final definition of ‘the DNS’ and its purpose. The founding standard, RFC 1034, describes it without defining it. Even much more recently, ‘terminology’ RFCs do not provide a clear answer. In general, people seem to agree that the DNS maps names to IP addresses and in the end, when asked to describe it, most people speak of ‘a distributed database’.
Indeed, the DNS supports several applications, not just the original name-to-IP-address conversion, that explicitly use it as a readily available, distributed, fast and very reliable database. The email authentication protocols (SPF, DKIM and DMARC), for example, store their policies and keys in the DNS in the form of TXT records, and DNS resolution is used to query the database and retrieve the information necessary to validate incoming messages.
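As an illustration of this database use, a receiving mail server validating SPF first fetches the sender domain's TXT record and then parses the policy it contains. Here is a minimal, hypothetical sketch of the parsing step only (the record string is invented; a real validator would retrieve it with a DNS TXT query for the sender's domain):

```python
def parse_spf(txt_record: str):
    """Split an SPF TXT record into its mechanisms, or return None
    if the string is not an SPF policy at all."""
    parts = txt_record.split()
    if not parts or parts[0] != "v=spf1":
        return None  # not an SPF record
    return parts[1:]  # mechanisms and modifiers, e.g. ['ip4:...', '-all']

# A published policy might look like this (illustrative values):
policy = parse_spf("v=spf1 ip4:192.0.2.0/24 include:example.net -all")
```

The receiver then evaluates each mechanism against the connecting server's IP address; the point here is only that the policy itself lives in, and is fetched from, the DNS.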
DNS-as-a-database seems to be the concept that the designers of DoH, and especially the people who came up with the early deployment models, had in mind. Their deployment model was based on changing the resolver that the user had been using, assuming that the new resolver would provide the same responses when queried. After all, if the DNS is a distributed database, then whichever resolver you query, you should get the same replies.
The problem, however, is that this is not true for several other DNS use cases, and it has not been for at least ten years. Nowadays, there are multiple DNS-based practices and services that rely on the fact that a specific resolver is used, and on the principle that different resolvers will give different replies to the same query and the same user.
For example, many corporate networks rely, for both their security and their intranet services, on the fact that users connected from within the enterprise's premises will receive different, customized DNS responses from the local resolver. Some networks use names that do not belong to the public DNS hierarchy and that only the local resolver recognizes; others will associate different IP addresses with the same public name, depending on whether they are queried from inside or outside the corporate network.
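A toy sketch of this 'split-horizon' behaviour (all names, prefixes and addresses here are invented) shows how the same question can honestly receive different answers depending on where the client sits:

```python
import ipaddress

# Hypothetical split-horizon resolver: clients inside the corporate
# prefix get internal answers; everyone else gets only public data.
CORPORATE_NET = ipaddress.ip_network("10.0.0.0/8")

INTERNAL_ZONE = {
    "www.example.com": "10.1.2.3",        # internal address of a public name
    "intranet.example.corp": "10.1.9.9",  # name that only exists internally
}
PUBLIC_ZONE = {
    "www.example.com": "198.51.100.7",
}

def resolve(name: str, client_ip: str):
    """Return the address for name, as seen by this particular client."""
    if ipaddress.ip_address(client_ip) in CORPORATE_NET:
        return INTERNAL_ZONE.get(name) or PUBLIC_ZONE.get(name)
    return PUBLIC_ZONE.get(name)  # outsiders never see internal names
```

Outsiders never learn the internal address of www.example.com, and the intranet-only name simply fails to resolve for them; both properties disappear the moment the client queries a different resolver.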
Several content delivery mechanisms and CDNs rely on the DNS. Each resolver, when queried for the name of the content-delivering server, will provide a different IP address, pointing to the topologically nearest and fastest content repository. Often, that address will just depend on where the local resolver itself is, as a good, privacy-friendly approximation of the user’s actual IP address.
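The steering logic can be sketched as follows. This is a deliberately naive model with invented addresses: real CDNs use latency maps and, increasingly, the EDNS Client Subnet extension, rather than a static table keyed by resolver location.

```python
# Hypothetical DNS-based CDN steering: the authoritative server answers
# with the point of presence assumed to be nearest to the querying resolver.
POPS = {"eu": "192.0.2.10", "us": "198.51.100.20", "apac": "203.0.113.30"}

RESOLVER_REGION = {          # assumed geolocation of known resolver IPs
    "192.0.2.53": "eu",
    "198.51.100.53": "us",
}

def answer(resolver_ip: str) -> str:
    """Pick the content server address to return to this resolver."""
    region = RESOLVER_REGION.get(resolver_ip, "us")  # fallback region
    return POPS[region]
```

Switch the user to a resolver in another region and, under this scheme, their downloads quietly move to a more distant server.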
Increasingly, resolvers are also responsible for network security. Operators block the resolution of malware and phishing websites so that users cannot reach them even by mistake, and detect the algorithm-generated domain names used by botnets to stop their proliferation and prevent them from working. Several ISPs also use the resolver to provide customizable filters to their users, for example for parental control or to block advertising trackers.
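At its simplest, such filtering means matching each queried name, and every parent domain of it, against a blocklist before answering. A hypothetical sketch (the blocked names are invented):

```python
# Toy resolver-side filter: a name is refused if it, or any parent
# domain of it, appears on the blocklist -- so subdomains of a blocked
# domain are caught too, as malware and parental-control filters do.
BLOCKLIST = {"malware.example", "phish.example"}

def is_blocked(name: str) -> bool:
    labels = name.lower().rstrip(".").split(".")
    # check the name itself and each successive parent domain
    return any(".".join(labels[i:]) in BLOCKLIST for i in range(len(labels)))
```

Production resolvers typically implement this with Response Policy Zones or vendor-specific filter feeds, but the principle is the same: the answer depends on policy applied at this particular resolver.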
Finally, there are laws and court orders that, economy by economy, require providers to prevent the resolution of certain domain names to make the associated content unavailable, for culturally variable sets of undesirable content (unlicensed gambling, child sexual abuse material, hate speech and copyright violations among others).
In all these cases, if an application suddenly uses a resolver different from the local one, these features and services are disrupted. Clearly, the DNS here does not behave like a distributed database, but like a more sophisticated network direction system.
The problem: resolvers ain’t the same
In a way, the DNS behaves like the policeman in the middle of a busy crossroads, giving different directions to different people not just depending on their destination, but also on who they are, where they come from, which traffic rules are in force at the time and how jammed the various roads are.
This is not an abstract discussion about words: it has consequences for deployment policies. For all uses of this kind, it is paramount that applications respect the user's choice of resolver and, in many cases, that they also prioritize the local network's resolver, because the local network becomes insecure when applications open an encrypted DNS channel to a resolver outside its security perimeter. This is what Mozilla, for example, has addressed by allowing local resolvers to use a 'canary domain' to signal that encrypted DNS to outside resolvers should not be enabled by default, though this is just a temporary stopgap measure.
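The canary mechanism itself is simple: the browser asks the local resolver for Mozilla's canary name, use-application-dns.net, and a withheld answer means 'do not enable DoH by default on this network'. A sketch of the decision logic (the lookup function is a stand-in for a real DNS query):

```python
CANARY = "use-application-dns.net"

def doh_enabled_by_default(lookup) -> bool:
    """lookup(name) returns a list of addresses, or None for NXDOMAIN.
    Any positive answer leaves DoH enabled; NXDOMAIN or an empty
    answer is the resolver's signal to keep DoH off by default."""
    answer = lookup(CANARY)
    return bool(answer)
```

Note that this only governs the default: an explicit user choice to enable DoH overrides the canary signal, which is part of why it is only a stopgap.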
This paradigm helps explain several points of attrition, including the big one: Why are people arguing so passionately about applications changing the default choice of resolver, if in the end all resolvers just query the same database and provide the same responses? The answer is now clear: because they do not. Nor would it be possible to go back to the original database model; there are just too many services and too many networks relying on these uses.
This also explains why some people are starting to see DNSSEC as optional. If, in the end, the choice of resolver is paramount and your resolver becomes the authority on where you should go next, having a secure, authenticated channel to your resolver already provides very good security — and in some cases, when the resolver applies filters or relies on local-only TLDs, its responses would not be valid under DNSSEC anyway.
Indeed, this paradigm puts additional emphasis on the role of DNS resolvers, which need to be complex, smart, secure, and always available. Fortunately, the technology has evolved to the point where this is not a problem; the policies and the concepts around the DNS just need to follow.
Vittorio Bertola is Head of Policy & Innovation at Open-Xchange.
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.