Where did DNSSEC go wrong?

5 Jul 2024

Category: Tech matters


Reflecting on ‘Calling time on DNSSEC?’ by Geoff Huston (APNIC Blog, 28 May 2024), it is undeniably true that the economics are against DNSSEC’s adoption. Although DNSSEC has been deployed through most of the Top-Level Domains (TLDs) of the global public DNS namespace, and despite the financial incentives provided by registries and the availability of free, open-source code bases and toolsets, the economic case has never been compelling. But the trouble with DNSSEC is more than economics; perhaps the design of DNSSEC itself works against its adoption.

DNSSEC’s architecture was originally laid down in the 1990s. The Internet was still in its ‘best effort’ era, connected hosts and protocols were insecure, and there were few best practices for countering and recovering from malicious activity. While the DNS namespace had begun to take shape, there was no significant commercial hosting or provisioning at the time. Tool-wise, there was only one dominant open source platform available.

DNSSEC began with three technical objectives to protect data as the data flowed through the DNS system:

  1. Authentication of data — the data is what is configured.
  2. Integrity of data — the data is complete.
  3. Authentication of negative answers — securing ‘no answer’.
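The first two objectives are met with digital signatures over each set of records: the signer computes the signatures offline with a private key, and any validator can check them with the public key alone. A minimal sketch of the idea, using textbook RSA with toy parameters (nothing like real DNSSEC algorithms or key sizes):

```python
import hashlib

# Textbook RSA with toy parameters -- illustration only, never for real use.
p, q = 2**61 - 1, 2**89 - 1            # small Mersenne primes
n = p * q                               # public modulus
e = 65537                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))       # private exponent (Python 3.8+)

def digest(rrset: str) -> int:
    # Hash the presentation form of the RRset, reduced into the modulus.
    return int.from_bytes(hashlib.sha256(rrset.encode()).digest(), "big") % n

def sign(rrset: str) -> int:
    # Done offline with the private key -- the precomputed signature.
    return pow(digest(rrset), d, n)

def verify(rrset: str, sig: int) -> bool:
    # Done by any validator, with the public key alone.
    return pow(sig, e, n) == digest(rrset)

rrset = "www.example. 3600 IN A 192.0.2.1"
sig = sign(rrset)
print(verify(rrset, sig))                               # True: authentic, intact
print(verify("www.example. 3600 IN A 192.0.2.9", sig))  # False: tampering detected
```

The verification step covers objectives 1 and 2 at once: a valid signature shows the data is what was configured and that it arrived complete. Objective 3, negative answers, needs more machinery, as the next sections describe.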

The sections below revisit a few guiding principles and observations from this early environment, how they framed the discussion of meeting these objectives, and how the resulting decisions influenced the design of DNSSEC.

‘We can’t trust hosts with private keys’

Digital signatures were the approach chosen to provide authentication and integrity for DNSSEC. At the time of this design, host security was unacceptably poor. The private key (or keys) used to generate signatures had to be held in off-network computers relying on removable media to bridge the air gap.

Thus, the protocol design required all responses to be precomputed, with no response data generated based on a query. This is acceptable for positive results, where there is a direct match, but the DNS has three other kinds of responses: negative responses, synthesized responses, and redirected responses.

A precomputed, catch-all negative answer seemed a likely approach but is subject to a replay attack. Instead, DNSSEC chose an approach that required sorting the data in a zone and then revealing ranges of data so that the querier could determine that what it wanted did not exist. The downsides of this are larger responses and revealing more information than is needed. That alone was significant enough to spur an updated approach to negative answers (NSEC3), which itself produces even larger and more confusing responses.
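The sorted-range mechanism can be sketched with a toy zone. Note the zone contents and the plain string ordering here are simplifications; real DNSSEC defines a canonical label-by-label ordering and signs each range record:

```python
import bisect

# Hypothetical zone contents, held in sorted (canonical) order.
names = sorted(["example.", "ftp.example.", "ns1.example.", "www.example."])

def covering_interval(qname: str):
    """Return the (owner, next) pair whose gap proves qname's absence."""
    if qname in names:
        return None                      # name exists; no denial needed
    i = bisect.bisect_left(names, qname)
    owner = names[i - 1]                 # last existing name before qname
    nxt = names[i % len(names)]          # next existing name (wraps at zone end)
    return owner, nxt

# Denying "mail.example." reveals that ftp.example. and ns1.example. exist --
# more information than the querier asked for, enabling zone enumeration.
print(covering_interval("mail.example."))  # ('ftp.example.', 'ns1.example.')
```

The interval pair is what gets precomputed and signed; the cost of the replay-proof design is visible in the comment above: every denial leaks two real names.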

Precomputed responses for synthesized queries, commonly called wildcards in DNS, as well as for redirected responses involving CNAME and DNAME resource records, require DNSSEC to reveal the work done by the responder to the querier. This is in addition to the problems involving precomputed negative answers.
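A rough sketch of why wildcard synthesis forces the responder to reveal its work, using a hypothetical zone: the signature is precomputed over the wildcard owner name, not the queried name, so the validator must be shown the wildcard origin of the answer (plus, in the real protocol, a proof that no closer match existed):

```python
# Hypothetical zone: one precomputed wildcard answers any otherwise-missing
# name under example. -- but its signature names "*.example." as the owner.
zone = {
    "www.example.": "192.0.2.1",
    "*.example.":   "192.0.2.99",
}

def lookup(qname: str):
    if qname in zone:
        return qname, zone[qname]            # direct match, precomputed as-is
    # Try the wildcard one label up.
    wildcard = "*." + qname.split(".", 1)[1]
    if wildcard in zone:
        # The signed owner name is the wildcard, not qname: the response must
        # expose this synthesis so the validator can re-verify the signature.
        return wildcard, zone[wildcard]
    return None, None

print(lookup("anything.example."))  # ('*.example.', '192.0.2.99')
```

The second element of the returned pair is the answer; the first is the responder's work leaking into the response.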

An alternative to this is on-the-fly signing, entrusting online servers with private keys to generate signatures. While not an automatic fix, this is done in practice and has proven feasible but requires all name servers to be under one operator’s control. In practice, integrating an on-the-fly signing approach into a protocol that assumes precomputed answers is complicated. This complexity suggests that some redesign of the protocol is warranted.

‘The job of a zone administrator’

RFC 1034 refers to a ‘zone administrator’, leading to a mindset of a singular entity responsible for a zone. Along with a general mantra of ‘my network, my rules’ in operating a network, this focus created the thought that the zone was a self-contained entity, singularly responsible for its security. Spread throughout the protocol are mentions of ‘local policy’ whenever there were operational choices to be made.

This hindered the role of the parent-child delegation relationship in DNSSEC design. Because the DNS kept the delegation relationship deliberately simple, DNSSEC followed suit. The result is a design that has not worked in practice.

It was anticipated that DNSSEC adoption would occur from the bottom up, as zone administrators recognized and acted upon its benefits. It was assumed that DNSSEC would grow upwards in the namespace, with the root possibly being signed last, once the technology had been fully tested and achieved high reliability. An important element was the use of Trust Anchors (TAs), especially in cases where a zone was signed but its parent zone was not.

Today, DNSSEC is experiencing exactly the opposite. Initially, there were a few pioneering TLDs before the root zone was signed in 2010. Now, over 90% of the root, TLD, and other infrastructure zones are signed, while estimates for signing down in the tree hover under 10%. Currently, the only TA widely managed is for the root zone.

A stronger bond between parent and child zones could potentially resolve many of the issues facing DNS provisioning or registration today. This includes facilitating dynamic refresh of security credentials (keys) and smooth transfer of control from one zone administrator or backend operator to another, especially considering how and why DNSSEC has grown top-down. For this reason, the efforts in the IETF to develop a record called DELEG are important to DNSSEC.

‘Contact with the parent ought to be minimized’

Related to the previous section, this mantra focuses on the exchange of cryptographic key information between child and parent zones. It led to the creation of the Secure Entry Point (SEP) cryptographic role and, from it, the notion of a Key Signing Key (KSK) role and a Zone Signing Key (ZSK) role.

The creation of the two key roles was based on observations in workshops: when DNSSEC required the parent to change records following a child's key change, the parent could be slow to respond. To allow more flexibility in managing key changes, a ZSK that could be changed without requiring updates from the parent was introduced.

However, having two keys instantly doubles the workload of a key manager. The introduction of these two roles has also made explaining DNSSEC (via slides or presentations) much more challenging. Despite the established validation algorithm, there are few strict rules governing the use of these two roles, which can complicate implementation. While most operations adhere to specific practices with these roles, the protocol definition and DNSSEC-supporting software cannot assume common practices universally.
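The parent-child coupling behind the two roles can be sketched as follows, with a simplified stand-in for the DS digest (real DS records hash the owner name in wire format together with the full DNSKEY RDATA, per RFC 4034) and hypothetical key bytes:

```python
import hashlib

def ds_digest(owner: str, ksk_public: bytes) -> str:
    # Simplified DS: a hash binding the zone name to its KSK. The real
    # computation uses wire-format names and full DNSKEY RDATA (RFC 4034).
    return hashlib.sha256(owner.encode() + ksk_public).hexdigest()

ksk = b"example-ksk-public-key-bytes"    # hypothetical key material
ds_at_parent = ds_digest("example.", ksk)  # published in the parent zone

# Rolling the ZSK changes nothing the parent holds, so it can happen freely.
# Rolling the KSK changes the DS digest, forcing an update at the parent:
new_ksk = b"example-ksk-public-key-v2"
print(ds_digest("example.", new_ksk) == ds_at_parent)  # False: parent must update
```

This is the whole point of the split: only the KSK is anchored at the parent, so only KSK changes require the (possibly slow) parent interaction.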

It is possible to run DNSSEC with just one key in a zone; such a key is called a Common Signing Key (CSK). This key can be changed as frequently as needed, simplifying management, especially with advancements in automated provisioning. This approach evolved only after DNSSEC's initial, more rigid practices had proven challenging.

‘All zones are equal’

When DNSSEC was in development, .com was large and filled nearly, if not completely, with delegations. .com was so large that the first-ever DNSSEC signer had to play some tricks to sign the zone. As tempting as it seemed to treat .com and potentially other large zones as ‘special’, in DNSSEC design the prevailing thought was that a simpler, streamlined, every-zone-is-the-same protocol design should be the goal.

A critical aspect that was initially overlooked in DNSSEC design was the unique position of the root zone. Unlike other zones that have parent zones to establish the security of public keys, the root zone stands alone and cannot rely on a parent for validation. This necessitates the use of TAs — preconfigured, trusted keys available in all DNSSEC validators. Managing TAs, especially for the root zone, remains challenging. In hindsight, ensuring widespread distribution and management of TAs for the root zone should have been a primary consideration from the outset.
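A validator's chain of trust, reduced to its essence with hypothetical zone data: every link checks the parent's published DS against the child's key, except the root, which can only be checked against a preconfigured TA shipped with the validator:

```python
import hashlib

def digest(owner: str, key: str) -> str:
    # Simplified stand-in for a DS-style hash binding a zone to its key.
    return hashlib.sha256(owner.encode() + key.encode()).hexdigest()

# Hypothetical keys and the DS records their parents publish.
keys = {".": "root-key", "com.": "com-key", "example.com.": "example-key"}
ds = {
    "com.": digest("com.", keys["com."]),                  # published in the root
    "example.com.": digest("example.com.", keys["example.com."]),  # in com.
}
trust_anchor = digest(".", "root-key")   # preconfigured in every validator

def validate(chain):
    """chain: list of zones from the root down to the target."""
    for zone in chain:
        # The root has no parent DS -- only the TA can vouch for it.
        expected = trust_anchor if zone == "." else ds[zone]
        if digest(zone, keys[zone]) != expected:
            return False
    return True

print(validate([".", "com.", "example.com."]))  # True: chain checks out
```

Everything below the root is anchored by its parent; the root's key is anchored by nothing but the TA, which is why distributing and updating that one value is so consequential.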

Developer vs operator

After initial development, a series of small DNSSEC workshops was held, inviting those who ran DNS servers to participate. For five years, these events were held in many parts of the world to draw in experts to help shape the DNSSEC design. The results of these workshops informed the current definition of DNSSEC.

The initial plan for DNSSEC development was solid. However, the participants involved were not necessarily representative of today's operational landscape. While they were experienced in running DNS services, many were also adept at protocol shaping and possessed programming skills, which allowed them to navigate complexities without solely relying on available tools. This dynamic has led to overconfidence in the protocol's design, and to a tendency to assume that improved education or better tooling is the primary need.

There is a need to recognize the low adoption statistics as a critique of the development of DNSSEC, rather than assuming they merely call for more promotion or education. While there are operators unaware of DNSSEC, or who see it as too daunting, there are many operators who are well aware of DNSSEC and have reasons for not deploying it. Those reasons need to be collected, addressed, and used in any path going forward.

Assumptions about cryptography

Many of the anticipated scenarios for the use of cryptography in DNSSEC were inspired by stories or legends of how cryptography had been employed in military or government contexts. At the time, there was limited public knowledge about cryptography, so these stories and legends served as crucial sources of requirements and inspiration for DNSSEC development.

In contrast to those early use-case scenarios, today's operators tend to adopt a minimalist approach, using as few keys as necessary and typically sticking to one algorithm at a time. This approach minimizes operational complexity and reduces the size of DNS responses, which is critical to network performance. Operators also prefer the default cryptographic parameter settings provided by software packages, balancing security with practical implementation considerations. Studying how DNSSEC makes use of cryptography should lead to approaches that are compatible with the commercial use of the Internet, ensuring both security and efficiency.

Why revisit the past?

These observations may not alter the path towards a secure DNS; they may just serve as historical lessons. They might also guide the design of future DNSSEC replacements or inspire approaches to enhance security within existing systems. At the least, they highlight potential design pitfalls and could inform efforts to make DNSSEC more operator friendly.


The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.
