In 1988, Paul Mockapetris and Kevin Dunlap published a paper called ‘Development of the Domain Name System’.
The paper describes the motivations behind the design of the DNS, capturing the environment of the time and the implications of the design decisions during what could be called the early phases of operations.
Thirty years on, rereading the paper provides a backdrop for the further development of the DNS as we know it today.
The future had different plans for the DNS
What’s at first surprising is that Domain Names, as a name space, and the DNS, as a system, were not solely built for each other.
Domain Names emerged from the needs of early inter-networks, a novelty of the time. The DNS was built to automate the management and use of Domain Names, as well as to be extensible for use by other naming systems long since forgotten.
In the 1980s, technologies were competing: those that relied on a unified model withered, while those that adapted to the ecosystem thrived. In a time of tumultuous technological advancement, intentionally malicious actions were not worth the trouble.
Conspicuously absent is concern over intentional system disruption, that is, ‘bad actors’. These days, much attention is paid to abuse of the DNS, as evidenced by marketing literature offering DDoS mitigation as well as parental controls. In the 1980s these were simply not considered, not foreseen, not addressed, and thus absent from the paper.
Mockapetris and Dunlap’s paper contains sections on root servers, surprises, successes, and shortcomings, as well as other comments that stand out today. From those sections, these themes seem to be most pertinent in current work and events:
- Root server load is driven more by the design and tuning of query retry algorithms than by human activity. Digging into this, one can find a seminal paper from 2002 (see below) estimating that only 2% of all root server load is “genuine”; the other 98% is attributed to errant or malicious traffic. More surprisingly, there have been very few follow-ups since that paper to monitor this.
- Over time, the DNS has become confused with Domain Names and the global public Internet. There are still multiple inter-networks, and DNS technology is used in niche environments. When standards of operation are considered, it is important that the DNS protocol remain free of operational assumptions. This permits the reuse of DNS software across the global public Internet and private networks, allowing for the use of Domain Names in ways that do not require the DNS to publish and carry the information.
- The transport protocol chosen for the DNS, the User Datagram Protocol (UDP), was the best option at the time but is fraught with issues, including being a major ingredient in DNS-based DDoS events. Only recently has attention turned to finding a more suitable transport arrangement.
- Finally, the paper hints at concern over the ability of an ever-growing population to manage the DNS: as the population grows, experience thins. Suggestions are made to add features that would make the end-to-end system more manageable, including features that are now controversial. These include disclosure of a server’s software and version, remote inspection of configuration settings (such as DNSSEC ‘trust anchors’), or even remote updates of software in operation by the developer as opposed to the operator.
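The retry behaviour in the first point above can be made concrete. The sketch below is not from the paper; it shows the exponential backoff pattern that modern resolvers broadly favour, where each failed query waits progressively longer before the next attempt, rather than hammering a root server at a fixed short interval. The parameter values are purely illustrative.

```python
def retry_schedule(base: float = 1.0, factor: float = 2.0,
                   max_tries: int = 5) -> list[float]:
    """Exponential backoff: each failed query waits `factor` times
    longer before the next retry. Real resolvers also add jitter so
    retries from many clients do not arrive in synchronized bursts.
    base, factor, and max_tries here are illustrative, not values
    from any particular implementation."""
    return [base * factor ** i for i in range(max_tries)]

print(retry_schedule())  # [1.0, 2.0, 4.0, 8.0, 16.0]
```

A fixed-interval retry loop, by contrast, multiplies load on an already-struggling server, which is why the tuning of these algorithms matters so much to aggregate root server traffic.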
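The UDP transport point can also be illustrated. The following sketch, not from the paper, builds a minimal DNS query in RFC 1035 wire format; the transaction ID, flags, and choice of QTYPE ANY are illustrative. The point is how small a UDP query is: a few dozen bytes can elicit a response many times larger, which, combined with UDP's lack of a handshake (so the source address can be spoofed), is the ingredient that makes DNS attractive for DDoS amplification.

```python
import struct

def build_dns_query(name: str, qtype: int = 255) -> bytes:
    """Build a minimal DNS query packet (RFC 1035 wire format).
    qtype 255 (ANY) is used here for illustration; it was historically
    favoured in amplification attacks because the response can be many
    times larger than the query."""
    header = struct.pack(">HHHHHH",
                         0x1234,  # transaction ID (arbitrary for this sketch)
                         0x0100,  # flags: standard query, recursion desired
                         1, 0, 0, 0)  # 1 question, no other records
    # Encode the name as length-prefixed labels, ending with the root label.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".") if label
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

query = build_dns_query("example.com")
print(len(query))  # 29 bytes on the wire; responses can run to kilobytes
```

Datagram-based transport keeps lookups cheap, but the size asymmetry it permits is exactly what newer transports (DNS over TCP fallback, TLS, HTTPS, QUIC) try to address.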
The paper is a good read for anyone who works on and develops the DNS today. Sadly, for those seeking entertainment, the paper lacks laughably incorrect predictions (as in Thomas J. Watson’s “I think there is a world market for about five computers”).
Hopefully this look back will spur interest in the paper and in applying its lessons to future work.
Edward Lewis is a Senior Technologist in ICANN’s Office of the CTO.
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.