Running code at IETF

There was an interesting discussion about a proposal in a Working Group session at the recent IETF 111 meeting. Not unusual. But this proposal was that the Working Group should require at least two (presumably, independently developed) implementations of a Working Group draft before the Working Group would consider the document ready for submission to the Internet Engineering Steering Group (IESG) and subsequent progression to publication as an RFC.

What’s going on here?

Some background to ‘running code’

To provide some background to this discussion we should cast our minds back to the 1970s and 1980s and the industry’s efforts at that time to define an open network standard. At the time, the computer vendors had separately developed their own proprietary networking protocols, built to widely differing architectural principles and intended to meet very diverse objectives. IBM used SNA, Digital had DECnet, Apple had AppleTalk and Wang had the delightfully named WangNet, to name just a few.

While this worked well for larger computer vendors, customers became increasingly frustrated with this implicit form of vendor lock-in. Once they had purchased a mainframe from one vendor, they were locked into purchasing all their IT infrastructure from the same vendor, as other vendors’ equipment simply would not work with the installed equipment.

Customers were trapped. Vendors knew that and, at times, ruthlessly exploited that condition by charging a premium for peripherals such as terminals, printers, and data entry systems. Customers wanted to break apart their IT environment and source peripherals, mainframe systems, and indeed their entire networks, as separate transactions. What they needed was a common standard for these components so that several vendors could provide individual products that would interoperate within the customer’s network.

This effort to define a vendor-neutral common network environment was taken up through a program called Open Systems Interconnection (OSI), a reference model for computer networking developed by the International Organization for Standardization and the International Electrotechnical Commission (ISO/IEC).

The program’s intentions were doubtlessly worthy and laudable, despite sluggish support from some of the larger computer vendors. It produced an impressive stack of paperwork, held many meetings in many fine places, and surely the participants enjoyed many fine dinners. However, in terms of the technology it managed to define, its outcomes were a little more mundane.

Read: DNS talk @ IETF 111

At the transport level, two competing and mutually incompatible technologies were incorporated into a single OSI transport standard. At the application level, there was an incomprehensible jumble of text that did not lend itself to working code, or even consistent human interpretation!

There were many problems inherent in the OSI effort, including (notably) the unwillingness of existing vendors to drop their proprietary platforms and embrace open and vendor-neutral technology.

Another issue was that the standards-making process attempted to resolve differences through compromise, making everyone equally unhappy with the outcome. Take, for example, the process where the relevant Asynchronous Transfer Mode (ATM) standards body was unable to decide between a 32-octet and a 64-octet payload. Eventually, it compromised on a payload size of 48 octets, which, when coupled with a 5-octet header, resulted in the rather bizarre ATM cell size of 53 octets.

The community working on the development of the Internet protocols at the time was increasingly dismissive of the OSI efforts. There was a glaring disconnect between the various optimistic public policy statements promoting open systems and OSI (such as the Government OSI Profile (GOSIP) adopted by public sectors in many national environments) and the practical reality: the OSI protocol suite was simply undeployable, and the various implementations that existed at the time were fragmentary in nature. Perhaps more concerning was that it was altogether dubious whether these various OSI implementations interoperated in any useful way!

The IP effort was gaining momentum at the same time. Thanks to a US Defense Advanced Research Projects Agency (DARPA) funded project, an implementation of the TCP/IP protocol stack on Unix was available as an open source package from the folk at Berkeley. The result was a startlingly good operating system: Unix coupled with a fully functional, and amazingly versatile, networking protocol in the form of IP, for essentially no cost at all.

There was an evident rift between policy and practice in the industry during this period. Many public sector procurement processes were signed up to GOSIP as a means of inviting vendors to commit to OSI services. At the same time, many vendors and customers were embracing TCP/IP as a practical and fully functional technology and assisting these same public agencies in writing excuses as to why GOSIP might be fine for others, but was inappropriate for them, especially when considering the agency’s particular circumstances.

The IP folk couldn’t understand why anyone would sign up to non-functional technology, while the OSI folk, particularly those in Europe, couldn’t understand why anyone would commit to a common networking technology that was wholly and completely controlled by the United States. The links between IP’s initial program instigator, the US’s DARPA, and the implicit involvement of the US government itself were anathema to some folk, who took to calling the protocol ‘DoD IP’ as a direct reference to its US military origins. The IP folk were keen to avoid an overt confrontation at a political level and through the late 1980s, as IP gained traction in the global research community, they were consistent in calling the Internet ‘an experiment’ in broad scale networking technology, with the ostensible intention that this experiment would inform the further development of OSI into a deployable and functional technology platform.

Things came to a head in mid-1992 when the Internet Architecture Board (IAB) grappled with the then-topical question of scaling the Internet’s routing and addressing. It published a now infamous statement that advocated further development of the IP protocol suite by using the Connectionless Network Service (CLNS) part of OSI. This sparked a strong reaction in the IP community and resulted in a reversal of the stated direction to use CLNS in IP. It also resulted in a complete restructuring of the IETF itself, including the IAB! There was also a useful period of introspection to determine why IP was becoming so popular and what the essential differences were between the standardization processes used by the technical committees in ISO/IEC and those used by the IETF.

Dave Clark of MIT pithily summarized the IETF’s mode of operation at the time in his A Cloudy Crystal Ball/Apocalypse Now presentation at the July 1992 IETF meeting: “We reject kings, presidents, and voting. We believe in rough consensus and running code.”

The first part of this statement is a commentary on the conventional process in which delegates to a standards meeting vote on proposals, and adoption is decided by majority vote. Dave expressed the notion that it’s not which company or interest you represent at a meeting that counts in the IETF. It’s whether what you are proposing makes sense to your peers! It was a message to the IAB at the time, implying that making direct pronouncements about the future direction of IP was simply not a part of the emerging culture of the IETF. The second part of the statement, namely “running code”, was a commentary about the standards process itself. Whether or not the IETF has reverted to voting over the intervening years is a topic for another time. Here, let’s look at the “running code” part in a bit more detail.

The IETF standards process

The entire concept of a ‘standard’ in the communications sector is to produce a specification for a technology that allows different folk to produce implementations of the technology. Those implementations should interoperate with each other in a completely transparent manner.

An implementation of a protocol should not have to care in the slightest whether the counterpart it is communicating with over the network is a clone of itself or an implementation generated by someone else. It should not be detectable, and it should certainly not change the behaviour of the protocol. That’s what ‘interoperable’ was intended to mean in this context.

There were a few other considerations about this form of ‘industry standard’, namely that the standard did not implicitly favour one implementation or another. A standard was not intended to be a competitive bludgeon where one vendor could extract an advantage by making their technology the ‘standard’ in an area. Neither was it intended to be the tombstone of a technology, where no vendor was willing to implement a standard because they were unable to make money from something that was no longer current or useful.

However, “running code” (at the time of the OSI debate) expressed a more fundamental aspect of a technology specification. The standard was sufficiently self-contained that a consumer of the standard could take the specification and implement the technology it described, producing a working artefact that interoperated cleanly with any other implementation. The specification did not require any additional information to produce an implementation. This is the longer version of the intent of “running code”.
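
As a loose illustration of what ‘two independent, interoperable implementations’ means in practice, here is a minimal sketch (not from the article, and using a made-up toy message format purely for illustration): two separately written encoders and decoders of the same trivial ‘specification’, cross-checked against each other.

```python
# Toy 'specification' (hypothetical, for illustration only): a message is the
# ASCII decimal length of the payload, a colon, then the payload itself.

# Implementation A: built around string formatting.
def encode_a(payload: str) -> bytes:
    return f"{len(payload)}:{payload}".encode("ascii")

def decode_a(wire: bytes) -> str:
    length, _, rest = wire.decode("ascii").partition(":")
    assert len(rest) == int(length), "malformed message"
    return rest

# Implementation B: written independently, working at the byte level.
def encode_b(payload: str) -> bytes:
    body = payload.encode("ascii")
    return str(len(body)).encode("ascii") + b":" + body

def decode_b(wire: bytes) -> str:
    head, _, body = wire.partition(b":")
    assert len(body) == int(head), "malformed message"
    return body.decode("ascii")

# Interoperability check: each implementation must accept the other's output.
for msg in ["hello", "", "rough consensus and running code"]:
    assert decode_b(encode_a(msg)) == msg
    assert decode_a(encode_b(msg)) == msg
print("two independent implementations, one specification, clean interop")
```

If the written specification alone is enough to get both implementations to this point, it has passed the “running code” test in the sense described above.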

What this means in practice is described at length in RFC 2026:

“…an Internet Standard is a specification that is stable and well-understood, is technically competent, has multiple, independent, and interoperable implementations with substantial operational experience, enjoys significant public support, and is recognizably useful in some or all parts of the Internet.”

RFC 2026

However, the reader should be aware of a subtle shift in terminology here. The statement in RFC 2026 is not referring to a published RFC document or a working draft that has been adopted by an IETF Working Group. It’s referring to an ‘Internet Standard’. As this RFC describes, there is a track that a specification is expected to progress through, from a Proposed Standard to a Draft Standard to an Internet Standard. This process was updated in 2011 with the publication of RFC 6410, which recognized that there was considerable confusion about the exact role of the Draft Standard within the process, illustrated by the observation that remarkably few specifications were moving from Proposed to Draft Standards. So, RFC 6410 described a two-step process for the ‘maturation’ of an Internet Standard. The stages in the IETF standards process are:

  • Proposed Standard: A Proposed Standard specification is generally stable, has resolved known design choices, is believed to be well understood, has received significant community review, and appears to enjoy enough community interest to be considered valuable. However, further experience might result in a change or even retraction of the specification before it advances. Usually, neither implementation nor operational experience is required for the designation of a specification as a Proposed Standard.
  • Internet Standard: A specification for which significant implementation and successful operational experience has been obtained. An Internet Standard (which may simply be referred to as a Standard) is characterized by a high degree of technical maturity and by a generally held belief that the specified protocol or service provides significant benefit to the Internet community.

What’s missing from RFC 6410 is the specification of a Draft Standard. The earlier RFC 2026 included a requirement for “running code”, specifically that: “…at least two independent and interoperable implementations from different code bases have been developed, and for which sufficient successful operational experience has been obtained.”

It appears that the IETF had learned to adopt a flexible attitude to “running code”. As RFC 6410 notes: “Testing for interoperability is a long tradition in the development of Internet protocols and remains important for reliable deployment of services. The IETF Standards Process no longer requires a formal interoperability report, recognizing that deployment and use is sufficient to show interoperability.”

This strikes me as a good example of ‘Well, yes, but no!’: a form of evasive expression that ultimately eschews any formal requirement for running code in the IETF standards process.

RFC 6410 noted that: “The result of this change is expected to be maturity-level advancement based on achieving widespread deployment of quality specifications. Additionally, the change will result in the incorporation of lessons from implementation and deployment experience, and recognition that protocols are improved by removing complexity associated with unused features.”

How did all this work out? Did anyone listen? Let’s look at the numbers to find out.

RFCs by the numbers

At the time of writing in August 2021 we appear to be up to RFC 9105 in the series of published RFCs. However, some 184 RFC numbers are currently listed as Not Issued. There are a further 25 number gaps in the public RFC document series, leaving 8,896 documents in the RFC series.

Of these 8,896 RFCs, 331 are classified as Historic and 887 of the earlier RFCs (before November 1989) are marked with an Unknown status. 2,789 are Informational, 300 are classified as Best Current Practice, and 522 are Experimental. The remaining 4,832 RFCs, or 54% of the entire body of RFC documents, are Standards Track documents.

Of these 4,832 Standards Track documents, some 3,806, or 79% of the Standards Track collection, are at the first stage, namely Proposed Standard. A further 139 documents are Draft Standards and have been stranded in this state since the publication of RFC 6410 in 2011.

Just 122 RFCs are Internet Standards. To be accurate, there are currently 85 Internet Standard specifications, each of which incorporates one or more component RFCs from this total set of 122 RFCs. That’s just 2.5% of the total number of Standards Track RFCs. Almost one half of these Internet Standard specifications were generated in the 1980s (47 RFCs that are a Standard, or part of a Standard, have original publication dates in the 1980s or earlier), just 21 in the 1990s, and 28 in the 2000s. A further 26 RFCs were published as Internet Standards in the 10 years since RFC 6410 was published in 2011.
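
As a quick back-of-the-envelope check, here is a minimal sketch that reproduces the shares quoted above from the article’s own August 2021 figures (illustrative arithmetic only; the counts themselves are as reported in the text):

```python
# Figures as quoted in the text above (August 2021 snapshot).
highest_rfc_number  = 9_105
not_issued          =   184
number_gaps         =    25
total_rfcs          = 8_896   # 9,105 - 184 - 25
standards_track     = 4_832   # Standards Track RFCs
proposed_standards  = 3_806   # still at the Proposed Standard stage
internet_standards  =   122   # RFCs that form part of a full Internet Standard
by_decade = {"1980s or earlier": 47, "1990s": 21, "2000s": 28, "since 2011": 26}

print(highest_rfc_number - not_issued - number_gaps)      # 8896 documents in the series
print(f"{standards_track / total_rfcs:.0%}")              # ~54% of all RFCs
print(f"{proposed_standards / standards_track:.0%}")      # ~79% of Standards Track
print(f"{internet_standards / standards_track:.1%}")      # ~2.5% of Standards Track
print(sum(by_decade.values()))                            # 122, consistent with the total
```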

This strongly suggests that many IETF participants apply their energy to getting a specification onto the Standards Track at the first (Proposed) level, and are not overly fussed about progressing the document any further once it reaches this initial Standards Track designation. It also suggests that, for many, there is no practical difference between a Proposed Standard and a full Internet Standard.

If the objective of the IETF is to foster the development of Internet Standard specifications, then strictly speaking it has not enjoyed a stellar record over its 30-year history. These numbers suggest that if the broader industry looks behind the subtleties of the RFC classification process, and it probably does not, then the Proposed Standard certainly appears to be more than sufficient. The distinction conferred by formal validation of “running code” is, therefore, a piece of largely forgotten IETF mythology.

Working Groups and running code

So, the formal requirement for running, and interoperable, code has been dropped from the IETF standards process. For some IETF Working Groups, however, a form of requirement for implementations of a proposed specification is still part of the process. In Inter-Domain Routing (IDR), where there are 24 active drafts in the Working Group, it is common practice to request implementation reports of draft specifications as part of the criteria for advancement of a draft through the Working Group! This requirement appears, though, to be applied in various ways for various drafts.

In other cases, such as DNSOP (DNS Operations), there has been pushback from DNS software vendors against feature creep in drafts in this space (known as the ‘DNS Camel’, after a now infamous presentation at IETF 101 that decried the feature bloat being imposed on the DNS).

Read: The DNS Camel…

The response from some vendors is not to implement any DNSOP Working Group drafts (there are 17 active documents in the Working Group at present), and instead to treat publication as a Proposed Standard as a precondition of any code development.

At IETF 111 there was a discussion in SIDROPS (Secure Inter-Domain Routing Operations) about introducing an IDR-like requirement for implementations as some form of precondition for a draft to progress to RFC publication. Although, in the context of an operations Working Group (as distinct from a protocol development Working Group), such a move by SIDROPS would probably only add to the levels of confusion rather than provide any clarity!

It appears that Working Groups in the IETF have varying positions on what “running code” actually means and whether any requirement should be folded into the Working Group’s processes. Perhaps that spectrum of variance within the IETF reflects a deeper level of confusion about what we mean by “running code” in the first place.

Some Proposed Standards have already had implementations, and some level of interoperability testing by the Working Group, before publication as a Proposed Standard RFC. Some have not. And the RFC itself does not record the various processes used by the Working Group to get the specification to the state of a Proposed Standard. No wonder folk get confused!

What do we mean by “running code” anyway?

This question lies at the heart of the conversation.

On the one hand, the phrase was intended as a summation of the original set of criticisms of the ISO/IEC effort with OSI. If an organization generates its revenue by selling paper copies of standards documents, which was common at the time, then producing more paper-based standard specifications is how the organization continues to exist. At the time, this situation degenerated into writing technical specifications for technologies that simply did not exist, or ‘paperwork about vapourware’.

The IETF wanted to distinguish its efforts in several ways. It wanted to:

  • Produce standard specifications that were freely available and available for free.
  • Produce specifications that were adequately clear, so that they were able to guide implementers to produce working code.
  • Produce specifications that were adequately complete, so that multiple independent implementations based on these specifications could interoperate.

At the same time, the IETF implicitly wanted a lot more than just this ‘elemental’ view of standard specifications. It was not particularly interested in a disparate collection of specifications, each of which merely met the objective of supporting interoperable implementations. It wanted the collection of such specifications, taken together, to describe a functional networked environment.

I guess this larger objective could be summarized as a desire to produce a collection of specifications that each could support running code. Taken together, the collection could support a functional networked environment that supported running packets! This ‘running network’ objective was an intrinsic property of the 1980s vendor-based proprietary network systems and was the declared intention of the OSI effort. Consumers could therefore purchase components of their networked environment from different vendors and put them together to construct a functional network when it suited. The parts had to work together to create a unified and coherent whole, and the IETF certainly embraced this objective.

However, the IETF’s thinking evolved to take on a grander ambition. With the collapse of the OSI effort in the early 1990s it was clear that there was only one open network architecture left standing, and that was IP. So, the IETF added a further, and perhaps even more challenging, ambition to the mix. The technology specified through the IETF process had to scale. It had to be fit for use within the Internet of today and of tomorrow. The specifications that can be used for tiny deployments involving just a couple of host systems should also be applicable to vast deployments that span millions and even billions of connected devices.

When we think about the intended scope of this latter objective, the entire exercise of producing specifications to support “running code” becomes a lot more challenging. The various implementations of a specification have to interoperate and play their intended role in supporting a coherent system, and be capable of scaling to support truly massive environments. Are we capable of predicting such properties from the technology specification? No one expected the BGP protocol to scale to the extent that it has. On the other hand, our efforts to scale up secure network associations have been consistently found wanting. Scalability is a hard property to predict in advance.

At best, we can produce candidate technologies that look viable in such a context. Ultimately, we only know whether the specifications meet expectations when they are evaluated in the light of experience. If the intended definition of an Internet Standard is a specification that has all these attributes, including scalability, then at best it’s a historical document that merely blesses what has already worked. Such a document has little practical value to consumers and vendors looking to further refine and develop their digital environment.

Read: More notes from IETF 111

Maybe it’s the case that Proposed Standard specifications are good enough. They might scale up and work effectively within the Internet. They might not. We just don’t know yet. The peer review process in the Working Group and in IESG review performs a basic sanity test, establishing (hopefully) that the proposed specification is not harmful, frivolous, or contradictory, and that it appears safe to use.

Maybe that’s enough. Perhaps that’s as much as the IETF could, or should, do.

A standard specification, no matter how carefully it may have been developed, is not the same as a cast-iron assurance that the resultant system in which it is used will perform. Such a specification cannot guide a consumer along a narrowly constrained single path when there’s a diverse environment of technology choices. A standard is, at best, part of an agreement between a provider and a consumer that the good or service being transacted has certain properties. If many consumers prefer to use a particular standard in such agreements, then producers will provide goods and services that conform to that standard. If they do not, then the standard is of little use. So, in many ways, deciding on the role of a standard specification, and whether it is useful, is ultimately a matter for the market, not the IETF.

Perhaps the most appropriate aspiration of the IETF is to produce specifications that are useful to consumers. If consumers are willing to cite these specifications in the requirements for the goods and services they purchase, then producers will be motivated to provide goods or services that conform to these specifications.

Running code?

And what about “running code”?

Maybe RFC 6410 was correct in dropping a formal requirement for multiple interoperable implementations of a specification. Judging whether a standard is sufficiently complete and clear to support the implementation of running code is a function that the market itself is entirely capable of performing. Front-loading the standards development process with implementations of various proposals can be seen as an additional cost and effort in the standardization process.

On the other hand, “running code” is a useful quality filter for proposed specifications.

If a specification is so unclear that it cannot support the development of interoperable implementations, then it’s not going to be a useful contribution and it should not be published. No matter how many hums of approval or IESG ballot votes endorse a specification as a Proposed Standard, if the specification is unclear, ambiguous, woefully insecure, or just dangerous to use, then it should not be a Proposed Standard RFC. And if we can’t use the specification to produce running code, then the specification is useless!

Personally, I think that there is a place for “running code” in today’s IETF, and it should be part of the peer review process that guides a Working Group’s deliberations on advancing a candidate proposal to a Standards Track RFC.

And while I’m talking about my wish list, maybe we should drop all pretense about some subtle distinction between Proposed and Internet Standards. Just call them all ‘Standards’ and be done with it!
