The Internet Architecture Board (IAB) has published RFC 8890, The Internet is for End Users, arguing that the Internet Engineering Task Force (IETF) should ground its decisions in what’s good for people who use the Internet and that it should take positive steps to achieve that.
Why does this need to be said? Is it going too far? Who else could they favour, and why should you care? As the author of the RFC and a member of the IAB that passed it, here are my thoughts.
How the Internet is made
The IETF plays a central role in the design and operation of the Internet. Formed in the 1980s by the engineers who created several of the Internet’s core technologies, it is the primary venue for documenting the technical design of the Internet and has overseen the development of protocols like TCP, IP, DNS and HTTP.
Companies, governments and other organizations don't officially participate in the IETF; participants represent only themselves, and there isn't even a concept of membership. Decisions about specifications are made by 'rough consensus': rather than holding formal votes, the IETF tries to find the best solution for each issue, based on the ideas, comments and concerns brought to it.
'Best' doesn't mean the choice with the most companies supporting it; it's the one with the strongest technical arguments behind it. Working Group chairs judge that consensus; if someone thinks they have got it wrong, they can appeal through a chain of authorities selected for their experience.
Or, in the words of the unofficial IETF credo: ‘We reject kings, presidents and voting. We believe in rough consensus and running code.’
Technical or political? Both
Naturally, most IETF decisions are about technical details: what bytes should go where, how a server reacts to a client, and so on. Because of this, participants often tell themselves that their decisions are never political; that any such concerns sit on 'layer 8' — a wry reference to whatever lies above the seven-layer stack of abstractions commonly used to describe network protocols — and are therefore out of scope.
[T]he running code that results from our process (when things work well) inevitably has an impact beyond technical considerations, because the underlying decisions afford some uses while discouraging others.
The Internet is for End Users
However, the barrier between the bits on the wire and political matters has turned out to be leaky, as most abstractions are. Sometimes the ability to send information (or the prevention of it) has real-world consequences that take power from some people and give it to others. Likewise with the ability to see what other people are saying, and to control the format in which they speak.
So, in a world that is increasingly intertwined with the Internet, it’s becoming more difficult to maintain the position that the design of Internet protocols doesn’t have a political dimension. All decisions have the possibility of bias; of advantaging or disadvantaging different parties.
For example, the recent standardization of DNS-over-HTTPS (DoH) pitted advocates for dissidents and protestors against network operators who use DNS for centralized network management, and against child safety advocates who promote DNS-based filtering. If the IETF were to decide only on technical merit, how would it balance these interests?
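To make the dispute concrete, here is a minimal sketch of a DoH lookup in Python. It uses Cloudflare's public JSON endpoint purely for illustration (any DoH resolver would do); the point is that the query travels inside ordinary HTTPS traffic, out of reach of the network-level DNS management and filtering mentioned above.

```python
# Minimal DoH lookup sketch: the DNS query rides inside an ordinary
# HTTPS request, so on-path observers see only encrypted traffic.
import json
import urllib.request

def doh_lookup(name, record_type="A"):
    # Cloudflare's public JSON API is used here only as an example.
    url = f"https://cloudflare-dns.com/dns-query?name={name}&type={record_type}"
    req = urllib.request.Request(url, headers={"Accept": "application/dns-json"})
    with urllib.request.urlopen(req) as resp:
        answer = json.loads(resp.read())
    # Each record in "Answer" carries the resolved value in "data".
    return [rec["data"] for rec in answer.get("Answer", [])]

print(doh_lookup("example.com"))
```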
Another example is the Encrypted Client Hello proposal, which closes a loophole that exposes the identity of the HTTPS sites you visit to anyone listening on the network. China has reportedly started blocking connections that use it, so in a purely technical sense it will not work well, because some networks block it. Should the IETF stop working on it?
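The loophole in question is the Server Name Indication (SNI) field, which a conventional TLS ClientHello carries in cleartext; ECH encrypts exactly this information. The rough sketch below (illustrative only, not production code) shows how little work an on-path observer needs to do to recover the site name from a captured ClientHello record.

```python
# Extract the cleartext SNI from a captured TLS ClientHello record.
# Offsets follow the TLS 1.2/1.3 wire format; error handling omitted.
def extract_sni(record):
    if record[0] != 0x16 or record[5] != 0x01:  # handshake record, ClientHello
        return None
    pos = 43                                     # past headers, version, random
    pos += 1 + record[pos]                       # skip session ID
    pos += 2 + int.from_bytes(record[pos:pos + 2], "big")  # skip cipher suites
    pos += 1 + record[pos]                       # skip compression methods
    end = pos + 2 + int.from_bytes(record[pos:pos + 2], "big")
    pos += 2
    while pos + 4 <= end:                        # walk the extension list
        ext_type = int.from_bytes(record[pos:pos + 2], "big")
        ext_len = int.from_bytes(record[pos + 2:pos + 4], "big")
        if ext_type == 0:                        # server_name extension
            name_len = int.from_bytes(record[pos + 7:pos + 9], "big")
            return record[pos + 9:pos + 9 + name_len].decode()
        pos += 4 + ext_len
    return None
```

Feed it the first packet of any HTTPS connection and the hostname falls out, which is why on-path filtering by site name is possible today and why encrypting the ClientHello changes that.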
Yet another example: how should the IETF handle a proposal to allow networks to decrypt or intermediate HTTPS connections? Is that ok if it’s merely technically sound? What if the user agrees to it, and what does ‘consent’ mean? Many such proposals have been made but not approved. Why?
If the IETF’s decisions affect the design of the Internet, and the Internet is political, the IETF’s decisions are sometimes political too. However, its decision-making processes presume that there is a technically correct answer to each problem. When that decision affects people and power in the actual world, rough consensus and running code are insufficient.
Over the years, these questions have become increasingly urgent, because it isn’t viable to make decisions that have political outcomes but explain them using only technical arguments. That endangers the legitimacy of the IETF’s decisions because they can be viewed as arbitrary, especially by those on the losing side.
Many rule-making systems — whether they be courts, legislatures or standards bodies — establish legitimacy by taking a principled approach. A community that agrees to principles that are informed by shared values can use them to navigate hard decisions. It’s reasonable, then, to consider the principles that the IETF uses to guide the development of the Internet.
The Internet’s principles
The Internet as we know it is a product of the times in which it grew up. The rule of law and liberal values of equality, reason, liberty, rights, and property were all dominant in the political landscape of post-war America and Europe, where most early Internet developments took place.
“The IETF community wants the Internet to succeed because we believe that the existence of the Internet, and its influence on economics, communication, and education, will help us to build a better human society.”
A Mission Statement for the IETF
The Internet has implicitly embedded those values. You don’t need a license (or even to disclose your identity) to use it, and its technical underpinnings make it difficult to impose that requirement. Anyone can publish, and as a result, it takes significant effort for a government to restrict the rights to speech and free assembly on it — again due in part to the way it is designed. There isn’t any central authority that decides what networks can or can’t attach to the Internet. You can create a new Internet application without getting permission first (just like Tim Berners-Lee did so long ago when he created the Web). Such openness is often cited as critical to the Internet’s success.
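That permissionless quality is concrete: a new Internet application is just a program listening on a socket. The toy server below (the 'SHOUT' protocol and its port number are invented for this sketch) could be put on the public Internet today without registering it with, or asking permission from, anyone.

```python
# A brand-new application protocol, "SHOUT": the server upper-cases
# each line a client sends. No licence or registration is required
# to define or deploy it.
import socketserver

class ShoutHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:             # one request per line
            self.wfile.write(line.upper())  # the entire protocol

if __name__ == "__main__":
    # Port 7777 is an arbitrary, unregistered choice.
    with socketserver.TCPServer(("", 7777), ShoutHandler) as srv:
        srv.serve_forever()
```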
However, implicit values can drift, as new people get involved in the work, and as new challenges are faced. This is especially true when the resulting decisions can have profound effects on both profits and societies. So, over the years, the IETF community has made progress in documenting explicit principles that can guide decision-making.
For example, RFC 7258, Pervasive Monitoring Is an Attack, established IETF consensus that it's bad for the Internet to allow widespread, covert monitoring, and that the IETF would therefore design its protocols to mitigate this risk — a technical argument with both political motivation and political ramifications.
Likewise, RFC 6973 documents Privacy Considerations for Internet Protocols, and RFC 8752 reports on an IAB workshop that explored the power dynamics between news publishers and large online platforms.
Or, consider the end-to-end principle, which states that features for applications should reside in the end nodes (for example, your computer or phone) rather than in the network.
The IETF community will (I hope) continue to document and explore such principles, informed by shared values. RFC 8890 makes a small contribution to this, by asking the IETF community to favour end users of the Internet — in other words, actual people — over other stakeholders that ask for their needs to be met when there’s a conflict.
We can’t just stick to the technology
If the IETF didn't have any underlying principles, it would just focus on technology; a few detractors of The Internet is for End Users have said that's exactly what it should do. That would mean publishing any proposal, no matter what its effects, provided it was technically sufficient.
Some standards bodies already operate in that fashion; as long as you can get a few people (or more often, companies) together and agree on something, you can get it published. The IETF does not; it often declines to adopt proposals.
“When the IETF takes ownership of a protocol or function, it accepts the responsibility for all aspects of the protocol, even though some aspects may rarely or never be seen on the Internet.”
A Mission Statement for the IETF
That ability to refuse is meaningful to many people inside and outside the organization; when the IETF standardizes something, it implies that the specification is good for the Internet and has been reviewed not only for technical suitability but also for adherence to the principles that inform the Internet's design. In other words, the IETF is a quality filter: if a specification achieves consensus there, it gets deployed more broadly and easily (although that's no guarantee of success) because people know it has had that scrutiny.
Abdicating that role just to avoid thinking about and applying principles would not be good for the Internet. While it might gain a few participants eager to take advantage, it would lose many — including many of those who are still invested in the Internet as a force for good in the world.
Doing so would also affect the legitimacy of the IETF’s role in Internet governance in many eyes. There is a long history of scientists and engineers being concerned with how their work affects the world and its potential for misuse. The IETF has continued this tradition and should do so openly.
Getting comfortable with the IETF’s power
If the IETF is making decisions based upon what it thinks is good for users, is it setting itself up as being some sort of governing body, taking power away from governments and other institutions? Isn’t it dangerous to leave such important matters in the hands of the geeks who show up, and who don’t have any democratic legitimacy?
First of all, a reality check: IETF decisions are only about documents; they don't control the Internet. Other parties (equipment and software vendors, end users, platform and network operators and, ultimately, governments) have a lot more say in what actually happens on the Internet from day to day.
What the IETF has is a proven ability to design protocols that work at scale, the ability to steer a proposal to align with its principles, and a reputation that gives its documents a certain amount of gravitas. These draw those parties to the IETF as a venue for standardization, and their power flows into the specifications it endorses — especially when a protocol has momentum, like HTTP or IP. It doesn’t work the other way around; if an IETF standard doesn’t catch on with implementers and users, it gets ignored (and many have).
The Internet is for End Users argues that this soft power should be explicitly acknowledged so that participants are more conscious of the real-world ramifications of their decisions.
DoH is an interesting case study. It got the IETF mentioned in the UK Parliament by Baroness Thornton, a child safety advocate concerned about DoH’s use to bypass DNS-based controls mandated by UK law:
’[T]here is a fundamental and very concerning lack of accountability when obscure technical groups, peopled largely by the employees of the big internet companies, take decisions that have major public policy implications with enormous consequences for all of us…’
Baroness Thornton
However, DoH was designed, contributed and ultimately deployed by participants from web browsers, not by the IETF, which cannot stop vendors from shipping a protocol it doesn't approve of (as is often said, 'there are no standards police'). The IETF's contribution was putting the specification through its process to assess its technical suitability and its adherence to established principles.
Did the IETF create a better Internet when it approved DoH? There's a lot of disagreement about that, but what has upset many is that DoH was a surprise — the IETF standardized it without consulting some of those it was likely to affect. Here, the IETF could have done better. The Internet is for End Users argues that such consultation is important, to ensure that the people writing and reviewing protocols understand how they will be used and how they will affect users.
In the case of DoH, better communication between the technical community (not just big tech companies) and policymakers would have made it clear that relying on the DNS to impose filtering was always a fragile assumption, given the principles underlying the design of the Internet.
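For background, DNS-based filtering works only because clients have traditionally sent queries, in cleartext, to whatever resolver the network assigns. The toy resolver below shows the mechanism in miniature: names on a blocklist get answered with a harmless address (the blocklist, port and reply bytes are invented for this sketch; real deployments use resolver features such as response policy zones). DoH removes this control point because the queries simply never arrive here.

```python
# Toy filtering resolver: answers blocklisted names with 0.0.0.0.
import socket

BLOCKLIST = {"example.org"}            # hypothetical filtered name

def qname(packet):
    # DNS wire format: 12-byte header, then length-prefixed labels.
    labels, pos = [], 12
    while packet[pos]:
        n = packet[pos]
        labels.append(packet[pos + 1:pos + 1 + n].decode())
        pos += 1 + n
    return ".".join(labels)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 5353))         # unprivileged port for the demo
while True:
    query, client = sock.recvfrom(512)
    if qname(query) in BLOCKLIST:
        # Reply: same ID, standard response flags, question echoed,
        # one answer record pointing the name at 0.0.0.0.
        reply = (query[:2] + b"\x81\x80\x00\x01\x00\x01\x00\x00\x00\x00"
                 + query[12:]
                 + b"\xc0\x0c\x00\x01\x00\x01\x00\x00\x00\x3c\x00\x04"
                 + b"\x00\x00\x00\x00")
        sock.sendto(reply, client)
    # A real resolver would forward unlisted names upstream.
```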
“From its inception, the Internet has been, and is expected to remain, an evolving system whose participants regularly factor new requirements and technology into its design and implementation. Users of the Internet and providers of the equipment, software, and services that support it should anticipate and embrace this evolution as a major tenet of Internet philosophy.”
The Internet Standards Process
However, that consultation does not translate to giving other parties a veto over Internet protocols. The UK government or any other external authority should not be able to stop the IETF from creating a particular standard, or to hold it directly accountable; that would be a radical break from how the Internet has been developed for over thirty years. Because of the global nature of the Internet, it wouldn’t be possible to pursue a bilateral or regional style of governance; decisions would have to be sanctioned by every government where the Internet operates. That’s difficult to achieve even for vague statements of shared goals; doing it for the details of network protocols is impractical.
Who, then, is the IETF accountable to? Beyond the internal rules that ensure the standards process runs in a way that's accountable to the technical community, the IETF is ultimately accountable to the Internet itself; if it strays too far from what vendors, networks, users, and governments want to do, it will lose relevance. As with any platform, the Internet benefits from the network effect, and if the IETF leads it in a direction that's unacceptable in too many places, the Internet might fragment into several networks — a risk currently on the minds of many.
Finding what’s best for end users
There are also bound to be situations where what is best for end users is not obvious. Some will claim that giving other parties power — to filter, to monitor, to block — is in the interest of end users. How will the IETF make those decisions?
Unsurprisingly, this has already happened; the IETF refused to standardize wiretapping technology, for example (see RFC 2804). In doing so, it applied technical reasoning informed by its principles and by the global nature of the Internet; designing Internet standards to suit the laws of one or a few economies isn't appropriate.
That is echoed by The Internet is for End Users, which says:
[W]hen a decision improves the Internet for end users in one jurisdiction, but at the cost of potential harm to others elsewhere, that is not a good tradeoff. As such, we effectively design the Internet for the pessimal environment.
Even with careful thought, technical acumen and broad consultation, the IETF is bound to get some decisions wrong. That’s ok. RFC stands for Request for Comments, and in that spirit, sometimes the comments — evidenced by adoption and deployment as well — tell us to revise our specifications. That spirit of humility is still very much alive in the IETF community.
Next steps
The Internet has been with us for almost 40 years and has contributed to profound changes in society. It currently faces serious challenges — issues that require input from policymakers, civil society, ordinary citizens, businesses and technologists.
It's past time for technologists both to become more involved in discussions about how to meet those challenges and to consider broader views of how the technology they create fits into society. Without good communication, policymakers are prone to making rules that don't work with the technology, and technologists are prone to creating technology that is naïve about its policy implications.
So at its heart, The Internet is for End Users is a call for IETF participants to stop pretending that they can ignore the non-technical consequences of their decisions, a call for broader consultation when making them, and a call for continued focus on the end user. Ultimately, end-user impact is at least as important as the technical merits of a proposal, and judging that impact requires a more serious effort to understand and incorporate non-technical views.
The Internet is for End Users is an IAB document; it doesn’t have IETF consensus. As such, it doesn’t bind IETF decisions, but it is considered persuasive because the IAB has a mandate to consider issues affecting the Internet architecture.
On its own, then, it has limited effect. I view it as a small contribution to a larger body of principled thought that can help inform decisions about the evolution of the Internet. It’s up to all of us to apply those principles and develop them further.
Thanks to Martin Thomson and Eric Rescorla for reviewing this article.
Adapted from the original post, which appeared on Mark Nottingham's blog.
Mark Nottingham is a member of the Internet Architecture Board and co-chairs the IETF’s HTTP and QUIC Working Groups.
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.
I support the principles described, but I think it's too easy for a reader to boil this down to "the balance is always drawn in favour of the person holding the client device". That would be too simplistic; section 2 of the RFC explains that indirect end users need to be favoured too. Examples given are parents, people shown in pictures, and people present in a room with IoT sensors. Designing protocols to identify (and understand the needs of) such people is much harder than simply honouring what the client wants. It will take significant effort, but it's right that it should be attempted.
0) This is a very timely article. Trained as an engineer, I learned that a new design needs serious, large-scale field trials before being released as a product, especially when there are competing proposals. Simply going through debates, no matter how extensive, is not enough; end users should have the chance to cast their votes in arriving at the decision. For a worldwide communication system such as the Internet, this is very challenging, because there is only one environment, and it is in full daily use. During the past few years, we have been working on a possible way to address this need, triggered by the depletion of the IPv4 address pool. While preparing a basic demonstration of our scheme, we became involved in an online discussion about the state of IPv6. The findings were rather intriguing, so they serve as the preamble to the report below:
1) This is an analysis thread started by an Ericsson AB researcher shortly before his retirement. Based on publicly available statistics, IPv6 did not appear to have taken over as much of the Internet's traffic from IPv4 as the general public had been led to believe.
http://www.circleid.com/posts/20190529_digging_into_ipv6_traffic_to_google_is_28_percent_deployment_limit/
2) Below is a demonstration of our approach, which anyone can replicate from a home network. It describes our proposed architecture, which eliminates CG-NAT while expanding the assignable IPv4 address pool, mitigating certain current issues, addressing the ITU's CIR (Country-based Internet Registry) proposal, and more:
https://www.avinta.com/phoenix-1/home/RegionalAreaNetworkArchitecture.pdf
3) We are keenly aware that our approach is rather unorthodox. However, please consider the proposed architecture as creating a full spherical layer of cyberspace, consisting of RANs (Regional Area Networks), between the current Internet proper and subscriber premises. Each RAN is defined around one reusable 240/4 netblock. Since each RAN is regarded as a private, independent environment, much of the existing Internet's protocols, conventions and restrictions may be repurposed from a revised perspective within it, opening up many possibilities, including multiple isolated domains, each serving as a large-scale test bed for field trials of new designs.
I hope these provide some material for furthering the dialogue.
Abe (2020-08-31 10:10 EDT)
I agree that identification of such indirect users is both difficult and important. I’d add enterprises to the list of such users, given that substantial numbers of people are using devices provided to them by their employer, often via its own network or VPN.
The RFC downplays the importance of governments, despite the fact that, in democracies at least, the government represents the citizens of a country and has ultimate responsibility for their safety. As digital sovereignty gains more traction, I believe it will become increasingly difficult to ignore the wishes of governments.