Thoughts from IETF 106

27 Nov 2019

Category: Tech matters


The most recent meeting of the Internet Engineering Task Force, IETF 106, was held in Singapore earlier this month, and as usual, there were many Working Group meetings.

This post is not an attempt to cover all of these meetings or even anything close to that. I have been highly selective and picked out just the items that I found interesting from the sessions I attended. 

IEPG

Some 25 years ago, the Internet Engineering and Planning Group, or IEPG, had a role and a purpose and even an RFC (RFC 1690). These days it has become a small and select overture to the IETF's areas of current interest for those who arrive a day or so early to the meeting and want to ease into the week. Lately, its themes have centred on BGP and the DNS, and the IEPG session at the latest IETF meeting was no different.

On the BGP side, there was FORT, a new open-source RPKI validator. Diversity of platforms and implementations is always better than a monoculture based on a single implementation, so this is a good thing to see. I haven't tried FORT myself yet, but if it is as good as NLnet Labs' Routinator (and it certainly seems to be), this will be a very neat RPKI validation tool indeed.

The larger picture of BGP route origin security is looking better, with NTT's Job Snijders providing a summary report that showed growing uptake by networks of filtering out announcements of routes that are considered invalid in the context of the SIDR secure routing framework.

RPKI and ROAs are relative newcomers to the inter-domain routing environment. Before then there was extensive use of various forms of Internet Routing Registries (IRRs). This use of IRRs also has a 25-year history and over this time they have proliferated and accumulated a fair share of aged cruft. 

Recent efforts to improve the IRR space include a cross-reference to the RPKI to identify and remove cruft from the IRR environment. This is not quite as challenging as it may sound, in that the valid RPKI-signed ROA set can be used to filter out older, contradicting IRR objects. But whatever the degree of difficulty needed to perform this automated filtering of IRR data, the result is that the IRRs appear to have been revived and can once more be a useful tool to filter out unintended, and possibly deliberate, forms of abuse of the Internet's routing system.

On the DNS side, the folk from Japan's JPRS presented their recent study into DNSSEC validation failure, which used both active probing and passive measurements. They noted that DNSKEY queries increased when the DNSSEC information failed to validate at the recursive resolver, but for lower-level domains this may not be so readily detectable. The implication is that if you want to monitor a signed domain name for validation integrity, active monitoring is the more reliable approach.
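A minimal sketch of that kind of active monitoring, using the Python dnspython library, might look like the following (the resolver address and the monitored name are assumptions for illustration): query a validating recursive resolver with the DO bit set and check for the AD flag or a SERVFAIL.

    import dns.exception
    import dns.flags
    import dns.message
    import dns.query
    import dns.rcode

    RESOLVER = "192.0.2.53"          # a validating recursive resolver (example address)
    NAME = "signed.example.com"      # hypothetical signed name to monitor

    # Build a query with the DO (DNSSEC OK) bit set and send it over UDP.
    query = dns.message.make_query(NAME, "A", want_dnssec=True)
    try:
        response = dns.query.udp(query, RESOLVER, timeout=3)
        if response.rcode() == dns.rcode.SERVFAIL:
            print("validation failure (SERVFAIL)")
        elif response.flags & dns.flags.AD:
            print("answer validated (AD flag set)")
        else:
            print("answer returned but not validated")
    except dns.exception.Timeout:
        print("no response from resolver")

A name that normally validates but suddenly loses the AD flag, or starts returning SERVFAIL, is the signal that something in the signing chain has gone wrong.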

Private namespaces continue to be an issue for the DNS. The actions in the IETF that resulted in a special use names registry were somewhat controversial at the time, and that controversy carries through to today.

An earlier proposal in the IETF for a space that readily identifies private-use namespaces under a common TLD of .alt did not get very far, and a subsequent proposal within the ICANN community to support the case for the designation of .internal as a common private use TLD has not received widespread enthusiastic support either. There is some confusion over the IETF’s role as the custodian of the RFC 6761 Special Use names registry and ICANN’s role as the name policy community for generic TLDs. 

The best way to resolve such areas of confusion in the DNS is to divert the conversation to the choice of the actual label to use! We obviously need a better name for such a generic TLD, and this presentation advocated the use of .ZZ for the purpose. Whether it would be used at all is unknown, and the use of a broad variety of undelegated names for what appear to be private contexts continues largely unabated.

I'm unconvinced that .alt, .internal, .zz or any other label would provide a useful resolution to what appears to be an ongoing issue in the DNS, but there is no doubt that as the beauty contest for the 'right' label continues, we may well fixate on the choice of a label value and forget about why we wanted it in the first place!

Finally, I presented on the handling of NXDOMAIN answers in the DNS. The observation is that end-user queries for a non-existent domain name are amplified within the host and network systems, and a single initial transaction of trying to resolve the IP address of a domain name is transformed into close to three queries, on average, at the authoritative server.

The reasons for this amplification appear to lie in aspects of Dual-Stack Happy Eyeballs behaviours in end systems, DNS re-query timers and DNSSEC validation time lags, UDP packet duplication, and aberrant DNS load balancers. Underneath all of these is the prevalent attitude that it's quite okay to hammer the DNS mercilessly.
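Purely as an illustration of the fan-out (this is not the measurement setup used in the presentation), here is a sketch in Python, using the dnspython library and assumed timer values, of how a dual-stack client with an impatient re-query timer turns one connection attempt into several queries:

    import dns.exception
    import dns.resolver

    resolver = dns.resolver.Resolver()
    resolver.timeout = 0.5    # aggressive per-attempt timer (assumed value)
    resolver.lifetime = 2.0   # total budget, allowing several re-query attempts

    # A dual-stack Happy Eyeballs client asks for both address families,
    # so one connection attempt is already at least two queries.
    for rdtype in ("A", "AAAA"):
        try:
            resolver.resolve("name.that.does.not.exist.example", rdtype)
        except dns.resolver.NXDOMAIN:
            pass      # the NXDOMAIN answer still cost one or more upstream queries
        except dns.exception.Timeout:
            pass      # every timeout here has already generated extra re-queries

Add DNSSEC validation queries, UDP duplication and load balancer quirks on top of that, and the roughly three-fold amplification seen at the authoritative servers is not hard to explain.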

One of the weaknesses of the distributed functionality of the Internet is that we can create unintended outcomes, where the intended benefit in one activity (here, a desire for responsive systems that do not create user-visible delay) imposes costs in another domain, namely aggressive timers and liberal re-transmit practices that create extraneous load in the DNS recursion environment.

Congestion Control Research Group

The most radical, and perhaps the most successful, part of the Internet Protocol Suite has been the TCP transport protocol. It removed the need for hop-by-hop reliability and replaced it with a single mechanism to manage both data integrity and flow control that was intended to result in efficient network loading. 

TCP turned out to be perhaps the most critical part of the entire protocol environment. It has been around for some 45 years now and you might think we know all there is to know about how to make it work well. It turns out that's just not the case, and work on managing adaptive flows across a network is still a topic of innovation and evolution.

Google’s BBR protocol is being revised with BBRv2. My experience with BBRv1 has been very satisfying, but only in a purely selfish manner! 

Read: BBR, the new kid on the TCP block

BBRv1 can be very unfair. I've seen my BBRv1 sessions clear out and then occupy large expanses of network bandwidth for themselves, which is another way of saying that this protocol can be prone to starving concurrent loss-based TCP sessions of network resources.

For widespread adoption in the Internet, BBR needed to maintain its profile of low buffer impact, but at the same time behave more fairly towards loss-based TCP sessions that make more extensive use of network buffers. Part of the refinement in BBRv2 is improved internal processing paths, intended to reduce the CPU usage of BBR to match that of Reno, CUBIC and DCTCP. Google have also adjusted auto-sizing and ACK processing.

Part of the rationale for these changes to BBR is a shift in the way we use the Internet. The days of maximising throughput for large data sets over constrained paths are pretty much over. Today the Internet is far more about adaptive bitrate video, RPCs and web pulls. Such network use tends to be throttled by the application, not the network, which means that the congestion window within the transport session is not the limiting factor. When the session is not congestion-window-limited, ACK processing can be simplified enormously, lowering processing overheads in such cases.

BBRv2 also changes the Linux ACK behaviour, removing a limitation on ACK generation and improving the integrity of the receive window's feedback signal. BBRv2 will not exceed an explicit loss rate target and will take into account Explicit Congestion Notification (ECN) feedback from the network, where it exists. It is probably not the last word on congestion control in network sessions, but it illustrates that this remains an area of research and technology evolution.

Facebook described COPA, a congestion control algorithm that is also delay-sensitive. The algorithm uses a 10-second sliding window to establish the minimum RTT and compares the current standing RTT against this minimum. The greater the deviation of the current RTT from the minimum RTT, the smaller the congestion window, which, in turn, throttles the sending rate.

Earlier experience with delay-based congestion control algorithms (such as TCP Vegas from 1994) has found that this technique quickly sees the session starved when it competes with loss-based flows, so COPA uses a so-called competitive mode to detect other buffer-filling flows and reduces the algorithm's sensitivity to downward pressure from these other flows.
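As a rough sketch of the delay-based idea (my own simplification in Python, not Facebook's implementation, and omitting COPA's velocity parameter and competitive mode), the congestion window is steered towards a target that shrinks as the standing RTT moves away from the minimum RTT seen over a 10-second sliding window:

    import time
    from collections import deque

    class DelayBasedWindow:
        """Toy COPA-style controller: back off as queuing delay grows."""

        def __init__(self, cwnd_packets: float = 10.0, delta: float = 0.5):
            self.cwnd = cwnd_packets
            self.delta = delta                # sensitivity to queuing delay
            self.rtt_samples = deque()        # (timestamp, rtt_seconds)

        def on_ack(self, rtt: float) -> None:
            now = time.monotonic()
            self.rtt_samples.append((now, rtt))
            # Keep only the last 10 seconds of RTT samples.
            while now - self.rtt_samples[0][0] > 10.0:
                self.rtt_samples.popleft()
            min_rtt = min(sample for _, sample in self.rtt_samples)
            queuing_delay = max(rtt - min_rtt, 1e-6)
            # Target rate is roughly 1/(delta * queuing_delay); scale by the
            # minimum RTT to express it as a target window in packets.
            target_cwnd = min_rtt / (self.delta * queuing_delay)
            # Nudge the window towards the target a little on each ACK.
            step = 1.0 / (self.delta * self.cwnd)
            self.cwnd += step if self.cwnd < target_cwnd else -step
            self.cwnd = max(self.cwnd, 2.0)   # never collapse below two packets

In the real protocol, it is the competitive mode described above that stops a controller like this from being starved when loss-based flows start filling the buffers.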

LEDBAT is also in this family of delay-based congestion control algorithms, and work on a revision, LEDBAT++, was presented. Changes include a shift from one-way delay to simpler round-trip time measurements, a multiplicative decrease, a slower slow-start approach, and smaller increase amounts.

The reason for all this renewed study of delay-based congestion control seems to lie in the effort to design an efficient, high-speed congestion control protocol that minimizes any dependency on deep buffers within the network.

V6 Operations

I am never sure what to make of these IPv6 sessions. 

Was the designed-by-committee IPv6 specification so broken in the first place that 25 years later we are still finding fundamental protocol problems and proposing yet more changes to the protocol? Or, are we just nit-picking at the specification because we haven’t been told to down tools and just stop work? Or has something else changed? 

Of these three choices, I suspect it's the last, namely a change in the environment. IPv4 was designed in an era of mainframe computers and aspects of the protocol design reflect that environment. IPv6 is not all that different from IPv4 in many respects. It may have been designed at a time when desktop personal computers proliferated, but mobility was yet to achieve today's absolute dominance of the environment.

The mobile industry has had no time for the IETF’s Mobile IPv4 or Mobile IPv6 protocols that attempted to maintain an IP address binding that survived dynamic attachment to various host networks. Mobile devices simply ignore this and instead use a dynamic association of IP addresses on an opportunistic basis. 

This dynamic association raises some issues in protocol behaviour. One of the issues is neighbour discovery, where a new host performs a router solicitation and configures a global unicast address, but this does not imply that the router has performed a comparable operation to put the host into its neighbour cache. The router waits for incoming traffic to prompt this operation, and if that incoming traffic arrives as a packet burst, the burst is effectively discarded while the router resolves the host's address. Should this be fixed in the IPv6 specification? Probably!

There was a time in the refinement of the IPv6 specification when we told ourselves that the new protocol would have such a large address space that everything could be uniquely addressed in IPv6, irrespective of its connected status to the public Internet. We seem to have gone in an entirely different direction, and the demise of scoped addresses, coupled with the entry of Unique Local Addresses (ULAs), poses a few questions that are still unresolved.

If a manufacturer wants to pre-configure an IPv6 device for local-only use, how can this be done? A solution proposed here was to pre-configure the device to only accept address configuration (SLAAC or DHCPv6) if the offered address is a ULA, and to require manual intervention to broaden the scope to a global unicast address for the device.
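Sketched in Python (the addresses here are just illustrative values), the acceptance test for such a constrained device amounts to checking that an offered address falls within the ULA prefix fc00::/7:

    import ipaddress

    # Unique Local Addresses (RFC 4193) occupy fc00::/7.
    ULA_PREFIX = ipaddress.ip_network("fc00::/7")

    def acceptable_for_local_only_device(addr: str) -> bool:
        """Accept a SLAAC/DHCPv6-offered address only if it is a ULA."""
        return ipaddress.ip_address(addr) in ULA_PREFIX

    print(acceptable_for_local_only_device("fd12:3456:789a::1"))   # True: ULA
    print(acceptable_for_local_only_device("2001:db8::1"))         # False: global unicast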

Again, it’s not easy to tell if this is a good response to a known problem or just more nit-picking at the protocol. 

Application behaviour considering DNS

Over the past couple of years, the IETF has worked on two privacy initiatives in the context of stub-to-recursive DNS queries, namely DNS-over-TLS (DoT) and DNS-over-HTTPS (DoH). These tools are intended to replace the platform-defined stub resolver code and cloak the open UDP query response protocol inside a session layer that provides integrity and encryption. 

Read: DOH! DNS over HTTPS explained

In theory, DoH is a superset of DoT, as both use TLS as the channel platform; DoT places DNS queries and responses directly into this TLS stream, while DoH adds a further wrapper of HTTP headers.
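To illustrate that layering, here is a sketch in Python of a DoH query (the endpoint URL is a placeholder): the DNS message itself is ordinary wire format, carried as the body of an HTTPS POST with the application/dns-message media type.

    import dns.message
    import requests

    DOH_URL = "https://doh.example/dns-query"   # placeholder DoH resolver endpoint

    # Build an ordinary wire-format DNS query ...
    query = dns.message.make_query("www.example.com", "AAAA")

    # ... and carry it inside an HTTPS request and response.
    http_response = requests.post(
        DOH_URL,
        data=query.to_wire(),
        headers={"Content-Type": "application/dns-message",
                 "Accept": "application/dns-message"},
        timeout=5,
    )
    answer = dns.message.from_wire(http_response.content)
    print(answer.answer)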

In practice, there is a fork in this DNS road, as DoT appears to have been seen as a platform tool, replacing the host platform's UDP port 53 channel, while DoH is seen as an application module, with implementations built for Firefox, Chrome, curl and OkHttp. It's this latter scenario that was the focus of this session.

Part of the issue here is the redirection of DNS queries, with applications opting to use non-local DNS resolvers. In response to various objections to this relatively invisible form of DNS query redirection, there has been an exploration of the concept of so-called 'canary domains', which are intended to be locally scoped, privately defined names whose resolution behaviour halts an otherwise automatic use of DoH by an application.

The motivation here is to designate a 'safe' domain that can be locally defined, where its definition is intended to trigger a DNS policy action. Purely as an example, if the domain name it.is.ok.to.use.doh.com resolves, then the application should switch to using DoH.

In the context of the Firefox browser, the canary name is currently use-application-dns.net. If this name results in a negative response (NXDOMAIN or NO DATA) then Firefox will not automatically use DoH. 
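A sketch of that kind of check (my own illustration in Python using dnspython, not Mozilla's code) looks something like this:

    import dns.resolver

    CANARY = "use-application-dns.net"   # the canary domain Firefox probes

    def local_network_permits_automatic_doh() -> bool:
        """Return False if the local resolver answers the canary negatively,
        signalling that automatic DoH should not be enabled."""
        try:
            dns.resolver.resolve(CANARY, "A")
            return True
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer,
                dns.resolver.NoNameservers):
            return False

    if local_network_permits_automatic_doh():
        print("enable DoH automatically")
    else:
        print("stay with the platform-defined resolver")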

There is some desire to generalise this approach and share a common canary domain name for all applications that want to check if the local infrastructure is unwilling to permit the automatic use of DoH.

There are some fascinating questions about this work, some of which appear to question the basic architecture and assumptions about the DNS as we understood it. Is the DNS one universal substrate where any recursive resolver can query about any defined name? Or are we defining customised spaces where only particular resolvers may be used to query for particular names? 

Here there is a possible inference that if this recursive resolver is not used, then the name will not resolve at all. Should DoH recursive resolvers deliberately add a policy, in the sense of not providing a response for some classes of domain names, or performing NXDOMAIN substitution for other names? How should such a policy be described, and how could applications or end users make use of it? From there we head into so-called 'resolverless DNS', where the DNS is provisioned as a 'push' service so that applications are provided with answers to queries that were never asked in the first place!

We also head into speculation about assuming the presence of a client HTTP application, where the application is capable of accepting more than simple application/dns-message payloads, allowing the application and the DoH resolver to conduct an exchange of metadata about the resolver's function. Obviously, such a meta-conversation is not an option in DoT.

There was a historical concept of the DNS as a 'universal' namespace. Indeed, this universality could in and of itself define the Internet, where the use of a common and consistent public namespace essentially defines a common domain of discourse that is the public Internet. We seem to be heading in a different direction, though. How quickly remains to be seen.

DPRIVE Working Group

It seems odd to say it, but I believe that it’s unclear whether adding privacy constructs into the DNS has been beneficial or disastrous for the DNS as a whole. It’s certainly been disruptive to the DNS architecture as we used to understand it. 

Whether these changes increase end-user privacy or merely provide largely meaningless obfuscation remains to be seen. So far, we've seen initial efforts to improve channel security with specifications of stub-to-recursive encrypted channels (DoH, DoT and DoQ), as well as efforts to eliminate DNS verbosity (Qmin).

Read: DNS Query Privacy

These secure encrypted channels provide a mechanism for the application environment to pass over the top of the ISP-provided DNS resolvers and use resolvers of the application's choice. Rather than the application being essentially trapped into using the DNS name resolution environment provided by the local network infrastructure, these encrypted channels can act as tunnels through that infrastructure.

This broadened choice in DNS name resolution has exposed the observation that the DNS is not a homogenous environment. Some resolvers censor names, others substitute NXDOMAIN responses or alter other responses. Some resolvers perform no query logging, while others not only log all queries but then sell the profile of the queries and queriers. If the user expectation is privacy and integrity, then even with a secure channel there are still points of weakness.

Some recursive resolvers monetize query logs, using the IP address of the stub resolver as a key to an end user’s identity. Not all recursive resolvers are the same and perhaps users would be well served if their queries for certain domain names were steered towards particular recursive resolvers.

This forms part of the thinking behind an approach of using the DNS to guide applications, where a DNS name can, with the use of a particular resource record, nominate a recursive resolver to use via an encrypted channel. And in the case of a TLS session using encrypted SNI in the TLS startup, another field within the same resource record can provide the public key to use with encryption of the SNI field when connecting to this recursive resolver. All this is DNSSEC-signed, although given that this is effectively a directive to the stub resolver, the DNSSEC signing makes sense only when the stub resolver is performing DNSSEC validation itself.

If you wanted to fracture the DNS and create names that only resolve in certain contexts and not in others, I guess you would do exactly this, all in the name of good DNS privacy, which seems more than a little disingenuous to me. On the other hand, this elevation of the DNS into application contexts raises the question of how much of the DNS you need to take with you. If the DNS already provides authoritative server information, then why not provide access methods in this same area and ditch the entire concept of intermediation?

The question here is whether we want to preserve the current architecture of the DNS and just add some magic coating to prevent inspection and interception, or whether we are willing to contemplate application-level models of privacy within a larger model of content distribution. This should be an interesting discussion.



One Comment

  1. Jen L.

    >”Was the designed-by-committee IPv6 specification so broken in the first place that 25 years later we are still finding fundamental protocol problems and proposing yet more changes to the protocol? Or, are we just nit-picking at the specification because we haven’t been told to down tools and just stop work? Or has something else changed? ”

    Well, a bit of everything. I mean, IPv6 is not the only protocol designed 25 years ago, but it's the only one I know of which has been idle until very recently. Other protocols were evolving naturally as their use cases and the Internet itself were changing.
    In the case of IPv6, we have just *started* getting real operational experience with something which was sitting on the dusty shelf for 25 years, because operating a dual-stack network is *not* really getting operational experience with IPv6. I'd expect v6ops@ to get busier as people start turning off v4 and find out that "IPv6 is not what it seems".
