IETF 99, Prague: Thoughts from the IEPG meeting

The IEPG is an informal gathering that meets on the Sunday prior to IETF meetings, the most recent of which was in Prague.

It’s hard to classify the IEPG meetings that occur at the start of the IETF week. Many years ago they had a role in allowing network operators to talk to other operators about what they were seeing and what they were thinking about.

Those days are long since over, and today the IEPG meetings present an opportunity for an eclectic set of diehards to listen to an equally eclectic collection of presentations that range across many of the topics of today's Internet, without any particular common theme or filter.

Using TLS to retrieve web objects

George Michaelson presented on a study he had undertaken to identify whether there is a pool of users who are incapable of using transport security (TLS, essentially an encrypted transport session service) to retrieve web objects.

These days there is a strong push to place all content on secure servers, and rightly so, but the niggling doubt remains as to whether we would leave some users and some equipment stranded if we were to say "this content is accessible only over a secure session". The exercise was one of attempting to measure the extent to which users showed behaviour consistent with being unable to load a web object over a secure transport session.

The problem is that, in this case, you are not measuring a behaviour — you are measuring the absence of a behaviour. Measuring the absence of a signal is hard, and in this case, there are many reasons why a user may not fetch a web object, particularly when executing a script embedded in an online ad.

The result is that the signal that a web fetch has failed to occur is a very noisy signal. This presentation showed the extent to which advanced statistical tools can assist in trying to extract a signal that would otherwise be buried within the normal levels of noise.
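
To give a feel for the flavour of this kind of analysis (this is not the study's actual method, and all the counts below are invented for illustration), one can compare the fetch rate of a TLS test object against an HTTP control object and put a confidence interval around the shortfall:

    import math

    # Invented counts from a hypothetical ad-based measurement campaign:
    # each impression asks the user to fetch an HTTP control object and
    # an HTTPS (TLS) test object, and we count completed fetches.
    http_fetches, http_presented = 9_620_000, 10_000_000
    tls_fetches, tls_presented = 9_540_000, 10_000_000

    p_http = http_fetches / http_presented
    p_tls = tls_fetches / tls_presented

    # The apparent TLS-incapable fraction is the shortfall relative to
    # the HTTP control, which absorbs the background noise of users who
    # never run the script or abandon the fetch for unrelated reasons.
    shortfall = p_http - p_tls
    se = math.sqrt(p_http * (1 - p_http) / http_presented
                   + p_tls * (1 - p_tls) / tls_presented)
    print(f"estimated TLS-incapable fraction: {shortfall:.4%} "
          f"+/- {1.96 * se:.4%} (95% CI)")

Even with millions of samples, a shortfall of a fraction of a percent sits close to the width of the interval, which is exactly why more advanced statistical machinery is needed here.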

Automating DNSSEC key management

Some weeks ago I tried to get my domain name registrar to add DS records for my domain so that I could enable DNSSEC on the name. Unfortunately, the conversation did not go far, as the registrar does not support this. But why should this need to be a conversation with the domain registrar anyway?

RFC 7344 and RFC 8078 describe a method to automate the entire process. The child publishes CDS and CDNSKEY records, mirroring the intended DS and DNSKEY records for the new key, in the signed child zone, and the parent may subsequently collect this data, either by periodic polling or in response to an explicit push notification. All well and good, but where are the tools to support this automation?
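
To give a feel for the parent-side polling half of this, here is a minimal sketch using the dnspython library and a hypothetical child zone name; a real registry would of course also validate the DNSSEC chain of the response before acting on it:

    import dns.resolver  # dnspython

    CHILD_ZONE = "child.example."  # hypothetical child zone

    def poll_for_cds(zone):
        """Check whether the child zone publishes CDS records
        (RFC 7344) signalling a desired DS record in the parent."""
        try:
            answer = dns.resolver.resolve(zone, "CDS")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return []
        return [rr.to_text() for rr in answer]

    cds_records = poll_for_cds(CHILD_ZONE)
    if cds_records:
        # A real registry (such as FRED) would validate the DNSSEC
        # chain on this response before installing the DS records.
        print("CDS records found, candidate DS records:")
        for rr in cds_records:
            print(" ", rr)
    else:
        print("no CDS records published; nothing to do")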

The CZ.NIC folk have released a couple of tools that support this automation of DNSSEC key management. FRED, their open source registry tool, will periodically poll for CDNSKEY records and, if one is found, commence the process of uploading the key and installing the new DS record. Knot DNS supports automated KSK rollover with automated CDNSKEY publication.

There is still some concern that the population of DNSSEC-signed zones remains relatively small, so tools such as these, which try to make the entire process of key management simpler, can only be applauded.

Distribution of queries to authoritative name servers

Configuring name servers is often a black art. Are more name servers better? Or is it a wasted effort? Should the name servers be widely distributed across the net? Is it better to configure name servers behind the same IP address, using anycast, or to use an explicit list of servers in a unicast manner?

Giovane Moura reported on research looking at the distribution of queries to authoritative name servers, using the RIPE Atlas system to probe an experimental name server configuration.

The first result they found is that most recursive resolvers are seen to query all of the authoritative name servers over time. Resolvers do not simply latch on to one server and ignore the rest. Some 60% to 70% of resolvers have what they call a “weak” preference for a single authoritative name server. A smaller cohort of recursive resolvers, some 10% to 30% or so, appear to have a “strong” preference for a single authoritative name server. The preference for the relatively faster authoritative name server is stronger when the authoritative name servers are closer to the recursive resolver, as measured by RTT.

The conclusion is that while resolvers will query all authoritative name servers over time, queries will aggregate on the name server that is seen to be closest to the resolver.
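
As a toy illustration of this kind of classification (not the methodology of the study itself), one could bucket each resolver by the share of its queries that go to its most-queried server, with invented thresholds for "strong" and "weak" preference:

    from collections import Counter, defaultdict

    # Hypothetical query log: (resolver address, authoritative server) pairs.
    query_log = [
        ("10.0.0.1", "ns1"), ("10.0.0.1", "ns1"), ("10.0.0.1", "ns1"),
        ("10.0.0.2", "ns1"), ("10.0.0.2", "ns2"), ("10.0.0.2", "ns3"),
    ]

    per_resolver = defaultdict(Counter)
    for resolver, server in query_log:
        per_resolver[resolver][server] += 1

    for resolver, counts in per_resolver.items():
        total = sum(counts.values())
        top_server, top_count = counts.most_common(1)[0]
        share = top_count / total
        # Purely illustrative thresholds for labelling the preference.
        label = "strong" if share >= 0.9 else "weak" if share >= 0.5 else "none"
        print(f"{resolver}: {share:.0%} of queries to {top_server} ({label})")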

So how can one improve performance of authoritative name servers?

This study suggests that rather than leaving the entirety of the selection process to the DNS, better results are achieved by letting the routing system pick the “closest” authoritative name server. The recommendation in this study is to use anycast on all of the authoritative name servers.

Validation issues with the KSK roll

The Key Signing Key (KSK) of the root zone of the DNS will be rolled on 11 October 2017.

There are a number of unknowns in this exercise, so we can't tell exactly what problems may arise. There are potential problems with the larger DNS response sizes associated with the various phases of the introduction of the new key, but previous measurements suggest that this will have only a marginal impact on the overall DNS system.

The larger unknown concerns resolvers that use manually managed trust keys and fail to apply the key update at the right time.
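
One simple sanity check, sketched below using the dnspython library, is to ask whether the incoming key, KSK-2017 with key tag 20326, is visible in the root zone DNSKEY RRset as seen through a resolver. Note that this only shows what is published, not whether a manually configured trust anchor has actually been updated:

    import dns.resolver
    import dns.dnssec  # dnspython

    NEW_KSK_TAG = 20326  # key tag of KSK-2017, the incoming root key

    # Fetch the root zone DNSKEY RRset through the local resolver.
    answer = dns.resolver.resolve(".", "DNSKEY")
    ksk_tags = [dns.dnssec.key_id(rr) for rr in answer
                if rr.flags & 0x0001]  # SEP bit marks key-signing keys

    if NEW_KSK_TAG in ksk_tags:
        print("KSK-2017 (tag 20326) is visible in the root DNSKEY RRset")
    else:
        print("KSK-2017 not seen yet; manually managed trust anchors "
              "should be checked before the roll")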

The Root Canary project is intended to closely observe those resolvers that appear to perform DNSSEC validation, looking for signals that might indicate problems being experienced with the KSK roll.

The preliminary results indicate that support for newer cryptographic algorithms remains limited, and that some resolvers mishandle unknown algorithms, returning SERVFAIL rather than treating the response as insecure.
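
A common way to tell these two outcomes apart, sketched below with dnspython against a hypothetical test name, is to repeat a failing query with the CD (checking disabled) flag set: if the SERVFAIL disappears, the failure was a validation failure rather than a server problem:

    import dns.message
    import dns.query
    import dns.flags
    import dns.rcode  # dnspython

    RESOLVER = "8.8.8.8"                # any validating resolver
    TEST_NAME = "unknown-alg.example."  # hypothetical zone signed with an
                                        # algorithm the resolver doesn't know

    def rcode_for(name, cd=False):
        q = dns.message.make_query(name, "A", want_dnssec=True)
        if cd:
            q.flags |= dns.flags.CD  # checking disabled: skip validation
        return dns.query.udp(q, RESOLVER, timeout=3).rcode()

    plain = rcode_for(TEST_NAME)
    checking_disabled = rcode_for(TEST_NAME, cd=True)

    if plain == dns.rcode.SERVFAIL and checking_disabled == dns.rcode.NOERROR:
        # The name resolves with validation disabled, so the SERVFAIL is
        # a validation failure: the resolver treats the unknown algorithm
        # as bogus rather than as insecure.
        print("resolver SERVFAILs on the unknown algorithm")
    else:
        print("resolver treats the zone as insecure (expected behaviour)")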

More specifics in BGP

I reported on the statistics of more specifics in BGP, a subject I wrote about recently.

The basic conclusion is that, as expected, traffic engineering more specifics tend to show higher levels of routing instability, while overlay more specifics are the most prevalent.
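
As a rough sketch of the kind of classification involved here (using, roughly, the taxonomy from the earlier article: an overlay more specific shares the AS path of its covering aggregate, a traffic engineering more specific shares the origin AS but uses a different path, and a hole punching more specific has a different origin AS), with invented prefixes and paths:

    from ipaddress import ip_network

    # Hypothetical announcements: prefix -> AS path (origin AS is last).
    announcements = {
        "203.0.113.0/24":   [64500, 64496],
        "203.0.113.0/25":   [64500, 64496],  # same path: overlay
        "203.0.113.128/25": [64501, 64496],  # same origin, new path: TE
        "198.51.100.0/24":  [64500, 64499],
        "198.51.100.0/25":  [64500, 64502],  # different origin: hole punch
    }

    def covering_aggregate(prefix, table):
        """Return the longest covering less specific, if any."""
        net = ip_network(prefix)
        covers = [p for p in table
                  if p != prefix and net.subnet_of(ip_network(p))]
        return max(covers, key=lambda p: ip_network(p).prefixlen, default=None)

    for prefix, path in announcements.items():
        agg = covering_aggregate(prefix, announcements)
        if agg is None:
            continue  # a root prefix, not a more specific
        agg_path = announcements[agg]
        if path == agg_path:
            kind = "overlay"
        elif path[-1] == agg_path[-1]:
            kind = "traffic engineering"
        else:
            kind = "hole punching"
        print(f"{prefix} (covered by {agg}): {kind}")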

The surprising result is that IPv6 has some notable routing stability issues: unstable IPv6 prefixes show update levels up to 100 times those of their IPv4 counterparts. This obviously merits further investigation.
