Overcoming the challenges of IPv6 support in BIND

By Mark Andrews on 3 Jun 2020

Category: Tech matters

Over the past 20 years, Internet Systems Consortium (ISC) and the BIND development team have worked hard to enable the deployment of IPv6 on the Internet.

We have prided ourselves on participating in the development of the Internet Engineering Task Force (IETF) standards and on implementing them faithfully, as reference implementations. As a key part of the Internet infrastructure, we have always felt that it was important for BIND to have solid support for IPv6, even as the Internet as a whole has struggled to support it.

Since IPv6's introduction a quarter-century ago, there have been four main challenges with supporting it in BIND:

  1. Operating system support for IPv6
  2. Client IPv6 connectivity issues
  3. Server IPv6 connectivity issues
  4. IPv6 and packet fragmentation

Operating system support for IPv6

BIND 8 supported IPv6 record types (AAAA, pronounced ‘quad-A’, and the now-obsolete A6 record) but didn’t use IPv6 as a transport. When BIND 9 was written, one of the design goals was to include support for IPv6 transport. As an application, BIND relies on the operating system to provide transport services.

Back in 2000, when BIND 9 was first released, IPv6 was still a relatively new draft standard. Members of the IETF, concerned about the issue of IPv4 address exhaustion, had created the new standard several years earlier, but IPv6 support in server operating systems was mixed, to say the least. Some platforms had no IPv6 support at all, while others had only pre-RFC 2133 IPv6 implementations.

ISC’s engineers tried to insulate BIND 9 from exposure to the wide variations in operating system support for IPv6 by declaring and defining the IPv6 structures in BIND itself. We mapped the pre-RFC 2133 structures in some of the platforms to their RFC 2133 counterparts, to isolate the problem of dealing with non-compliant systems to one part of the code.

For example, BIND had to discover how each platform provided its list of interfaces, because we needed to bind to each interface in order to send UDP replies with the correct source address. The IPv4 interface input-output controls (ioctls) were extended or replaced by each vendor in a different manner and were not well documented. On Linux, this information was in the file system, which created additional issues for people running named chrooted.
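
Most modern systems now expose this information through a single portable call. As a rough illustration (this is not BIND's actual interface-scanning code), here is a minimal C sketch using getifaddrs(), which is widely available on Linux and the BSDs and has largely replaced those vendor-specific ioctls:

```c
/* Sketch: enumerating interface addresses with getifaddrs(), the portable
 * replacement for the vendor-specific ioctls described above.
 * Illustrative only; this is not BIND's actual interface-scanning code. */
#include <stdio.h>
#include <ifaddrs.h>
#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>

int list_interface_addresses(void) {
    struct ifaddrs *ifap, *ifa;
    char host[NI_MAXHOST];

    if (getifaddrs(&ifap) != 0)
        return -1;

    for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
        if (ifa->ifa_addr == NULL)
            continue;
        int family = ifa->ifa_addr->sa_family;
        if (family != AF_INET && family != AF_INET6)
            continue;
        /* Print each IPv4/IPv6 address as text, tagged with its interface. */
        if (getnameinfo(ifa->ifa_addr,
                        (family == AF_INET) ? sizeof(struct sockaddr_in)
                                            : sizeof(struct sockaddr_in6),
                        host, sizeof(host), NULL, 0, NI_NUMERICHOST) == 0) {
            printf("%s: %s\n", ifa->ifa_name, host);
        }
    }
    freeifaddrs(ifap);
    return 0;
}
```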

In 2019, we decided it was finally safe to remove these BIND workarounds for pre-standard IPv6 operating systems. These workarounds are still present in our long-term support version of BIND, the 9.11 branch, but are gone from 9.16 and subsequent versions. There is now widespread, standardized support for IPv6 transport in Linux and UNIX variants.
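
As a small illustration of that standardization (a hedged sketch, not BIND's actual socket code), modern application code can open an IPv6 UDP listener with nothing more than the RFC 3493 getaddrinfo() interface:

```c
/* Sketch: opening an IPv6 UDP listener with the standardized RFC 3493
 * interfaces (getaddrinfo() and friends). Illustrative only; this is not
 * BIND's actual socket code. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>

int open_v6_udp_listener(const char *port) {
    struct addrinfo hints, *res;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET6;      /* IPv6 for this sketch */
    hints.ai_socktype = SOCK_DGRAM;  /* DNS over UDP */
    hints.ai_flags = AI_PASSIVE;     /* wildcard address for listening */

    int err = getaddrinfo(NULL, port, &hints, &res);
    if (err != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return -1;
    }

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd >= 0 && bind(fd, res->ai_addr, res->ai_addrlen) != 0) {
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;   /* a bound socket, or -1 on failure */
}
```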

BIND 9.16 and later versions can only be built on systems with operating system support for IPv6. (It is still possible to run BIND on a server that has no IPv6 interfaces configured, but support for IPv6 structures must be present in shared libraries and header files in order for BIND to build.)

Client IPv6 connectivity issues

By default, BIND 9 sends answers containing both IPv6 and IPv4 addresses, when both are present. This gives the client all the information it needs to reach the authoritative server, but it can cause problems when the client tries to reach that server over IPv6 without having end-to-end IPv6 connectivity.

Before IPv6, most systems were single-homed to the IPv4 network interface. With the addition of IPv6, systems were very often multi-homed, and many clients didn’t handle it well.

Many systems now had an IPv6 network interface, but no functioning IPv6 connectivity. Sometimes the operating system didn't receive the negative ICMP reports that should have informed it that its IPv6 network was not working. Such a client might get an IPv6 address back from its BIND resolver and send a request over IPv6, but then never receive an answer because its IPv6 connectivity wasn't working.

Eventually, after the client software ‘timed out’ it would retry the query. To the end-user, this could introduce significant delays and make the DNS seem unresponsive.

There were requests to ‘turn off’ IPv6 support. We reluctantly added a setting in BIND to filter out quad-A answers to prevent this problem.

Obviously, ‘turning off IPv6’ was not going to encourage wider deployment of IPv6, and this negative feedback loop threatened to make the transition to IPv6 impossible.

The Happy Eyeballs strategy was developed to address user dissatisfaction with IPv6 due to two different underlying problems: one was the fact that some routers (many in homes) had IPv6 support and an IPv6 network interface, but no actual connectivity to the Internet over IPv6; the other was that, even where end-to-end IPv6 connectivity existed, it was often a slower path, because the Internet as a whole didn’t have nearly as much IPv6 support as it had IPv4 support.

Read: Happy Eyeballs – promoting a healthy IPv4 and IPv6 coexistence

To address the first problem, clients implemented the Happy Eyeballs approach, which was to query over IPv4 and IPv6 in parallel, to shorten the response time perceived by the user. This way, in case the IPv6 query was unsuccessful, the client would get an answer via IPv4 without having to wait for a timeout.
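
To make the idea concrete, here is a minimal C sketch of the ‘query both, take the first answer’ approach, assuming two UDP sockets already connected to a server's IPv6 and IPv4 addresses. The helper name and parameters are invented for the example; it illustrates the technique and is not taken from any particular client implementation:

```c
/* Sketch of the 'query in parallel, take the first answer' idea behind
 * Happy Eyeballs: send the same DNS query over IPv6 and IPv4 and accept
 * whichever response arrives first. The two sockets are assumed to be
 * already connected to the server's AAAA and A addresses. Illustrative
 * only; not taken from any particular client implementation. */
#include <poll.h>
#include <sys/types.h>
#include <sys/socket.h>

ssize_t query_first_answer(int fd6, int fd4,
                           const void *query, size_t qlen,
                           void *reply, size_t rlen, int timeout_ms) {
    /* Fire the query on both address families at once. */
    (void)send(fd6, query, qlen, 0);
    (void)send(fd4, query, qlen, 0);

    struct pollfd fds[2] = {
        { .fd = fd6, .events = POLLIN },
        { .fd = fd4, .events = POLLIN },
    };

    if (poll(fds, 2, timeout_ms) <= 0)
        return -1;                 /* timeout or error on both paths */

    /* If both are ready, the IPv6 socket is checked first. */
    for (int i = 0; i < 2; i++) {
        if (fds[i].revents & POLLIN)
            return recv(fds[i].fd, reply, rlen, 0);
    }
    return -1;
}
```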

Affirmative action for IPv6

People working on the Internet were concerned that IPv6 would never be adopted if it was slower than IPv4, so there was also an effort to give IPv6 some ‘affirmative action’, deliberately favouring it in order to spur adoption.

BIND 9 uses an algorithm to select which server to query, based on the smoothed round-trip time (SRTT) to the server. BIND prefers servers with a lower observed round-trip time, although it occasionally retries a slower server in case it has become more responsive.

In BIND 9.11, we added a ‘Happy Eyeballs’ bias towards IPv6, artificially raising the SRTT for IPv4 paths so that IPv6 paths would be preferred. Since then, if there are two servers that are equally reachable via IPv4 and IPv6, BIND almost always uses IPv6 to connect.
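
The effect is roughly what the following C sketch shows: pick the server address with the lowest SRTT, but add a fixed penalty to IPv4 addresses so that an equally fast IPv6 address wins. This is an illustration only, not BIND's actual selection code, and the 50 ms bias value is an assumption for the example rather than BIND's real default:

```c
/* Illustrative sketch only (not BIND's actual selection code): choose the
 * server address with the lowest smoothed round-trip time (SRTT), but add
 * a fixed penalty to IPv4 addresses so that an equally reachable IPv6
 * address wins. The 50 ms bias is an assumed value for this example, not
 * BIND's actual default. */
#include <stddef.h>
#include <stdbool.h>

struct server_addr {
    bool is_ipv6;
    unsigned int srtt_us;         /* smoothed round-trip time, microseconds */
};

#define ASSUMED_V4_BIAS_US 50000  /* penalty added to IPv4 SRTTs */

size_t pick_server(const struct server_addr *servers, size_t n) {
    size_t best = 0;
    unsigned int best_srtt = ~0u;

    for (size_t i = 0; i < n; i++) {
        unsigned int srtt = servers[i].srtt_us;
        if (!servers[i].is_ipv6)
            srtt += ASSUMED_V4_BIAS_US;   /* make IPv4 look a bit slower */
        if (srtt < best_srtt) {
            best_srtt = srtt;
            best = i;
        }
    }
    return best;
}
```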

Server IPv6 connectivity issues

After NAT 64 (Network Address Translation from IPv6 clients to IPv4 servers, RFC 6146) took hold, driven in part by the US cellular market, a companion technique, DNS 64 (RFC 6147), was developed.

DNS 64 addresses the problem of an IPv6-only connected client that needs to connect to an authoritative server that is reachable only by IPv4. When the client queries BIND 9 over IPv6, BIND resolves the name, receiving only an IPv4 address for the target authoritative server. BIND synthesizes a quad-A record for the target server by adding a prefix, and returns that to the IPv6 client. The IPv6 client can then contact the IPv4 target server via NAT.
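
The synthesis itself is simple address arithmetic: the target's IPv4 address is embedded in the low 32 bits of a /96 prefix. The sketch below uses the RFC 6052 well-known prefix 64:ff9b::/96 as an example; in a real deployment the prefix is whatever the NAT 64 device translates, and this is illustrative code rather than BIND's implementation:

```c
/* Sketch of the address mapping behind a synthesized quad-A record: the
 * target's IPv4 address is embedded in the low 32 bits of a /96 prefix.
 * This example uses the RFC 6052 well-known prefix 64:ff9b::/96; a real
 * deployment uses whatever prefix its NAT 64 translates. Illustrative
 * only; not BIND's implementation. */
#include <string.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int synthesize_aaaa(const char *v4_text, char *out, socklen_t outlen) {
    struct in_addr v4;
    struct in6_addr v6;

    if (inet_pton(AF_INET, v4_text, &v4) != 1)
        return -1;

    memset(&v6, 0, sizeof(v6));
    v6.s6_addr[0] = 0x00;            /* 64:ff9b::/96 well-known prefix */
    v6.s6_addr[1] = 0x64;
    v6.s6_addr[2] = 0xff;
    v6.s6_addr[3] = 0x9b;
    memcpy(&v6.s6_addr[12], &v4, 4); /* IPv4 address in the final 32 bits */

    return inet_ntop(AF_INET6, &v6, out, outlen) ? 0 : -1;
}

/* For example, 192.0.2.1 maps to 64:ff9b::c000:201. */
```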

This works today because the BIND server, in this scenario, is presumed to have both v4 and v6 connectivity to the remote authoritative servers. This is a reasonable assumption because a shared network resource like a DNS resolver is likely to have good connectivity. However, in the future, it is very likely that there will be more resolvers that connect to the Internet over IPv6-only links. In this case, BIND will need to contact these remote servers via IPv6 only, even where the remote server has only an IPv4 address.

Currently, BIND does not synthesize an IPv6 address for its own use in recursing, only for the response to the client. This is an enhancement under development now.

ISC is also trying to move the entire DNS 64 feature out of BIND 9 itself and into a BIND ‘hooks module’. This is because the whole DNS 64 feature set is a transition mechanism, and not a very elegant one. It requires a separate NAT, for one thing, and too much topological information in too many places to be easy to deploy or maintain. It also ‘breaks’ DNSSEC. We think that in the long run, encapsulation strategies that offer IPv4 over IPv6 as a service are more maintainable.

IPv6 and packet fragmentation in the DNS

IPv6 handles packet fragmentation differently from IPv4. Back in 1996, IPv4 was already vulnerable to fragmentation-reassembly attacks, which caused some operators to filter fragmented packets. IPv6 attempted to avoid that by moving fragmentation and reassembly from the network to the end hosts.

In theory, whether fragmentation is necessary is determined by discovering the largest packet the end-to-end path can handle. This is called PMTUD (path maximum transmission unit discovery).

DNS carries large responses over UDP all the time, especially with DNSSEC (the signatures make the packets larger), so DNS packets are often fragmented, whether over IPv4 or IPv6.

Most DNS servers do PMTUD, sending a DNS packet over UDP to see whether it succeeds or provokes an ICMP message asking for a retry with a smaller packet. This process is fragile because some networks filter out ICMP messages.

Since the DNS server doesn't maintain state about the answers it has already sent, it cannot simply resend a smaller packet when an ICMPv6 Packet Too Big (PTB) message arrives; it has to wait for the client to time out and retry the query before it can respond at a different packet size. This obviously adds delay.

BIND 9 is more proactive; it skips PMTUD and simply fragments packets at the IPv6 minimum MTU (1280 octets), using the IPV6_USE_MIN_MTU socket option from RFC 3542, whenever that option is available. However, across the DNS system, determining and setting maximum packet sizes continues to be an issue. The 2020 DNS Flag Day effort intends to reduce fragmentation by lowering the default EDNS UDP buffer size so that responses fit in unfragmented packets.
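
For illustration, here is a hedged C sketch of setting that option on an IPv6 UDP socket. The option is not available on every platform, hence the #ifdef guard, and this is not BIND's actual socket code:

```c
/* Sketch: asking the stack to send IPv6 UDP at the minimum MTU (1280
 * octets) instead of relying on path MTU discovery, via the RFC 3542
 * IPV6_USE_MIN_MTU socket option. The option is not available on every
 * platform, hence the #ifdef guard. Illustrative only; not BIND's actual
 * socket code. */
#include <sys/socket.h>
#include <netinet/in.h>

int use_min_mtu(int fd) {
#ifdef IPV6_USE_MIN_MTU
    int on = 1;    /* 1 = always use the minimum MTU on this socket */
    return setsockopt(fd, IPPROTO_IPV6, IPV6_USE_MIN_MTU, &on, sizeof(on));
#else
    (void)fd;
    return -1;     /* platform does not provide the option */
#endif
}
```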

Development continues

BIND developers have invested a lot of time and effort over the past 20 years trying to promote and facilitate the use of IPv6 in the DNS.

Some of the workarounds added to BIND to accommodate partial IPv6 adoption have complicated the software, and we are eager to remove these as soon as the transition is ‘over’.

The problems with operating system support are largely resolved, or at least the situation is vastly improved. However, the problem of end-to-end connectivity over IPv6 is still with us. There are transition technologies available to address this, but providing IPv4-as-a-service network connectivity over IPv6 may be a better general solution.

Mark Andrews is a software architect at ISC and has been the go-to person for anything DNS protocol- or BIND 9-related, for as long as ISC has existed.

The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.
