During the group meeting, we listened to four presentations, three of which were of the normal kind, reflecting the mix of interests we often see in the IETF at the moment (see my summaries of these presentations below). What I want to highlight in this post is the last presentation, by John Scudder, on the basic underpinnings of how routing “works” in hardware and software.
This presentation was fascinating for several reasons. It shows how constant the technological “wheel of life” is: successive generations of computing get deployed into roles, then become sufficiently complex to be re-implemented as sub-assemblies, which start off as specific engines but then become generalized computers in themselves. John showed how routing can be viewed as a process that can be carried out by generalized hardware, or by specialized pipelines, typically implemented in an FPGA.
When you appreciate that a pipeline is chosen for its speed, simplicity, and cost advantages, decisions about which parts of a packet (or sequence of packets) are examined have huge consequences for fast packet processing. In a pipelined packet handler, a typical architecture (see John’s presentation) looks exclusively at the initial header bytes, and lets a parallel process read the payload into memory, to be moved out of the router once the header has been used to decide which output path to take. This process is hard-coded at the FPGA level: which bits to read, and how many, is mostly pre-determined. If a protocol designed at a higher level of abstraction expects subsequent data to influence routing, it has immediately broken the ‘fast path’: the decision cannot be made in the FPGA pipeline, so the packet is punted to slower, more complex processing higher in the router’s architecture.
This fundamental problem has been with us ever since we moved to special-purpose hardware and optimized routing. But it has become topical because IPv6 includes the concept of Extension Headers (EH), which are observed to be poorly supported.
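To make the fast-path/slow-path split concrete, here is a small sketch (my own illustration, not code from John’s presentation; the function names are invented). With no Extension Headers, the upper-layer protocol sits at a fixed offset and a pipeline can read it directly; with EHs, the chain must be walked, which a fixed pipeline cannot do:

```python
# Illustrative only: why a fixed-offset pipeline handles a plain IPv6 header
# but must punt when Extension Headers appear.

# A subset of EH protocol numbers whose length encoding is uniform:
# Hop-by-Hop (0), Routing (43), Destination Options (60).
EXTENSION_HEADERS = {0, 43, 60}
IPV6_HEADER_LEN = 40  # the fixed IPv6 header is always 40 bytes

def fast_path_upper_protocol(packet):
    """Fixed-offset parse: the Next Header field is always byte 6.
    Returns None when an EH is present, i.e. the packet must be punted."""
    next_header = packet[6]
    if next_header in EXTENSION_HEADERS:
        return None  # variable-length chain; not resolvable at a fixed offset
    return next_header

def slow_path_upper_protocol(packet):
    """Walk the EH chain: each EH starts with its own Next Header (byte 0)
    and a length field (byte 1, in 8-octet units beyond the first 8)."""
    next_header, offset = packet[6], IPV6_HEADER_LEN
    while next_header in EXTENSION_HEADERS:
        next_header = packet[offset]
        offset += (packet[offset + 1] + 1) * 8
    return next_header
```

The point is not the Python, but that the slow-path loop — data-dependent offsets and a variable number of iterations — is exactly the kind of logic a fixed hardware pipeline cannot express.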
John made the quite direct observation that if the community wants the flexibility of services built on concepts like EH (which are optional: they can come and go, and may or may not be present), then it needs to be prepared to engage with hardware design engineers. A contrary observation is that if people want innovation, they need to be prepared to escape the straitjacket of current hardware and implement solutions that are necessarily challenging to the technology of the day.
This is the wheel of life. We have to live it, and we have to try and escape it too. Or maybe just move it along the road a bit!
Other presentations at the IEPG
Comparing IPv4/IPv6 measurements from RIPE Atlas – Emile Aben
This presentation looked at the relative performance of IPv4 and IPv6 – work Emile has carried out continuously for some time – but this time from the viewpoint of the RIPE Atlas “Anchors”, which provide a consistent platform on both sides of the test and remove concerns about which OS and application stack is in use.
Big data based security applications – Giovane C. M. Moura
This was a brief presentation from SIDN, which discussed their deployment of a ‘big data’ collection and analysis framework. The design is interesting in part because it is a “clean” one: a sound theoretical model abstracts the core data, which is then handled on top of a Hadoop-backed infrastructure. SIDN invested enough to get the data abstraction model working in their domain of interest – the DNS. Much of this investment is likely to be made public (on GitHub or by some other means), so others can leverage the work.
DNS over TCP – what might it look like – João Damas
João presented work he and Monica Cortes-Sack have done measuring the likely consequences of using TCP in place of UDP for DNS queries. João and Monica have insight into the real query load coming into a large ISP in Spain; they took query logs and captures, replayed them over DNS over TCP, and analyzed the consequences for the system.
There are interesting insights into the extent to which opportunistic reuse of TCP connections could be possible:
- Highly likely for a small number of heavily-used resolvers and clients
- Very unlikely for the vast majority of single-shot clients, who query and then are not seen again for some time; and
- A middle ground, where small ‘trains’ of queries from resolvers could exploit an LRU model of holding just enough state to avoid surplus connections.
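That third case is essentially a cache-eviction question. As a rough sketch of the LRU idea (my own illustration, not João and Monica’s code; `ConnectionCache` and `dial` are invented names), a server could keep open TCP connections only for the peers seen most recently:

```python
# Sketch of LRU connection reuse for DNS over TCP: short query 'trains'
# reuse a held-open connection, while one-shot clients never pin
# long-lived state because they are evicted first.
from collections import OrderedDict

class ConnectionCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.conns = OrderedDict()  # peer address -> connection handle

    def get(self, peer, dial):
        """Reuse an existing connection, or open one via dial(), evicting
        the least-recently-used peer when the cache is full.
        Returns (connection, reused_flag)."""
        if peer in self.conns:
            self.conns.move_to_end(peer)   # mark as most recently used
            return self.conns[peer], True
        if len(self.conns) >= self.capacity:
            _, old = self.conns.popitem(last=False)  # evict the LRU peer
            old.close()
        conn = dial(peer)
        self.conns[peer] = conn
        return conn, False
```

The design question the measurements inform is how large `capacity` must be: big enough that busy resolvers and query trains hit the cache, yet small enough that the long tail of single-shot clients does not exhaust server state.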
João and Monica have more work they can do, which will help inform the debate about TLS (which necessarily runs over TCP) and DTLS, versus retention of UDP (which was discussed in another WG meeting as a very early draft from ISC).
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.