RIPE 77: DDoS defence, zombie routes, machine learning and more

17 Oct 2018

Categories: Events, Tech matters



RIPE 77, happening this week in Amsterdam, is one of the largest RIPE community meetings I’ve been to, with over 800 registered participants from 68 economies.

Interestingly, the RIPE NCC asks attendees for their ASNs when they register, which this time totalled 290 unique ASNs. This count suggests that, at the corporate affiliation level, there are fewer Internet resource holders than in the past, relatively speaking, but it still represents huge participation in the RIPE region’s policy and governance discussions.

We’re two days into a five-day meeting — not including the two days prior where many attendees participated in side meetings for the CENTR group of ccTLDs and the DNS-OARC — and already we’ve touched on a range of topics in technology, law and society.

Read some of the highlights from the DNS-OARC workshop.

Here are a few of the moments I captured from my time here thus far.

DDoS defence in the terabit era

This was a really good, tight presentation from Arbor Networks’ Steinthor Bjarnason on DDoS threats in the wild.

The use of Universal Plug and Play (UPnP) and Memcached is now ubiquitous. UPnP makes it easier to get services through home NAT gateways, and it’s meant to be configured solely on the ‘inside’. But alas, increasingly, router vendors are slipping up and exposing the control ports to the ‘outside’, making it trivial to get inside an otherwise protected NAT.
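An exposed UPnP control plane can be spotted with a plain SSDP discovery probe. Here’s a minimal Python sketch of that check (the `probe_upnp` helper and the idea of probing a single WAN address are illustrative, not from the talk):

```python
import socket

# Standard SSDP discovery request: a device that answers this on its
# WAN-facing interface is exposing UPnP to the outside world.
M_SEARCH = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    "HOST: 239.255.255.250:1900",
    'MAN: "ssdp:discover"',
    "MX: 2",
    "ST: upnp:rootdevice",
    "", "",
]).encode()

def probe_upnp(host: str, timeout: float = 2.0) -> bool:
    """Return True if `host` answers SSDP on UDP/1900, i.e. UPnP is exposed."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(M_SEARCH, (host, 1900))
        try:
            data, _ = s.recvfrom(1024)
            return data.startswith(b"HTTP/1.1 200")
        except socket.timeout:
            return False
```

A device that should only ever see this query from its LAN side, yet replies to it from the Internet, is exactly the misconfiguration the talk described.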

Memcached is in-memory caching software that speeds up calls to back-end data services: when a website, for instance, repeatedly queries SQL for the same, relatively invariant data, Memcached serves it from a local cache, both speeding up the reply and avoiding load on the database. Unfortunately, if it is exposed to the ‘outside world’ it also acts as a giant traffic amplifier, reflecting attacks outbound with high velocity and size.
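The scale of the amplification follows from simple arithmetic, the ratio of response size to the spoofed request that elicited it. The figures below (a ~15-byte query drawing a ~1 MB reply) are illustrative of reported attack traffic, not numbers from the talk:

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth amplification: bytes reflected per byte the attacker sends."""
    return response_bytes / request_bytes

# A ~15-byte spoofed query to an exposed Memcached server can elicit a
# reply of a megabyte or more if large values are cached, so the
# attacker's bandwidth is multiplied tens of thousands of times.
print(round(amplification_factor(15, 1_000_000)))  # → 66667
```

This is why a handful of exposed servers with big uplinks can produce terabit-scale floods.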

APNIC has a specialist cybersecurity team who are undoubtedly familiar with these attack vectors. Arbor has eyes worldwide, and Steinthor did a nice job of summarizing the risks, and the extent to which this is now a vector for attack at scale.

The principal problem here is that the ability to generate ‘amplified’ traffic has two roots: either you go after consumer devices in the wild, or you find huge bandwidth at big machines and get bigger lumps of traffic. This pair of exploits covers both!

Zombie routes

The RIPE NCC’s Emile Aben gave an informative lightning talk on his recent observation and study of the problem of zombie routes. This is a recurring topic in BGP history — ‘ghost’ routes and BGP ‘wedgies’ preceded it — but it has the potential to be a new variant, or newly seen by the emerging generation of routing-active people. His presentation generated some interaction at the microphone, particularly from ISPs who see the same issue.

Isolario BGP-MRT data reader

Isolario is a research group, featured before on the APNIC Blog, that focuses on BGP and routing analysis.

Lorenzo Cogotti presented on a software ‘refresh’ they have undertaken, reviewing the core code libraries that read Multi-Threaded Routing Toolkit (MRT) format data, which is ubiquitous in BGP analysis — it’s the neutral interchange format that most BGP collectors use to archive data about the states of BGP.

There has been a huge increase in MRT data size, and the older tools aren’t coping well: they’re slow, they can’t handle BGP ‘add-path’, and they can’t act as a filter. So the Isolario team reached for their C compiler to write a modern, threaded, high-throughput parser for MRT, with good memory behaviour and the addition of a filter mode.

The code is both a library (for general-purpose use) and a tool that works in normal UNIX form as a filter in stdin/stdout pipes, producing what they call a ‘grep-friendly’ output format for UNIX shell command filtering. The code is at least comparable with bgpdump on average for memory use, but up to 12x faster.
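The Isolario parser itself is in C, but the MRT framing it consumes is straightforward: each record is an RFC 6396 common header followed by a message body. A minimal Python sketch of walking that framing (illustrative only, not the Isolario code):

```python
import struct

# MRT common header (RFC 6396, section 2): 4-byte timestamp, 2-byte type,
# 2-byte subtype, 4-byte message length, all big-endian, followed by
# `length` bytes of message body.
MRT_HEADER = struct.Struct(">IHHI")

def read_mrt_records(stream):
    """Yield (timestamp, type, subtype, body) tuples from an MRT byte stream."""
    while True:
        header = stream.read(MRT_HEADER.size)
        if len(header) < MRT_HEADER.size:
            return  # end of stream (or truncated trailing record)
        ts, mrt_type, subtype, length = MRT_HEADER.unpack(header)
        yield ts, mrt_type, subtype, stream.read(length)
```

The hard (and slow) part the Isolario work addresses is everything inside the body: decoding BGP4MP messages, add-path encodings, and doing it all with threads and sensible memory use.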

This kind of code-refresh is a great contribution to the software ecology in our area. Well done!

BGP Communities on the Internet

Florian Streibelt exposed how the modern state of play in BGP means we have a ubiquitous emerging risk in the naive use of BGP community strings. You can’t tell at first pass whether a Community is passive (informative) or active (changing routing outcomes), so the default behaviour for many operators is to pass them on. Since BGP Communities can be used to cause ‘action at a distance’, this carries the risk of attacks via the unexpected or unforeseen consequences of a Community string transiting out of its expected scope.

So, is this really a problem? Well, yes: 75% of BGP messages seen in their analysis use Communities, 10% of AS-PATHs are longer than six hops, and 50% of Communities were seen to travel beyond four AS hops along the path. The median AS-PATH length is well understood to be four, which means Communities propagate widely.

Basically, the story here is to be aware of what you’re doing: don’t blindly follow recipes for the use of Community strings without understanding their propagation and consequences.
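One defensive practice implied by the talk is scrubbing communities at your AS edge, so strings only you should act on never leak to neighbours. A minimal sketch, assuming a hypothetical AS 64500 (the policy and helper are illustrative, not from the talk):

```python
# Represent each RFC 1997 community as an (asn, value) pair of 16-bit ints.
MY_ASN = 64500
WELL_KNOWN = {0xFFFF}  # well-known communities use 65535 in the high 16 bits

def scrub_communities(communities):
    """Keep only communities that are well-known or that we set ourselves;
    drop anything carrying another network's action-at-a-distance strings."""
    return [(asn, value) for asn, value in communities
            if asn == MY_ASN or asn in WELL_KNOWN]

# Our own community and NO_PEER survive; a stray third-party string is dropped.
print(scrub_communities([(64500, 100), (65010, 666), (0xFFFF, 0xFF04)]))
```

Real policies are richer than this (some neighbour communities are legitimate signalling), but the principle is the same: be deliberate about which communities cross your border, in both directions.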

The human factors of security misconfigurations

Constanze Dietrich’s (LEXTA Consultants Group) presentation was a very enlightening and entertaining review of a wide-ranging social survey into how tech staff and contractors/consultants feel ‘when things go wrong’. Blameless root-cause analysis has value, but the contractors/consultants are less on board with it, perhaps because they have more direct exposure to risk and consequence outcomes (personal liability insurance premiums?).

Fascinatingly, the report suggests that older operators distrust the tools they (have to) use in their work. In contrast, younger operators (with less than three years’ experience) report that they still trust the hardware and software they use. As was pointed out to me afterwards (thanks Tobias), the researchers attribute this to younger operators not yet having as much experience of being burned by the funny behaviour of systemd^H^H^H^H^H^Hsoftware.

Innovation and human rights in the Internet architecture – Is self-regulation delivering on its promise?

PhD candidate Niels ten Oever (University of Amsterdam) gave a very entertaining and thought-provoking talk on public interest governance issues in the Internet. The fundamental question here is ‘can we build a public space on privately-owned infrastructure?’, which of course is almost exactly what we think we are doing all the time.

The specific focus of the talk was his analysis of the role, beliefs, and behaviours inside the IETF. There were a lot of quotes in his slide pack; the one I liked most was this one:

It was an interesting overview, focusing on how standards now emerge — for example, the dominance of Google in QUIC — and on the unexpected outcomes. This leads to some economic reductionism from standards developers, who believe ‘making standards that will get traction equals making standards that reflect vendor needs’ — perhaps not what we’d expect from the Tao of the IETF.

An analysis of the imputed company standing of participants was quite interesting (the Chair of the IETF, Alissa Cooper, noted this isn’t currently measured by internal IETF processes, because it isn’t tracked under the formalism of ‘we attend as individuals’). IETF attendance by individuals appears stable, but fewer companies are seen, which implies there is considerable consolidation in the industry.

This talk started with some social science memes I struggled with, but I think it’s uncovering a really important topic we need to be mindful of: we are trying to build something that delivers a publicly beneficial outcome, but without a lot of traditional oversight and checks and balances. It makes us agile, and it frees us from some constraints inherent in civil society anchored in the nation-state model, but it also exposes risks of corporate capture and a lack of accountability.

Machine learning is a bit of a theme here

At both DNS-OARC and RIPE 77, Machine Learning (ML) and neural nets have been a bit of a ‘thing’. A good observation made in the room is that it’s not Artificial Intelligence so much as well-applied maths: the selection of the models to train, and the filtering applied to the data, are pretty important.

DNS-OARC had a talk from Sebastian Castro on applications of ML to resolver detection.

The RIPE 77 talk, Machine Learning with Networking Data, given by KIProtect’s Andreas Dewes and Katharine Jarmul, and DCSO GmbH’s Andreas Lehrer, touched on an additional topic: how to anonymize (or, more realistically, pseudonymize) data captures, including PCAP files and flows, in a reversible cryptographic manner. This might be interesting in wider contexts such as cybersecurity, or Day-In-the-Life captures. Note: their goal was to avoid exposing internal network structure.
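I didn’t capture their exact scheme, but reversible, keyed pseudonymization of addresses can be sketched with a small Feistel network over the 32-bit IPv4 space, using only the standard library. This is an illustrative construction, not KIProtect’s method:

```python
import hashlib
import hmac
import ipaddress

def _round(key: bytes, i: int, half: int) -> int:
    """Keyed round function: HMAC-SHA256 of (round number, 16-bit half)."""
    digest = hmac.new(key, bytes([i]) + half.to_bytes(2, "big"),
                      hashlib.sha256).digest()
    return int.from_bytes(digest[:2], "big")

def pseudonymise(ip: str, key: bytes) -> str:
    """Map an IPv4 address to a pseudonym; deterministic for a given key."""
    left, right = divmod(int(ipaddress.IPv4Address(ip)), 1 << 16)
    for i in range(4):  # 4 Feistel rounds over the two 16-bit halves
        left, right = right, left ^ _round(key, i, right)
    return str(ipaddress.IPv4Address((left << 16) | right))

def reveal(ip: str, key: bytes) -> str:
    """Invert pseudonymise(): run the Feistel rounds backwards."""
    left, right = divmod(int(ipaddress.IPv4Address(ip)), 1 << 16)
    for i in reversed(range(4)):
        left, right = right ^ _round(key, i, left), left
    return str(ipaddress.IPv4Address((left << 16) | right))
```

The appeal of the reversible approach is operational: you can share sanitized captures, and the key-holder can still trace a flagged flow back to the real host. Note that, unlike the presenters’ goal, this naive per-address mapping does not hide internal network structure, since shared prefixes don’t map to shared prefixes but traffic patterns between hosts are preserved.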

This work can be contrasted with that of Dave Plonka (Akamai), who has worked on a non-reversible (1-in-n) filter that masks an IP address into a determined pool of like addresses.
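A crude sketch of that non-reversible direction (illustrative only, not Plonka’s actual filter) is to collapse each address into a covering prefix, so any one address is indistinguishable from its poolmates:

```python
import ipaddress

def mask_to_pool(ip: str, prefix_len: int = 24) -> str:
    """Collapse an address into its covering prefix's network address,
    e.g. any host in 198.51.100.0/24 becomes 198.51.100.0. Many-to-one,
    so the original address cannot be recovered."""
    network = ipaddress.ip_network(f"{ip}/{prefix_len}", strict=False)
    return str(network.network_address)

print(mask_to_pool("198.51.100.77"))  # → 198.51.100.0
```

The trade-off versus the reversible scheme above is clear: no key can ever undo this, which is stronger for privacy but useless if you later need to trace an individual host.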

Stay tuned for insights from the rest of the conference or follow it remotely.



The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.
