Internet registries, whether they’re Regional Internet Registries (RIRs), National Internet Registries (NIRs) or Local Internet Registries (LIRs), essentially have two core functions: managing resource information and providing it to the public. For the latter function, actually getting that data to clients, the closer the servers are to the clients the better, for both efficiency and speed. This is also true for query/response services like whois or the Registration Data Access Protocol (RDAP).
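As a rough illustration of what such a query/response service looks like on the wire, here is a minimal RDAP lookup sketch in Python. The endpoint is APNIC’s public RDAP service, and the queried address is a documentation prefix chosen purely for illustration; a live query for it may return an error or a redirect to another registry.

```python
import json
import urllib.request

# RDAP is a RESTful, JSON-over-HTTPS successor to whois. The endpoint
# below is APNIC's public RDAP service; 203.0.113.0 is a documentation
# address used here only as an example query target.
url = "https://rdap.apnic.net/ip/203.0.113.0"

with urllib.request.urlopen(url) as response:
    data = json.load(response)

# Print a couple of registration fields from the JSON response.
print(data.get("handle"), data.get("name"))
```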
The current RPKI repository system is spread across RIRs, NIRs and LIRs, but these repositories are often concentrated in very few Autonomous Systems (ASes); typically, each organization hosts its repository in just one AS.
Could we distribute these servers more efficiently? How would we do that? What technologies would we use?
Fetching data in RPKI
I had this idea when considering JPNIC’s repository.
Resource Public Key Infrastructure (RPKI) is a mechanism to access and verify resource certificates issued by the registries. These resource certificates use Public Key Infrastructure (PKI). In the PKI world, certificates are linked into chains. Generally, two or three certificates form a chain from the bottom (the end entity’s own certificate) to the top (the trust anchor).
RPKI interacts with every part of the chain, from top to bottom. Using RPKI to perform Route Origin Validation (ROV) for BGP requires relying parties to fetch data from every publication point, everywhere. It isn’t just the resource certificates that need updating; it’s all the parents and all the data about everyone in the RPKI system worldwide.
Read more: RPKI and trust anchors
The purpose of composing a certificate chain is to obtain a complete set of ‘valid’ data, known as Validated ROA Payloads (VRPs). The chain begins at the five trust anchors’ certificates, then descends to the subordinate Certificate Authorities (CAs) and so on. This means the relying party software (RPKI clients) gathers everything issued worldwide.
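To make that gathering step concrete, here is a small Python sketch of the idea. The data model and names are my own simplification, not taken from any real relying party implementation; in the real system, every step down the tree means fetching yet another publication point somewhere on the Internet.

```python
from dataclasses import dataclass, field

# Illustrative data model only; the field names are my own, not any
# particular relying party implementation's.

@dataclass
class VRP:
    asn: int         # origin AS authorized to announce the prefix
    prefix: str      # e.g. "192.0.2.0/24"
    max_length: int  # longest prefix length the ROA permits

@dataclass
class CA:
    name: str
    vrps: list = field(default_factory=list)      # VRPs from this CA's ROAs
    children: list = field(default_factory=list)  # subordinate CAs

def collect_vrps(ca):
    """Walk the tree from a trust anchor down, gathering every VRP.
    In the real system each step means fetching a publication point,
    so a full run touches repositories worldwide."""
    found = list(ca.vrps)
    for child in ca.children:
        found.extend(collect_vrps(child))
    return found

# Toy hierarchy: one trust anchor with one subordinate CA. A real run
# repeats this walk from each of the five trust anchors.
ta = CA("trust-anchor", children=[
    CA("member-ca", vrps=[VRP(64511, "192.0.2.0/24", 24)]),
])
print(collect_vrps(ta))
```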
That’s a lot of work. Can we make it easier?
Spreading out the servers
There isn’t a huge amount of RPKI data, but it is updated frequently, and the design of its distribution should reflect these needs. Updates from each Certificate Authority (CA) need to be published within a defined time frame, and then downloaded by clients.
And, of course, these downloads are faster when they’re closer.
This all sounds good, but how would we actually achieve it? How could we build such a data distribution system?
Currently, RPKI uses two Internet protocols to manage access to the PKI products published in repositories: rsync and the RPKI Repository Delta Protocol (RRDP). Rsync is a standalone protocol that runs directly over TCP/IP. RRDP is designed to operate over HTTP or HTTPS, and is therefore part of the ‘web’ family of Internet protocols. Of the two, RRDP is better suited to commercial Content Distribution Networks (CDNs), but both protocols can be deployed using ‘find best local source’ models like Global Server Load Balancing (GSLB).
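To show what the RRDP side looks like in practice, here is a minimal sketch of fetching and parsing an RRDP notification file (RFC 8182); the rsync side, by contrast, is simply a recursive copy of the repository directory. The URL below is a placeholder, since real notification URIs are published in certificates, and a real client would also verify the advertised hashes.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Minimal RRDP (RFC 8182) client sketch: fetch a notification file and
# list the snapshot and delta URIs it advertises. The URL is a
# placeholder for illustration only.
NOTIFICATION_URL = "https://rrdp.example.net/notification.xml"
NS = "{http://www.ripe.net/rpki/rrdp}"

with urllib.request.urlopen(NOTIFICATION_URL) as response:
    root = ET.fromstring(response.read())

print("session:", root.get("session_id"), "serial:", root.get("serial"))

# The snapshot holds the full current repository state.
snapshot = root.find(f"{NS}snapshot")
print("snapshot:", snapshot.get("uri"))

# Deltas let a client that already holds an older serial download only
# what changed since then, instead of the full snapshot.
for delta in root.findall(f"{NS}delta"):
    print("delta", delta.get("serial"), "->", delta.get("uri"))
```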
GSLB is useful for steering a client to the nearest repository, based on the domain names written into resource certificates, which clients use to retrieve issued objects. For the DNS zones hosting the rsync or RRDP server names to enjoy the anycast benefits of the DNS root servers, they would need to sit directly under Top-Level Domains (TLDs).
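GSLB performs this selection inside DNS, on the server side, but the effect is easy to picture with a client-side sketch that probes candidate mirrors and picks the lowest-latency one. The hostnames below are hypothetical.

```python
import socket
import time

# Client-side illustration of the 'find best local source' idea that
# GSLB implements inside DNS: measure TCP connect time to each
# candidate repository and pick the fastest. Hostnames are hypothetical.
CANDIDATES = [
    ("rrdp-tokyo.example.net", 443),
    ("rrdp-frankfurt.example.net", 443),
    ("rrdp-dallas.example.net", 443),
]

def connect_time(host, port, timeout=2.0):
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")  # unreachable candidates lose

best = min(CANDIDATES, key=lambda hp: connect_time(*hp))
print("nearest repository:", best[0])
```

In a real GSLB deployment, the authoritative DNS server does this transparently, answering each query with the address of the replica closest to the querying resolver, so every client connects to the same single name written in the certificate.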
This wouldn’t be easy. Several things would need to be considered, including defining the level of service and the division of roles, as well as the configuration each RPKI CA would need to publish its issued objects to these servers. Governance issues should also be considered, particularly in relation to managing a shared namespace. This would be a significant investment and a big change in the way RPKI is managed. It would also change the way validation works, but it could potentially have big benefits.
What would it achieve?
RPKI CAs operated by RIRs, NIRs and others currently distribute objects from their own repositories. If registries and other RPKI CAs can collaborate to accommodate these kinds of changes, fetch speeds could very likely improve. It can take 20 minutes to download some updates today; this could potentially be reduced to just two minutes in some cases.
All of this would, of course, require input and cooperation from the broader Internet operations community. It would also help for Internet registries to first ensure they have implemented RRDP.
What do you think?
Taiji Kimura is a researcher at JPNIC and Keio University, Japan. He is involved in research on network security and computer security, among other topics.
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.