In the last post, we talked a lot about the centralization of the Internet.
Read: 13 Propositions on an Internet for a burning world (1-4)
Centralization is the continuous aggregation of infrastructure and content around a couple of hypergiants — Amazon, Meta, Apple, Microsoft, and Google — with their large Infrastructure-, Platform-, and Software-as-a-Service offerings.
As unimaginable as it may have been for teenage me that one day Myspace would no longer be there, or as hypothetical as it may have sounded on Wall Street in 2006 that Lehman Brothers might be gone in a matter of days, the idea of Amazon, Google, Microsoft, Apple, or Meta (and for that matter, a set of several more hyper and micro-giants like Digital Ocean or Hetzner) simply disappearing can’t be discounted. Therefore, we argue that:
‘We have to be prepared for hypergiants failing.’
Small ripples can cause a hypergiant to ultimately tumble, and our burning world is sending out the first signals. Infrastructure supported by the exploitation of labour in a globalized world will not sustain itself forever.
Still, hypergiants and all those fancy tech companies that make up their heavily paying customer base have a thing for exploiting labour.
Let’s be honest here: One of the big innovations that Uber, DoorDash, and Amazon (in the way warehouse workers are treated) have found is a way around that concept of worker rights. You can be insanely more profitable if you don’t have to bother with the costs of generations of societal development and social security. Nevertheless, this will not work forever.
Workers are realizing that the mechanisms put in place — organizing, unionizing, strikes, and labour fights — are there because they are useful, and these efforts are beginning to prove successful.
With profit margins based on exploitation hopefully shrinking towards a more fair and equitable sharing of benefits, accomplishments, and profits, the question arises of whether entities can keep up the cash flow for hypergiants. Ultimately, the promise of the cloud in terms of spot pricing and pay-what-you-use shifts the risk of holding a lot of unused infrastructure from the client to the giant. The question will be whether hypergiants can scale down their fixed costs quickly enough if customers have to leave or reduce their spending significantly.
A similar effect can be observed around energy. For central Europe, all it took for energy prices to surge was a pandemic followed by a war close enough to our borders that we could no longer close our eyes. Energy costs only know one direction, which is up. Consumers can easily see a 25% increase in prices. Industry, including data centres, is usually handled quite favourably in these circumstances. Still, scepticism of the hypergiants’ big data centres grows, with a single data centre easily consuming the equivalent electricity of 50,000 homes.
Keep in mind, energy prices can only multiply so much before your already relatively thin profit margin (it’s the scale that brings the money) crumbles. By passing on the cost increase, you may trigger customers leaving, increasing the issues at hand.
In addition to these two points, our world is (slowly) falling apart. We find ourselves hit by supply-chain issues, and governments around the globe blowing the horn of sovereignty to isolate themselves and their markets. The thing is, global open markets, with as little oversight and control as possible, make life a lot easier for global corporations. If that is taken away, well, things will get more difficult for them.
But let’s keep politics out of this argument. Whatever your perspective is, the question of why hypergiants fail, and whether it is ultimately good for the Internet, is not essential. The important question is how we handle them disappearing, possibly suddenly, when, for example, the majority of websites load fonts hosted by Google, or run entirely on Amazon Elastic Compute Cloud (EC2).
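The degree of that dependency is easy to make visible. Below is a minimal sketch (stdlib-only Python; the HTML snippet and the non-Google hostnames are hypothetical stand-ins for a real page) that lists the external hosts a page pulls resources from:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse
from collections import Counter

class ExternalHostParser(HTMLParser):
    """Collect the hosts of externally hosted resources (scripts, styles, fonts, images)."""
    def __init__(self):
        super().__init__()
        self.hosts = Counter()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        url = attrs.get("src") or attrs.get("href")
        if url:
            host = urlparse(url).netloc
            if host:  # relative URLs (same-origin resources) have no netloc
                self.hosts[host] += 1

# Hypothetical page illustrating typical third-party dependencies.
sample_html = """
<html><head>
<link href="https://fonts.googleapis.com/css2?family=Roboto" rel="stylesheet">
<script src="https://cdn.example-host.com/app.js"></script>
<link href="/local.css" rel="stylesheet">
</head><body><img src="https://images.example-host.com/logo.png"></body></html>
"""

parser = ExternalHostParser()
parser.feed(sample_html)
for host, count in parser.hosts.most_common():
    print(f"{count} resource(s) from {host}")
```

Run against real front pages, a count like this makes the concentration around a handful of hosting and CDN domains immediately apparent.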
If a hypergiant fails all of a sudden, we will be left with a lot of legacy and broken infrastructure. And, historically, legacy infrastructure is not something we are particularly good at dealing with.
‘Communities caring for local and distributed infrastructure are the future in a world falling apart.’
Our world is changing and — by our own hands — not necessarily for the better. You may already understand from the title (burning world), but the outlook on the future we take in this series is not concerned with the best cases. We ask ourselves how the world will look once we’ve increased CO2 in the atmosphere to 685 ppm by 2050, sending our world to a cooking 6 degrees Celsius more on average by the end of the century. This is a world where billions are displaced by heat and floods, and where the global north learns that climate change will ravage us all, no matter where we live or in which delusion of exceptionalism we currently cradle ourselves. A world where there is no global supply chain to collapse anymore, and most long-range fibres just go… dark.
Yes, we know that this is hard to imagine. But it is far too close to expert predictions for comfort. Luckily, at least some people have started to think seriously about such a world. One of them is Solène, an OpenBSD developer, whose blog post was fundamentally inspirational for this proposition, and I suggest you go and read her article.
The question here is, of course, how dire our future will be. In a ravaged and war-torn Mad Max-style future there will be limited space for things to be, well, peaceful enough for technology to function. Instead, we are following the world Solène sketched out, in which “… we would still have *some* power available …”.
That world is pretty much aligned with a rather ‘solar punk’ future, one where there is some power available, but it’s not as abundant as now. Where we are conscious about what we use energy for and how we can continue living in and with our world.
Naturally, despite having burned down, our world would remain littered with (dysfunctional) computers and network technology. As Solène says, that world would most likely be a world where local communities commandeer these sets of technology and start to (re)build (potentially interconnected) networks (of networks). However, the focus would always be on providing primary and useful services for local communities. Having local access to a knowledge database will be more important than, say, global communication.
With supply chains gone, keeping systems and networks running will also become difficult in terms of getting spare parts and replacements. This world will be about engineers finding ways for the benefit of their local community (again).
This perspective on critical IT infrastructure contradicts the further evolution of the platform economy and centralization of the current Internet. In our digitizing world we find ‘the Cloud’ and ‘the Internet’ progressing into more and more ‘things’ (It is called the Internet of Things for a reason). As we discussed before, many of these tools do not react too well to their cloud controllers and management platforms being gone. Therefore, we also project that the task of ‘making it run, even though the cloud controller is gone’ will be an essential occupation in the potential future. Local communities will (must) find ways to use technology and provide working services to survive.
‘The slow adoption of IPv6 hinders a re-decentralization of the Internet.’
How do we get to this conclusion?
The IPv4 address space is, for all practical purposes, exhausted and unjustly distributed. With the Internet still being very much IPv4-centric — at least when it comes to the path outside of hypergiants — communities running their own services still need IPv4 addresses to provide them.
Considering the state of the IPv4 address market, this means an investment of tens of thousands of dollars. While this is a prohibitive cost for a small community project, it enables hypergiants and large hosting corporations to continue collecting addresses at a relatively cheap price compared to their annual operating expenses, thereby further centralizing the Internet. So, if you don’t have a network for yourself, just rent a box with one of the big ones.
At the same time, we also see that regions that historically got the shorter end of the stick when it comes to IPv4 address space (essentially everyone, except for ARIN and RIPE) are now not only struggling with supplying addresses to their members, but also find themselves the targets of profit-driven address allocation. Where there is scarcity, there will be people trying to gain profit, and how profit aligns with infrastructural care is something we’ve already discussed.
So, if we want to re-decentralize the Internet, work against (predatory) centralization, and ensure that there is equitable access to the Internet, the only thing we can do is roll out more IPv6. And we are not talking about dual-stack here. Dual-stack is a nice transition technology, but happy eyeballs are just insanely good at hiding broken IPv6; ultimately we all know that nothing lasts longer than a temporary solution.
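Why broken IPv6 goes unnoticed is visible in how Happy Eyeballs (RFC 8305) orders candidate addresses: IPv6 is preferred and tried first, but IPv4 attempts follow within a short head start, so a dead AAAA path silently degrades to IPv4. A minimal stdlib-Python sketch of the address-interleaving step (the actual racing of connection attempts is omitted; the resolver results use documentation prefixes and are hypothetical):

```python
import socket

def interleave_by_family(addrinfos, first_family=socket.AF_INET6):
    """RFC 8305-style destination ordering: alternate address families,
    starting with the preferred one (IPv6 by default).
    `addrinfos` is a simplified list of (family, address) tuples, as one
    might derive from socket.getaddrinfo() results."""
    v6 = [a for a in addrinfos if a[0] == socket.AF_INET6]
    v4 = [a for a in addrinfos if a[0] == socket.AF_INET]
    primary, secondary = (v6, v4) if first_family == socket.AF_INET6 else (v4, v6)
    ordered = []
    for i in range(max(len(primary), len(secondary))):
        if i < len(primary):
            ordered.append(primary[i])
        if i < len(secondary):
            ordered.append(secondary[i])
    return ordered

# Hypothetical resolver results for a dual-stacked service.
results = [
    (socket.AF_INET, "192.0.2.10"),
    (socket.AF_INET6, "2001:db8::1"),
    (socket.AF_INET, "192.0.2.11"),
]
print(interleave_by_family(results))
```

The IPv6 address lands first in the ordered list, but an IPv4 fallback sits right behind it, which is exactly why a misconfigured IPv6 path can stay invisible to users for years.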
If we want to redistribute the Internet, without further disadvantaging traditionally disadvantaged RIR regions, rolling out more IPv6 (only) is the only path forward.
‘In a burning world, functionality is more important than security, but remains trumped by safety.’
What is this about? We claim that if we face a world burnt to its foundations — with an Internet fallen apart and hypergiants failed — the paradigms of what is important will shift dramatically. Hopefully we won’t, but if we do, we will find ourselves in a situation where the utility and functionality of systems will supersede their security even more than in the current world.
We already know the common situation of ‘things’ or even an Internet full of them going into production because it just has to work. From a certain perspective, there is not much wrong with that; humans tend to do what is easiest for them. So, unless security is the easiest way, other ways will be taken. Of course, we are not advocating for throwing insecure stuff on the Internet here. In fact, getting security right (that is in the shape of the easiest way) is part of that whole issue of care we have been talking about.
Yet, if we find ourselves in a world where the question is not ‘can this funny bird app transport my, erm, posts to the Internet?’, but ‘is this solar panel system able to function enough to keep the community-powered Internet on?’, then security will not be that important. Instead of that system having a secure password only known to one or a few, we might have a password (if it is needed at all) taped to the top of the screen (as it tends to be common practice already). The reason is simple: a system bricked by a lost password (for example, when the only person who knew it dies) has a significantly higher impact on the community than potential security considerations.
Security may end up being resolved by a social contract, along the lines of ‘you won’t break your power supply’. Therefore, we predict that in such a world, threat modelling will see a significant shift away from security threats from the larger Internet. And in that scenario, threat modelling will become a question of safety, akin to the question of ‘What (physical) harm can be done (by outsiders) if it is not secure?’ Ultimately, the physical safety (and survival) of local communities will have the highest importance.
Stay tuned for propositions 9 – 13.
Adapted from the original posts which appeared on Doing stupid things (with packets and OpenBSD).
Tobias Fiebig is a Postdoctoral Researcher at Max Planck Institute for Informatics.
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.