In a bravura performance, APNIC’s Chief Scientist, Geoff Huston, presented what he called a ‘tech content free’ presentation, which was in fact a very good bird’s-eye view of where technology is driving the mass market, consumer devices, and the Internet.
As one of the dinosaurs in the room still using a laptop, I think Geoff is right when he says that mobiles and small devices – already some 40% of all devices seen, and not laptops – are the future. The quality of change control, and the investment of time and effort, has gone directly into Android and the other mobile operating systems (OSes), not the classic PC marketplace.
Geoff takes a strong position that larger-screen devices are marginal in scale and growth; the tablet story is a side-story to phones and phablets.
However, diving into the network effects, there are some remarkable consequences of the move to make these technologies smarter in the face of increasing demands on the 3G and 4G models of spectrum management.
WiFi becoming even more important infrastructure
It turns out that the best path out of congestion in these managed network spaces is to re-occupy the public commons – the WiFi segment. Indeed, in the 5 GHz spectrum, and in products like VoLTE and Google Fi, we can see this move happening before our eyes. Applications are no longer being targeted at 3G/GSM mobile/cellular; they are being designed to be agile in how they connect, and to hop off cellular onto WiFi when the bandwidth makes more sense.
This means we now face ubiquitous use of NAT, CGN, and variant addresses, because there isn’t a single carrier behind a given user. And, overwhelmingly, carriers in the cellular space have already deployed carrier-scale intermediate devices.
Interestingly, the decision to make a ‘hand off’ is now made in the OS and the application. The carrier doesn’t get to decide how to reduce cost and maximize revenue in these models, at least not all the time – VoLTE is still in their control, but iOS and Android are in control of their overlay networks and their dependency on underlying transports.
In some ways we didn’t plan for this, or understand it. It means we have to accept that a single device is likely not only to have two or more Internet address bindings, but to be using them at the same time.
It also means we have to expect Internet flows, traffic, and INR (Internet number resource) use to reflect this. Overall, this is not the consumption of addresses we planned for, but it is a reality we’re going to have to get used to.
We’re already seeing bifurcations in the back-end model: a diversity of approaches that try to rationalize the cost as borne either by applications and devices (464XLAT, and mandatory IPv6 support in all apps) or by the system (ubiquitous dual-stack deployment, with all the costs of making this work inside the carrier and ISP).
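To make the 464XLAT side of that cost split concrete: the client-side translator’s job is largely address synthesis – embedding the IPv4 destination inside a NAT64 prefix, per RFC 6052. A minimal sketch in Python, using the well-known 64:ff9b::/96 prefix (a real deployment may instead use a network-specific prefix learned from the carrier):

```python
import ipaddress

# RFC 6052 well-known NAT64 prefix; carriers may substitute their own /96.
NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize(ipv4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the low 32 bits of the NAT64 prefix,
    as a CLAT/DNS64 translator does when an IPv6-only client needs
    to reach an IPv4-only destination."""
    v4 = ipaddress.IPv4Address(ipv4)
    return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | int(v4))

print(synthesize("192.0.2.33"))  # → 64:ff9b::c000:221
```

The NAT64 box at the network edge strips the prefix back off, so the per-packet translation cost sits with the device and the carrier’s translator, rather than requiring a full dual-stack deployment end to end.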
What might have escaped people’s attention is that even when device manufacturers and OS developers adopt IPv6, they don’t make strong commitments to guarantee it works, or that it has higher preference in a dual-stack binding. Apple has sent a strong signal to the market that it wants IPv6 capability, but the implemented preference mechanism is more nuanced. Android doesn’t strongly bias in favour of IPv6 when dual-stack is visible; it lets the client get the best experience it can, irrespective of protocol.
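That ‘nuanced preference’ is essentially the Happy Eyeballs approach (RFC 6555/8305): sort the candidate addresses with IPv6 first, give the preferred family a short head start, then race the rest and take whichever connects first. A rough sketch of the idea in Python – the function name, delay values, and the simulated connect function are all illustrative, not any vendor’s actual implementation:

```python
import threading
import time

def happy_eyeballs(candidates, connect, head_start=0.1):
    """Race connection attempts over an ordered candidate list
    (IPv6 first), staggering each start by `head_start` seconds.
    Returns the first successful connection, or None if all fail."""
    done = threading.Event()
    result = {}
    lock = threading.Lock()
    remaining = [len(candidates)]

    def attempt(cand, delay):
        if done.wait(delay):                 # another attempt already won
            return
        try:
            conn = connect(cand)
        except OSError:
            with lock:
                remaining[0] -= 1
                if remaining[0] == 0:        # everyone failed: unblock caller
                    done.set()
            return
        with lock:
            if 'conn' not in result:
                result['conn'] = conn        # a real stack would close losers
        done.set()

    threads = [threading.Thread(target=attempt, args=(c, i * head_start))
               for i, c in enumerate(candidates)]
    for t in threads:
        t.start()
    done.wait()
    for t in threads:
        t.join()
    return result.get('conn')

# Simulated connect: the IPv6 path stalls and times out, IPv4 answers fast.
def fake_connect(cand):
    family, latency = cand
    time.sleep(latency)
    if latency > 0.5:
        raise OSError("timed out")
    return f"connected via {family}"

print(happy_eyeballs([("IPv6", 0.8), ("IPv4", 0.02)], fake_connect))
# → connected via IPv4
```

The real algorithms also interleave DNS resolution and cache per-destination outcomes; the point here is simply that the preference decision lives in the OS and the application, not with the carrier.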
Geoff touched on the public policy aspects of the problem: traditionally, resource management in cellular has followed a different model, stemming from the national-monopoly-carrier world view that persisted beyond the end of state-monopoly telephony. Most cellular provisioning is done at national scale. But with the intervention of Google Fi and VoLTE models, carriers are now freely building on WiFi, and the landline links that lie behind it, as much as building out public-utility-scale assets for true cellular.
Again, we might not be prepared for the consequences of this. Public policy has given us an income stream from spectrum auctions for exclusive use, but it has also given us a public utility in shared spaces with no revenue. What will the public policy response be if the demand is for more contended space, and less dedicated spectrum?
The customer’s always right?
The logical conclusion of this is that the mass market has already moved to ‘applications’ as its goal, not the Internet per se.
We network specialists are obsessed with the Internet and its governance, but the consumer has moved beyond us to a view that cares much less about the Internet underneath: I’m happy as long as the applications just work.
This means questions of end-to-end connectivity and IPv6 don’t figure in their minds as much as “can I watch this video without incurring extra cost?”, or “is my Facebook working OK?”.
When you consider that the ubiquitous, always-online Internet-of-Things view has more in common with mobile devices’ application-centric view, a future dominated by many small devices is one where the behaviour of the system as a whole is less like what we experience now, and more unknown and uncontrolled than we currently understand.
Good observations. I wonder how the cost issues will be managed by Apple/Android – surely the user is going to want to be able to trade price for quality, or there’ll be some horror stories of gouging. And it’s not clear that the phone can find all of the pricing info.
On IoT: it’s more like a P2P network than the command-and-control model of Facebook – at least as it’s conceived, with devices interacting unobserved by humans. NAT/CGNAT makes that interaction impossible.
There’s also an issue, raised some time ago – I first saw it in a textbook on computer security published in 2007 – that the software folk view IPv6 addresses as stable, whereas the ISP model is that they can be moved around. There must be trouble ahead with such conflicting models. There’s a whole world of pain to come for the software world in the naming and addressing of Things in the IoT, especially as many “Things” will be software only, consuming and spitting out events/commands.
In most cases, a valid, globally reachable WiFi is the preferred, cheaper path, so implementations currently favour WiFi over cellular data – and yes, if your WiFi is slow, noisy, or expensive, this is the wrong design choice. I imagine Google Fi has taken that decision logic into its model of when to cut over to WiFi and when to stick with cellular, but it’s a good question: cui bono, and who decides?
On Apple and MPTCP: I think Multipath TCP equilibrates on TCP performance, not on a cost/benefit model.
I agree that IoT is heading towards interactions among devices, but I am less sure NAT/CGNAT is a problem right now. I am told they use pull methods to find the external rendezvous point, to mimic push behaviour.
Your last point certainly goes to the disconnect between IPv6 theory and ISP practice. I will say that my IPv6 ISPs at home (both times: Internode, and SkyMesh) assigned a static /56 per customer premises, but I am using privacy addresses by preference as my /128 endpoint address. I think ISPs actually do prefer static assignment now, for this model.
On WiFi/cellular choice: I’m not sure which parts of the stack understand the price trade-offs.
I found that we wasted quite a lot of effort working around NAT. The issue arises when you want to send a command or an event to a Thing that’s behind NAT. STUN/ICE works most of the time, but it can take ~30 seconds; I estimated the development burden at ~30%. From a development point of view, you really don’t want to have to handle lots of different routeing problems.
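For readers who haven’t poked at it: the STUN exchange that discovers a device’s public mapping is tiny on the wire (RFC 5389) – in practice, much of the delay tends to come from ICE candidate gathering and timeouts rather than the messages themselves. A sketch of the Binding Request and response parsing in Python; the fake_response helper stands in for a real server purely so the example is self-contained:

```python
import secrets
import struct

MAGIC = 0x2112A442                 # STUN magic cookie (RFC 5389)

def binding_request():
    """A Binding Request: type 0x0001, zero-length body, magic cookie,
    and a random 96-bit transaction ID. 20 bytes total."""
    txid = secrets.token_bytes(12)
    return struct.pack("!HHI", 0x0001, 0, MAGIC) + txid, txid

def parse_xor_mapped(resp):
    """Pull (ip, port) out of the XOR-MAPPED-ADDRESS attribute of a
    Binding Success Response (type 0x0101)."""
    mtype, mlen, cookie = struct.unpack("!HHI", resp[:8])
    assert mtype == 0x0101 and cookie == MAGIC
    pos = 20
    while pos < 20 + mlen:
        atype, alen = struct.unpack("!HH", resp[pos:pos + 4])
        if atype == 0x0020:                       # XOR-MAPPED-ADDRESS
            # skip reserved byte + family (0x01 = IPv4), then un-XOR
            xport, xaddr = struct.unpack("!2xHI", resp[pos + 4:pos + 12])
            raw = xaddr ^ MAGIC
            ip = ".".join(str((raw >> s) & 0xFF) for s in (24, 16, 8, 0))
            return ip, xport ^ (MAGIC >> 16)
        pos += 4 + alen + (-alen % 4)             # attributes pad to 32 bits
    return None

def fake_response(txid, ip="203.0.113.7", port=54321):
    """Stand-in for a STUN server's reply (illustrative only)."""
    raw = int.from_bytes(bytes(int(o) for o in ip.split(".")), "big")
    attr = struct.pack("!HHxBHI", 0x0020, 8, 0x01,
                       port ^ (MAGIC >> 16), raw ^ MAGIC)
    return struct.pack("!HHI", 0x0101, len(attr), MAGIC) + txid + attr

req, txid = binding_request()
print(parse_xor_mapped(fake_response(txid)))      # → ('203.0.113.7', 54321)
```

In a real exchange the request goes out over UDP to a STUN server; the XOR encoding exists because some NATs naively rewrite addresses they spot in packet payloads – exactly the class of middlebox behaviour being complained about above.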
Sky is saying that IPv6 prefixes are not static.