Earlier this month, I had the pleasure of attending NANOG 71 in San Jose, USA. Not having attended a NANOG for a few years, I found it had grown a lot, and my luggage was several kilos heavier with vendor t-shirts after the Beer-and-Gear sessions.
The stand-out talk for me at this conference was a keynote presentation by Dave Schaeffer from Cogent Communications, who spoke on ‘Why the Internet is the only network that matters’.
As a topic, we all kind of know this. Deep in our hearts, no matter how much we like to remember FidoNet, UUCP, or DECnet, there really is only one story in town these days. For a while there, even into the 2000s, it was possible to believe in the significance of a world-scale non-IP network in cellular. However, with the rise of VoLTE, it’s clear that the only way forward is packets; to be specific, IP packets. We’re beyond converged now; it’s an IP world.
What Dave exposed to me, as somebody who never had to play in this space of ubiquitous fibre (my days in network operations were about commissioning 2 Mbps links), was the incredible effect of the super-abundance of fibre in the ground. This isn’t entirely true worldwide (he was, after all, addressing North American network operators), but it is amazing that there really is no shortage of actual trunk-line capacity in the USA anymore, nor is that capacity confined to the large cities.
In his example, Dave believes that almost any metropolitan location in the continental USA is served by multiple long-haul fibre runs, and that in all likelihood only one or two pairs of each bundle (a fibre, as pulled, is actually a bundle of 10, 20, or 40 fibre pairs nowadays) are lit. How can this be? How can we have gone through so many repeated doublings of bandwidth without running out of headroom inside the glass?
The answer is actually in the glass itself. Unlike pre-fibre communications, most of the technical issues with fibre transmission rest with the head ends. If you start with a fibre you can send one signal down, you repurpose the transmission lasers to send two different wavelengths. Then four. Then eight. Now 64, or more.
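As a back-of-the-envelope sketch of that doubling (the channel counts and the 100G-per-wavelength rate are illustrative assumptions of mine, not figures from Dave’s talk):

```python
# Illustrative DWDM capacity arithmetic (hypothetical figures):
# the same strand of glass carries more traffic each time the
# head ends can resolve more distinct wavelengths.
PER_WAVELENGTH_GBPS = 100  # assume one 100G transponder per channel

for channels in (1, 2, 4, 8, 64, 96):
    capacity_tbps = channels * PER_WAVELENGTH_GBPS / 1000
    print(f"{channels:3d} wavelengths x {PER_WAVELENGTH_GBPS}G "
          f"= {capacity_tbps:5.1f} Tb/s per fibre pair")
```

The fibre in the ground never changes; only the electronics at each end do.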
Each year, the ability to use finer and finer grained wavelengths of light means that you can encode more and more data down the same fibre, at no increased cost in the fibre itself. You do have to spend on the transmission head ends, but that cost pays for itself far more quickly.
Dave projected that in his market space, the year-on-year cost of fibre transmission has dropped 80%, which is a significant reduction in operating costs (to be very clear, this doesn’t mean you can expect an 80% drop in the price of your domestic service; price and cost don’t directly relate here).
Interestingly, Dave noted that the head-end price reductions are more like 40%. When he started building fibre systems, he remembers being quoted USD 100,000 or more per transmission system. These now come as sub-miniature interface cards, and whilst they aren’t free, they are nowhere near that price point anymore. This, in turn, is because the manufacturing cost of high-speed laser switching has fallen as fibre communications have become more ubiquitous.
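To make the compounding concrete, here is a minimal sketch (assuming, as a simplification on my part, that those year-on-year percentages hold steady, which real markets won’t):

```python
# Compound cost decline under a constant year-on-year drop.
def cost_after(initial, annual_drop, years):
    """Cost remaining after `years` of a constant fractional `annual_drop`."""
    return initial * (1 - annual_drop) ** years

# Head ends: the remembered USD 100,000 quote, falling ~40% a year.
for year in (0, 2, 5, 10):
    print(f"year {year:2d}: USD {cost_after(100_000, 0.40, year):>10,.2f}")

# Transmission at -80% a year: only 0.2^5, about 0.03%, of the
# original cost remains after five years.
print(f"fraction left after 5 years at -80%/yr: {cost_after(1.0, 0.80, 5):.5f}")
```

A 40% annual decline turns USD 100,000 into roughly USD 600 over a decade, which is how a transmission system becomes an interface card.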
Routing costs are always higher, and one of Dave’s goals is to reduce the number of times you have to do optical-electrical-optical (OEO) conversions, because each conversion adds both delay and cost. Given the huge amount of excess (surplus? spare? unused?) fibre in the ground, this isn’t as hard as it sounds: we don’t need to build hub-and-spoke networks if we can do rich interconnection. It might make for more resilience too.
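A toy way to see the OEO point (entirely my own illustration; the cities and paths are made up, not Cogent’s topology):

```python
# Each intermediate electrical hop is one optical-electrical-optical
# conversion, so lighting a direct wave on spare fibre skips the
# regeneration a hub-and-spoke path forces on the traffic.
hub_and_spoke = ["San Jose", "Chicago hub", "Ashburn"]  # hairpins via a hub
direct_wave   = ["San Jose", "Ashburn"]                 # express wavelength

def oeo_conversions(path):
    """Count intermediate nodes; each one costs an OEO conversion."""
    return max(len(path) - 2, 0)

print("hub-and-spoke OEO conversions:", oeo_conversions(hub_and_spoke))  # 1
print("direct wave OEO conversions:  ", oeo_conversions(direct_wave))    # 0
```

With abundant dark fibre, the direct wave is often cheaper than the electronics needed to regenerate at the hub.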
Cogent is not the largest ISP in the field, and no doubt Dave’s talk was, in part, market positioning. So when he calls for a more nuanced sense of fibre and data as a utility service (a view I am philosophically attracted to), there is presumably some upside in it for him.
But in saying this, I think Dave is recognizing what we all know in the IP packet-forwarding business: the real long-term model is big, fat, dumb pipes. We don’t want or need complex smart networks, packet queueing, and prioritization if we can get fatter pipes with little or no congestion. If people understood this and segmented the market to deliver it, I think we’d all get a better Internet experience. BGP isn’t the only way to maintain routing, but given BGP, and given big fat dumb glass pipes, we can do a lot.
Outside of the fibre business, I think a lot of technologists look at the Internet and wonder about two things: ‘How did it suck up all the money?’ and ‘How do people go on making money when, year-on-year, the cost of service keeps dropping?’
The Internet sucked up the money because most of the alternatives (cable TV, cellular as GSM) looked like bad business models: lock-ins to specific suppliers of content, rental models, and things people didn’t actually want. Netflix was never going to be possible while the content owners were in a tight relationship with the cable company. Once it became an IP carriage service, Netflix was waiting to burst out. The Internet sucked up all the money because its model works better than the other ones.
On the declining-cost question, what I see is that the underlying cost of operation is dropping far faster than the price, for reasons I don’t entirely understand. Maybe the amount of capital investment needed to build out is limiting competition. Maybe we’re glued to a price/value equation that is preventing people from driving prices down (I know that in my case, I have tried low-cost Internet services on my home fibre, and I prefer to pay a premium price for better performance, but that’s within a very distorted pricing model).
Cogent plays in this space and sees itself growing as a bulk-data provider. I like that, because it fits the kind of big-fat-dumb-pipe model of the Internet I believe in.
Bring it on!
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.