Unless you have been living under a rock for the last few years, you will be aware that we’re on the brink of another uptick in mobile data bandwidth: 5G has arrived.
But what were the precursor technologies that led to 5G?
The next generation
The G in 2G, 3G or 4G stands for ‘generation’, which is why you rarely hear the term ‘1G’. Much like movies, you don’t start numbering them until the sequels arrive.
Later, the terms 0G and 1G were coined retroactively to refer to earlier forms of the technology that were largely analogue. 0G was all analogue; this was the generation of radiotelephones (or radiophones) used until the ’70s. 0G calls had to be mediated through a radio operator, who tied the radio network into the phone system.
If you are old enough to have used a pre-G phone, you may recall the rather odd signal that kept interfering with radio or Hi-Fi systems. When that static came through your stereo speakers, you could tell your phone was about to ring. This was a crude, cheap method the network used to send a simply encoded identity sequence to the phone, telling it to wake up the more complex, expensive and power-hungry parts of the device to accept the call.
In 1G, the call set up and signalling was digital, but the rest was analogue.
1G phones used a technology called the Advanced Mobile Phone System (AMPS), which was deployed using a ‘channels’ model. A segment of the radio spectrum was divided up into signalling channels, and each phone had exclusive use of a given set of channels. The way the channels were shared led to the concept of a ‘cell’: a particular area using a particular set of channels, distinct from the cells surrounding it. The terms ‘cellphone’ and ‘cellular mobile’ come directly from this (almost) pre-digital world, and elements of the approach persist in the model used today.
The ‘cell’ is the idea that your mobile device shares a set of frequency channels with the other devices in its cell, while neighbouring cells use different sets. Carriers cooperate (and are regulated) to share the cells and frequency channels out, at least semi-amicably. The bidding wars of the early 2000s for mobile radio spectrum were about obtaining the rights to use these channels, and to establish cells for customers as widely as possible.
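To make the channel sharing concrete, here is a back-of-the-envelope sketch using the commonly cited AMPS figures; the exact channel counts varied by economy and era, so treat the numbers as illustrative rather than definitive.

```python
# Illustrative sketch: how per-cell channel counts fall out of a reuse pattern.
# Figures are the commonly cited US AMPS parameters, not a universal spec.
TOTAL_CHANNELS = 832   # AMPS channels (30 kHz each) in the expanded US allocation
CARRIERS = 2           # the spectrum was split between two competing carriers
REUSE_FACTOR = 7       # classic 7-cell frequency-reuse cluster

per_carrier = TOTAL_CHANNELS // CARRIERS   # 416 channels per carrier
per_cell = per_carrier // REUSE_FACTOR     # channels usable in any one cell

print(per_cell)  # -> 59
```

The reuse factor is the key trade-off: a bigger cluster means less interference between cells using the same channels, but fewer simultaneous calls in each cell.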
The 1st generation wasn’t really called the ‘1st generation’ until the 2nd generation branded itself as 2G. 2G was deployed because of the weaknesses found in the 1G system. 1G was hugely insecure and open to phone tapping. It was also noisy and could drop out easily with signal loss.
Our story today really begins with 2G
2G began with the Groupe Spécial Mobile (GSM), the French name for what we would later know as the Global System for Mobile Communications (also GSM).
GSM became the technical term for a type of encoding, while GSMA referred to the association administering it.
This habit of renaming organizations when they go international is common in the communications industry and its regulation; the precursor to the ITU-T, for example, was the CCITT (another French acronym).
2G brought in digital signalling, basic encryption over the airwaves, and the Short Message Service (SMS). Before this time, messages ran over separate ‘pager’ networks in their own radio spectrum. SMS messages are limited to 140 octets of payload (160 seven-bit characters) because that’s how much room was left free after all the other signalling needs of the digital packet technology used in 2G. 2G brought in smaller devices, and much more efficient use of the spectrum (the size of the radio channels). The move to digital voice meant a leap in phone audio quality, though it was still highly compressed.
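The way GSM fits 160 characters into 140 octets is worth seeing: characters from the 7-bit GSM alphabet are packed end to end, with no wasted bits. A minimal sketch of that packing (the function name and structure are mine, not taken from any particular SMS stack):

```python
def pack_7bit(septets):
    """Pack a list of 7-bit character values into octets, GSM-style:
    each character contributes exactly 7 bits, packed least-significant
    bit first, so 160 characters fill exactly 140 octets."""
    out = []
    carry = 0       # bit accumulator
    carry_bits = 0  # how many bits are currently in the accumulator
    for s in septets:
        carry |= s << carry_bits
        carry_bits += 7
        while carry_bits >= 8:
            out.append(carry & 0xFF)  # emit a full octet
            carry >>= 8
            carry_bits -= 8
    if carry_bits:
        out.append(carry & 0xFF)      # flush any remaining bits
    return bytes(out)

# 'hello' (whose GSM 7-bit codes match ASCII) packs 5 characters into 5 octets:
packed = pack_7bit([ord(c) for c in "hello"])
print(packed.hex())  # -> e8329bfd06
```

Run over a full 160-character message, this yields exactly 140 octets, which is where the famous SMS length limit comes from.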
2G worked so well it spawned what we can call ‘intermediary’ generations: 2.5G and 2.75G. 2.5G brought in the General Packet Radio Service (GPRS), which was the start of roaming digital data services for many people, but at speeds that would now be considered extremely slow (56 kbps to 110 kbps) with ‘best effort’ delivery (meaning no guarantees around delivery times, delay or jitter). 2.75G took GPRS to Enhanced Data rates for GSM Evolution (EDGE), which improved the digital encoding inside the channels (8PSK modulation carrying three bits per symbol instead of one) and substantially raised the bitrate. You still see EDGE in marginal mobile data deployments; on Android phones, it is the ‘E’ symbol on the signal strength indicator.
But 2G had several competing models of deployment. There was the GSM approach and there was Code Division Multiple Access (CDMA). They weren’t compatible. In some economies, the carrier you chose determined which kind of phone you could use. Some phones were simply unavailable to you if your carrier used the wrong mobile data encoding.
3G was the birth of megabit data services, and the beginning of worldwide alignment
The prior 1G and 2G services were very much divided between competing technologies, competing chipsets, and competing intellectual property licences. But 3G represented a point in time where for most people, the world became both faster (better encoding, better bandwidth and more channels) and more consistent worldwide. This was in part because of the market realities of what people demanded from a mobile device. Before 3G, if you travelled internationally, buying a device was a big gamble because you didn’t know which economies it would work in. A technology like CDMA, which was used widely in rural Australia, was unusable in most places. ‘Roaming’ wasn’t normal. With 3G, the simplification of the model meant you had a higher likelihood of being able to use any device you wanted (within limits; there are, of course, plans that only allow domestic use).
More to the point, the 3G series of standards delivered data bandwidth that matched or even exceeded what you could get from Asymmetric Digital Subscriber Line (ADSL) landlines or cable Internet services. For 2G, the speed cap was typically below a megabit, while a home DSL or cable service could deliver up to 20 Mbps (more typically in the range of 1 to 10 Mbps, depending on distance from the exchange and congestion). 3G began to offer data services capable of better than 10 Mbps, depending on whether the radio spectrum was shared with other services. Voice quality improved because better voice codecs became possible. Improvements in battery design meant that devices could sustain high data use for adequate periods, though a 2G phone battery probably still out-performed a 3G device battery in terms of lifespan. Additional improvements in security made 3G less prone to traffic snooping (2G security was better than nothing, but too weak against growing computing power and developments in signal interception).
3G also represented a huge reinvestment in mobile cellular technology by the phone companies. Based on what was known about technology development at the time, 1G and 2G had been expected to last quite a while; 3G, however, was clearly going to have a shorter lifespan before being overtaken by the next ‘G’. 3G also had a huge patent collection behind it, which gave carriers and producers a strong motivation to standardize on a patent-pool model. And although 3G phones were capable of using IPv6, it was not routinely available.
3G was the point where people began to think about their mobile device as an IP device more than a voice device. 3G is when people began to stop using a phone just to speak to people, and started using it as a way to access the Internet. In that sense, it also robbed the carriers of income, because it represented the beginning of the end for SMS and timed-voice services, which had been spectacularly profitable for phone companies from the late 1980s onwards. From 3G onward, the source of revenue was more likely to be the data service and contract than airtime for calls or SMS volume. This also meant a huge leap in expectations of bandwidth.
People weren’t mobile to be called; people were mobile to send and receive IP packets — as many as possible, all the time. This was when we began to see always-on push services running in the phone.
4G is the birth of voice over IP, IPv6 and real bandwidth
4G brought three things to the table.
Firstly, there was Voice over IP (VoIP), which meant the mobile device was now inextricably an IP device. 3G devices were already mostly being used for IP access, but voice was only optionally framed in IP packets; in 3G, voice could still be carried over the phone system rather than the Internet. In 4G, even where carriers chose not to send voice over IP, the phones supported it. This put real pressure on mobile operators and virtual mobile operators to consider using VoIP.
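To see what ‘voice framed in IP packets’ means in practice, here is a minimal sketch that builds the 12-byte RTP header (RFC 3550) that a VoIP-style stream wraps around each compressed voice frame before handing it to UDP/IP. The payload type and frame bytes below are illustrative placeholders, not a particular carrier’s configuration.

```python
import struct

def rtp_header(seq, timestamp, ssrc, payload_type=96, marker=0):
    """Build a 12-byte RTP header (RFC 3550): version 2, no padding,
    no extension, no CSRCs -- the minimal case for a voice stream."""
    byte0 = 2 << 6                        # version 2 in the top two bits
    byte1 = (marker << 7) | payload_type  # dynamic payload type for the codec
    return struct.pack('!BBHII', byte0, byte1, seq, timestamp, ssrc)

# One 20 ms voice frame (placeholder bytes) framed for IP transport.
# The timestamp advances by the number of samples per frame.
packet = rtp_header(seq=1, timestamp=160, ssrc=0x1234) + b'\x00' * 32
```

Each such packet then rides inside an ordinary UDP datagram, which is the sense in which 4G voice becomes just another IP flow rather than a circuit in the phone system.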
Secondly, routine 100 megabit data services were now available inside the improved channels and bandwidths being used.
Finally, 4G came with specifications for how to operate IPv6 and IPv4 properly on the back-end, which meant that mobile operators could deploy cellular base stations and systems using IPv6 with a high expectation of it working for their customers’ phones. Reliance Jio was deployed in India as a 4G clean-build service and only accepted new 4G handsets, which meant it was able to deploy as an IPv6 network. It is now the world’s largest IPv6 network by customer count.
4G was good enough that people stopped expecting high-speed Internet to be delivered solely over fixed line services (optical, hybrid fibre-coax, coax or DSL over copper) and significant numbers of subscribers now depend on 4G for routine Internet service delivery.
4G is what we have, mostly. But when 4G hits low-signal problems, or high customer counts, its fallback is as likely to be 2G as 3G. There are economies where 3G services are being turned off and 2G is being left alone, as well as economies where 2G radio spectrum has been reclaimed and repurposed for 4G and 5G use.
So what happens when 5G comes along? In the next post in this series, I’ll explore the onset of 5G and the ways it will change the game yet again.
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.