This is the third post in a four-part series on LEO satellites by Ulrich Speidel.
Other posts: Part one, part two, part four.
The first part of this series looked at the promise that Low-Earth Orbit (LEO) satellites present for Internet connectivity. Part two discussed what constellations and gateway placement mean for coverage, and explored a few aspects that the current LEO providers don’t really talk much about. Today’s instalment will address a question central to all Internet users — ‘what speed will I get?’
One important point to consider: I keep telling my students that the straight path to a D grade is to use the word ‘speed’ in conjunction with data communication — that it’s meaningless, and they ought to use words like ‘rate’, ‘capacity’, ‘latency’, ‘bandwidth’, ‘throughput’ and ‘goodput’ instead.
Today’s questions for discussion are:
- What data rates can your users really expect to see in practice? Not just in beta, but as the system goes commercial and user numbers grow.
- How much of your system capacity will be taken up by cross-traffic? What about the traffic that ‘my’ satellite carries for other satellites? Does this take a chunk out of what I get?
- How will you do your inter-satellite routing? This tells us a little about where a system will actually reach, and how much latency signals will accumulate on the way there.
What data rates can your users really expect to see in practice?
The numbers advertised at the time of writing range from a few dozen to a couple of hundred megabits per second, which sounds cool. And indeed, many of Starlink's beta testers report getting exactly that. Estimates of the number of beta testers hover somewhere around the mid-five-figures internationally, which at the time of the estimate included testers in the US, Canada, the UK, Germany, Poland, New Zealand and Australia.
A quick look at the coverage map today reveals around 90 Starlink satellites in range of these economies. That probably means, on average, fewer than 1,000 beta users per satellite at present. Each Starlink satellite is meant to provide around 20 Gbps of capacity. Take into account that most users won't be using their connection at full load most of the time, and the numbers seem realistic right now.
But note that there is only a factor of ~100 between the satellite capacity and the top per-user data rates seen to date. If we use 2020’s estimate of just under 350 GB/month per household as the data volume each Starlink user would want to consume, convert it to bits and divide it by the number of seconds per month, we end up with each user needing about 1 Mbps on average 24/7 across the entire month.
This would allow 20,000 users per satellite in view, presuming there are no peak hours, usage is evenly distributed around the clock, and we don't need to worry about uplink capacity. That would cap the number of users in the US and Canada, which see about 50 satellites right now, at distinctly this side of one million at the current point in time. Even with 12,000 active satellites in orbit, around eight times the current number, that wouldn't see Starlink replacing terrestrial ISPs in a hurry unless there were a significant step up in platform performance.
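For those who want to check the arithmetic, here's a minimal Python sketch of that back-of-envelope calculation, using the figures quoted above (roughly 350 GB per household per month, around 20 Gbps per satellite, and about 50 satellites in view of the US and Canada). These are rough estimates, not official numbers.

```python
# Back-of-envelope: how many 'average' users fit on one Starlink satellite?
# Figures from the text: ~350 GB/month per household, ~20 Gbps per satellite,
# ~50 satellites in view of the US and Canada. All are rough estimates.

GB_PER_MONTH = 350                   # estimated household data volume
SECONDS_PER_MONTH = 30 * 24 * 3600   # ~2.59 million seconds

avg_bps_per_user = GB_PER_MONTH * 1e9 * 8 / SECONDS_PER_MONTH
print(f"Average demand per user: {avg_bps_per_user / 1e6:.2f} Mbps")   # ~1.1 Mbps

SATELLITE_CAPACITY_BPS = 20e9        # ~20 Gbps per satellite
users_per_satellite = SATELLITE_CAPACITY_BPS / avg_bps_per_user
print(f"Users per satellite: {users_per_satellite:,.0f}")              # ~18,500

SATELLITES_OVER_US_CANADA = 50
print(f"User cap over US/Canada: "
      f"{SATELLITES_OVER_US_CANADA * users_per_satellite:,.0f}")       # well under a million
```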
OneWeb quotes about 50 Mbps for each customer. In part one of the series, I mentioned that a LEO at 500 km orbital altitude projects around 5,000 times more signal power into a receiver on the ground than a GEO satellite, all else being equal, a result of the Friis formula. OneWeb's higher orbit of around 1,200 km turns this 5,000:1 advantage into one of less than 1,000:1. Given that OneWeb's satellites are said to be lighter than Starlink's and that solar panels, batteries and antennas make up much of a satellite's mass, it seems a little unlikely that the OneWeb birds will have more transmitting power at their disposal. The Shannon-Hartley capacity theorem then means that something else must give: bigger antennas on the ground, or a lower bitrate per satellite. In theory, the latter would mean fewer users per satellite, and perhaps the 50 Mbps figure is meant to reflect that.
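To see where those ratios come from, and what Shannon-Hartley does with them, here's a small Python sketch. It assumes the satellite is directly overhead and that transmit power, antenna gains and frequency are identical in each case; the SNR value in the second half is purely hypothetical.

```python
import math

# Received power under the Friis transmission equation falls off with the
# square of the distance, so with identical transmit power, antenna gains and
# frequency, the advantage of a lower orbit is simply (d_GEO / d_LEO)^2.
# The distances below assume the satellite is directly overhead.

GEO_ALTITUDE_KM = 35_786
STARLINK_ALTITUDE_KM = 500    # the figure used in part one of the series
ONEWEB_ALTITUDE_KM = 1_200    # OneWeb's approximate orbital altitude

def power_advantage_over_geo(altitude_km: float) -> float:
    """Received-power ratio of a LEO at altitude_km versus a GEO satellite."""
    return (GEO_ALTITUDE_KM / altitude_km) ** 2

print(f"Starlink vs GEO: {power_advantage_over_geo(STARLINK_ALTITUDE_KM):,.0f}:1")  # ~5,100:1
print(f"OneWeb vs GEO:   {power_advantage_over_geo(ONEWEB_ALTITUDE_KM):,.0f}:1")    # ~890:1

# Shannon-Hartley (C = B * log2(1 + SNR)) then turns less received power into
# fewer bits per second per hertz. The SNR of 10 below is purely hypothetical.
hypothetical_snr_starlink = 10.0
snr_oneweb = hypothetical_snr_starlink * (
    power_advantage_over_geo(ONEWEB_ALTITUDE_KM) / power_advantage_over_geo(STARLINK_ALTITUDE_KM)
)
for name, snr in (("Starlink", hypothetical_snr_starlink), ("OneWeb", snr_oneweb)):
    print(f"{name}: {math.log2(1 + snr):.2f} bits/s per Hz")
```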
How much of your system capacity will be taken up by cross-traffic once inter-satellite routing is introduced?
For both Starlink and OneWeb, this capability is still a few months away at the very least. Starlink plan to use lasers to interconnect their satellites. That’s been shown to be technically feasible and would, in principle, allow users to get their Internet feed from gateways many thousands of kilometres away.
But there’s a catch. The Friis formula and the capacity theorem above also apply to lasers; the further away your communication partner is and the more data you want to send, the more power you need. If you’re a satellite, then the more traffic you need to carry between neighbouring satellites, the less power will be left to communicate with your own users on the ground.
That might not always matter, though: for example, if you are currently over the ocean and have few direct users of your own. But if you're over a populated area without a gateway in sight, then it could put a dent in the service you might offer your users.
Also, if you’re at the gateway end of a forwarding chain of satellites, how many other satellites’ users would you need to support on the uplink? Might you even be using or contemplating using end-user terminals as ground relays?
How will you do your inter-satellite routing?
Inter-satellite routing in LEO constellations is a challenge in its own right. As the satellites orbit, their positions with respect to gateways and end users change all the time, as do their positions with respect to each other. So how does one route through something like this? There are several options:
- Compute a real-time model of all satellite positions, figure out which satellite the receiver listens to right now, and use a routing algorithm at the sender to determine a chain of satellites to use as hops in a virtual circuit. Recompute the route frequently. That’s a lot of computation for ground stations, and since you must tell the satellites how to route each packet, you also need, potentially, very big X.25 style headers for possibly dozens of hops. But you could always get the shortest route this way.
- Get the satellites to maintain their own routing tables and get the sender to figure out the entry and exit satellites for the inter-satellite route. I haven’t given this much thought — it sounds doable if a bit messy.
- Forward along orbital planes where each satellite has a fixed successor and predecessor in orbit, and only cross to neighbouring planes when you absolutely must (there's a toy sketch of this idea after this list). This might not result in the shortest (or even a short) route, but could work well if you had a gateway for each orbital plane at any time (being mindful that the earth rotates below the orbits, too).
- Do something completely revolutionary that I can't think of right now, maybe something involving network coding.
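For what it's worth, here's a toy Python sketch of the third option, under the entirely assumed simplification of a regular grid constellation where each satellite is addressed by an (orbital plane, in-plane slot) pair. It only illustrates the shape of the idea; real constellations, their link availability and their handovers are considerably messier.

```python
# A toy sketch of the 'forward along orbital planes' option, assuming a regular
# grid constellation: NUM_PLANES orbital planes of SATS_PER_PLANE satellites,
# each satellite linked to its successor and predecessor in the same plane and
# to its neighbours in the adjacent planes. Both the geometry and the greedy
# rule are illustrative assumptions, not how Starlink or OneWeb actually route.

NUM_PLANES = 24
SATS_PER_PLANE = 66

def ring_step(current: int, target: int, size: int) -> int:
    """Step one position around a ring of `size` slots, going the short way."""
    forward = (target - current) % size
    backward = (current - target) % size
    return 1 if forward <= backward else -1

def route(src, dst):
    """Hop-by-hop path that stays within the source's orbital plane for as
    long as possible and only crosses to neighbouring planes at the end."""
    plane, slot = src
    path = [src]
    while (plane, slot) != dst:
        if slot != dst[1]:      # keep forwarding along the current plane
            slot = (slot + ring_step(slot, dst[1], SATS_PER_PLANE)) % SATS_PER_PLANE
        else:                   # slot matches: now cross towards the target plane
            plane = (plane + ring_step(plane, dst[0], NUM_PLANES)) % NUM_PLANES
        path.append((plane, slot))
    return path

hops = route((0, 3), (5, 60))   # (plane, slot) of entry and exit satellites
print(f"{len(hops) - 1} hops: {hops[:3]} ... {hops[-3:]}")
```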
Just curious. What will you do? Will we get to find out?
This concludes the third part of the series. The next (and last) part will look at the direct-to-site business model chosen by Starlink and OneWeb.
Dr Ulrich Speidel is a senior lecturer in Computer Science at the University of Auckland with research interests in fundamental and applied problems in data communications, information theory, signal processing and information measurement. The APNIC Foundation supported his research through its ISIF Asia grants.
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.