What a time to be a criminal. Imagine the luxury! No breaking-and-entering tools required; you can hire them from a third party. Need some ransomware? Someone on the ‘dark net’ will make it for you. Want secure communications? There’s a special phone for that.
But it turns out, all this outsourcing isn’t good for crime. Unfortunately for criminals, getting ‘somebody else’ to do the hard work can mean the ‘good guys’ end up doing that work for you, and gathering evidence while they’re at it. After all, evidence is helpful, right?
Two recent, high-profile incidents in the news give excellent examples of what can happen when someone creates or distributes something for the explicit purpose of tracking, or listening in. In these cases, it was the authorities catching criminals, but bear with me: today’s post will explore what happened in these events, take a look at the difference between breaking encryption and subverting it, and discuss why subversion is a relevant consideration for all of us — not just the ‘bad guys’.
One of these incidents involved the biggest waves of mass-arrests for organized crime in history. The other involved ransomware blackmail and following cryptocurrency, and it demonstrated that even Bitcoin can be tracked. In both cases, authorities got the drop on criminals who thought their activities were hidden.
In the case of the ‘secure’ AN0M communications platform, it turns out the special handset was special in more ways than its users thought: It was especially easy for the police to snoop on. Police even used criminals to distribute the handsets and recruit users, and collected a monthly fee for operating the service!
In the case of Bitcoin and the outsourced ransomware, it turns out the ‘anonymous’ bitcoin address wasn’t anonymous, and the police recovered the ransom.
These were, arguably, cases of encryption ‘subversion’, not breaking encryption. So what’s the difference and why does it matter?
Was the crypto ‘broken’ by the police?
Before the big encryption sting ‘Operation Trojan Shield’ began, police obtained the AN0M platform from a criminal informant. With the help of its creator, they were able to get past its encryption and had access to all messages sent on the platform, which required specialized phones that could only communicate with each other (and, unbeknownst to users, with police).
Arguably, at this phase, it was a case of ‘encryption breaking’ but there’s a debate to be had over whether or not it counts as such when the designer lets you in: This seems more like someone leaving the door unlocked than actually breaking down the door (but it’s possible police also modified said door).
So there’s a strong case to be made that in fact, nothing was ‘broken into’ by police. They didn’t ‘crack the code’ or ‘defeat the cryptography’ because from the time Operation Trojan Shield actually commenced, they had already subverted the security mechanisms.
The whole idea that ‘this is a stripped-down, secure, criminal-only, special super secret phone’ was built on quicksand, because from the start the phone included software that acted as a ‘back door’ into the secret communications.
Maybe it’s a fine point, but a back door isn’t the same as weak private keys, exhaustive key search, or exploiting a weakness in the cryptography to learn the private key: This was a cryptographic system that included, by design, an ability for the police to decrypt the messages. It was a feature, not a bug.
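To make that distinction concrete, here is a minimal, hypothetical sketch of a ‘key escrow’ style back door. This is not the actual AN0M design (which hasn’t been published in detail), and the XOR ‘cipher’ is a deliberately toy stand-in for real cryptography; the point is only that the extra recipient is part of the design, so nothing ever needs to be ‘cracked’.

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stream-cipher stand-in; a real system would use AES-GCM or similar.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Long-term keys (toy: symmetric keys stand in for public keys).
user_key = os.urandom(16)
escrow_key = os.urandom(16)   # the built-in 'back door' recipient

def encrypt_message(plaintext: bytes) -> dict:
    session_key = os.urandom(16)
    return {
        "ciphertext": xor(plaintext, session_key),
        # The per-message session key is wrapped TWICE: once for the user...
        "wrapped_for_user": xor(session_key, user_key),
        # ...and once for the escrow holder. The message still decrypts
        # perfectly for the user; they never notice this extra field.
        "wrapped_for_escrow": xor(session_key, escrow_key),
    }

def decrypt(msg: dict, my_key: bytes, wrapped_field: str) -> bytes:
    session_key = xor(msg[wrapped_field], my_key)
    return xor(msg["ciphertext"], session_key)

msg = encrypt_message(b"meet at the docks at midnight")
# The legitimate user decrypts with their own key:
assert decrypt(msg, user_key, "wrapped_for_user") == b"meet at the docks at midnight"
# But so can whoever holds the escrow key. No key was guessed, no
# cipher was broken -- decryption was a feature of the system:
assert decrypt(msg, escrow_key, "wrapped_for_escrow") == b"meet at the docks at midnight"
```

The encryption itself can be as strong as you like; the subversion lives entirely in who holds a copy of the key material.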
This was more a case of subverting the encryption than breaking it.
There is a link from this to an old story inside Computer Science which we’ve discussed before: Ken Thompson’s paper ‘Reflections on Trusting Trust‘ [PDF], which explores the belief that ‘build the code yourself’ prevents back doors from being put into your systems, but ignores the ability to modify the compiler and assembler to introduce the back door in the process of compiling the trusted code.
In this case, the criminals trusted the secure phone because of a lot of what we call ‘security pantomime’. This pantomime involved having to pay for a special phone, being accepted by a community of fellow criminals using these phones, and the phone’s deliberately stripped-down capabilities. These all gave the impression of security without actually providing it. Similarly, the Bitcoin saga involved cryptocurrency being moved through at least 23 accounts, giving the blackmailers some degree of false confidence that the money wasn’t being followed.
In both cases, the trust was misplaced: the AN0M platform had been subverted before it came into any criminal’s hands, and the nature of the blockchain ledger meant the cryptocurrency couldn’t be easily hidden.
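The ‘follow the money’ side works because every Bitcoin transaction is recorded on a public, append-only ledger. The sketch below uses a hypothetical, hand-built transaction graph (toy address names, not real chain data, which would come from a full node or block explorer) to show the core idea: a simple breadth-first walk from a known ransom address reaches every downstream address, no matter how many hops the funds pass through.

```python
from collections import deque

# Hypothetical transaction graph: address -> addresses it sent funds to.
# (Illustrative only; real data would be read from the public blockchain.)
transfers = {
    "ransom_addr": ["hop1"],
    "hop1": ["hop2a", "hop2b"],
    "hop2a": ["exchange_deposit"],   # funds landing at a regulated exchange
    "hop2b": ["hop3"],
    "hop3": ["exchange_deposit"],
}

def trace(start: str) -> set:
    """Breadth-first walk: every address reachable from the starting one."""
    seen, queue = {start}, deque([start])
    while queue:
        addr = queue.popleft()
        for nxt in transfers.get(addr, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

reached = trace("ransom_addr")
# Because the ledger is public, every hop is visible to investigators:
assert "exchange_deposit" in reached
```

Splitting funds across many intermediate addresses adds noise, but it doesn’t remove the edges from the graph; anyone with the ledger can walk them.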
This question of trust is a really important consideration, and not just for people engaged in criminal enterprises. We still want to have trust in the fundamental mechanisms behind Transport Layer Security (TLS), as it is responsible for securing protocols on the web and providing end-to-end privacy. We still want to have trust in DNSSEC (trust in domain names) and S/MIME (private email) and when we look at secure instant messaging like WhatsApp or Telegram or Signal, we want to understand what’s going on behind them. We want to know where the messages can be seen in plaintext. Where can a tap be added? Who can see things without our consent?
Can we trust them?
It’s not just police who might subvert elements of supply chains
Two years ago, Bloomberg reported on subversion of supply chains for computer parts, and the risk to the integrity of systems when the chips on the motherboard are suspect from the factory door. At the time, the report was dismissed as possibly being a false flag, but the fact remains we build a high degree of trust into low-level systems coming from vendors. Think about the Intel Management Engine on PC chipsets, or the ‘trusted region’ (secure enclave) in an Apple or Android phone. Or think about the requirement to wipe the secure data if you ‘root’ your handset. These are all signals from our suppliers that we should trust their baked-in security.
It’s when considering this ‘baked-in security’ that we need to be mindful of that difference between subversion and breaking. You might be getting the best encryption in the world, but think of it this way: the padlock may be unbreakable, but are you sure the vendor who sold it to you doesn’t have their own key?
In the case of secure protocols, we’ve known for some time that the Tor network is not always as secure as people think, and can be subverted by Tor ‘exit nodes’ operated by national or international police forces for surveillance. Necessarily, this kind of ‘trust a third party’ service comes with limits on how much you can trust people. We now have reason to suspect that many Virtual Private Network (VPN) operations offer similarly qualified privacy: They help you change your apparent source address, but it would be a mistake to believe nobody else knows who or where you are, or what you are doing.
There are some things we need to be able to trust
But before you go distrusting everything you use, bear in mind we do need to have trust in key aspects of the Internet in order for it to function.
This exercise shows us that we need to qualify our trust in these systems. It is possible to be led to a false belief in the integrity of a device: In this case, it was criminal activity that was compromised, and that was to society’s benefit. But in general, trust in the integrity of systems is a pillar everyone relies on to conduct normal, lawful communications. The same processes that undermined the privacy of the ‘bad guys’ can always potentially undermine the privacy of the ‘good guys’ too. As individuals, though, we can’t let that stop us from trusting (some of the more apprehensive cybersecurity specialists may beg to differ; after all, worrying about things is their job).
In the end, it comes down to the simple rule that I can’t stress enough: Trust but verify. The community needs to be engaged in these issues and examining the systems we all rely on.
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.