For well over a decade I have been concerned about the blind trust that exists in the technical world.
When protocols get standardized, there is trust that there has been sufficient community feedback to make the protocol sound. When vendor implementations get shipped, we trust that the vendor has done enough testing to ship a product with no known critical defects and security-conscious defaults. When operations teams put a system into production, the expectation is that there was enough testing and internal feedback to ensure a robust system. And when reading best practice configuration documents, we expect that they reflect current best practices.
We all know that nothing is foolproof and even when we do everything right, we still screw up. A few examples of such cases are:
- KRACK (Key Reinstallation Attack): A flaw in the WPA2 protocol that can break encryption and leave Wi-Fi traffic open to eavesdropping.
- Heartbleed: A flaw in the OpenSSL implementation where a missing bounds check in the TLS heartbeat extension let a malicious actor read adjacent process memory and access confidential information such as private keys.
- Meltdown/Spectre: Flaws in modern processor hardware that allow a malicious program to access confidential data stored in the memory of other running programs.
- Mirai: Malware that exploited insecure device defaults (Telnet enabled out of the box and unchanged default passwords) to compromise hundreds of thousands of devices for use in massive DDoS attacks.
- Open recursive resolvers: Many best practice documents for setting up a recursive DNS server provide examples with the resolver running in ‘open’ mode, meaning there are no restrictions on who can query it. Unmanaged open recursive DNS servers have often been used for large-scale amplification DDoS attacks (a simple way to test for this condition is sketched after this list).
- Infineon flaw: A vulnerability in Infineon’s RSA key generation whereby the generated keys have a structure that makes it practical to recover the private key from the public key, exposing confidential information.
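The ‘open’ condition above is straightforward to test for yourself. Below is a minimal sketch using the third-party dnspython library; the resolver address is a placeholder, and a real assessment would probe from several external vantage points rather than from a single host.

```python
# Rough check for an 'open' recursive resolver, using dnspython.
# Run from a network that should NOT be allowed to use the resolver.
# The target address below is a placeholder (TEST-NET-1), not a real host.
import dns.exception
import dns.flags
import dns.message
import dns.query

RESOLVER_IP = "192.0.2.53"   # hypothetical resolver under test
PROBE_NAME = "example.com."  # a name the resolver is not authoritative for

query = dns.message.make_query(PROBE_NAME, "A")  # RD flag is set by default

try:
    response = dns.query.udp(query, RESOLVER_IP, timeout=3)
except dns.exception.Timeout:
    print("No response: recursion appears restricted or filtered.")
else:
    recursion_offered = bool(response.flags & dns.flags.RA)
    answered = len(response.answer) > 0
    if recursion_offered and answered:
        print("Resolver answered a recursive query: it appears to be open.")
    else:
        print("Resolver did not answer recursively from this vantage point.")
```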
Then there are the operational issues I’ve seen when doing security assessments:
- Being assured that data was encrypted between sites since that was the defined policy. Yet when validating the strategic assessment with a configuration check, I found the IP security (IPsec) Authentication Header (AH) configured but not the Encapsulating Security Payload (ESP). For those not familiar with IPsec, this meant no encryption was taking place; only cryptographic integrity and origin authentication were configured (a simple configuration check for this is sketched after this list).
- Finding configurations with weak encryption functionality — I’ll leave it as an exercise to the reader as to what the difference is when using the configuration command ‘enable password’ vs ‘enable secret’ in Cisco devices.
- Re-use of credentials between personal and business accounts.
- Companies with expensive log correlation systems yet no staff responsible for reviewing the information.
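Several of these findings could have been caught by even a crude automated check of exported configurations. The following is a minimal sketch only, assuming an IOS-style plain-text configuration export; the filename and the keyword list are simplifying assumptions rather than a complete audit.

```python
# Minimal sketch: flag IPsec transform sets in an IOS-style config export
# that provide integrity but no confidentiality (AH only, or ESP with a
# null/authentication-only transform). The keyword list is simplified.
import sys

# ESP transforms that actually encrypt (simplified, not exhaustive)
ESP_CIPHERS = ("esp-des", "esp-3des", "esp-aes", "esp-gcm", "esp-seal")

def audit(config_text: str) -> list[str]:
    findings = []
    for line in config_text.splitlines():
        line = line.strip()
        if not line.startswith("crypto ipsec transform-set"):
            continue
        encrypts = any(c in line for c in ESP_CIPHERS) and "esp-null" not in line
        if not encrypts:
            findings.append(f"No encryption configured: {line}")
    return findings

if __name__ == "__main__":
    # Placeholder filename; pass your own exported configuration instead.
    path = sys.argv[1] if len(sys.argv) > 1 else "router-config.txt"
    with open(path) as config_file:
        for finding in audit(config_file.read()):
            print(finding)
```

A real check would also confirm which crypto maps and interfaces actually reference each transform set, and whether the tunnels come up with those settings.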
While you want to start with a basis of trust, there is merit to the mantra ‘trust but verify’. But this too has its issues, namely: when and what do you verify? And how should you act so as not to lose trust when you learn that something was not as it seemed?
We are currently living in a world of eroding trust and it is time to rethink what we trust and how.
Building renewed trust
I find it somewhat astonishing that the almost weekly occurrences of data breaches (and the critical flaws that enable such breaches) have not caused a global outcry demanding more effective measures to protect the safety and trustworthiness of our electronic communications.
While privacy-related regulations such as HIPAA and GDPR make an attempt to preserve data privacy, we have no idea within any industry as to who is sharing what data and with whom. And given the plethora of breaches over the last decade across the travel, entertainment, healthcare, financial, energy, transportation, government, and education sectors, who now holds the data that we consider private?
Are we resigned to the fact that we are all just doomed to have all of our private information exposed and subject to ransomware attacks? Or do we feed into this growing sense of apathy of ‘there’s nothing I can do and I’ll just deal with it when it impacts me’?
At the same time, what role does media hype, misinformation and ‘fake news’ play in all of this? I often wonder whether the people (and commentators) trying to ‘break the story’ and attack the affected companies have any idea what the actual facts are, rather than asking themselves, ‘Am I ready in my environment if a similar issue happens?’.
We must get better at assuming responsibility and not placing blame in other directions. As the saying goes: A smart man learns from his own mistakes, a wise man learns from the mistakes of others, and a fool never learns.
To reinvent trust, we must encourage transparency of information without negative impact. International cooperation is needed across a variety of sectors, including governments, private companies, legal teams, human rights advocates, and technologists. After two decades of talking, we need to start building comprehensive solutions with effective enforcement.
We have to start taking a closer look at standards to point out security-related weaknesses so that practitioners can understand the risks when using certain protocols.
Frameworks and best practices need to be scrutinized more and should mention why something is recommended, not just what.
Reference implementations need careful code review and should be augmented by an external security assessment, especially in smaller companies where there may not be internal security expertise.
And all vendor implementations need greater transparency about what the default configurations are, as well as documentation on how to secure the system or application when the defaults favour ease of use over security and privacy.
Increasing trust in implementations requires better programming tools. There has been a lot of progress in technologies looking at how to automatically detect exploitable vulnerabilities. These automated tools save time and also augment human creativity and intuition.
In operational deployments, improvements are needed in continuous monitoring, logging, and auditing with attention to root cause analysis for any anomalous behaviours. Tools should exist that can validate that operational realities match existing documented policies — this is a very hard problem but is a critical missing component.
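To illustrate the shape such a tool might take (the policy statements and check functions below are hypothetical placeholders), documented policy can be expressed as data and each statement mapped to an automated check, so that drift between paper and reality is reported rather than assumed away.

```python
# Sketch of mapping documented policy statements to automated checks.
# The policy items and check functions are illustrative placeholders;
# real checks would query device configurations, log pipelines, and so on.
from dataclasses import dataclass
from typing import Callable

@dataclass
class PolicyCheck:
    statement: str              # what the written policy promises
    verify: Callable[[], bool]  # how operational reality is observed

def site_links_encrypted() -> bool:
    # Placeholder: parse tunnel configurations and confirm ESP with a
    # modern cipher is applied on every inter-site link.
    return False

def logs_reviewed_weekly() -> bool:
    # Placeholder: confirm the log review queue shows activity within
    # the last seven days.
    return True

CHECKS = [
    PolicyCheck("All inter-site traffic is encrypted", site_links_encrypted),
    PolicyCheck("Security logs are reviewed weekly", logs_reviewed_weekly),
]

if __name__ == "__main__":
    for check in CHECKS:
        status = "OK" if check.verify() else "DRIFT"
        print(f"[{status}] {check.statement}")
```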
Finally, when it comes to building trust in people, start by developing a culture of commitment, challenge the culture of blame and learn from incidents to highlight where controls are unnecessary and/or ineffective.
Trust but verify
To a certain extent, it’s human nature to trust that some, if not all, of the best practice security and privacy features are being incorporated in your products. But at the same time, such trust is blind if we don’t ask the fundamental questions to understand how things work.
Read: Balancing security, privacy and convenience
This really is no different from other areas of blind trust — how many people sign contracts without reading every single word, and opt in to data sharing without really knowing who is allowed to share their data and with whom?
I’ve had life experiences where things are not fair, and mistakes have been made. But I believe in personal accountability and taking responsibility for one’s actions. We cannot just blame vendors and listen to the hype when it is critical that we fix and continue to improve our online tools and services.
Businesses and organizations need to think through requirements and get the details they need to make informed decisions on which tools and services are best suited for their needs. Product vendors and service providers should also make it easier for regular users to understand the tradeoffs that have been made between convenience, security, and privacy.
Rather than this culture of vilification and shaming that seems to permeate across the Internet ecosystem, we should reinvent our current trust models. Instead of blind trust that disappoints again and again, we should accept our human tendency to err, and ensure we have checks in place to validate the entities and circumstances that we want to trust.
Merike Kaeo has over 25 years of experience in pioneering core Internet technology deployments and leading strategic digital security transformations.
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.
“Rather than this culture of vilification and shaming that seems to permeate across the Internet ecosystem, we should reinvent our current trust models.”
Agreed, in principle.
And yes, tech culture can get pretty heated, almost as bad as journos arguing about ethics.
Talk about snake pits!
But whatever new trust models appear, they cannot replace the role of independent observers, including news media, to raise questions, express criticism and, yes, even prompt vilification, richly deserved in so many cases.
My suggestion would be that as well as high-tech approaches, we also need to address low-tech aspects – especially ownership.
For example, Facebook stocks may be traded publicly, but the company itself? The actual company that issues the stocks which everyone follows so avidly?
Facebook’s original parent company and its successors are registered in Delaware, a secrecy and tax haven notorious for enabling shady deals and outright corruption. So notorious that Vox recently ran the headline “How the US became the center of global kleptocracy”, see:
https://www.vox.com/policy-and-politics/2020/2/3/21100092/us-trump-kleptocracy-corruption-tax-havens
What does this sort of low-tech secrecy mean for us as telecommunications users, analysts, observers and critics?
It means, as the author states, “…we have no idea within any industry as to who is sharing what data and with whom.”
Never mind Cambridge Analytica, the entire Facebook company could be a secret front for, well, anyone?
Until public utilities such as internet and social media companies, and their ultimate beneficiaries, are dragged blinking, kicking and screaming into the sunlight, no amount of 21st century high-tech wizardry is going to solve the problem of companies being run like they’re still in the 15th century.
I may be overstating the case, but from stories I’m working on now? Facebook is just the tip of an iceberg of secrecy, corruption and exploitation.