Incidents of organizations mishandling user data have made headlines around the world in the past year, as have the punishments that followed.
As a result, Internet user privacy remains a hot topic. At the recent IETF 105 meeting in Montreal, Canada, Associate Professor Arvind Narayanan of Princeton University delivered a keynote on the importance of researchers and engineers working together to find a middle ground that protects users' privacy while still ensuring quality of service.
Arvind shared lessons he and his colleagues have learnt over the last decade of measuring privacy on the Internet, as well as the challenges they face in future.
Encryption makes meaningful privacy measurements basically infeasible
Measuring the Internet was a fairly straightforward task when the network merely forwarded packets from a source to a destination. However, the introduction of elements to improve security and efficiency, such as caches, firewalls and NATs, has made it harder to trace the number, source and route of packets. Another development that concerns Arvind is end-to-end encryption.
“[End-to-end encryption] is great for privacy. Unfortunately, the downside of it is that the two ends of end-to-end encryption are the device and the server — it doesn’t involve the user and it doesn’t involve the researcher,” said Arvind during his presentation.
“A researcher can’t figure out what data is being collected and where it’s being sent, which makes meaningful privacy measurements basically infeasible.”
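To make this concrete, here is an illustrative sketch (not from the talk) of what a passive observer is reduced to once payloads are encrypted: per-packet sizes and timing are still visible, but the content, and therefore what data is being collected, is opaque. The flow data below is hypothetical.

```python
# Illustrative sketch: under end-to-end encryption, an on-path observer
# can only record flow metadata such as packet sizes and timestamps;
# the payload itself is opaque ciphertext.

from statistics import mean

def flow_metadata(packets):
    """Summarise what remains visible for an encrypted flow.

    `packets` is a list of (timestamp_seconds, size_bytes) tuples --
    the only per-packet facts a passive observer can still collect.
    """
    timestamps = [t for t, _ in packets]
    sizes = [s for _, s in packets]
    return {
        "packets": len(packets),
        "total_bytes": sum(sizes),
        "mean_size": mean(sizes),
        "duration": max(timestamps) - min(timestamps),
    }

# A hypothetical captured flow: the observer learns traffic volume and
# timing, but nothing about what data is being sent or where.
observed = [(0.00, 120), (0.05, 1500), (0.06, 1500), (0.90, 80)]
print(flow_metadata(observed))
```

Traffic-analysis features like these are all that remain for a researcher; they say nothing about whether a device's data collection matches its privacy policy.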
This has implications beyond privacy as well, as discussed in draft-fairhurst-tsvwg-transport-encrypt-10:
Hiding transport protocol header information can make it harder to determine which transport protocols and features are being used across a network segment and to measure trends in the pattern of usage. This could impact the ability for an operator to anticipate the need for network upgrades and roll-out. It can also impact the on-going traffic engineering activities performed by operators (such as determining which parts of the path contribute delay, jitter or loss).
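As a minimal sketch of the operator's-eye view the draft describes: with TCP, transport header fields sit in cleartext, so an operator can read sequence and acknowledgement numbers to infer loss or retransmission; with an encrypted transport such as QUIC, the equivalent state is inside the encrypted payload. The segment below is hypothetical, constructed only to show which fields are readable.

```python
# Sketch: a passive operator decoding a cleartext TCP header.
# With an encrypted transport (e.g. QUIC), these fields are hidden
# and this kind of inspection is no longer possible.

import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Decode selected fields from the fixed part of a TCP header."""
    src, dst, seq, ack, off_flags, window = struct.unpack(
        "!HHIIHH", segment[:16]
    )
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,        # gaps in sequence numbers reveal loss
        "ack": ack,        # duplicate ACKs reveal retransmission
        "window": window,  # receive window hints at flow-control state
    }

# Hypothetical segment: 443 -> 51000, seq=1000, ack=2000, window=65535.
raw = struct.pack("!HHIIHH", 443, 51000, 1000, 2000, 0x5010, 65535)
print(parse_tcp_header(raw))
```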
While this issue should concern users, particularly those who surround themselves with more 'proactive' data-collecting devices (think wearables and smart home devices), Arvind said the companies collecting data from such devices should be equally worried, from an auditing and reputational perspective.
“If you’re a reputable company and you want to be able to show your users that your data collection is completely according to your specified privacy policies, there’s no good way to do that today because researchers can’t examine the plain text of these communications.
“For example, if we wanted to know if the smart light bulbs in our homes are transmitting conversations — because [some] actually have microphones — we really don’t have a good way to check that today,” Arvind said, echoing concerns around a recent IoT case.
This form of data capture would also appeal to malicious actors, whose activity would be obscured from all parties by the same lack of monitoring and measurement capability.
“I think some way of being able to examine the communications of IoT devices is critical and I think there’s a role for standardization here,” he said.
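One hypothetical illustration of the kind of external check this could enable: even without decrypting traffic, sustained upstream volume from a device that should be idle, such as a smart light bulb, is a red flag. The 8 kB/s threshold below is an assumption, roughly the rate of compressed voice audio, not a standardized figure.

```python
# Hypothetical audit heuristic: flag a device whose average upstream
# rate is consistent with streaming audio. The threshold is an
# assumption for illustration, not a standardized value.

AUDIO_RATE_BYTES_PER_SEC = 8_000  # ~64 kbit/s compressed voice

def looks_like_streaming(upstream_bytes: int, window_seconds: int) -> bool:
    """Return True if the average upstream rate exceeds the threshold."""
    return upstream_bytes / window_seconds >= AUDIO_RATE_BYTES_PER_SEC

# A bulb sending 600 kB per minute upstream warrants investigation;
# a few kilobytes of periodic keep-alives does not.
print(looks_like_streaming(600_000, 60))  # sustained upload
print(looks_like_streaming(2_000, 60))    # keep-alive chatter
```

A heuristic like this only narrows the question; confirming what a device actually transmits is exactly what today's end-to-end encryption prevents, which is why Arvind argues standardization has a role to play.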
Standards need to be explicit about assumptions
Privacy has been an ongoing topic of discussion within the IETF for many years, with a number of working groups assembled to focus on the issue.
For Arvind, the challenge is for future standards to be explicit about assumptions.
“It’s very hard in a standards document to write down a fixed privacy definition and then say that ‘I’ve analysed this protocol with respect to this privacy definition and I’m confident that this is going to be a privacy respecting protocol now and for all time to come’,” he said.
“Because privacy changes so quickly and because we can’t anticipate what new privacy-infringing technologies will be out there in five years, it helps to be explicit about assumptions as part of the standards process.
“[If we] explicitly say we have created the standard assuming that this API will not be highly susceptible to fingerprintability, [but] if it turns out that it is being exploited in the wild, here are some things that implementers could do to mitigate that risk.”
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.