Companies could police encrypted messaging services for possible child abuse while still preserving the privacy and security of the people who use them, government security and intelligence experts said in a discussion paper published yesterday.
Ian Levy, technical director of the UK National Cyber Security Centre (NCSC), and Crispin Robinson, technical director for cryptanalysis at GCHQ, argued that it is “neither necessary nor inevitable” for society to choose between making communications “insecure by default” or creating “safe spaces for child abusers”.
In the paper, Thoughts on child safety on commodity platforms, the technical directors proposed that client-side scanning software placed on mobile phones and other electronic devices could be deployed to police child abuse without compromising individuals’ privacy and security.
The proposals were criticised yesterday by technology companies, campaign groups and academics.
Meta, owner of Facebook and WhatsApp, said the technologies proposed in the paper would undermine end-to-end encryption and threaten people’s privacy, security and human rights.
The Open Rights Group, an internet campaign group, described Levy and Robinson’s proposals as a step towards a surveillance state.
The technical directors argued that developments in technology mean there is not a binary choice between the privacy and security offered by end-to-end encryption and the risk of child sexual abusers not being identified.
They argued in the paper that the shift towards end-to-end encryption “fundamentally breaks” most of the safety systems that protect individuals from child abuse material and that are relied on by law enforcement to find and prosecute offenders.
“Child sexual abuse is a societal problem that was not created by the internet, and combating it requires an all-of-society response,” they wrote.
“However, online activity uniquely allows offenders to scale their activities, but also enables entirely new online-only harms, the effects of which are just as catastrophic for the victims.”
NeuralHash on hold
Apple attempted to introduce client-side scanning technology – known as NeuralHash – to detect known child sexual abuse images on iPhones last year, but put the plans on indefinite hold following an outcry from security and cryptography experts.
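For illustration, the basic pattern such systems follow can be sketched in a few lines. This is a deliberately simplified model, not Apple’s actual design: NeuralHash used a neural perceptual hash compared against a blinded database via cryptographic private set intersection, so the device never learned the database contents. The hash function and database below are hypothetical stand-ins.

```python
# Minimal sketch of client-side hash matching (illustrative only).

import hashlib

# Hypothetical on-device copy of a known-bad image hash database.
KNOWN_IMAGE_HASHES: set[str] = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",  # placeholder entry
}

def image_hash(image_bytes: bytes) -> str:
    # Stand-in hash. A real deployment would use a perceptual hash that
    # survives resizing and re-encoding; SHA-256 changes completely if a
    # single byte of the image changes.
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_database(image_bytes: bytes) -> bool:
    # On a match, a real system would queue the case for independent
    # human review rather than report the user automatically.
    return image_hash(image_bytes) in KNOWN_IMAGE_HASHES
```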
A report by 15 leading computer scientists, Bugs in our pockets: the risks of client-side scanning, published by Columbia University, identified multiple ways that states, malicious actors and abusers could turn the technology around to cause harm to others or society.
“Client-side scanning, by its nature, creates serious security and privacy risks for all society, while the assistance it can provide for law enforcement is at best problematic,” they said. “There are multiple ways in which client-side scanning can fail, can be evaded and can be abused.”
Levy and Robinson said there was an “unhelpful tendency” to consider end-to-end encrypted services as “academic ecosystems” rather than the set of real-world compromises that they actually are.
“We have found no reason as to why client-side scanning techniques cannot be implemented safely in many of the situations that society will encounter,” they said.
“That is not to say that more work is not needed, but there are clear paths to implementation that would seem to have the requisite effectiveness, privacy and security properties.”
The possibility of people being wrongly accused after being sent images that cause “false positive” alerts in the scanning software would be mitigated in practice by multiple independent checks before any referral to law enforcement, they said.
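The paper does not publish error rates, but the arithmetic behind this mitigation is straightforward: if the checks are genuinely independent, requiring several of them to agree multiplies their individual false-positive rates together. The figures below are purely illustrative.

```python
# Illustrative only: combined false-positive rate of n independent checks,
# each with individual false-positive rate p. Real checks are only
# approximately independent, so this is the ideal-case figure.

def combined_false_positive_rate(p: float, n: int) -> float:
    return p ** n

# e.g. three checks, each wrong one time in a thousand:
print(combined_false_positive_rate(0.001, 3))  # 1e-09, about one in a billion
```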
The risk of “mission creep”, where client-side scanning could potentially be used by some governments to detect other forms of content unrelated to child abuse, could also be prevented, the technical chiefs argued.
Under their proposals, child protection organisations worldwide would work from a “consistent list” of databases of known illegal images.
Cryptographic techniques would be used to verify that the databases contained only child abuse images, and their contents would be checked by private audits.
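Levy and Robinson do not specify which cryptographic techniques they have in mind; one standard way to make such a database auditable is a Merkle-tree commitment, sketched below with hypothetical names. The provider publishes a single root hash, and auditors holding the vetted list can recompute it to confirm that no entries have been quietly added or removed.

```python
import hashlib

def _sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(entries: list[bytes]) -> bytes:
    # Fold the entry hashes pairwise up to a single commitment. Publishing
    # this root pins the whole database: changing, adding or removing any
    # entry changes the root.
    if not entries:
        raise ValueError("empty database")
    level = [_sha256(e) for e in entries]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [_sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# An auditor with the full vetted list recomputes the root and compares it
# with the published value; a government that slipped in unrelated content
# (the "mission creep" scenario) would change the root and be caught.
```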
The technical directors acknowledged that abusers might be able to evade or disable client-side scanning on their devices to share images between themselves without detection.
However, the presence of the technology on victims’ mobile phones would protect them from receiving images from potential abusers, they argued.
Detecting grooming
Levy and Robinson also proposed running “language models” on phones and other devices to detect language associated with grooming. The software would warn and nudge potential victims to report risky conversations to a human moderator.
“Since the models can be tested and the user is involved in the provider’s access to content, we do not believe this sort of approach attracts the same vulnerabilities as others,” they said.
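Again, the paper does not name a model; the sketch below only illustrates the “warn and nudge” control flow it describes, with a toy keyword heuristic standing in for an on-device language model. The key property is that nothing leaves the device unless the user chooses to report.

```python
RISK_THRESHOLD = 0.5  # illustrative value, not taken from the paper

def grooming_risk(conversation: str) -> float:
    # Toy stand-in for an on-device language model that scores a
    # conversation for grooming indicators on a 0..1 scale.
    risky_phrases = ("our secret", "don't tell your parents")
    hits = sum(p in conversation.lower() for p in risky_phrases)
    return hits / len(risky_phrases)

def maybe_nudge_user(conversation: str) -> None:
    # Runs locally. The provider sees nothing unless the user opts in,
    # which is why Levy and Robinson argue this approach avoids the
    # vulnerabilities of silent server-side access.
    if grooming_risk(conversation) >= RISK_THRESHOLD:
        print("Warning: this conversation shows signs of grooming.")
        if input("Report it to a human moderator? (y/n) ") == "y":
            print("Report sent with the user's consent.")
```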
In 2018, Levy and Robinson proposed giving government and law enforcement “exceptional access” to encrypted communications, akin to adding a silent listener to a conversation.
In the latest paper, however, they argued that countering child sexual abuse is complex, that the detail is important and that governments have never clearly laid out the “totality of the problem”.
“In publishing this paper, we hope to correct that information asymmetry and engender a more informed debate,” they said.
Analysis of metadata ineffective
The paper argued that the use of artificial intelligence (AI) to analyse metadata, rather than the content of communications, is an ineffective way to detect the use of end-to-end encrypted services for child abuse images.
Many proposed AI-based solutions do not give law enforcement access to suspect messages, but calculate a probability that an offence has occurred, it said.
A probability alone would not meet the high threshold of evidence that law enforcement currently needs before taking steps such as surveillance or arrest, the paper said.
“Down this road lies the dystopian future depicted in the film Minority Report,” it added.
Online Safety Bill
Andy Burrows, head of child safety online policy at children’s charity the NSPCC, said the paper showed it is wrong to suggest that children’s right to online safety can only be achieved at the expense of privacy.
“The report demonstrates that it will be technically feasible to identify child abuse material and grooming in end-to-end encrypted products,” he said. “It is clear that the barriers to child protection are not technical, but driven by tech companies that don’t want to develop a balanced settlement for their users.”
“The Online Safety Bill is an opportunity to tackle child abuse taking place at an industrial scale,” Burrows added. “Despite the breathless suggestions that the Bill could ‘break’ encryption, it is clear that legislation can incentivise companies to develop technical solutions and deliver safer and more private online services.”
Proposals would ‘undermine security’
Meta said the technologies proposed in the paper by Levy and Robinson would undermine the security of end-to-end encryption.
“Experts are clear that technologies like those proposed in this paper would undermine end-to-end encryption and threaten people’s privacy, security and human rights,” said a Meta spokesperson.
“We have no tolerance for child exploitation on our platforms and are focused on solutions that do not require the intrusive scanning of people’s private conversations. We want to prevent harm from happening in the first place, not just detect it after the fact.”
Meta said it protected children by banning suspicious profiles, restricting adults from messaging children they are not connected with on Facebook, and limiting the capabilities of accounts of people aged under 18.
“We are also encouraging people to report harmful messages to us, so we can see the reported contents, respond swiftly and make referrals to the authorities,” the spokesperson said.
UK push ‘irresponsible’
Michael Veale, an associate professor in digital rights and regulation at UCL, wrote in an analysis on Twitter that it was irresponsible of the UK to push for client-side scanning.
“Other countries will piggyback on the same (faulty, unreliable) tech to demand scanning for links to abortion clinics or political material,” he wrote.
Veale said the people sharing child sexual abuse material would be able to evade scanning by moving to other communications services or encrypting their files before sending them.
“Those being persecuted for exercising normal, day-to-day human rights cannot,” he added.
Security vulnerabilities
Jim Killock, executive director of the Open Rights Group, said client-side scanning would have the effect of breaking end-to-end encryption and creating vulnerabilities that could be exploited by criminals and by state actors in cyber warfare.
“UK cyber security chiefs plan to invade our privacy, break encryption and start automatically scanning our mobile phones for images, turning them into ‘spies in your pocket’,” he said.
“This would be a massive step towards a Chinese-style surveillance state. We have already seen China wanting to exploit similar technology to crack down on political dissidents.”