Client-side content scanning as an unworkable, insecure disaster for democracy

Fourteen of the world’s leading computer security and cryptography experts have released a paper arguing against the use of client-side scanning because it creates security and privacy risks.

Client-side scanning (CSS, not to be confused with Cascading Style Sheets) involves analyzing data on a mobile device or personal computer before that data is encrypted for secure network transit or remote storage. In theory, CSS provides a way to look for unlawful content while still allowing data to be protected off-device.
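The essential point is where the check happens: on the user's own device, before encryption is applied. The Python sketch below is purely illustrative of that ordering – the function names, target list, and toy "encryption" are our own stand-ins, not any vendor's actual implementation.

```python
# Illustrative sketch of where client-side scanning sits in the data flow:
# the content check runs on the device *before* encryption, so the provider
# only ever receives ciphertext. All names here are hypothetical.
import hashlib

TARGET_FINGERPRINTS = {"9f86d081884c7d65"}  # placeholder target list

def matches_target_list(plaintext: bytes) -> bool:
    """Toy check: real CSS uses perceptual hashes or ML models,
    not an exact cryptographic hash like this one."""
    return hashlib.sha256(plaintext).hexdigest()[:16] in TARGET_FINGERPRINTS

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Stand-in for real authenticated encryption (e.g. AES-GCM); toy XOR only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

def upload(plaintext: bytes, key: bytes) -> bytes:
    if matches_target_list(plaintext):  # the CSS step, before any encryption
        print("match found on device -- would be reported to the provider")
    return encrypt(plaintext, key)      # only this ciphertext leaves the device

ciphertext = upload(b"holiday photo bytes", key=b"secret key")
```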

Apple in August proposed a CSS system by which it would analyze photos destined for iCloud Photos on customers’ devices to look for child sexual abuse material (CSAM), only to backtrack in the face of objections from the security community and many advocacy organizations.

The paper [PDF], “Bugs in our Pockets: The Risks of Client-Side Scanning,” elaborates on the concerns raised immediately following Apple’s CSAM scanning announcement with an extensive analysis of the technology.

Penned by some of the most prominent computer science and cryptography professionals – Hal Abelson, Ross Anderson, Steven M. Bellovin, Josh Benaloh, Matt Blaze, Jon Callas, Whitfield Diffie, Susan Landau, Peter G. Neumann, Ronald L. Rivest, Jeffrey I. Schiller, Bruce Schneier, Vanessa Teague, and Carmela Troncoso – the paper contends that CSS represents bulk surveillance that threatens free speech, democracy, security, and privacy.

“In this report, we argue that CSS neither guarantees efficacious crime prevention nor prevents surveillance,” the paper says.

“Indeed, the effect is the opposite. CSS by its nature creates serious security and privacy risks for all society while the assistance it can provide for law enforcement is at best problematic. There are multiple ways in which client-side scanning can fail, can be evaded, and can be abused.”

The method in the madness

CSS currently relies on one of two methods for image scanning: perceptual hashing, in which an algorithm generates a hash (a numeric identifier) that acts as a digital fingerprint and remains the same even when the image is altered in minor ways; and machine learning, in which a model is trained to recognize target content, including images it has never seen before. Both methods, the paper points out, generate false positives, may rely on proprietary technologies that limit auditing, can be subverted, and can be evaded.
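To make the first of those concrete, here is a minimal average-hash ("aHash") sketch, one of the simplest perceptual hashes. Deployed systems such as PhotoDNA or Apple's NeuralHash are far more elaborate, but the principle is the same: visually similar images yield hashes that are identical or close in Hamming distance – which is also why near-misses (false positives) and deliberate evasion are possible.

```python
# Minimal average-hash ("aHash") sketch: one bit per pixel of a pre-scaled
# 8x8 grayscale thumbnail, set when the pixel is brighter than the mean.
# Real perceptual hashes are far more sophisticated; this only shows the
# idea of a similarity-preserving fingerprint.

def average_hash(gray_8x8):
    """gray_8x8: an 8x8 list of lists of 0-255 grayscale values."""
    flat = [p for row in gray_8x8 for p in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming_distance(h1, h2):
    return bin(h1 ^ h2).count("1")

# A minor edit leaves the perceptual hash essentially unchanged, whereas a
# cryptographic hash (e.g. SHA-256) would change completely.
original = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
tweaked = [row[:] for row in original]
tweaked[0][0] += 10  # tiny brightness change to one pixel
print(hamming_distance(average_hash(original), average_hash(tweaked)))  # prints 0 here
```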

Apple’s system figures prominently in the analysis but concern about potential deployment of CSS goes beyond a single company and a single type of troubling content, CSAM. As the paper observes, the EU has suggested that content related to terrorism and organized crime, in addition to CSAM, should be targetable by CSS.

Moreover, the issue is not just illegal content. In the UK, for example, the Draft Online Safety Bill contemplates a requirement to block legal speech that some authority finds objectionable.

Apple has attempted to address the possibility that repressive regimes might try to co-opt its system to scan for politically unacceptable content by insisting that it will scan only for content hashes present in lists maintained by multiple child safety organizations. The idea is that if China, say, were to submit a picture of Tiananmen Square’s tank man through a child safety organization in order to get reports on political dissidents, the gambit wouldn’t work unless another organization submitted the same image for the target list.
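A rough sketch of that safeguard, as Apple described it publicly: a hash only reaches the on-device target list if at least two independent organizations vouch for it. The code below is our illustration of the policy, not Apple's implementation, and the organization names and hashes are made up.

```python
# Illustration of the "multiple child-safety organizations" safeguard:
# a hash makes the on-device target list only if at least two independent
# organizations submit it. All names and hashes here are invented.

def build_target_list(org_submissions, minimum_sources=2):
    counts = {}
    for hashes in org_submissions.values():
        for h in set(hashes):
            counts[h] = counts.get(h, 0) + 1
    return {h for h, n in counts.items() if n >= minimum_sources}

org_submissions = {
    "org_A": {"csam_hash_1", "csam_hash_2"},
    "org_B": {"csam_hash_1", "csam_hash_2", "tank_man_hash"},  # unilateral extra entry
}
print(sorted(build_target_list(org_submissions)))
# ['csam_hash_1', 'csam_hash_2'] -- the single-source addition is filtered out.
# Two colluding sources, however, would defeat the check, as the paper notes.
```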

But the paper notes that this approach depends on Apple being willing and able to enforce its policy, which might not survive insistence by nations that they can dictate policy within their borders.

“Apple has yielded to such pressures in the past, such as by moving the iCloud data of its Chinese users to three data centers under the control of a Chinese state-owned company, and by removing the ‘Navalny’ voting app from its Russian app store,” the paper says.

And even if Apple were to show unprecedented spine by standing up to authorities demanding CSS access, nations like Russia and Belarus could collude, each submitting a list of supposed child-safety image identifiers that in fact point to political content, the paper posits.

“In summary, Apple has devoted a major engineering effort and employed top technical talent in an attempt to build a safe and secure CSS system, but it has still not produced a secure and trustworthy design,” the paper says.

And more dangers ahead

The paper goes on to look at the potential problems with CSS systems. These include the possibility of abuse by authorized and unauthorized parties, as well as by local adversaries – a user’s partner, ex-partner, other family member, or rival who has access to the user’s device.

CSS faces challenges adhering to policy principles like those that prohibit bulk surveillance without a warrant in the US and the EU. Then there’s the question of whether a CSS system could operate equitably (without algorithmic bias). That may not be easy to determine given that these systems tend to operate as black boxes.

CSS, the paper says, entails privacy risks both from “upgrades” that expand what content can be scanned and from adversarial misuse.

And it poses security risks, such as software vulnerabilities and deliberate efforts to get people reported by the system. The authors conclude that CSS systems cannot be trustworthy or secure because of the way they’re designed.

“The proposal to preemptively scan all user devices for targeted content is far more insidious than earlier proposals for key escrow and exceptional access,” the paper says.

“Instead of having targeted capabilities such as to wiretap communications with a warrant and to perform forensics on seized devices, the agencies’ direction of travel is the bulk scanning of everyone’s private data, all the time, without warrant or suspicion. That crosses a red line. Is it prudent to deploy extremely powerful surveillance technology that could easily be extended to undermine basic freedoms?” ®
