Weekly Worker

12.08.2021

For your eyes only

Paul Demarty assesses the dangers of Apple’s new surveillance measures against child sex abuse

The most common experience of consumer technology nowadays is one of merely passive enjoyment of its endless sluice of content, coupled with a creeping anxiety about what it is ‘doing to us’.

We are rather like the civilians in John Carpenter’s They Live - walking around and enjoying the bland delights of late-capitalist consumer culture, while being unknowingly terrorised by subliminal messaging from our disguised alien overlords. The screen is a black mirror, as per the eponymous science fiction TV series. We know that on the other side of it there is someone watching us, but we do not care to think too deeply about who, and for what purposes.

There is, however, a certain subculture of people all too aware. They are usually professional technologists themselves, specialists in information security. Their archetype is Edward Snowden - the whistleblower on the US government’s indiscriminate electronic surveillance programmes. And he, and they more generally, are currently very exercised about Apple.

Blinding with science

The high-end tech behemoth announced last week that it was adding a slew of new ‘child protection’ features to its iOS operating system. Among them is a program that will attempt to identify child pornography among the images stored on the device. (The provided details of how it will do this are numerous, but rather fuzzy on closer inspection - more of which anon.) Having found a likely candidate, the system will report it to the National Center for Missing and Exploited Children (NCMEC) - not, strictly speaking, a government agency, but a private non-profit that works hand in glove with US law enforcement.1 All that stands between the ordinary iPhone user and her phone’s contents being shipped to Uncle Sam is, presumably, the sophistication of its child porn detection algorithm.

Such algorithms are, at the current state of the art, pretty dismal. Apple’s claims to the contrary are backed up with a lot of somewhat maths-y looking stuff, but this amounts in the end - as noted by security researcher Neal Krawetz - to a “proof by cumbersome notation” (that is, to blinding the reader with science). The idea is to use AI to identify the salient features of an image and, having done so, to represent those features as a number - a so-called perceptual hash. The same picture should generate the same number, even if it has been slightly altered - monochromised, cropped, slightly corrupted in transit over the internet. You then ship the numbers calculated for known child porn images, along with the AI program, to all the iPhones and iPads. When the phone receives a new image, it is run through the same AI program, and, if the resulting number matches one in the database, the picture gets flagged and ultimately reported to the NCMEC.
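To make the mechanics concrete, here is a minimal sketch in Python of the general technique - perceptual hashing - using the crude ‘average hash’ rather than Apple’s proprietary NeuralHash. The database value and the match threshold are purely hypothetical, for illustration only:

```python
# A minimal sketch of perceptual-hash matching, using the simple
# 'average hash' technique - NOT Apple's NeuralHash. Requires Pillow.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to a size x size greyscale image, then emit one bit per
    pixel: 1 if the pixel is brighter than the mean, else 0."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")

# Hypothetical database of hashes of known images, as shipped to the device.
KNOWN_HASHES = {0x818181BDBD818181}  # placeholder value, not a real hash

def matches_database(path: str, threshold: int = 5) -> bool:
    """Flag an image whose hash lands within `threshold` bits of any
    known hash. The tolerance is what lets a cropped or recompressed
    copy still match - and also what makes false positives possible."""
    h = average_hash(path)
    return any(hamming(h, known) <= threshold for known in KNOWN_HASHES)
```

The fuzziness of the match is the whole point - and the whole problem: the looser the threshold, the more innocent images collide with the database.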

The obvious problem - for those familiar with the reality rather than the grand marketing claims of AI - is that the behaviour of these systems is opaque and frequently eccentric in quite unpredictable ways. As Krawetz illustrates it:

The problem with AI is that you don’t know what attributes it finds important. Back in college, some of my friends were trying to teach an AI system to identify male or female from face photos. The main thing it learned? Men have facial hair and women have long hair. It determined that a woman with a fuzzy lip must be ‘male’ and a guy with long hair is female.2

Of course, the models built in industry are rather less naive than those of Krawetz’s college buddies; but they still fail in apparently crude ways (compare the notorious example of the AI credit scoring tool that simply deducts points when the applicant is black). Modern AI is vastly better than human intelligence at certain very specific things (chess, let us say); but it is still very, very stupid at most other things.
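The failure mode Krawetz describes is easy to reproduce in miniature. The toy sketch below - Python with numpy, entirely synthetic data, nothing to do with any real production model - trains a logistic regression on a dataset in which hair length happens to correlate with the label; the learned model leans almost entirely on that superficial attribute, and a long-haired man is confidently misclassified:

```python
# Toy illustration of a model learning a spurious attribute.
# Synthetic data and hypothetical features only.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
male = rng.integers(0, 2, n)  # label: 1 = male, 0 = female
# Feature 0: hair length in cm - merely correlated with the label here.
hair = np.where(male == 1, rng.normal(5, 2, n), rng.normal(30, 8, n))
# Feature 1: a genuinely informative but noisy signal.
signal = male + rng.normal(0, 2, n)
X = np.column_stack([hair, signal])
y = male.astype(float)

# Plain logistic regression, fitted by gradient descent.
w, b = np.zeros(2), 0.0
lr = 0.01
for _ in range(5000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= lr * X.T @ (p - y) / n
    b -= lr * np.mean(p - y)

print("learned weights:", w)  # hair length drives the decision

# A long-haired man: the noisy 'true' signal says male, the hair says no.
long_haired_man = np.array([35.0, 1.0])
p = 1 / (1 + np.exp(-(long_haired_man @ w + b)))
print("P(male):", p)  # close to zero - confidently wrong
```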

The more serious problem, of course, is the precedent. Apple has reconciled itself to the need to snitch on its users to the US government. Now that Apple has conceded the principle, my friends in the infosec community worry, the government will proceed to haggle over the price.

The irony of the situation is not lost on many. After all, the whole thrust of Apple’s recent marketing has focused on privacy-baiting its rivals in the market. They include Google, of course, whose unimaginably vast revenue is produced largely by its monopolistic position in the digital ads market. Apple does not exactly compete with Facebook as such, but they spar - shall we say - in markets like instant messaging. Apple has its iMessage, while Facebook has its own Messenger product and WhatsApp; Facebook finally started ratcheting up its user tracking with unilateral changes to the WhatsApp terms of service this year.

Apple, by contrast, has sought a reputation as the one major brand in the space that did not have a business incentive to systematically spy on all its customers. Apple gets money by selling you really expensive trinkets, made inexpensively by semi-free labour in dystopian factories far from prying eyes. Google and Facebook get money by selling you to prying eyes.

It may be objected that there is a difference between spying on the basis of mere greed and doing so on the basis of some civic obligation. But Apple has made a great performance out of defying the demands of the American state for access to its customers’ data. In 2016, the FBI - investigating the San Bernardino mass shooting of the previous year - leaned on Apple to do something quite unprecedented. The agency had in its possession the iPhone used by the shooter, Syed Farook, but was unable to break into it. It demanded that Apple create a new version of the iOS operating system with its security features disabled, and deliver it to the phone. Apple not only refused the request, but published a ‘letter to customers’ from CEO Tim Cook explaining its reasoning at length.3 Alongside paedophiles, terrorists are included in that category of persons whose elementary civil rights it is considered acceptable to abrogate in ‘polite society’. Yet Apple refused, and reaped the PR benefits (and the costs, for that matter, since there were, after all, many people who wanted any potential collaborators of Farook hunted down and brought to justice).

Apple can, of course, claim that there is no inconsistency. The specific demand of the FBI in 2016 was for Apple to ship a deliberately weakened version of iOS - in effect, a backdoor for the Feds - to the device. That, for Cook, was an unacceptable slippery slope: “The government suggests this tool could only be used once, on one phone. But that’s simply not true. Once created, the technique could be used over and over again, on any number of devices.”

Cook’s language here is slippery. Because, of course, it is not only the technique that can be reused, but the rationale. If you can do it just this once, for just this phone, why not just this one other time, for another phone, and so on? Likewise, child sexual abuse is a heinous crime and images of it ought not, all things being equal, to circulate unimpeded. But what is the difference of principle with communications by those deemed terrorists? Is not conspiracy to commit mass murder in the same bracket as - if not worse than - distribution of child porn? If some technical smokescreen, some proof by sufficiently cumbersome notation, could be found for irrefutably identifying such communications, then why should Apple not do its fair share of US imperialism’s dirty work?

But why stop at the US? Apple has already compromised on the privacy features of its iCloud storage service to appease the Chinese government. In other words, it has signalled that, if the choice is between suspending its business operations in the People’s Republic (which include production, as well as consumer sales) and compromising its supposedly deeply held privacy commitments, there is only one answer. How many other markets is Apple unwilling to walk away from, and what will their governments demand?

Privacy politics

We could view Apple’s behaviour as hypocritical, but in reality it is perfectly consistent. Like any other public corporation, it must attend to its immediate interests and the profits of its shareholders. But its shareholders are increasingly ‘abstract’ - vast, passively managed funds - and so it is subject to something like the collective compulsion of the capitalist class as a class. That class wants no struggle with the state, so long as the state does not interfere with profitability - at the very least through ‘onerous’ taxation - and, what is more, nowadays insulates it from its risks.

What we observe, then, is the passing of time: the five years separating the San Bernardino contretemps from the present controversy contain, in synecdoche, the presidency of Donald Trump and the fraying of the post-cold war pax Americana. The trend is towards illiberal capitalist state regimes, and these will tend to demand more in the way of service from their ‘national’ capitalist firms. We may soon face the spectacle of a tug of war between the Americans and the Chinese for such services.

The question of what this means for the privacy-obsessed is a little more involved, though rather grim. It was ever the view in such circles that sufficiently strong encryption, and the wide dissemination of the relevant techniques, would allow groups of ordinary citizens to prevail against the intrusions of state power. The problem with such a perspective was illustrated years ago, in two panels, by a comic strip beloved of programmers. The left panel, labelled ‘A crypto nerd’s imagination’, shows two Feds trying to hack a laptop with a million dollars’ worth of computers, but giving up because the encryption is too strong even for that. The right panel, labelled ‘What would actually happen’, has one of the Feds telling the other: “His laptop’s encrypted. Drug him and hit him with this $5 wrench until he tells us the password.”4

This is in the end a kind of petty bourgeois utopia: skilled artisans might secede from an oppressive, totalising society, defending themselves with their wits and savoir faire. Yet those skills always somehow knit the utopian into a network extending far beyond his or her comrades, and that has never been truer than of the paranoid technologist. Their skills make sense, really, only in the context of global IT networks; their vocation must always take them onto the same terrain as the men with the million-dollar password-cracking cluster and the five-dollar wrench.

Privacy, then, really is abolished in our day. If even the experts cannot defend theirs, then what hope is there for the rest of us? The option remaining is to overcome the corrupt nexus of hypertrophic state regimes and imperturbable tech corporations altogether. Our best remaining weapon - indeed, our best weapon even in the days of far less sophisticated surveillance - is maximal transparency and a commitment to open mass politics, not conspiratorialism. Appropriate methods should be employed to maintain liberty of publication and related activities in times of increased repression, of course, and belt-and-braces information security techniques will certainly play a part in that respect.

The key question, however, is not whether the state knows who people are (it usually does, and did in the age of the telegraph and steam locomotives), but whether it can effectively take advantage of such knowledge. That, in the end, is a question of the balance of forces. No cryptographic algorithm will adjust that balance meaningfully.

paul.demarty@weeklyworker.co.uk


  1. www.apple.com/child-safety.

  2. www.hackerfactor.com/blog/index.php?/archives/929-One-Bad-Apple.html.

  3. www.apple.com/customer-letter.

  4. xkcd.com/538.