WeeklyWorker

18.05.2017

The roads to Wannacry

How could an amateurish cyber-attack take down the NHS? Paul Demarty dons his black hat

There is enough going on in the world, but in the last week the news has been dominated by the spread of a particularly aggressive computer worm, now known as Wannacry.

The worm is a form of ‘ransomware’: once it shows up on a vulnerable computer, it encrypts files, denying users access to them, and eventually places its demand - buy $300 worth of the pseudonymous crypto-currency, Bitcoin, and send it to a particular address. Ransomware is not new, and nor are worms (malicious software, or malware, that spreads itself over a network) - nor, for that matter, are ransomware worms. What is unprecedented is the scale of the damage. Industrial plants, hospitals, cellular telecom networks, military hardware and countless individual users have been affected.
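
For the technically curious, the core trick is nothing exotic: ordinary symmetric encryption, with the criminals holding the only key. A minimal sketch in Python - using the third-party cryptography library, with a scratch file and a placeholder address standing in for the real thing - might look like this:

```python
# Illustration of the ransomware primitive: encrypt a file with a
# symmetric key, so that without the key the contents are unrecoverable.
from cryptography.fernet import Fernet

path = "sample.txt"                      # scratch file for the demonstration
with open(path, "w") as f:
    f.write("patient records, say\n")

key = Fernet.generate_key()              # real criminals keep the only copy
cipher = Fernet(key)

with open(path, "rb") as f:
    plaintext = f.read()
with open(path, "wb") as f:
    f.write(cipher.encrypt(plaintext))   # file now unreadable without the key

with open("READ_ME.txt", "w") as f:      # the demand
    f.write("Send $300 in Bitcoin to <address> to recover your files.\n")
```

Wannacry’s actual scheme is reportedly more elaborate - a fresh key per file, wrapped with the attackers’ public key so that only they can unwrap it - but the principle is just this.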

Eyebrows have been raised at, among other things, how little money the criminals have collected so far (according to people watching the Bitcoin transactions, only around $30,000). This peculiarity is in the end a matter of unprofessionalism. Only atomised individuals will pay up: the government or Telefonica or the National Health Service will ‘call IT’. Once people hear about the worm on the news, meanwhile, they will wait for ‘something to be done’ rather than pony up if they are affected.
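
That figure can be checked at all because Bitcoin is not anonymous in the way the popular imagination has it: every payment to the ransom addresses sits on a public ledger. A sketch of the tallying, assuming a hypothetical block-explorer API and placeholder addresses (the real worm hard-coded a handful, which observers simply watched):

```python
# Tally payments to known ransom addresses via a public block explorer.
# The endpoint, its response field and the addresses are placeholders -
# assumptions for illustration, not the real ones.
import requests

EXPLORER = "https://explorer.example/api/address/"   # hypothetical API
RANSOM_ADDRESSES = ["1PlaceholderAddressA", "1PlaceholderAddressB"]

total_satoshi = 0
for addr in RANSOM_ADDRESSES:
    data = requests.get(EXPLORER + addr, timeout=10).json()
    total_satoshi += data["total_received"]          # assumed field name

print(f"Collected so far: {total_satoshi / 1e8:.2f} BTC")
```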

For these reasons - and because hitting big targets invites vigorous international police attention - serious cyber-criminals will typically make some effort to deduce whether they are working in a large corporate or government network. No such effort was made here. The worm also had a ‘kill-switch’ mechanism, a crude attempt to evade analysis by security experts; one such researcher was able to halt the spread of Wannacry globally merely by registering a gibberish domain name. The story, in any case, is the sheer vastness of the damage; and behind it lies the more interesting one of how we came to be so vulnerable. For Wannacry stands at the intersection of many roads in information security, and offers a glimpse of the dangerous world we now inhabit - a creation not primarily of technology, but of decaying society.
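
The kill-switch mechanism is worth spelling out. Before spreading, the worm tried to contact a long nonsense domain; if the domain answered, the worm assumed it was inside a researcher’s sandbox (which fakes network replies in order to observe malware) and shut itself down. Registering the real domain therefore made the whole internet look like a sandbox. A sketch of the logic, with a placeholder standing in for the actual domain:

```python
# Sketch of Wannacry's kill-switch logic. The domain below is a
# placeholder: the real worm hard-coded one long gibberish name.
import socket
import sys

KILL_SWITCH_DOMAIN = "gibberish-killswitch.example"  # placeholder

def domain_answers(name: str) -> bool:
    try:
        socket.gethostbyname(name)   # does anyone answer to this name?
        return True
    except socket.gaierror:
        return False

if domain_answers(KILL_SWITCH_DOMAIN):
    # An answer suggests a sandboxed analysis environment - or, once the
    # domain was actually registered, the real internet. Either way: stop.
    sys.exit(0)

# ... otherwise the worm would proceed to spread and encrypt.
```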

Black hats

Our first story is the most general - about the rise of malicious software itself.

Before the late 1970s, there was little point writing about viruses, worms, Trojan horses or anything else. There were scarcely more than a few thousand computers the world over. Most of them had no hard drives or other means of permanent data storage. The only way to run a program was to punch it onto a deck of cards, and convince the operators of the computer - which in those days would be a vast, building-sized monstrosity - to run it: a process that could take several hours for any serious job. Even when timesharing computers became available, obviating the need to talk to the operators, access was limited to academics, students and a small coterie of white-collar workers in industry. Malicious software, in that context, could consist of little more than an undergraduate’s prank.

In 1977, the Commodore PET was released, the first personal computer aimed at anything like the general consumer, and in the subsequent decade and a half, the number of PCs exploded. The Apple II and Commodore 64 sold millions. IBM, which dominated the ‘big iron’ market of enormous business mainframes, inadvertently created a machine which could be copied at low cost within the letter of the law, and the availability of cheap IBM clones - combined with huge advances in processing power and storage - put computers in the hands of millions of home users and white-collar workers alike. Home computers were no longer exorbitantly priced, but cheap; they were no longer toys, but useful, and increasingly indispensable.

It is therefore hardly surprising that the first serious malware attacks occurred in the middle to late 1980s. Two brothers in Pakistan created the first virus for the IBM PC in 1986, as an anti-piracy measure. The Morris worm in 1988 became the first major outbreak of malware on the internet, still then limited to government and academia. Though not malicious in intent, the worm caused enormous damage by contemporary standards, because its author allowed computers to be ‘infected’ multiple times, eventually causing them to grind to a halt when processing power was exhausted. Ransomware followed in 1989, when a mentally unstable evolutionary biologist distributed a stand-alone malicious program, purporting to be educational material about HIV/Aids, that scrambled all the user’s filenames until they paid up - apparently to raise money for research into the disease. (The author, Joseph Popp, was found unfit to stand trial - no such luck for Robert Tappan Morris of the eponymous worm, who was sentenced to community service and three years of probation.)
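
The Morris worm’s fatal flaw is instructive. As the post-mortems describe it, the worm asked each machine whether a copy was already running - but, to stop administrators defeating it with a faked ‘yes’, it reinfected anyway roughly one time in seven. A sketch of that one decision, which is all it took to choke a network:

```python
import random

def should_infect(already_infected: bool) -> bool:
    # The worm checked for an existing copy, but - to defeat
    # administrators faking the 'already infected' answer - it
    # pressed on regardless about one time in seven. Well-connected
    # machines, probed again and again, accumulated copies until
    # their processing power was exhausted.
    if not already_infected:
        return True
    return random.randrange(7) == 0
```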

Since that time, malware has become ‘serious business’. There are now billions, not millions, of computer users, and probably trillions of computers - most so small that we do not think of them as computers. Cars are connected to the internet, and so (apparently) are X-ray machines and the like. Take all this into account, and only the most remote, pre-modern agricultural corners of the earth and extant hunter-gatherers can pass their lives untouched by Alan Turing’s inventions.

Thus both the potential rewards of cyber-crime and access to the things needed to commit it (at this point little more than a computer and the internet) have grown exponentially. The vigilante biologists and curious undergraduate scamps are mostly gone, or at least eclipsed by Russian mafiosi and the like. A permanent arms race exists between the criminal underworld and the information security professionals who attempt to prevent attacks - the so-called ‘black hats’ and ‘white hats’ among hackers.

For your own good

There are also those who fit neatly into neither category - the ‘grey hats’, who break others’ IT security, within the letter of the law, for self-interested motives.

Here we meet Wannacry’s origin story. Some time ago - no later than mid-2013, but possibly earlier - one or more security researchers found an alarming vulnerability in all extant versions of Microsoft’s Windows operating system. The essence of the matter is this: one of Windows’ features is to make it easy to share files between computers linked by a network (the protocol involved is known as SMB). As is drearily often the case with Windows, it was altogether too easy, and a loophole in the code was exploitable by criminals, spies and other undesirables. Here is what is supposed to happen under such circumstances: the researcher approaches Microsoft discreetly, through channels set up specifically for the purpose, with a demonstration of the vulnerability that Microsoft’s engineers can reproduce. If it is agreed that there is a problem, Microsoft will even pay you for your time.
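
That file-sharing service listens on a well-known network port - 445 - and this is how such a worm finds fresh victims: probe an address and, if the port answers, attempt the exploit. A sketch of the probing step alone, the reconnaissance half of the operation (the address below is a reserved documentation-range placeholder):

```python
# Probe whether a host exposes the Windows file-sharing service.
# Port 445 is the standard SMB port; the address is a placeholder.
import socket

def smb_port_open(host: str, timeout: float = 2.0) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, 445)) == 0   # 0 means 'connected'

if smb_port_open("192.0.2.1"):   # documentation address, for illustration
    print("SMB reachable - here a worm would try its exploit")
```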

The researchers concerned did not do this. Why? Because they work for the National Security Agency, exposed so sensationally by Edward Snowden as having engaged in baroque levels of snooping on American citizens and foreign heads of state, and as systematically attacking every privacy measure invented by that most notoriously paranoid of castes - programmers. We knew already from the Snowden leak that the NSA was ‘hoarding vulnerabilities’: that is, keeping its discoveries secret in case they were to come in handy later. This was all fine, as it was for the good of the ‘free world’ and the NSA took security very seriously - it’s right there in the name! - and so there was never any chance at all that this material would fall into the wrong hands. Of course not. In any case, the NSA created an exploit for this weakness in the Windows file-sharing code. They called it EternalBlue.

Fast forward to the spring of 2017. In March, Microsoft published its usual batch of security updates for the last three major versions of Windows, including a fix for the flaw that EternalBlue exploits. We do not yet know how Microsoft came to discover it, but we cannot exclude the possibility that the NSA came clean at last; for a month later a group of hackers calling itself the Shadow Brokers published a new tranche of leaked NSA material, possibly originating from the Russian security services (but who knows?), including EternalBlue itself and similar hoarded exploits. It took less than a month after that for the Wannacry attack to take place.

If there is any one thing to blame for this fiasco, then, it is the NSA - and, more broadly, the increasing obeisance of capitalist governments towards the ‘securocracy’. The popular image of this caste is informed by The X-Files: the hyper-organised secret state, powerful to an unimaginable extent, always one step ahead of everyone. In truth, it is like any bureaucracy - riddled with internecine strife, barely functional and dominated by pathological incentives. To wit: as a cyberspook, you operate in a world in which a great many people use Windows. Say you discover a fatal security flaw. If you keep it to yourself, you can use it; if you tell Microsoft, it is fixed, and nobody can use it. What do you do? Why, keep it to yourself - that is your competitive edge.

Yet this is the result - widespread chaos; and who knows how many more times this will happen? Security measures can only ever go so far: the NSA needs, in the end, to trust its own people, and the more malignant and insane its activities, the greater the number of whistleblowers. What, then, even of America’s precious ‘national security’? Some ‘experts’ are pinning Wannacry on the North Korean regime - I am sceptical, though Kim’s goon-boys could have done it, had they wanted to; and, regardless, I do take seriously the idea that the leak was Russian in origin. For the time being, cyber-espionage is cheap compared to the alternatives - the playing field is more level, and therefore America is more vulnerable. In statised, declining capitalism, the state comes to depend in ever greater part on its spooks, and so is too afraid to rein them in or submit their activities to meaningful oversight. The paradoxical consequence is that the local incentives of these all-too-human operatives run riot, resulting in disasters such as this - potentially far more destructive to American or British interests than any lunatic with a suicide belt.

Computer says no

There is another, more subtle aspect to all this, which is the somewhat fetishistic attitude to computers common in contemporary society - overlapping with, but not identical to, the fetishism of the commodity well known to Marxists.

One of the brief shafts of light in the turgid gloaming of Capital volume 2 comes when Marx discusses the reproduction of the productive forces, and notes with acid amusement the general impression among worthies of the mid-19th century, as exemplified by star-struck speakers in parliamentary debates, that the railroads would never have to be replaced. They were, after all, made out of metal, which is really hard. Of course, nothing of the sort was true, as those of us dependent on London’s dysfunctional commuter lines will readily tell you. Much popular frustration with computers comes down to the fact that people make essentially the same mistake. They view the computer as a thing; in fact, it is an intricate arrangement of components, with a bewilderingly complex and constantly changing arrangement of software installed on it. Thus the temperamental nature of our PCs, phones and even microwave ovens vexes us, by frustrating our fetishised understanding of the thing before us (‘I don’t understand - it was working just fine yesterday ...’).

What sort of thing do we view it as? I have already used the word ‘temperamental’, without thinking: we view it as something like a person - but a person with great power over us. We think of the great comedy sketch in Little Britain, where a passive-aggressive bank clerk can only communicate to the outside world through her office PC, whose answer is always the same - “Computer says no.” The computer stands in, first of all, for the internet, the digital mega-machine: it is, in a word, a god; and an Old Testament sort of god, always saying no ... It is ‘automation’ that puts people out of work, not the pursuit of profit; it is ‘artificial intelligence’ that will solve sundry social problems, not the collective human intelligence that actually drives AI.

Thus the peculiar character of the mainstream coverage of all this: a gang of criminals is threatening the NHS, we are told. As if the world’s computer systems were a transcendent being, outside and beyond the causality of narrow material life. As if the NHS’s problems were not abundantly human; as if a paltry £5.5 million line item for IT upgrades - chicken feed at twice the price (a necessary adjustment, given the tendency of IT budgets to inflate ...) - capriciously denied by the repugnant George Osborne, would not have spared it the excitement of the last week. As if it were all a matter of heretics defying the plan of an omnibenevolent god.

In truth, this is a particular form of the appearance of the machine as an alien power over the worker and, ultimately - as we have already hinted - of the tendency, identified by Feuerbach, for humans to find themselves more comprehensible in alienated form, as gods of one sort or another. We should not be surprised, in a capitalist society, to find people willing to defy the law for a quick buck, or a quick Bitcoin. “We’re a big, rough, rich, wild people,” Raymond Chandler wrote of America, “and crime is the price we pay for it, and organised crime is the price we pay for organisation.”

The inability to properly understand the dynamic nature of computer systems - particularly when it comes to the complex and frequently esoteric matter of security - is deeply worrying, in an age already beset with perilous surprises.

paul.demarty@weeklyworker.co.uk