Monkey see, monkey do
Revolution or vapourware? Paul Demarty assesses brain-computer interfaces and Elon Musk’s hype
There has been a renewed wave of attention to so-called brain-computer interfaces (BCIs) in recent years. This is due, no doubt, in part to significant advances in the field, but still more to the identity of their latest major proponent: Elon Musk.
In his inexhaustible enthusiasm for making the tropes of pulp science fiction a reality, Musk founded yet another of his many companies: Neuralink. The dream is to develop a workable, commercially viable system that allows people to control devices by the power of thought, via an intracranial electronic sensor. He claimed, a few years ago, that Neuralink had successfully installed one of its chips in the brain of a pig, and - with that slightly chilling billionaire whimsy of his - more recently announced that he had, in the same way, got a monkey to play video games. And now the company has implanted an electrode array into a human brain.
It is difficult, when Musk’s name is involved, to suppress a certain scepticism. Many of his grandiose ventures have failed. His big idea for mass transit - the ‘hyperloop’ - has been attempted by many bright young things the world over (he did not claim any monopoly over it, for once), all of which have failed, and none of which have ever looked like a better bet than old-fashioned high-speed rail. SpaceX makes its money from military contracts, doing boring things like launching satellites, and - we confidently predict - will never result in a Mars colony.
BCIs, however, are a longer-standing field. Serious research began, on the Defense Advanced Research Projects Agency (Darpa) dime, half a century ago (Darpa - perhaps the paramount example of the so-called ‘entrepreneurial state’ - has a far better record of turning science fiction into science fact than any number of two-bit Edison wannabes like Musk). Over time, some promising applications have been found, notably in medical treatment.
It is not hard to think of examples. A BCI that allowed fine motor control of a prosthetic limb would be an excellent addition to the surgeon’s arsenal, and - supposing such things were available to the general population - could potentially improve the lives of many millions, given the human species’ regrettable habit of maiming its members in frequent and large wars. Or we can picture the late Stephen Hawking, using the last of his motor functions to select words from a list of a few thousand to be spoken by a speech synthesiser: could we produce a sensor accurate enough, and a software system sophisticated enough, to turn brain activity into fluent natural language?
Perhaps, perhaps … We are, unsurprisingly, a lot closer to the first of those, where the desired outputs are simpler. There are major technical challenges at both ends of the pipeline: creating sensors that can be safely and durably integrated with the human brain, yet remain close enough to the action to reliably pick up electrical signals sometimes measured in microvolts, is a formidable difficulty. Interpreting an endless stream of such signals and mapping them onto human intent is another. There is a great deal of excitement about advances in machine learning (ML), and what it might be able to do on the output side. Well-trained ML models, after all, take a bunch of meaningless stuff and impose meaning on it by way of known historical examples; advanced medical technologies seem a more profitable use of such methods than asking ChatGPT to write a sonnet about Ant-Man.
An interesting article in the Financial Times interviews several people working in the field about what ML has achieved recently, and about the problems that excessive hype can bring.1 It is also oddly paradigmatic of the bourgeois media’s technology coverage. A great deal of space is given over to (admittedly interesting) portraits of the lines of research these people are pursuing (or claim to be pursuing). In spite of the negative spin, Musk looms absurdly large over proceedings. One interviewee claims, no doubt accurately, that none of the great breakthroughs claimed by Neuralink is actually novel, though she concedes that its work is “state of the art”. (It is certainly not the first outfit to implant a sensor into a human brain.) Another worried about “a huge weight of over-expectations” falling on the field. Something like that is - at least in computer folklore - supposed to have caused the ‘AI winter’ of the 1980s and 90s.
Relegated to our peripheral vision is the social context of the technology. It is notable that all but two authorities quoted are working in private companies (one welcomed the fact that Musk had “really put a spotlight on this field and it’s bringing the capital in”). Yet by placing the focus entirely on the tech, the article is extremely typical. There are two other aspects to the question we might want to examine: the conditions under which this development is undertaken; and the ideological motors driving it.
So far as the first of these is concerned, we have already noted that this is yet another example of the private sector sweeping in at the last minute and taking credit for developments largely undertaken by the state (and, as is often the case, by the in-house and contracted boffins of the armed wing of the state). The incentives at work there, when it comes to dealing with your intrepid journalist, are very different. Military discipline abhors leaks, and generally prefers to preserve its competitive advantages in the interests of strategic or even battlefield surprise. Earlier research was not exactly done in secret - papers were published, conferences held, and so on - but there was no need to shout it from the rooftops. The excitement was limited to professional and lay science nerds.
The chief executive officer, especially the start-up CEO, has instead every need to make a big performance out of innovation. Musk has made a career of it, as we have noted; the mini-Musks do the same. The gulf between biotech bullshit and reality has sometimes been known to grow to criminal dimensions - as in the case of Elizabeth Holmes, whose company, Theranos, imploded when its “revolutionary” blood-testing technology proved to be wholly fraudulent, landing her and certain consiglieri in prison. Less dramatically, the thing operates like a ‘Nigerian prince’ scam: an extraordinary and lucrative breakthrough is around the corner; we need only one more round of venture capital funding … In the new, VC-unfriendly macro environment, there is still more incentive to be as loud as possible, as companies squabble for funding like baby birds screaming for a worm from their mother.
In most sectors of the tech industry, this sort of thing is mostly harmless - an illustration of the old proverb that a fool is easily parted from his money (unless he can sell on to a greater fool). When medical applications are involved, more troubling issues arise. Theranos, after all, sent bogus test results back to actual patients, resulting in obvious harm to them. The clearest ethical dilemma on the BCI front is experimentation. There was a brief round of Neuralink news when it was revealed that their tests had killed 1,500 animals in their hurry to market, including many of those Pong-playing monkeys, possibly violating animal welfare laws in the States.
Experiments on animals of this sort, public or private, are seldom free of the taint of cruelty; those who support such tests do so in the name of the greater good of medical breakthroughs for humans. Granting that, however, there is still the problem of human experimentation. The incentives are the same, although the penalties for regulatory violations are obviously greater. Do we trust Musk’s hirelings to fiddle with people’s brains? Do we trust his competitors? Do we trust the bourgeois regulators and justice system to suitably punish negligence?
That depends on what the stakes are for the people in charge. Beyond the narrow pecuniary interests, there are, of course, the ideological drivers.
Neuralink, like its competitors, leads its charm offensive with the revolutionary medical technologies we discussed above, as well they might. But nobody could accuse Elon Musk of being a private man. For him, the endgame is a seamless interface between the meat-world of embodied human identity and the digital one. The stakes are little short of a back-road to immortality.
He is not alone. Bryan Johnson - another tech billionaire - has recently attracted attention for his individual quest for perpetual youth, up to and including regular plasma transfusions from his teenage son. Larry Ellison, another, invests heavily in research into prolonging life: his, presumably. (As one of his former employees famously quipped, “Do not make the mistake of anthropomorphising Larry Ellison.”) Many others look up to the eccentric futurologist, Ray Kurzweil, who expects death to be abolished - for some - in his lifetime, although the clock is ticking for him.
There is an overlap with the strange subculture that has built up around the idea of a coming ‘singularity’, in which artificial intelligence will suddenly and rapidly overtake human capacities and thereby sweep away all prior antagonisms - whether by merging with us and granting us immortality, or by wiping us out to remove obstacles to whatever stupid, petty aim the AI was given in advance (the usual example is maximising the output of paperclip factories, thanks to a famous thought experiment of the similarly minded Nick Bostrom).
With the recent success of large language models, these debates have once more penetrated the mainstream. Most versions of the AI apocalypse are of the fear-mongering sort, and so various worthies urge greater regulation of the field. Many are united by two utilitarian philosophies, called effective altruism (the idea that philanthropy must be guided by evidence of empirical effectiveness) and long-termism (the idea that future pleasure and pain count for as much as present pleasure and pain). Thus ‘altruistic’ billionaires are justified in spending their wealth on attempting to prevent unlikely long-term scenarios, since a 1% chance of 100 million people dying is worth more utility points than the certainty of a thousand people dying.
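That long-termist arithmetic can be stated baldly. The figures below are the article’s own illustration, not real risk estimates:

```python
# The expected-utility calculus behind long-termist giving, using the
# illustrative figures from the text (not real risk estimates).
def expected_deaths(probability, deaths):
    # Expected value: probability of the outcome times its death toll.
    return probability * deaths

speculative_catastrophe = expected_deaths(0.01, 100_000_000)  # ~1 million expected deaths
certain_harm = expected_deaths(1.0, 1_000)                    # 1,000 certain deaths

# The calculus says: fund prevention of the speculative catastrophe.
print(speculative_catastrophe > certain_harm)  # → True
```

Once everything is flattened into a single expected-deaths figure, a one-in-a-hundred doomsday outweighs a certain, present harm by three orders of magnitude - which is precisely the move the ‘altruistic’ billionaire needs.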
Yet this outlook now has a rival - ‘effective accelerationism’, in which the upside of such eschatological technological changes is highlighted, with a corresponding moral duty to accelerate, rather than arrest, the pace of development. It has found a champion in the person of Marc Andreessen, a prominent venture capitalist and the same sort of narcissistic bloviator as Musk.
Why are these men not horrified by their own dreams? There is an underlying factor: the legitimating ideology of the tech mogul in particular - that it is his purpose to conquer new frontiers and, specifically, not to be intimidated by the small concerns of small people. It is an ideology traceable back to Friedrich Nietzsche, but more directly to Ayn Rand, whose hyper-capitalist utopianism openly scorned altruism (unlike, say, the Austrian economists, who believed their theories to be the best for everyone).
The great satire of this outlook is not a book or a film, but - suitably - a computer game: 2007’s classic BioShock, in which the player is stranded in the ruins of Rapture, a Randian utopia at the bottom of the ocean, whose society has fallen apart as it stratified itself through increasingly horrific body modifications. The player must fight their way out through the hordes of post-human mutants that roam the swish art-deco setting. The BCI-AI singularity is a strange objective - one that would only occur to people who could play BioShock and earnestly wish it were real.
Yet the capitalists are quite as trapped by capitalism as the working class - albeit in very comfortable house arrest - and none more so than these self-absorbed tech types. The laws of the system, even the mere laws of popular esteem, constrain their actions. Their money is in the gift of wholly bureaucratised institutional investors, or else state largesse. Their ‘inventions’ are purloined from academia. Every limitation is an intolerable insult. They long for freedom, and the dignity due to them. (“Is a man not entitled to the sweat of his brow?” Rapture’s founder asks the player rhetorically at the beginning of BioShock. Randianism has always had, in practice, the whiff of the worst kind of slave morality.)
To them, we are mere lab monkeys, to be played with on the way to some unsurpassable horizon of self-actualisation. Can we really object, Nietzsche asked, to the sacrifice of thousands so that one overman might arise? Yet the overman never arises - instead, we get Musk and Andreessen, dressing up for the part in the shadow of BlackRock, the Pentagon and the Public Investment Fund.