WeeklyWorker

06.04.2023

Diabolus ex machina

With Elon Musk leading the way in expressing worries about AI, Paul Demarty explains what is really going on

Having touted it as the next big thing (the next 10 big things, really) for an age, the tech elite - or part of it at least - seems to be getting cold feet when it comes to artificial intelligence.

This may seem a strange moment for such a psychological reversal. After all, the last year or so has seen a series of very public successes in the market - a wave of AI systems that can generate images and text with a human verisimilitude missing heretofore. The most recent is the GPT-4 large language model (LLM) developed by OpenAI, and available as part of its ChatGPT AI chatbot, with which we were already having plenty of fun before GPT-4 landed.

Yet we read with interest the open letter published by the Future of Life Institute, and signed by Elon Musk, Steve Wozniak and a host of other marquee names of Silicon Valley - and beyond (pop-science speculator Yuval Noah Harari being the biggest non-tech name). They write:

AI systems with human-competitive intelligence can pose profound risks to society and humanity … As stated in the widely-endorsed Asilomar AI principles, “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.” Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no-one - not even their creators - can understand, predict or reliably control.1

The concrete proposal is a six-month hiatus on all development of advanced AI - until, as Donald Trump might say, we can figure out what the hell is going on.

Long-termism

So what is going on here, exactly? A cynical answer springs to mind. Musk, after all, is a co-founder of OpenAI, along with CEO Sam Altman; originally set up as a non-profit, it abandoned that status in 2019. It now has capped-profit status - it is forbidden to clear more than 100 times the investment it collects, which is barely a cap at all (it would certainly count as a nice payday for any venture capitalist …). Given that the GPT line of LLMs is such a stunning success, the pause might look a bit too much like OpenAI, with its partners, pulling up the rope-ladder to competitors. Certainly it is unlikely to be received with any enthusiasm in Google’s Mountain View headquarters, as the beleaguered giant attempts to catch up with Musk, Altman and their friends.

There is an alternative, more generous spin one could put on it. The non-profit that published the letter (largely funded by Musk) is, as we noted, called the Future of Life Institute (FLI). This branding marks it out immediately as a product of what is called ‘long-termism’ - in essence a particular approach to utilitarian ethics, which has become a guiding light for big-tech philanthropy.

The idea is this: suppose you have a straight, exclusive choice between preventing harm to one person and preventing harm to several people. This is usually framed as the ‘trolley problem’ - given an out-of-control streetcar, you have the choice of allowing it to plough into a family of five on its current course, or deliberately diverting it towards an individual. The staying power of this thought experiment rests on the fact that it usefully distinguishes different schools of moral philosophy and gives them something to argue about until the heat death of the universe.

We will not concern ourselves with the arguments, but merely with the unavoidable, utilitarian2 conclusion - that you must flip the switch. But suppose you knew that the individual on the other line was the sole caregiver for a family of ten children, and that, if this person died, those ten children would surely starve to death. Would it not then be right, from the utilitarian point of view, to leave the streetcar on its present course?

The long-termist mutation is simply to take this a step further: what about future generations? Should their pain and pleasure not be incorporated into our calculations? Taken on its own, this is perfectly reasonable: most people, whether parties to the endless debates of moral philosophers or (in Alasdair MacIntyre’s phrase) “plain unphilosophical persons”, would agree that we should not act in wilful disregard of our world’s future. Conceiving of this duty in strictly utilitarian terms, however, leads to the conclusion that (for example) a 5% chance of human extinction in 2300 AD is a more morally urgent matter to deal with than the absolute certainty that millions of people alive today will not be alive this time next year, thanks to starvation and preventable diseases. Most people take this to be a moral absurdity - but not a certain faction of moral philosophers (and not a certain kind of tech billionaire philanthropist).
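
To make the arithmetic implicit in that conclusion explicit, here is a minimal illustrative sketch in Python (all the figures - the annual death toll, the size of the assumed future population - are hypothetical assumptions of mine, not numbers the long-termists themselves publish): multiply even a tiny probability of extinction by a vast enough imagined future population and the expected loss swamps any present-day certainty.

# Illustrative only: hypothetical numbers showing how a naive expected-value
# calculus generates the long-termist conclusion described above.

present_deaths = 9_000_000        # assumed: people certain to die this year of hunger and preventable disease
future_population = 10**13       # assumed: potential future humans, should the species survive
extinction_probability = 0.05    # the 5% chance of extinction by 2300 AD

# Expected lives lost under each heading, treating every life as one unit of utility.
expected_loss_present = 1.0 * present_deaths
expected_loss_extinction = extinction_probability * future_population

print(f"Certain present-day deaths:      {expected_loss_present:,.0f}")
print(f"Expected deaths from extinction: {expected_loss_extinction:,.0f}")

# On this arithmetic the speculative risk 'outweighs' the present suffering by
# several orders of magnitude - which is precisely the absurdity noted above.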

Indeed, long-termism - and the wider ‘effective altruism’ approach to philanthropy - has acquired a certain notoriety recently, thanks to the downfall of one of its most prominent advocates in the tech business, Sam Bankman-Fried, formerly a billionaire operator in the cryptocurrency world, now awaiting trial on multiple fraud charges. Bankman-Fried ran a crypto exchange called FTX - essentially a stock-trading platform for crypto tokens - and a kind of crypto hedge fund called Alameda Research. At a certain point, Alameda started dipping into client accounts at FTX, and when the whole sector went kablooey, it took FTX with it.

The fall of SBF, as he is known more widely, illustrates the basic problem with things like long-termism and effective altruism. They are based on moral calculations, but even Jeremy Bentham’s original hedonic calculus presented an impossible task, given the complexity of causal interactions in the world. Trying to assess the relative ‘price’ of far-future risks is hubris to the point of comedy. Do we expect philanthropists like SBF - a man barely in his 30s, who has spent his entire adult life working in stock trading and then a crypto grift in the Bahamas, lubricated all the way along by amphetamines - to make rational calculations of the rationally incalculable? No - but we expect that they will think themselves capable, given the intrinsic megalomania of capitalist success, catalysed by Adderall.

Applied to AI, the long-termist scaremongering devolves into a peculiar kind of collective narcissism on the part of the tech moguls, and even the engineers and data scientists who have signed the FLI letter. To imagine yourself causing the apocalypse is a perverse form of self-aggrandisement; and it naturally follows that only you have the power to stop it.

So what of these doomsday scenarios? Outside of pop culture, the most common image of things getting out of control in contemporary discussion of AI is the “paperclip maximiser” - a thought experiment by the long-termist pioneer, Nick Bostrom.

Imagine general artificial intelligence is achieved, implausibly, by a company that manufactures paperclips. The AI is instructed to manufacture as many paperclips as possible. It rapidly optimises the company’s factories, increasing output tenfold. But then it runs out of raw materials. No matter: it gambles the company’s money on the stock market, and achieves controlling stakes in all the world’s steel companies, sending the steel to paperclip factories. But then the steel companies run out of iron ore; so it does the same trick again, taking over the mining companies to direct all the iron ore to the steel mills. Finally, the mines themselves are exhausted; but humans have traces of iron in their blood, and the last desultory batch of paperclips is produced by exterminating the only beings that could possibly have had any use for them.

This is, in truth, a far more interesting thought experiment than its ‘long-termist’ pedigree might suggest. Unlike the Skynet system responsible for the events of the Terminator franchise, the paperclip maximiser is an exercise in fractal banality. Its aim is meaningless - even before clerical work was fully computerised, paperclips mostly grouped documents that only existed thanks to empty, bureaucratic rationality. It is a need that has been met (with great technical ingenuity and total moral indifference) by human intelligence heretofore.

What is wrong with Bostrom’s conjecture is not the idea that AI would be morally capable of ending life on earth to produce paperclips, but the implication that humans would not be. Not for nothing is the greatest modern satire of corporate life, The office, set in the midst of the stationery business - a business producing commodities for itself in a perfect circle. Capitalist businesses - even the notionally ‘productive’ ones - are already paperclip maximisers, endlessly rationalising the implementation of irrational objectives.

That is Bostrom’s limitation, and that of Musk, the two Sams - Altman and Bankman-Fried - and the rest of the ‘long-termist’ confraternity. However broad their historical horizon, they cannot imagine a better way to organise production than a purely anarchic pursuit of profit. ‘Long-termism’ tacitly assumes the infinite short-termism of capitalist production.

Military-industrial

What is at issue, in other words, is not a qualitatively new existential threat to humankind, but a (possibly very large) quantitative shift in the potency of existing dangers. We have succeeded in making human administrators into morally indifferent ‘maximisers’ and, however much greater the ‘maximisation’ capabilities of AI, it will only ever pursue the goals set by its human masters - who are themselves today mastered by the anonymous mechanisms of capitalist production. Likewise, Skynet and similar military-industrial AI apocalypses are projections of the unlimited trend towards war (a deep pessimism about human self-destructiveness runs through James Cameron’s deceptively popcorn-friendly oeuvre).

This is true even of the more modest concrete consequences of the advances in LLMs, image generators and so forth that have given so much publicity to the field. It is not yet clear if jobs are disappearing on account of ChatGPT, but work certainly is, with jobs perhaps to follow. It can generate readable text in English and, presumably, other major languages, given a prompt. True, the model is almost comically unreliable in terms of factual accuracy, like the wider internet it feeds off, but it hardly stands out in that regard.

It seems to me that the bottom will likely fall out of the writing market as a result. Heretofore, humans have been employed in large numbers (albeit not for large salaries) to produce vast scads of low-quality copy, primarily for the purpose of gaming Google’s search algorithm. Truthfulness is wholly optional in this sort of writing. ChatGPT is also quite good at generating snippets of computer code in various programming languages. Predictions that software engineering is about to disappear as a profession seem a little overblown, but it is one more piece of downward pressure on the total number of jobs available, to add to the end of the low-interest-rate, free-money era. Advertising creatives and photographers of my acquaintance, meanwhile, are fearing the wholesale replacement of their professional niches with AI models of various kinds. The bottom line is that white-collar work is not safe from automation.

Humanity

Yet this is to a large extent capitalism pushing pre-existing malign potentialities by developing the forces of production (if you can call it that). To take the ‘search engine optimisation’ prose industry as an example: we begin with what is, from the grand human point of view, a wholly senseless activity (writing worse prose to impress an algorithm), and then automate it.

We now have an algorithm writing terrible prose to impress another algorithm. But the people who have done this until now (as I used to many years ago) were already fully subsumed and dominated by the machine. (It is worth remembering that the word ‘computer’ originally referred to a human who would be given sums to calculate mechanically.) In principle, the destruction of this ‘profession’ could free people up to do worthwhile writing - to become novelists, journalists or whatever. Instead, it hurls them onto the dole queue - and continues the worthless ‘production’. It throws out the baby, and keeps the bathwater.

All this is ultimately a function of capital’s ceaseless, autocannibalistic drive for self-expansion. Abstracted from that social background, the paperclip maximiser thought experiment has a perfectly simple solution: don’t invent the paperclip AI! The human species is well served - excessively so, indeed - in this area already; we no more need ‘PaperclipGPT’ than we need a new and more destructive generation of nuclear weapons.

Here, I suspect, I part company with many ‘Marxists’ of the broadly accelerationist or futurist stripe: on my account, Marxism proposes to organise production around human needs - both in terms of consumption and of the productive process itself, which should be maximally humane and democratic. This need not - and, given the stubborn reality of human biological nature, probably will not - mean ‘fully automated luxury communism’, but the real subsumption of the machine under labour.

More advanced algorithms and AI models should serve to simplify planning, enable robots to take care of unavoidably dirty and dangerous work, and free us humans up for a full social life, including a manageable amount of useful, rewarding labour undertaken in common. A person living in perfect post-human idleness, their limbic system forever tickled by machines of loving grace, is no more a rounded human individual than a wage-slave - although this is an understandable fantasy in an age that condemns some to relentless overwork and everyone else to degrading penury.


  1. futureoflife.org/open-letter/pause-giant-ai-experiments. The principles referred to are at: futureoflife.org/open-letter/ai-principles.

  2. Strictly speaking, this is act-utilitarianism, which evaluates each individual act according to some calculus of utility, as opposed to rule-utilitarianism, which chooses between different regimes of rules on the basis of utility in aggregate.