04.09.2025

Artificial intelligence, human flourishing
AI is, we are told, an investment bubble waiting to burst, but what role, if any, should AI play in socialist society? And what role does it play in today’s world? Paul Demarty explores the complex issues
Artificial intelligence continues to drive a lot of excitement - not only in the technology and business media, but also in the wider news ecosystem.
Arguments continue to rage about whether the novel technologies of our moment will lead to the apocalypse or merely a second dotcom bubble, about their impacts on the environment, human literacy and even general sanity (see the recent fracas over the decision to make ChatGPT less obsequious). Yet it is worth taking a moment to zoom out, and think about the overall relationship between these novel technologies and the communist project.
In order to get there, of course, we will have to talk about capitalism and artificial intelligence, particularly in the current situation where we are in the middle of an enormous hype-cycle about AI, which is kicking off a whole series of attendant controversies.
The current hype-cycle - at least the third over AI - really began with the public launch of ChatGPT nearly three years ago, made by the peculiar capped-profit company, OpenAI. It was the first of many applications of its type: a chatbot that displayed an uncanny intuition for the intent behind entered prompts, and proved an effective tool (with some caveats, which we will come to) for information retrieval, software code generation, document writing, machine translation and many other things besides. Competitors rushed out their own versions, sometimes rather too quickly, but consumers and businesses today can choose between ChatGPT, Google’s Gemini and Anthropic’s Claude, to name only the three most well-known examples.
This series of very impressive product launches has led to very grand predictions: that AI is going to completely reshape the world economy, that the next phase of great-power competition will be focused on winning the AI race, or indeed that we are on the cusp of machine superintelligence, for good or ill. To be sure, many of these predictions have come from the people selling the technology - a fact that, to my mind, has been little remarked on by a rather credulous media. I do not propose to discuss it in any depth, but I find the superintelligence stuff a little silly; yet there can be no doubt that at least some economic activity will be transformed by this family of novel technologies, given their broad applicability in different domains; and the same is true of the military competition angle.
Basics
To get a handle on how these changes will play out, we need to understand some basics, and we also need to consider the history.
First, let us talk about algorithms. The word is very common now, and usually refers to recommendation systems - you go on YouTube, and The Algorithm recommends to you some videos and, after you watch one, it recommends you another one. Algorithms thus seem quite mysterious, capricious beasts.
The basic idea is very simple, however. An algorithm is a series of repeatable steps a computer can take to turn some input data into some output data. The methods you learned in primary school for adding and multiplying numbers - carry the ten, all that - are algorithms. The input is the two numbers; the output is the sum or the product.
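To make the point concrete, here is that primary-school addition procedure written out as a short Python sketch - the code and the function name, add_by_hand, are mine, purely for illustration. The input is two numbers given as digit strings, the output is their sum, and the ‘carry the ten’ step appears explicitly:

```python
def add_by_hand(a: str, b: str) -> str:
    """Add two non-negative whole numbers given as decimal digit strings."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry = 0
    digits = []
    # Work from the rightmost column to the leftmost, carrying the ten.
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))
        carry = total // 10
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_by_hand("478", "256"))  # prints 734
```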
I said ‘a computer’ above, but until the invention of computers, as we know them today, the word simply meant a well-instructed human. And algorithms in this sense radically predate the computing machine. The word itself comes from the name of the 9th century Persian mathematician, al-Khwarizmi, who lived in Baghdad during the golden age and came up with a series of procedures for basic arithmetic with Arabic numerals (or Indian numerals, as the Arabs called them). Some even older algorithms are still in common use - Euclid’s algorithm for calculating the greatest common divisor of two numbers - which dates back to around 300BCE - is simple and relatively efficient, and is widely used, improbably, in the production of techno music.
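For the curious, Euclid’s procedure is short enough to quote in full as another illustrative Python sketch (the function name euclid_gcd is my own): keep replacing the larger number with the remainder of dividing it by the smaller, until the remainder is zero.

```python
def euclid_gcd(m: int, n: int) -> int:
    """Euclid's algorithm: the greatest common divisor of two positive integers."""
    while n != 0:
        # Replace (m, n) with (n, remainder of m divided by n).
        m, n = n, m % n
    return m

print(euclid_gcd(1071, 462))  # prints 21
```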
Back to YouTube. It is clear that this is an algorithm in the classic sense - there is an input (the data you send over the internet when you visit the site, and everything the site has ever learned about you) and an output (recommended videos). We understand, at least in a very abstract way, that a computer somewhere is executing a series of steps to get from the input to the output.
Yet in modern recommendation algorithms, we are not dealing with something like Euclid’s algorithm, where someone with basic knowledge of algebra can learn and readily understand how to apply it in a few minutes. These algorithms are very strange, because they were not created directly by people for other people to understand and use, but are themselves the output of far more complex and sophisticated software systems. There are a lot of definitions of AI out there, which are more or less useful for different purposes. To describe these modern AI systems we could do worse than ‘algorithms for making algorithms’.
History
This seems a good moment to look at the history of AI. Much of the theoretical groundwork of modern computing systems was accomplished in the 1930s and 40s, by founding figures like Alan Turing and Alonzo Church. (There are more distant predecessors, like the programmable Jacquard loom and the unsuccessful attempts of Charles Babbage and Ada Lovelace to produce a mechanical computer.) Turing, Church and others had remarkable insight into what a computer - if it existed - could and could not do. Turing, in particular, speculated about machine intelligence, and proposed the famous Turing test or ‘imitation game’. The idea was that if a machine could fool a human into believing it was a person, it would truly be seen to be intelligent.
The term ‘artificial intelligence’ was invented later, by the great computer scientist, John McCarthy. It immediately became a major research area in computer science, and it is worth noting that many of the key ideas underlying today’s AI systems were already current in the 1970s and 80s. That includes neural networks - which have exploded in use in the last 15 years - and generative pretraining (which is the ‘GP’ in GPT). Some applications were found at that stage, including voice recognition and early machine translation. A great hype-cycle began - and then crashed, when expectations were not met. Much of the 1990s is today described as the ‘AI winter’.
The basic problem was really a mismatch between the ideas and the available technology. We need, here, to talk about neural networks, which are really the core of the thing. Neural networks attempt to model the neurones in the brain. Given some input data, each ‘neurone’ generates an output; there may be many layers of such ‘neurones’ that feed off each other’s output and, finally, an overall output is obtained. This will be easier with an example. Suppose you have the goal of identifying whether a picture is of an orange. A neural network will pass the picture through its layers and, at the end, it will have calculated the probability that the picture is indeed of an orange.
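A toy sketch may help here. Everything in it - the two ‘features’, the weights, the function names - is invented for illustration, and real image-recognition networks are vastly larger. Suppose we boil a picture down to two numbers, how orange-coloured it is and how round it is; each ‘neurone’ takes a weighted sum of its inputs and squashes the result into a value between 0 and 1, and the layers feed into one another until a final probability comes out:

```python
import math

def sigmoid(x: float) -> float:
    # Squash any number into the range 0..1.
    return 1.0 / (1.0 + math.exp(-x))

def neurone(inputs, weights, bias):
    # A single 'neurone': weighted sum of inputs, then squash.
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

def orange_probability(colour_score: float, roundness: float) -> float:
    # Hidden layer: two neurones, each looking at both input features.
    h1 = neurone([colour_score, roundness], [4.0, 1.0], -2.0)
    h2 = neurone([colour_score, roundness], [1.0, 4.0], -2.0)
    # Output layer: one neurone combining the hidden layer's outputs.
    return neurone([h1, h2], [3.0, 3.0], -3.0)

print(orange_probability(0.9, 0.8))  # a high probability: probably an orange
print(orange_probability(0.1, 0.2))  # a low probability: probably not
```

The weights above were picked by hand to make the example work; finding such weights automatically is exactly what the training described next does.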
How? This is done through training - you feed the system endless examples of photos, some of oranges, some of other things. These are all labelled for the system ‘orange’ or ‘not orange’. Given this data, the system produces a neural network that it thinks can classify images into oranges and non-oranges. Then you give it a bunch of images that are not labelled and it classifies them. Where it picks the wrong answer, you tell it, and then the system can try to create a better model.
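Here is the same idea as a minimal training sketch - again entirely illustrative: the ‘pictures’ are just made-up pairs of numbers standing in for real image data. Labelled examples go in; wherever the model gets the answer wrong, its weights are nudged in the direction that shrinks the error; and, after enough passes, out comes a model that can classify an example it has never seen:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Labelled training data: ((colour score, roundness), 1 for 'orange', 0 for 'not orange').
examples = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.7), 1),
            ((0.2, 0.3), 0), ((0.1, 0.9), 0), ((0.3, 0.2), 0)]

weights, bias, learning_rate = [0.0, 0.0], 0.0, 0.5

for epoch in range(2000):
    for (x1, x2), label in examples:
        prediction = sigmoid(weights[0] * x1 + weights[1] * x2 + bias)
        error = prediction - label  # how wrong was the guess?
        # Nudge each weight in the direction that reduces the error.
        weights[0] -= learning_rate * error * x1
        weights[1] -= learning_rate * error * x2
        bias -= learning_rate * error

# An unlabelled example the model has never seen:
print(sigmoid(weights[0] * 0.85 + weights[1] * 0.75 + bias))  # close to 1: classified as an orange
```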
This process picked up the name ‘machine learning’ at some point, and it really is like human learning in some respects. We do learn the use of words, for instance, by hearing others use them in different cases and observing the results. The problem is where it differs - humans can learn remarkably quickly from very few examples. Training a machine to recognise oranges requires millions, even billions of photos. Neither the raw computing power nor the data storage existed in the 1980s to do this in anything more than a rudimentary way.
Much energy was expended on making the individual ‘neurones’ smarter, but that failed to make these systems really practical. What made the difference was the existence, by the end of the 2000s, of enormous pools of computing power, owned principally by the new generation of giant tech companies like Amazon and Google. Now you really could just throw data at the problem - and it worked. It improved search and recommendation engines, machine translation, and many other applications. It is easy to say that AI will change economic activity, in other words, because it already has.
A ceiling was, nonetheless, hit. Somewhat terrifyingly, the ceiling is that there is basically only so much data in the entire world. And that is where the generative pretraining comes in. This basically means automatically enriching the data in an initial phase, before the model is finally trained. Think of voice recognition here. You are sending some sound to your model: generative pretraining will fairly reliably be able to identify the parts of the recording that are actual speech, so the model will not be exhaustively checking the hum of the air-conditioning between words for meaning, with all due apologies to John Cage.
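By way of a deliberately crude illustration only - this is not how the big models are actually pretrained, and all the numbers are invented - one can imagine the ‘enrichment’ step in the voice-recognition example as something like the following, which keeps only the stretches of a recording loud enough to plausibly be speech:

```python
def probable_speech(samples, window=100, threshold=0.05):
    """Keep only the chunks of a recording whose average loudness clears a threshold."""
    kept = []
    for start in range(0, len(samples), window):
        chunk = samples[start:start + window]
        loudness = sum(abs(s) for s in chunk) / len(chunk)
        if loudness > threshold:
            kept.append(chunk)
    return kept

# Toy 'recording': quiet hum, then a louder burst of 'speech', then hum again.
recording = [0.01] * 300 + [0.4, -0.3, 0.5, -0.45] * 100 + [0.01] * 300
print(len(probable_speech(recording)))  # 4 chunks survive: only the loud middle section
```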
That, you may be relieved to hear, is the end of the technical content of this article. I think it is important to go through, because there is nothing fundamentally difficult going on, and so much discussion of AI today is overdazzled by the tech. It is pretty cool tech, no doubt; but part of what makes it cool is that it is built, improbably, out of quite simple primitives.
Economy
What role is AI playing in the contemporary economy? We have mentioned established uses in popular web technology. To this we must add the adoption of the technology by the military and intelligence apparatuses of the state (and their semi-autonomous contractors, of course). Innumerable examples could be listed, but under present circumstances the pertinent case is that of Israel, which widely uses AI in its ‘selection of targets’, such as it is. As in all developed societies, there is no Chinese wall between military and civilian uses of such technology. Take a consumer-market drone, after all, and strap a grenade to it, and you have a single-use bomber aircraft. Take an adtech algorithm that is supposed to feed you plausible adverts, and then slightly change how you interpret the resulting data, and you have a way of identifying targets for surveillance - or even assassination.
That last one works equally well in reverse, of course. Israel is very proud of its tech industry, but, when you take a closer look, it all seems to be leaking out of the Israel Defence Forces. Paradigmatic here is the famous Unit 8200 (quite justly famous really), which trains bright youngsters to undertake offensive cyberwarfare during their years of military service, and then spits them out as Silicon Valley entrepreneur types. Many Unit 8200 alumni have been absorbed, by way of mergers and acquisitions, into the great American tech firms. Yet this is no Israeli innovation: there was no clear line between the computer researchers I mentioned earlier and the US government. The internet itself is an invention of the research division of the US Department of Defense.
What about all the millions of jobs that are to be imminently automated, according to the industry’s prophets? I think it is worth deferring that question for a moment to discuss the role the AI boom is having in the tech industry and global political economy more broadly. That in turn requires some more history.
After both the dotcom bubble and the great crash of 2008, the response of governments and central banks - especially in the USA - was to cut interest rates, in the end to close to zero. The idea was to stimulate economic activity, which it sort of did, but the way this happened was a little peculiar. Much of the available investment capital in the world is concentrated in a few, quite passive institutional funds: pension funds, but also sovereign wealth funds that can be very large (for example, the Saudi public investment fund).
Such funds are typically quite risk-averse, and so buy up very safe assets - foremost among them US treasury bonds. But slashing the interest rate at the Fed means reducing the yields of treasuries. There was an awful lot of money sloshing around, in other words, that needed somewhere to go. (A lot of it went into esoteric derivatives based on the American mortgage market, but that is another story.) For our purposes we need to talk about venture capital.
Venture capital is a particular form of private equity investment. A VC fund will make a large number of investments, each individually quite modest, into high-risk opportunities. The fund makes money, in the end, if a small number of those investments cash out way above the money advanced; the simple fact that most of them will fail is priced into the model. Tech companies are an obvious outlet here: if a start-up succeeds, as Facebook did for example, in capturing a near-monopoly of a market by way of its innovations, then the upside is unlimited, and investors are happy.
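The arithmetic behind this is simple enough to set out in a few lines (all the figures below are invented for illustration): most investments go to zero, a few merely return their money, and a single big winner carries the whole fund.

```python
# Invented figures: a fund makes 50 investments of $1m each.
multiples = [0.0] * 40 + [1.0] * 9 + [100.0]  # return multiple on each investment
deployed = len(multiples) * 1.0               # $50m in
returned = sum(m * 1.0 for m in multiples)    # $109m out
print(f"${deployed:.0f}m deployed, ${returned:.0f}m returned "
      f"({returned / deployed:.1f}x) despite {multiples.count(0.0)} write-offs")
```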
If you were in the start-up world around 2010, the lifecycle of a successful company might look like this. In the beginning were a couple of people - usually young and rather uncultured men - working away in a roach-infested studio flat in San Francisco. They would build some cool little app, and it would get a little buzz in the tech press, and start generating a little revenue. At this point the company was ‘ramen profitable’ - because it made enough profit for the two founders to live on the cheapest instant noodles to be bought at Kroger.
Then, perhaps, a venture capitalist would throw them some of his play money. This was called ‘angel investment’ or ‘seed capital’. The company could grow to 10 or so employees. If it survived long enough, it could pitch some other VCs for some serious money: this was called the Series A, in that it was the first real interest shown by this pool of capital. That money would all go into maximising revenue growth, and if the numbers looked good, you could go for a Series B, which would be a much larger payment. By now the company would be hundreds-strong. There might be a Series C, but basically at that point the investors would want to get paid, and would be a significant voice on the board, so the company would be polished up for sale either to a larger company, or in an initial public offering.
By the middle of that decade, something strange was happening. There were Series Ds, Series Es, and even higher. The beautiful dream of the big ‘exit’ - acquisition or ‘initial public offering’ (IPO) - faded oddly into the background. After all, at that point, you would have to start making a profit, rather than merely making the top-line go up. Early investors could make money by selling on to later ones. And there was so much money! Big, stupid money, apparently agog at the genius of these spotty young ingenues, desperate for something to buy that was not a treasury bond.
This all slammed into a brick wall in 2022, after Russia invaded Ukraine, broad sanctions were imposed, inflation skyrocketed, and therefore the central banks pulled the only lever they had: raising interest rates. This was in the midst of one of the periodic cryptocurrency bubbles, which promptly burst (although the Trump administration’s comical corruption has led it to reinflate at present). The wider tech industry went through a period of retrenchment, with enormous layoffs at the biggest tech firms, and smaller ones throughout the ecosystem of VC-backed start-ups.
Good timing
The appearance of ChatGPT and the large language models therefore could not have come at a better time. The technology may or may not prove to be as revolutionary as claimed (if history is any judge, probably somewhere in between). What must be borne in mind is that the source of the hype is not primarily the technology. It is the need for the infinite money-spigot to be reopened, so that VCs and other tech capitalists can get back to a place of comfort.
The AI boom has one thing in common with the 2010s tech bubble (I think we can, in retrospect, call it a bubble). AI in its current form is not profitable. ChatGPT is a loss leader, and a spectacularly good one. The question is: where is the actual money to be made? Though the GPT approach significantly increases the efficiency of training, it is still astonishingly inefficient, compared to the average three-year-old. The environmental costs - in energy consumption, and in water consumption - are notorious.
Can this be fixed? Perhaps it can. The Chinese company, DeepSeek, released its R1 model earlier this year, which was notable for having been far cheaper to train than the incumbents’ models. DeepSeek achieved this not through some pathbreaking technical revolution, but by methodically optimising its programmes in ways that would be familiar to any working programmer in performance-critical fields - video games, operating systems, and so on. That is why it was so humiliating to OpenAI, Google and friends, who employ many such people: they had not even bothered to attempt such optimisation.
Suppose it can be unit-profitable - that is, let us say, that every prompt in ChatGPT somehow makes OpenAI money. Then we get to the ‘disappearing jobs’ part of the equation. It is clear that a large number of positions in the professional classes are under threat. Examples could be cited in many places. In software engineering, which is my bailiwick, there is a noticeable shrinkage in junior positions. A great part of this is actually just the effect of the end of the ‘zero interest rate’ era: the jobs shed by the tech companies during the inflation shock simply have not come back. But the impressive capabilities of AI coding assistants will no doubt increase productivity, and therefore decrease available jobs.
Many of the disappearing jobs should not have existed in the first place, of course. The fatuity of the typical tech start-up cannot be overstated - the world is not crying out for a ‘smart’ wine cooler, and never will be. I anticipate a considerable winnowing of the advertising industry, but regrettably there will still be some advertising taking place at the end of all this. In the end, it matters not if the slop is produced by humans or machines.
The late David Graeber wrote a book about “bullshit jobs” - jobs so obviously pointless that simply carrying them out inflicted a level of psychic damage on those employed. In the corporate and government bureaucracies of the world, there is a lot of work that falls into what I would call the trough of meaninglessness: too fiddly to be automated, but too tediously artificial to be rewarding for a human to do. The currently fashionable AI agents, based on ‘large language models’ (LLMs), may do a good job on this kind of work - a huge amount of it is basically turning a spreadsheet into an official government form in mostly predictable ways. In this country, councils used to employ a lot of people to manage housing benefit; they would pay money to tenants, who would pay it to landlords, and the paperwork would flow through the council in both directions. Clearly, even on the basis of capitalist landlordism, this was make-work.
The question of what to do with all the people so displaced seems pertinent, and is not unimportant in the grand scheme of things, but in fact is largely accidental to the question of AI, or any other particular technology that might appear. We have a society that is based on the pursuit of profit above all else, in accordance with the basic laws of capitalism, but that cannot survive politically if unemployment rises to a certain level (much higher than any we have seen recently). Thus the tendency for the bureaucratic systems described above to be overdesigned, ludicrously over-manual, and so on.
There is a common thread between the (potential) AI unemployment I have described, and the strange economic epicycles of the tech industry I mentioned earlier. Both present themselves as outworkings of technological progress, but on cursory examination they reveal underlying social dynamics as the real motor. AI may destroy some bullshit jobs, but did not create them. The tech industry is currently riding the AI wave, but it is not the first wave, and (assuming there is no social revolution) it will not be the last; these are determined by larger tendencies in political economy.
Culture
In that context, I want to discuss the cultural impacts of AI; for, while I am a technologist by trade, I am a humanist by inclination. There is a lot of doom-mongering around on this point. In academia, professors are driven to despair by the inability of their students to learn without getting chatbots to write their essays. The students accuse the academics of discriminating against their preferred ‘learning style’.
Artists - painters, photographers, musicians - likewise despair that the meagre income they manage to get from stock photo and music libraries will be replaced by AI image- and song-generating prompts (the strikes of the Hollywood writers and actors unions a couple of years ago hinged in part on the potential uses of AI to render them obsolete). On a wider scale, we find morbid symptoms: people completely dependent on AI to make decisions, people who have fallen in love with and married chatbots, and so on. Less spectacular is a certain novel philistinism: why should I read Marcel Proust, if I can just ask ChatGPT to distil the 10 key lessons from his great but intimidatingly long novel? Isn’t that more efficient?
In response to such complaints, AI boosters will point out that there have been moral panics about all new media so far: about the deadening effects of television, of the cinema, of the novel; indeed - if we go back to Plato’s Phaedrus - about writing itself, which he worried would degrade man’s memory. The trouble is that Plato was at least partly correct. Consider the London cabbie - who cannot join the guild without learning by heart the whole map of central London. This quite literally changes the shape of their brain. Compare the confusion of many of our contemporaries who cannot get around London without staring at a map on their phones continuously. Some of those people actually live there.
Yet again AI seems an all-too-convenient scapegoat. Academics increasingly sound the alarm about the fact that it is effectively impossible to stop students cheating, and that more and more university administrations are in cahoots with the generative AI vendors. Yet universities only face these problems because they are already reduced to mere rubber-stamping of degrees on a thoroughly marketised model. If you pay £10,000 a year for a degree, you damn well expect to get the degree; vice-chancellors know this, and so the idea of a university as a community of knowledge, and therefore a community of discrimination between acceptable and unacceptable standards, died long ago.
The colonisation of the arts by vast corporate interests is now decades old. Popular music in the anglosphere has long been dominated by a few centralised ‘hit factories’, whose product is then laundered through the image of a succession of pop stars. The book lists are dominated by celebrity ‘autobiographies’ that are, of course, universally ghostwritten. Netflix and other streamers increasingly commission their films by way of the same microtargeting techniques employed by digital advertisers.
Now, of course we defend people against the attempts of their bosses to replace them with machines and throw them into penury. Even when mechanisation represents significant progress, which is doubtful in many of these cases, the question remains: what now for the workers? And that goes even for Hollywood actors, I would say, despite the problems posed by having one union for both Al Pacino and someone whose biggest gig was playing a waitress in one episode of Law and order.
Beyond that, however, we must ask - what is being defended here? If our visual culture must be dominated by comic book franchises, since they allow Disney to write itself a cheque for a billion dollars a few times a year, why should humans be involved in making them? The whole thing, considered in the large, is an algorithm for printing money. Why should algorithms not dominate the component parts of the process?
In the same way: it is one thing to defend the institution of the modern research university. Can we really defend institutions that effectively pretend to be such universities when in fact what they do is offer a ticket to a comfortable professional existence in exchange for large sums of money, and when that offer is in very many cases essentially fraudulent? When so much ‘research’ is of such low quality, focused on gaming impact metrics and the like rather than anything so vulgar as advancing the state of human knowledge?
The large language model is, apart from its reality as a technical instrument, the perfect image of the contemporary culture industry and the neoliberalised university. It takes text inputs, and turns them into roughly plausible outputs (whether text, image or sound). Likewise, in every medium-sized town in this country you can find something that roughly looks like a university, and produces graduates and research papers. Martin Scorsese got into trouble a few years back for saying that, for him, the comic book franchises are not cinema. I think he is quite right - but they are roughly like cinema.
Why not do all this stuff by machine, then? Because to do so would be in some respects to admit that the whole thing is a fraud. The current situation, where, stereotypically, a student uses ChatGPT to write a paper and an overworked post-grad uses ChatGPT to grade it, tells you only that nothing should write (or grade) the paper - human or machine. LLMs do not make cultural and intellectual life obsolete: they merely demonstrate that, from the perspective of a declining capitalist order in a state of acute cultural exhaustion, they already were.
The future
Which seems a good moment to talk, finally, about AI in connection with socialism and communism, with all the usual caveats about writing recipes for the cookshops of the future.
The socialist revolution we seek has, as I see it, three pertinent characteristics for our discussion. Firstly, it is the act of the broad masses themselves, and establishes a truly democratic political regime. Secondly, this political regime is to assume control of the economy (leaving aside the question of petty proprietors). Thirdly, the overriding objective of socialist and communist society is the flourishing of humanity.
That means, of course, that there will have to be rapid movement towards directive economic planning in natura in the central sectors of the economy. Planning must be alert to environmental constraints, to political calculations of the revolutionary parties, and so forth. It seems to me utterly inconceivable that planning could be effectively done without the kind of large-scale, data-driven, machine-learning/AI systems that have been invented in recent decades. Such systems, after all, are already used for central planning by large enterprises like Amazon.
How will these systems differ from the ones currently employed by capitalist enterprises? The need for democracy means that they must be far more transparent in their functioning. So far as planning involves ML algorithms, for such planning to be subject to democratic accountability, the algorithms must be transparent. Source code must be published; so must corpuses of training data. Only in this way can laypeople, assisted by subject-matter experts, take decisions about planning mediated by AI systems. Under capitalism, a given AI model is currently the firm’s ‘secret sauce’. Socialism must abolish such secrecy. If the sauce is so tasty, give us all the recipe!
The other condition we mentioned - the goal of the flourishing of humanity to replace the quest for profit - constrains not so much the nature of such technology as its use. AI, like all other industrial technology, should be used to eliminate drudgery and dangerous work, so that people are freed up for higher and more human pursuits. AI should not be used to replace those higher pursuits. The age of the AI girlfriend has to come to an end, for a start. We do not aim to free up people’s time so it can then be gobbled up by manipulative social media algorithms.
More than that cannot really be said without dictating too much to the ‘cookshops of the future’. But that is the point: it should be up to the people. When discussing technological change, we too often miss the point: it is not technology per se that throws people out of work or degrades them, any more than it is the sword that kills them. We need to think less about what AI will or will not do, and more about who is really doing it.