WeeklyWorker

16.10.2025
Unimaginable amounts of capital being ploughed in - so far with little or no profit

After the AI bubble

Despite record-high valuations, there are more and more signs that the AI-driven stock market boom is unsustainable. Paul Demarty assesses the chances of a major correction

Last month, OpenAI - the capped-profit company that kicked off the current artificial intelligence hype-cycle - had some big news (don’t they always?).

They had signed a deal with Oracle to build five new enormous high-performance data centres (to add to one already in partial operation in Abilene, Texas) - a deal worth a cool $300 billion (some of which is coming from SoftBank, an enormous Japanese tech investment fund). Champagne glasses clinked; much hot air circulated about world-changing innovation - all the usual. Various relevant stock prices jumped.

Yet on closer examination there is something a little odd about this deal. Taken on its own, it has an unmistakable circularity. Oracle is a cloud computing heavyweight, among other things. It seems that this deal is, at least in part, a matter of giving OpenAI a load of money, which it will effectively hand back as rent for the cloud services it so voraciously consumes. This is a trick known as ‘round-tripping’. But then we must also zoom out a little. Oracle and OpenAI will be filling their data centres with chips from second-tier manufacturer AMD; and this follows a peculiar deal between OpenAI and AMD, which sees OpenAI buying AMD shares at knockdown prices in return for purchasing a tonne of chips.
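
To see why the ‘round-tripping’ label sticks, here is a deliberately simplified, purely hypothetical sketch of how such money can circulate - every figure below is invented and describes no real deal between any of the companies named above:

```python
# Purely hypothetical illustration of 'round-tripping' - all figures are
# invented and describe no real deal.

investment = 100.0           # the vendor 'invests' $100 in its customer
cloud_rent_per_year = 25.0   # the customer commits to spend $25/year on the vendor's cloud
years = 4

vendor_net_cash = -investment
vendor_booked_revenue = 0.0

for _ in range(years):
    vendor_net_cash += cloud_rent_per_year        # the money flows straight back...
    vendor_booked_revenue += cloud_rent_per_year  # ...and is booked as headline revenue

print(f"Vendor net cash after {years} years:   ${vendor_net_cash:,.0f}")        # $0 - it has simply gone round
print(f"Vendor booked revenue over the period: ${vendor_booked_revenue:,.0f}")  # $100 of apparent 'growth'
```

The headline numbers look like growth on both sides of the deal; the net flow of cash is close to nil.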

Ponzi

OpenAI has already entered into such quid-pro-quo investment arrangements with Microsoft, which is widely thought to be a likely buyer of the company at some point in the future. Its rival, Anthropic - the maker of Claude - has similar deals with Amazon Web Services (AWS). It all smells a little incestuous. It would not quite be fair to bring up the legendary conman, Charles Ponzi; but, as with his eponymous schemes, much of the current effervescence of tech stocks seems to depend on simply keeping the momentum going.

And effervescent they certainly are. The major stock indexes are in a wild bull-market at the moment, but this is being driven overwhelmingly by the performance of the big tech companies - in addition to those mentioned, we should certainly cite Nvidia, the manufacturer of, by common consent, the best silicon chips for AI workloads, whose market capitalisation has increased more than tenfold in the last three years (and which, we note, also bunged $100 billion to OpenAI last month).

It is not necessarily a bad sign for there to be big-money deals of various sorts going on between participants in an industry. A steel mill buys ore from a mine; the mill is successful, and increases its orders from the mine; the mine expands production (and to do so, perhaps, it needs to buy steel). All of this is perfectly normal activity, in times of economic expansion at least, within what Marx called department 1 of the economy (commodities destined for further production, rather than direct consumption).

Yet the blunt truth is that the flagship AI products from companies like OpenAI do not make money. Indeed, that is an understatement: these companies burn money at an extraordinary rate. There is no clear path to profitability; in its place, there are endless starry-eyed promises of breakthroughs just around the corner, and the occasional spook story about artificial general intelligence. Investors are buying the story; but the sheer weight of anxiety currently leaking out of the investor class suggests they are increasingly impatient to see how that story ends.

The vast cost base of the AI companies is well known, despite all the hype, but worth briefly describing here. When OpenAI launched ChatGPT, they went to market with a product built on a real breakthrough in machine learning for text: the ‘transformer’ architecture, described in a 2017 Google paper called ‘Attention is all you need’. Google had been using these new techniques in their machine translation service to great effect, and they caused quite a stir in the AI world.
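
For readers curious what that breakthrough actually consists of, the heart of the transformer is a short piece of linear algebra known as ‘scaled dot-product attention’. The sketch below (Python with NumPy, purely illustrative - real models apply this across billions of parameters on specialised hardware) shows the basic operation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core operation from 'Attention is all you need' (2017): each position
    in a sequence builds its output as a weighted mix of every position's
    value vector, with weights set by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # how relevant is each token to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                        # weighted sum of value vectors

# Toy example: a 'sentence' of 3 tokens, each represented by a 4-dimensional vector.
np.random.seed(0)
tokens = np.random.randn(3, 4)
output = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(output.shape)  # (3, 4)
```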

The trouble is that there have not really been any other comparable breakthroughs in this current wave of AI activity. Improvements in the models have been largely achieved by brute force: that is, training and running the models on ever vaster pools of compute, using ever larger corpora of training data.
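
The scale of that brute force can be conveyed with a back-of-envelope calculation, using the common heuristic that training costs roughly six floating-point operations per model parameter per training token. Every concrete number below is an illustrative assumption, not a figure from any real lab:

```python
# Back-of-envelope training cost, using the common ~6 * parameters * tokens
# FLOPs heuristic. All concrete numbers below are illustrative assumptions.

parameters = 1e12        # a hypothetical 1-trillion-parameter model
training_tokens = 10e12  # trained on a hypothetical 10 trillion tokens

total_flops = 6 * parameters * training_tokens

gpu_flops_per_second = 1e15  # assume ~1 petaFLOP/s sustained per accelerator
gpu_count = 10_000           # assume a 10,000-accelerator cluster

seconds = total_flops / (gpu_flops_per_second * gpu_count)
print(f"Total training compute: {total_flops:.1e} FLOPs")
print(f"Training time on the assumed cluster: {seconds / 86400:.0f} days")
```

Even on these generous assumptions, a single training run occupies an enormous, power-hungry cluster for months - which is what the data centre spending is for.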

These inputs each pose particular problems, and the one most relevant to cost is the need for data centres. These are big, fixed-capital investments at the best of times, and the specialised needs of AI software make them more so. The electrical power needs of such installations are themselves vast and expensive (Oracle is racing to build a bunch of natural gas turbines in Abilene). With the launch of the apparently far more efficient Chinese DeepSeek model earlier this year, there was some hope that the need for such capital outlays would be reduced, but that hope does not seem to have materialised.

In return for such investment, OpenAI, Anthropic and friends have delivered real, but modest, improvements in model performance. Crucially, they have not made much headway on the problems that bedevil these systems - most infamously their habit of just making stuff up a lot of the time. AI ‘hallucinations’ are the major obstacle to selling these large language models to the sort of large corporate customers who can really make all this economically viable. Replacing white-collar workers with computer systems - the only potential upside, really, of adopting AI - requires that those computer systems can work predictably and reliably. If you cannot use this stuff to make your core business more profitable, then what is the point?

Corporate America

This dynamic is visible precisely in the concentration of apparent economic growth in a small group of tech companies, while much of the rest of corporate America (never mind the rest of the world) stagnates, and job growth is essentially non-existent - to such an extent that Donald Trump has come up with the novel strategy of dealing with the problem by firing the people who come up with the statistics. In a gold rush, they say, sell shovels - Nvidia, AWS, Microsoft and co are shovel-pushers. There are other signs too, like a recent research report from the Massachusetts Institute of Technology - the very cradle of much of the original AI research decades ago - which found that 95% of AI pilot projects run at US organisations had yielded no or negative return on investment.1

Core use cases are suffering too. Take software engineering itself - which at its core consists of transforming a natural language ‘prompt’ (the requirements for the software) into a computer program, which is, at the end of the day, a piece of text written in a (restricted, special-purpose) language. This is the sort of thing Google was doing with its machine translation in the first place. Yet, even here, research cautions against wild claims of a revolution in technique: one study found that, while engineers perceived a productivity increase of about 20%, the tools in fact seemed to slow them down by roughly that amount.2

So is this a bubble? It seems so. Indeed, it is arguably merely the extension of an earlier tech bubble, which saw a huge wave of start-ups through the 2010s benefiting from the low interest rates of the post-financial crash era and the advent of cloud computing, which massively reduced the capital outlay required to run an internet company. Although the marquee names of that era are ‘social media’ and ‘gig economy’ companies, the exemplary case here is probably cryptocurrencies - another supposedly world-changing technology that has never quite arrived, except as a class of dubious speculative assets and a means of exchange among drug dealers and ransomware gangs.

This earlier bubble, in software-as-a-service companies and consumer tech, popped in 2022, when central bank rates started to increase, reducing the availability of investment capital and suddenly bringing forward the date at which this class of company was expected to turn a profit. The splashy launch of ChatGPT, however, provided a fine opportunity for the venture capital set to induce a new wave of investment. They also read the political weather astutely, and cosied up to Donald Trump, who has repaid the favour by coming out firmly against AI regulation and turning his own hand to worthless novelty cryptocurrencies (‘shitcoins’, as they are known).
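
Why higher rates should suddenly ‘bring forward’ the date by which profits are expected is a matter of simple discounting: profits that lie far in the future are worth much less today when interest rates rise. A toy calculation (figures purely illustrative):

```python
# Toy present-value calculation showing why higher interest rates punish
# companies whose profits lie far in the future. Figures are illustrative.

def present_value(future_profit, years_away, annual_rate):
    return future_profit / (1 + annual_rate) ** years_away

profit = 1_000.0   # a promised $1,000 of profit
years = 10         # ...arriving ten years from now

for rate in (0.01, 0.05):
    pv = present_value(profit, years, rate)
    print(f"At {rate:.0%} rates, $1,000 in {years} years is worth ${pv:.0f} today")
# At 1% rates: ~$905; at 5% rates: ~$614
```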

What happens if it pops for real? No doubt the stock prices of the core tech companies will take a beating, but they will probably survive - their core business models are, after all, profitable, and they likely have cash on hand. But the effect on the wider economy will likely be very negative, as the animal spirits of the investor class go into fight-or-flight mode.

Unemployment will rise, as it is already doing. A serious market correction will have severe knock-on effects for large institutional investment funds and thus, ultimately, for pensions and other savings in the pockets of ordinary people. (If anyone is still under the illusion, by the way, that this will necessarily have a radicalising effect on the workers so dispossessed, they should take a clear-eyed look at the political history of the imperialist countries since the last crash.)

Also at risk here, I would suggest, is the coherence of a certain ideology most characteristically associated with the neoliberal era: that the old days of mass industrial production are gone, and that the future is a ‘knowledge economy’, driven by endless revolutions in information technology. In truth, the signs of stagnation are everywhere. LLMs are not revolutionising anything except the sheer size of economic bubbles. Other consumer-grade AI/machine learning systems are finding novel applications, but at a reduced rate. Mass-market consumer tech - smartphones, personal computers and the like - is plainly stagnant, and afflicted by new anxieties about the unintended consequences of plugging everyone on earth permanently into the internet.

Panglossian

Yet that would be merely a step-change in a wider process, where the Panglossian techno-optimism of triumphant neoliberalism has steadily been displaced by something altogether colder: the rise of national and ethnic chauvinism, open militarism and associated symptoms, in the global north and south alike. The fundamental driver of this ideological shift is the reality that the world is moving into a fresh wave of great-power competition, and - inescapably, but for revolution - great-power war.

On this front, of course, the AI people may find reasons to be cheerful. The various corrupt gestures of the Trump administration towards the tech industry are driven by the perceived need to ‘win’ the battle over AI with China - whatever that is supposed to mean. Current and future generations of military equipment will benefit mightily from improvements in large-scale machine learning, computer vision, and many other things besides. The promised breakthroughs in medical science - protein folding and what have you - may well be susceptible to weaponisation. Why not? Because some long-dead idealist signed a treaty outlawing it?

So, if the great and the good of the tech industry, AI people included, want to keep on top of things, they will move their attention from chatbots to drone swarms, from AI therapists to missile defence. The passive institutional investors will, of course, follow.

For our part, we get to live through the process predicted by Marx: of the means of production being transformed ever more decisively into means of destruction.


  1. fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo.↩︎

  2. metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study.↩︎