WeeklyWorker

26.09.2024
Consuming huge amounts of energy

Dark underbelly of the beast

It comes with huge promises and many dangers. Robert James explores how capital seeks to make use of artificial intelligence at the expense of labour

When we turn on the TV or open the paper, go online or go to work, we are bombarded with messages about the dawning of the new age of technological development - the Age of Artificial Intelligence.

Or so it seemed a few months ago, when the advent of sleek, modern AI systems filtered down from the closed-circuit, ‘productivity’-boosting boardrooms and war cabinets. Then prime minister Rishi Sunak met with Elon Musk to discuss how AI could revolutionise Britain.

Everywhere I looked there seemed to be a litany of AI-related stories, fear-mongering about how it was going to steal our jobs, our identity and put our children’s safety at risk. This provoked in me a fervour to learn all I could about these systems, to understand how the thing is made and what makes it tick. What will our world look like when it is integrated into every walk of our life? And, most of all, how much capital has been flocking to it?

Of course, AI systems have been with us for a while now: they are in our pockets, they are on our feeds, they check our faces at the airport and thank us for shopping with them. But peek beneath the surface and you begin to see their true face: the financial and moral cost of AI is immense, but so are its possibilities.

So what is AI? Artificial intelligence refers to the creation by human beings of machines that exhibit intelligent behaviour. We all know such things have long been in the scope of human imagination, be it 2001: a space odyssey or Star trek: AIs have become familiar to us in both dystopian and utopian forms. Computer science projects from the 1950s have morphed in the furnace of academia and industry into the models of AI we face today.

‘Deep learning’ models of AI are based on a number of core ‘structural tenets’. The first is the architecture of the model, a design based on a neural network called a transformer. The transformer relies on learnt attention mechanisms, which pick out the information most relevant to a specific or general query. The combination of these components creates the framework for the processing of information that is the purpose of modern AI models.
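To give a flavour of the attention idea - this is a toy sketch in plain Python with NumPy, not the architecture of any real product - each element of a sequence is answered by a weighted blend of the others, weighted by how similar they are:

```python
import numpy as np

def attention(queries, keys, values):
    """Scaled dot-product attention: each query receives a weighted
    blend of the values, weighted by query-key similarity."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)          # relevance of every key to every query
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ values                         # blend the values accordingly

# three 'tokens', each represented by a 4-dimensional vector
x = np.random.rand(3, 4)
out = attention(x, x, x)  # self-attention: the sequence attends to itself
print(out.shape)          # one blended vector per token
```

A production transformer stacks many such attention layers, each with learnt projections of the inputs into queries, keys and values - but the weighting-by-relevance shown here is the core mechanism.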

Structural

Another structural tenet is its training data - data meticulously scraped from a wide variety of avenues. This is fed through the architecture, and human workers help the AI identify the data relevant to specified queries. The model begins with broad-brush guesses and gradually refines them through a process of elimination. At present, AIs need a staggeringly large amount of training data to mimic a human’s capacity to discriminate.

Seeing my infant daughter begin to recognise ‘cat’ in abstract forms over the course of a week - as opposed to the months of training with thousands of samples required for an AI to draw similar conclusions - made me realise just how far this technology must develop to truly reflect human intelligence. Once trained, an AI can respond to an absolutely vast range of human inputs and output relevant information. ChatGPT (in its already outdated form, GPT-3) has some 175 billion parameters, tuned on a vast slice of recorded human knowledge.

So can we really see this form of AI as ‘intelligent’? Not really: these systems cannot reason in the same way as humans; they lack human experience and emotional capacity on any level. They are machines and they know it, in so far as they can really ‘know’ anything. But what they can do is act as vast repositories of information and quickly identify the parts relevant to the task they have been assigned. This is what makes them incredibly significant for a capitalist society increasingly reliant on the management of data.

The reaction of the markets to this technology is a significant indicator as to why it has made its way into the heart of successive governments and why there is such fanfare in the mainstream media about its advent. Every major tech company has some stake in AI technology. Goldman Sachs projects that 300 million jobs are exposed to AI and the advanced automation it enables. Because jobs related to data handling are most at risk, the impact will fall chiefly on white-collar work - industries hitherto only minimally affected by technological advances.1 This in turn is projected to cause a productivity boom, with savings in labour costs producing a 7% rise in gross domestic product over a decade. If corporate investment in AI continues to grow at its current rate, it will account for 1% of American GDP by 2030. This flow of investment speaks to an optimistic section of the market that sees AI as a vital resource for increasing productivity.

However, economists are not in agreement. Daron Acemoglu is significantly less optimistic about the impact on productivity, estimating a 0.66% increase over 10 years if the technology remains in its current form. The wider impact of new products and services could increase GDP by 2% - though, once some of the social downsides are counted, this is offset by a 0.72% drop in welfare.2 While Acemoglu clearly believes it will be a long time before the market sees the yield of increased investment in the tech sector, he does not deny that there is real space in the market for AI models better suited to the demands of industry - providing useful data for specific jobs, which would yield better productivity gains.
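The gulf between these projections is easier to see when the headline decade-long figures are converted into compound annual rates. A back-of-envelope comparison - treating both the 7% and the 0.66% as cumulative ten-year gains, which is an assumption of this sketch rather than anything either source states:

```python
# Convert cumulative ten-year GDP/productivity projections into
# implied compound annual growth rates, for comparison.
goldman_total = 0.07    # Goldman Sachs: ~7% GDP uplift over a decade
acemoglu_total = 0.0066 # Acemoglu: ~0.66% productivity gain over a decade

def annualised(total, years=10):
    """Compound annual rate implied by a cumulative gain over `years`."""
    return (1 + total) ** (1 / years) - 1

print(f"Goldman:  {annualised(goldman_total):.3%} per year")
print(f"Acemoglu: {annualised(acemoglu_total):.3%} per year")
```

On these assumptions Goldman's projection implies roughly ten times Acemoglu's annual growth contribution - which is the whole of the disagreement in one number.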

Marxism

This puts the market in a position of insecurity about the future, with some sections seeing AI as the definitive future of industry and others as a misleading hoax that over-promises but will under-deliver. Why then, with this risk hanging over it, does AI continue to be heavily pursued by big tech and endorsed by global leaders as the inevitable future of work? Marx has the answers.

Despite being written 150 years ago, Marx’s theory of value offers an intriguing perspective on why there is this push for AI integration from the top down into the market and labour relations. It can explain how AI fulfils one of the underlying drivers of capital’s economics.

Let us go back to the basic laws of the source of profit in Capital volume 1. For Marx, the production of value in the capitalist mode of production can be broken down into the relation between two types of capital: variable and constant. Variable capital relates to wage labour; constant capital relates to the means of production - the land, technology and infrastructure involved in producing any given commodity. Combined, these produce commodities whose value varies with the labour absorbed in them.

In the capitalist mode of production this created value is measured exclusively as a product’s exchange value, comprising the labour embodied in the product (necessary labour time) plus the surplus labour extracted in the production cycle - that is, labour performed above the amount returned to workers, as wages, for their own reproduction.3 The key argument of Capital is that value and profit are created only by labour, and that profit derives from the surplus value added by unpaid labour. That is not to say that only variable capital adds value: constant capital already contains within it the extracted and realised dead labour of the workers who created the land, technology and infrastructure required for the productive process.
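Marx’s decomposition can be stated compactly in the conventional notation (the symbols are the standard ones, not the article’s own):

```latex
\begin{align*}
W  &= c + v + s \\
s' &= \frac{s}{v}
\end{align*}
```

Here $W$ is the value of the commodity, $c$ the constant capital transferred to it, $v$ the variable capital (wages) and $s$ the surplus value; $s'$ is the rate of surplus value - Marx’s measure of the rate of exploitation.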

AI’s promoters try to wriggle it out of this relation by defining its properties as ‘intangible’: the software and algorithms represent elements of human thought, giving AIs abstract properties not nominally associated with the tangible character of constant capital. However, in the Grundrisse Marx clearly disassociates constant capital from the merely tangible, describing its development as a product of the “general intellect” - a combination of scientific research and the force of capital investment.

That the worker is unaware of the internal logic of the machines they operate relates closely to Marx’s wider theme of the alienation of labour from the productive process - the worker is deskilled, justifying a lower cost of reproduction for the labour they can offer the market. With this in mind, we can see that AI is properly categorised as constant capital: it cannot independently create value, relies on human input for direction, and is the sum of the dead labour used in the creation of its datasets, as mentioned above.

Why then does this characterisation matter? Because AI represents something much desired by the capitalist: a concrete reduction in necessary labour time, for products that have a high rate of surplus value extraction. In short, less work and more profit.

In the Grundrisse, Marx makes plain that the development of constant capital is continually driven, through mastery of the general intellect, towards the form most adequate to the productive process. You can see this thrust across the history of the tools of labour: from the spinning jenny, through the telegraph, to AI, the downward pressure on necessary labour time is blatant. It is infused with the competitive nature of capital - each capitalist trying to get ahead of the curve, to ride the wave: while productivity appears to increase, they reap more profit than their competitors, and those that fail to adapt are driven from the market.

However, this conceals a fundamental contradiction within the capitalist mode of production: it reduces necessary labour time to a minimum, while relying on that same labour time to realise surplus value. As new developments in the means of production fundamentally alter the production process, the great quantities of labour once required to produce commodities suddenly become superfluous - which at first yields staggering gains in productivity, but eventually leads to declining profit, as the shrunken necessary labour time kicks in. Marxism shows us the reality of AI: it offers the capitalist that magical formula of increased profit and prosperity, but, while at first it appears to shake the world, it will soon settle into the all-too-familiar pattern of low growth and economic downturn.
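The contradiction can be put in the standard formula for the rate of profit (again conventional Marxist notation, not the article’s own):

```latex
p = \frac{s}{c + v} = \frac{s/v}{c/v + 1}
```

Since surplus value $s$ can be pumped only out of living labour $v$, any rise in the organic composition $c/v$ - which is precisely what heavy investment in AI represents - pushes the rate of profit $p$ downwards once the innovator’s temporary advantage is competed away.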

Controversy

With this in mind, we turn now to the dark underbelly of the beast. Up to now we have been looking at the AI presented to us by big tech - a clean and faultless system for the translation of information. But behind this facade sits a significant moral and financial cost to modern AI models.

The environmental impact of AI is significant: the vast computational power it requires demands a huge electrical supply. ChatGPT alone consumes more than 6,000 times the electricity of a European city. Combine this with the demands on water supply and rare earth metals, and you can see that AIs produce a disproportionately large carbon footprint. There are also dangerous levels of labour exploitation in the creation of the datasets used to train AIs and their pattern-recognition software, with precariat workers in India, Kenya, the Philippines and Mexico completing data-labelling jobs for a few cents per task.4 Given the vast array of ChatGPT parameters, millions of workers have been short-changed in the production of AI models.

Even when complete, AI models can contain the biases of their training data. When utilised by the police, AIs tasked with finding potential offenders draw on historical data, making people more likely to be targeted on the basis of race and socio-economic background. In its most horrifying application, AI becomes a war criminal: the Lavender AI utilised by the IDF generated 37,000 targets in Gaza in the first months of the conflict.5

What is clear is that AI is a machine in the productive cycle which draws a large amount of speculative interest from the great bastions of capital, while potentially reducing necessary labour time in a number of vital areas. It is expensive, exploitative and has been weaponised to terrifying effect by the forces of imperialism.

It will be a barrier to working class action - and something that we Marxists must seek to understand if we are to see plainly the machinations of capital.

Robert James spoke at Why Marx?

www.youtube.com/watch?v=E9QWtRhhOds


  1. See Michael Roberts, ‘AI-GPT - a game-changer?’: thenextrecession.wordpress.com/2023/04/08/ai-gpt-a-game-changer.↩︎

  2. D Acemoglu, ‘The simple macroeconomics of AI’ (National Bureau of Economic Research, May 2024): www.nber.org/papers/w32487.↩︎

  3. K Marx Capital Vol 1, chapters 7 and 9.↩︎

  4. A Williams, M Miceli and T Gebru, ‘The exploited labor behind artificial intelligence’ Noema Magazine October 13 2022: www.noemamag.com/the-exploited-labor-behind-artificial-intelligence.↩︎

  5. See B McKernan and H Davies, ‘“The machine did it coldly”: Israel used AI to identify 37,000 Hamas targets’ The Guardian April 3 2024.↩︎