WeeklyWorker

13.04.2023

Still far from human

The widespread introduction of AI is unlikely to boost profit rates and rescue capitalism from its long depression. Michael Roberts explains

ChatGPT is being heralded as a revolution in artificial intelligence (AI) and has been taking the media and tech world by storm since launching in late 2022.

According to OpenAI, ChatGPT is “an artificial intelligence, trained to assist with a variety of tasks”.1 More specifically, it is a large language model (LLM) designed to produce human-like text and converse with people - hence the ‘chat’ in ChatGPT.

‘GPT’ stands for ‘generative pre-trained transformer’. The GPT models are pre-trained on vast quantities of text and then fine-tuned by human developers, so that they can generate ever more plausible responses from what they have absorbed - delivering that ‘knowledge’ in a form acceptable to humans (chat).

Practically, this means that you present the model with a query or request by entering it into a text box. The AI then processes this request and responds, based on the information that it has available. It can perform many tasks - from holding a conversation to writing an entire exam paper; from making a brand logo to composing music; and more. So much more than a simple Google-type search engine or Wikipedia, it is claimed.
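For those curious about the mechanics, the same chat interface is exposed to programmers through an API. The snippet below is a minimal illustrative sketch only: it assumes the OpenAI Python client current at the time of writing (call names differ between library versions) and a placeholder API key. It sends a single query of the kind quoted later in this article and prints the model’s reply.

import openai  # OpenAI's Python client library

openai.api_key = "YOUR_API_KEY"  # placeholder: substitute your own key

# Send one user message to the chat model and print its reply
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Who is Michael Roberts, Marxist economist?"}],
)
print(response["choices"][0]["message"]["content"])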

Human developers are working to raise the ‘intelligence’ of successive GPTs. The free version of ChatGPT runs on GPT-3.5, with GPT-4 released to paying subscribers in March of this year. And it is rumoured that a future GPT-5 could achieve ‘artificial general intelligence’ (AGI) - meaning, it is claimed, that it could pass the Turing test,2 which determines whether a computer can communicate in a manner indistinguishable from a human.

Will the development of LLMs be a game-changer for capitalism in this decade? Will these self-learning machines be able to increase the productivity of labour at an unprecedented rate and so take the major economies out of their current ‘long depression’ of low growth in real gross domestic product (GDP), investment and incomes - and then enable the world to take new strides out of poverty? This is the claim of some of the ‘techno-optimists’ who occupy the media.

Let us then consider the answers to those questions.

First, just how good and accurate are the current versions of ChatGPT? Well, not very - just yet. There are plenty of ‘facts’ about the world on which humans disagree - a regular search engine lets you compare the competing versions and consider their sources. A language model might instead attempt to calculate some kind of average of every opinion it has been trained on - which is sometimes what you want, but often is not. ChatGPT sometimes writes plausible-sounding, but incorrect (or nonsensical) answers. Let me give you some examples.

I asked ChatGPT 3.5: “Who is Michael Roberts, Marxist economist?” This was the reply.

Me: Who is Michael Roberts?

Nova AI Assistant: Michael Roberts is a Marxist economist known for his critiques of capitalism and his writings on Marxist economic theory. He has written extensively on topics such as the global financial crisis, the labour theory of value and the political economy of socialism. Roberts has worked in the banking and finance industry for many years, and has also taught economics at various universities in the UK. He runs a popular blog on Marxist economics, where he offers analysis and commentary on contemporary economic issues from a Marxist perspective.

This is mostly right, but it is also wrong in parts (I won’t say which).

Then I asked it to review my book, The long depression. This is what it said:

Me: Review Michael Roberts’ book The long depression.

Nova AI Assistant: As an AI language model, I cannot provide an opinion, but I can give you a summary of Michael Roberts’ book The long depression.

In his book Michael Roberts argues that the global economy has been in a period of “long depression” since the Great Recession of 2008-09. He presents evidence to support his claim that the causes of the Great Recession were not resolved and instead only temporary fixes were employed to stabilise the economy.

Roberts attributes the long depression to the inherent instability of capitalism and argues that the current economic system is plagued by a number of structural problems, including low productivity growth, high debt levels and rising inequality. He believes that these issues have led to a persistent state of low growth and low investment, resulting in a general malaise in the global economy.

In his book, Roberts also explores the various policy responses that have been employed by governments and central banks to address the long depression. He notes that these measures, such as low interest rates and quantitative easing, have failed to stimulate sustained growth and that new approaches are needed to address the underlying structural problems facing the global economy.

Overall, Roberts’ book provides a thought-provoking analysis of the current state of the global economy and offers insights into the challenges faced by policy-makers in addressing the long-term issues that are constraining growth and prosperity.

This gives a very ‘general’ review or synopsis of my book, but leaves out the kernel of its thesis: the role of profitability in crises under capitalism. Why, I don’t know.

So I asked this question about Marx’s law of profitability:

Me: What is Marx’s law of the tendency of the rate of profit to fall?

Nova AI Assistant: Marx’s law of the tendency of the rate of profit to fall is a concept in Marxist economics, which argues that, over time, the rate of profit in a capitalist economy tends to decline due to various factors, including technological advancements, increased competition and a rise in the proportion of capital investment over labour investment. Marx believed this tendency towards a declining rate of profit would eventually lead to a crisis of capitalism, as capitalists would be unable to generate enough profit to maintain their system of production. While there is some debate among economists about the validity of Marx’s law, it continues to be an important concept in Marxist economic theory.

Again, this is broadly right - but just broadly. The answer does not really take you very far in understanding the law. Indeed, it is no better than Wikipedia. Of course, you can dig (prompt) further to get more detailed answers. But there seems to be some way to go in replacing human research and analysis.

Then there is the question of the productivity of labour and jobs. Goldman Sachs economists reckon that, if the technology lived up to its promise, it would bring “significant disruption” to the labour market, exposing the equivalent of 300 million full-time workers across the major economies to automation of their jobs. Lawyers and administrative staff would be among those at greatest risk of becoming redundant (and probably economists). The GS economists calculate that roughly two-thirds of jobs in the US and Europe are exposed to some degree of AI automation, based on data on the tasks typically performed in thousands of occupations.

Most people would see less than half of their workload automated and would probably continue in their jobs, with some of their time freed up for more productive activities. In the US, this would apply to 63% of the workforce, they calculated. A further 30% working in physical or outdoor jobs would be unaffected, although their work might be susceptible to other forms of automation.

The GS economists concluded: “Our findings reveal that around 80% of the US workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted.”

With access to an LLM, about 15% of all worker tasks in the US could be completed significantly faster at the same level of quality. When incorporating software and tooling built on top of LLMs, this share increases to 47%-56% of all tasks. About 7% of US workers are in jobs where at least half of their tasks could be done by generative AI and are vulnerable to replacement. At a global level, since manual jobs are a bigger share of employment in the developing world, GS estimates about a fifth of work could be done by AI - or about 300 million full-time jobs across big economies.

Very essence

These job loss forecasts are nothing new. In previous posts, I have outlined several forecasts on the number of jobs that will be lost to robots and AI over the next decade or more.3 It appears to be huge - and not just in manual work in factories, for instance, but also in so-called white-collar work.

It is in the very essence of capitalist accumulation that the workers will continually face the loss of their work from capitalist investment in machines. The replacement of human labour by machines started at the beginning of the British industrial revolution in the textile industry, and automation played a major role in American industrialisation during the 19th century. The rapid mechanisation of agriculture, starting in the middle of the 19th century, is another example of automation.

As Engels noted in his book, The condition of the working class in England (1844), mechanisation not only shed jobs: often it also created new jobs in new sectors.4 But Marx identified the other side of this process in the 1850s:

The real facts, which are travestied by the optimism of the economists, are these: the workers, when driven out of the workshop by the machinery, are thrown onto the labour market. Their presence in the labour market increases the number of labour-powers which are at the disposal of capitalist exploitation ... the effect of machinery, which has been represented as a compensation for the working class, is, on the contrary, a most frightful scourge ... As soon as machinery has set free a part of the workers employed in a given branch of industry, the reserve men are also diverted into new channels of employment and become absorbed in other branches; meanwhile the original victims, during the period of transition, for the most part starve and perish.5

The implication here is that automation means more precarious jobs and rising inequality.

Up to now, mechanisation has still required human labour to start and maintain it. But are we now moving towards the takeover by LLMs of all tasks - especially those requiring complexity and ideas? And will this mean a dramatic rise in the productivity of labour, so that capitalism will have a new lease of life?

If LLMs can replace human labour and thus raise the rate of surplus value dramatically, but without a sharp rise in investment costs of physical machinery (what Marx called a rising organic composition of capital), then perhaps the average profitability of capital will jump back from its current lows?

Goldman Sachs claims that these “generative” AI systems, such as ChatGPT, could spark a productivity boom that would eventually raise annual global GDP by 7% over a decade. If corporate investment in AI continued to grow at a similar pace to software investment in the 1990s, AI investment in the US alone could approach 1% of GDP by 2030.

I will not go into how GS calculates these outcomes, because the results are conjectures. But, even if we accept the results, are they such an exponential leap? According to the latest forecasts by the World Bank, global growth is set to decline by roughly a third from the rate that prevailed in the first decade of this century - to just 2.2% a year.6 And the International Monetary Fund puts the average growth rate at 3% a year for the rest of this decade.

If we add in the GS forecast of the impact of LLMs, we get about 3%-3.5% a year for global real GDP growth, maybe - and this does not account for population growth. In other words, the likely impact would be no better than the average seen since the 1990s! That reminds us of what economist Robert Solow famously said in 1987: that the “computer age was everywhere except for the productivity statistics”.7
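To make the back-of-the-envelope sum behind that 3%-3.5% figure explicit - a rough illustrative calculation, not GS’s own model - the 7% boost over a decade works out at roughly 0.7 percentage points a year, added to the World Bank and IMF baselines:

# Rough back-of-envelope arithmetic (illustrative only, not GS's own model)
gs_boost_over_decade = 0.07                # GS: ~7% added to global GDP over ten years
annual_boost = gs_boost_over_decade / 10   # roughly 0.7 percentage points a year

world_bank_baseline = 0.022                # World Bank: ~2.2% a year
imf_baseline = 0.030                       # IMF: ~3.0% a year

print(f"World Bank baseline plus AI: {world_bank_baseline + annual_boost:.1%}")  # ~2.9%
print(f"IMF baseline plus AI:        {imf_baseline + annual_boost:.1%}")          # ~3.7%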

US economist Daron Acemoglu adds that not all automation technologies actually raise the productivity of labour. That is because companies mainly introduce automation in areas that may boost profitability, like marketing, accounting or fossil fuel technology, but may not raise productivity for the economy as a whole or meet social needs. Big tech has a particular approach to business and technology that is centred on the use of algorithms to replace humans. It is no coincidence that companies such as Google employ fewer than a tenth of the number of workers that large businesses, such as General Motors, used to employ. This is a consequence of big tech’s business model - which is based not on creating jobs, but on automating them.

Dialectical

That is the business model for AI under capitalism. But under cooperative, commonly owned, automated means of production there are many applications of AI that instead could augment human capabilities and create new tasks in education, healthcare and even manufacturing. Acemoglu suggested that

… rather than using AI for automated grading, homework help and increasingly for substitution of algorithms for teachers, we can invest in using AI for developing more individualised, student-centric teaching methods that are calibrated to the specific strengths and weaknesses of different groups of pupils. Such technologies would lead to the employment of more teachers, as well as increasing the demand for new teacher skills - thus exactly going in the direction of creating new jobs centred on new tasks.8

And rather than reduce jobs and the livelihoods of humans, AI under common ownership and planning could reduce the hours of human labour for all.

And then there is the issue of the profitability boost provided by AI technology. Even if LLM investment requires less physical means of production and so lowers the cost of such capital, the displacement of human labour-power - for Marx, the sole source of new value - could be even greater. So Marx’s law of profitability would still apply. It is the great contradiction of capitalism that increasing the productivity of labour through more machines (AI) reduces the profitability of capital. That leads to regular and recurring crises of production, investment and employment - of increasing intensity and duration.

Finally, there is the question of intelligence. Microsoft argues that intelligence is a “very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience”.9 Microsoft hints that LLMs could soon obtain this “generalised intelligence” and surpass all human ability to think.

But, even here, there is scepticism:

The ChatGPT model is huge, but it’s not huge enough to retain every exact fact it’s encountered in its training set. It can produce a convincing answer to anything, but that doesn’t mean it’s reflecting actual facts in its answers. You always have to stay sceptical and fact-check what it tells you. Language models are also famous for ‘hallucinating’ - for inventing new facts that fit the sentence structure despite having no basis in the underlying data.10

That is not very encouraging.

But Guglielmo Carchedi has a more fundamental reason to deny that AI can replace human ‘intelligence’. In our book we state: “… machines behave according only to the rules of formal logic. Contrary to humans, machines are structurally unable to behave according to the rules of dialectical thinking. Only humans do that.”11

Here is the ChatGPT answer to the dialectical question: “Can A be equal to A and at the same time be different from A?”:

No, it is not possible for A to be equal to A and at the same time be different from A. This would be a contradiction in terms, as the statement, “A is equal to A”, is a tautology and always true, while the statement, “A is different from A”, is a contradiction and always false. Therefore, these two statements cannot both be true at the same time.
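That answer is just what any system bound by formal logic would give. A trivial sketch (in Python, purely for illustration) makes the same point: evaluated by the rules of formal logic alone, the conjunction is simply false.

A = 42  # any value will do
# In formal (Boolean) logic, 'A is equal to A' is always true and 'A is different from A'
# is always false, so their conjunction is a contradiction - exactly what the model reports
print(A == A and A != A)  # prints False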

Machines cannot think of potential and qualitative changes. New knowledge comes from such transformations (human), not from the extension of existing knowledge (machines).

Only human intelligence is social and can see the potential for change - in particular social change - that leads to a better life for humanity and nature.

Michael Roberts blogs at thenextrecession.wordpress.com


  1. openai.com/blog/chatgpt.

  2. See en.wikipedia.org/wiki/Turing_test.

  3. See, for example, thenextrecession.wordpress.com/2015/09/24/robots-and-ai-utopia-or-dystopia-part-three.

  4. See my book on Engels’ economics: M Roberts Engels 200: his contribution to political economy London 2020, pp54-57.

  5. K Marx Capital vol 3, Moscow 1971, p339.

  6. openknowledge.worldbank.org/server/api/core/bitstreams/6cadf98e-e265-4055-aef7-fad9943a0afd/content.

  7. www.mckinsey.com/capabilities/mckinsey-digital/our-insights/is-the-solow-paradox-back.

  8. www.alai.info/en/the-future-of-work-iii-automation.

  9. www.vice.com/en/article/g5ypex/microsoft-now-claims-gpt-4-shows-sparks-of-general-intelligence.

  10. See simonwillison.net/2023/Apr/2/calculator-for-words.

  11. G Carchedi and M Roberts Capitalism in the 21st century London 2022, p167.