WeeklyWorker

20.02.2025

Codes and latest buzz words

Western countries and China are engaged in a race for domination. Yassamine Mather looks at the risks, as well as the potential of AI

The AI Action Summit, held in Paris on February 10 and 11, brought together leaders from over 100 countries, international organisations, academia and the private sector to discuss the advances and global implications of AI. It had two co-chairs: French president Emmanuel Macron and Indian prime minister Narendra Modi (the US vice-president, JD Vance, also dropped in). The agenda covered diverse issues such as public interest in AI, the future of work, plus trust in AI and global AI governance.

As in other global issues, there were significant points of contention regarding the approach to AI regulation. Vance criticised European regulatory measures, suggesting that excessive oversight could stifle innovation and hinder AI’s transformative potential, adding that the new Trump administration wants regulatory frameworks that promote AI development rather than constrain it. While European leaders, including Macron, claimed they advocated robust regulations to ensure AI technologies are “ethical, transparent and aligned with public interests”, at the end of the summit, the USA and UK declined to sign a declaration promoting “inclusive and sustainable” aims, which was endorsed by 60 other countries, including France, China and India.

OpenAI’s CEO, Sam Altman, was there to introduce his company’s latest AI product, Deep Research, designed to autonomously generate detailed reports on user-specified topics via the web. During the summit, Elon Musk and a consortium of investors made a substantial bid to acquire OpenAI - an offer that was declined by Altman. It was, however, a sign of the ongoing strategic manoeuvring within the AI industry.

In my talk at a recent Online Communist Forum, I mentioned the colossal waste of funds in the model of AI projects followed in most capitalist countries. The army of overpaid administrators, project managers and business analysts tasked with monitoring and managing the work of code writers and software developers often brings very little to the table. Reading the statements made by various governments during and after this conference convinced me that there is a whole layer of ignorant bureaucrats and ‘experts’ pontificating about AI, while they have little or no understanding of machine learning software, know only the acronyms and latest buzz words, and have not got a clue about how any of it works - or how it can evolve or collapse.

Of course, these days you cannot have such a conference without a pretence of addressing ‘ethical and environmental concerns’, and the Paris summit followed this trend. AI expert Yoshua Bengio talked about the potential risks of advanced AI systems, highlighting concerns about issues of control and alignment with human values. There were calls for a moratorium on the development of artificial general intelligence (AGI). This is a type of AI that can understand, learn and apply knowledge across a wide range of tasks at a level comparable to human intelligence. Narrow AI is designed for specific tasks (eg, facial recognition, language translation or playing chess), but AGI could perform any intellectual task that a human can, applying knowledge from one domain to another and demonstrating flexibility and adaptability. It could learn from experience, improve over time and acquire new skills without explicit programming, and therefore solve complex problems, make decisions and reason abstractly. AGI would also have a degree of autonomy: it could operate independently, set its own goals and take steps to achieve them. Some claim that an AGI could even acquire consciousness or self-awareness, though this is highly debated.

Commitments

The event was an opportunity for the beleaguered French president to show off.

In a bid which is supposed to position Europe as a competitive player in the global AI landscape, Macron announced a €109 billion investment in AI infrastructure. This initiative aims to develop computing clusters and leverage the country’s low-carbon nuclear energy to support AI growth, thereby reducing reliance on technologies from the US and China.

In recent years, western states, particularly the US, EU members, Canada and the UK, have significantly increased their investments in AI. These investments are driven by the recognition of its potential to transform economies, enhance national security, and improve public services. Here are some key points about their spending and approaches:

United States: The US federal government has allocated billions of dollars to AI research and development (R&D) through various agencies, including the Department of Defence (DoD), the National Science Foundation and the Department of Energy. The American AI Initiative, launched in 2019, aims to promote AI R&D, set governance standards and ensure global leadership in AI. The DoD’s Joint Artificial Intelligence Center focuses on integrating AI into military operations.

Critics argue that the US approach is heavily militarised, with a significant portion of AI funding directed toward defence applications. This raises ethical concerns about the use of AI in warfare and surveillance. Additionally, there is a lack of comprehensive federal regulation, leading to potential misuse and privacy violations.

European Union: The EU has committed substantial funds to AI through its ‘Horizon Europe’ scheme and the Digital Europe Programme. Member-states like Germany and France have also launched national AI strategies with significant funding.

The EU’s strategy is full of claims about ethical AI, human-centric approaches and stringent regulations. The proposed Artificial Intelligence Act aims to create a legal framework for AI, focusing on risk management and transparency. The EU’s focus on ethics and regulation is more talk than action, however. Critics argue that the regulatory framework will stifle innovation and put European companies at a disadvantage, compared to less regulated markets like the US and China. There are also concerns about the slow pace of implementation and the complexity of compliance.

United Kingdom: The government has invested heavily in AI through its ‘Industrial Strategy’ and the creation of the Office for AI. The Alan Turing Institute, the national body for data science and AI, receives significant funding. The UK’s AI Sector Deal aims to boost AI skills, R&D and infrastructure. The National AI Strategy claims to focus on long-term growth and ethical consideration.

Critics highlight the gap between the UK’s ambitious AI goals and the actual implementation. There are concerns about the lack of sufficient funding for AI ethics research and the potential for bias in AI systems developed without diverse input.

Canada: The country claims to be a pioneer in AI research, with significant investments through the Pan-Canadian Artificial Intelligence Strategy. The Canadian Institute for Advanced Research plays a key role in funding AI research. Canada’s AI strategy focuses on research excellence, talent development and commercialisation. The country has also established the Global Partnership on AI to promote ‘responsible AI development’.

Critics argue that the commercialisation of AI technologies lags behind other countries. There are also concerns about the brain drain, with top AI talent often moving to the US for better opportunities.

Whose interest?

So what can we say about western approaches to AI?

Firstly, on bias and discrimination, we all know that AI systems can perpetuate and amplify existing biases - in fact some current designs are open to such abuse. However, as the post-war consensus between the US, Canada and European countries regarding ‘liberal bourgeois’ ideology is fraying, we might see divergence on social issues between the kind of bias added by current European governments, as opposed to the more extremist rightwing bias supported by tech leaders (and increasingly the entire tech industry) in the US.

The widespread use of AI in surveillance and data analysis also raises significant privacy issues. Critics argue that current regulations are insufficient to protect individuals’ privacy rights.

It is unlikely that the current diverse approaches to AI can lead to a unified regulatory framework across western states. There are numerous inconsistencies hindering international collaboration and the EU’s AI Act is unlikely to get far in the current political climate.

Then there is ‘innovation vs regulation’: striking a balance between fostering innovation and implementing necessary regulations will be a major, persistent challenge.

Western states are also in a race with China to dominate the AI landscape. Inevitably this focus on competition may lead to a neglect of international scientific cooperation. We are already hearing scare stories about DeepSeek ‘security’ issues. DeepSeek makes no secret that it collects extensive user data, such as chat histories, uploaded files, IP addresses and even “keystroke patterns or rhythms”. This data is stored on servers located in China, and western companies and governments are raising concerns about potential access by Chinese authorities under local data laws. No doubt that is true, but many thousands of people have voluntarily - and at times unwittingly - provided unlimited access to their activities, thoughts, patterns of behaviour, etc, every minute of every hour, to western tech giants, nowadays owned and led by very dubious characters, some associated with extreme-right opinion. I am not sure if sharing data with the Chinese regime is going to be that much worse!

The benefits of AI advances are not evenly distributed, leading to even more economic inequality, especially as we see displacement of workers due to AI automation. We also have to be concerned about the militarisation of AI: the significant investment in AI for military applications raises many issues.

Given all the hype about AI and recognising the dangers posed by its current trajectory, both in China and the west, it is important that the organisations of the working class are well informed not only about the details of AI progress, but the potential risks it poses - not just to jobs, but to society in general. However, we should also recognise how, under different circumstances, humanity can benefit from the technological and medical advances made possible by AI.

In order to do so we must keep up to date with every aspect of AI development: we must understand how it works and be part of its evolution. Only then can we make practical, informed interventions to reduce military and ‘security’ abuse of AI. This will also help us play an active role in defending the skills and jobs currently under threat from the way capitalism sees this technology as a cost-cutting tool.