Weekly Worker

01.04.2021

AI and our tasks

Yassamine Mather examines the many problems posed for the workers’ movement by artificial intelligence

According to the World Economic Forum (WEF), “a new generation of smart machines, fuelled by rapid advances in artificial intelligence (AI) and robotics, could potentially replace a large proportion of existing human jobs”.1 Robotics and AI will cause a serious “double disruption”, as the coronavirus pandemic has pushed companies to fast-track the deployment of new technologies in order to slash costs, enhance productivity and become less reliant on real-life people.

We all know about massive job losses caused by the effects of Covid-19. However, the predictions for the next few years are alarming. The WEF estimates that currently approximately 30% of all tasks are done by machines - and, of course, humans do the other 70%. But by the year 2025 this balance will dramatically change to a 50-50 combination of humans and machines. According to PricewaterhouseCoopers, “AI, robotics and other forms of smart automation have the potential to bring great economic benefits, contributing up to $15 trillion to global GDP by 2030.”2

The downside will be the human cost: new skilled jobs will be created, but many existing jobs will disappear. “Banking and financial services employees, factory workers and office staff will seemingly face the loss of their jobs - or need to find a way to reinvent themselves in this brave new world.” While the figures vary, a conservative estimate puts the losses at 85 million jobs by 2025, and it is believed that over 50 million Chinese workers may require retraining as a result of the deployment of AI.

So what exactly is artificial intelligence and why is it endangering so many jobs? AI is the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making and translation between languages.

So far we have four types of AI:

1. Reactive machines. These are programmed to respond/react to a given set of conditions. Robots used in car assembly plants are a good example of this category. If the given conditions change (for example, the location of the parts an assembly-plant robot is programmed to pick up), such machines are lost.

2. Limited memory. This type of device remembers events and data. Self-driving cars use sensors (radar, sonar, etc) to perceive their surroundings and estimate changes in location. Advanced control systems interpret sensory information to decide the correct navigation paths, to avoid obstacles and respond to relevant road signage.

The relative success of the above two categories should be obvious.

3. Theory of mind. Human beings have thoughts and feelings, memories or other brain patterns that drive and influence their behaviour. Researchers involved in theory-of-mind work believe it is possible to develop computers able to imitate human mental models - machines capable of understanding how thoughts and feelings in humans affect their behaviour.

Theory-of-mind machines would need to take the information they derive from people and learn from it, which would then inform how the machine enters social interaction - and how it communicates in or reacts to different situations.

A famous, but still very primitive, example of this technology is Sophia, the world-famous robot developed by Hanson Robotics, which often undertakes press tours to show off an ever-evolving example of what robots are capable of doing. Whilst Sophia is not natively able to detect or understand human emotion, ‘she’ can hold a basic conversation, using image recognition and an ability to respond to interactions with humans with the appropriate facial expression - helped by an incredibly human-like appearance.

4. Self-awareness. This is probably the most challenging form of AI. In theory these machines will have human-level consciousness and understand their existence in the world - a long-term goal for AI. A machine that has memory and accumulates information, learning from events, can apply that knowledge to future decisions. Developing this will lead to AI innovation that could turn society on its head, enhance how we live and save lives.

In both traditional science and social science, deep learning and machine learning are offering new ways to develop models, train bots and classify data. The aim here is to teach computers to learn from examples, memorise the data they have been given and use it to classify new inputs.

In order to teach a computer ways of classifying input, we can use what is referred to as the ‘standard machine-learning approach’. We select features of an image - for example, its corners and boundaries - and use them to train the computer. Every object subsequently presented is recognised by reference to the features the computer has learnt, and then evaluated.
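
As an illustration of this ‘standard’ approach, the minimal sketch below - which assumes the scikit-image and scikit-learn libraries, a small built-in digits dataset and a simple linear classifier, none of which are specified in the article - extracts hand-chosen edge and corner features first, and only then trains a classifier on them:

```python
# A minimal sketch of the 'standard machine-learning approach': hand-chosen
# image features (here, edge- and corner-sensitive HOG descriptors) are
# extracted first, and a conventional classifier is then trained on them.
# Dataset, feature choice and classifier are illustrative assumptions.
from skimage.feature import hog
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

digits = load_digits()  # small built-in set of 8x8 greyscale digit images

# Step 1: extract hand-picked features from every image
features = [hog(img, pixels_per_cell=(4, 4), cells_per_block=(1, 1))
            for img in digits.images]

X_train, X_test, y_train, y_test = train_test_split(
    features, digits.target, random_state=0)

# Step 2: train a classical classifier on those features and evaluate it
clf = LinearSVC()
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```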

Deep learning uses more advanced techniques. Images of objects or scenes are fed directly into the deep-learning algorithm, which learns its own features. When there is a large amount of data - tens of thousands of images - it is necessary to use a high-performance graphics processing unit (GPU) to train a model that can recognise objects with reasonable accuracy. The time needed to build a model depends on the capability and number of central processing units (CPUs), as well as GPUs, you have - and this could speed up further through the use of quantum computers in the future.
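
By contrast, in a deep-learning sketch the raw images go straight into a small convolutional network, which learns its own features, and training runs on a GPU when one is available. The PyTorch library, the MNIST digits dataset and the tiny network below are illustrative assumptions, not a production system:

```python
# A minimal deep-learning sketch: raw images are fed directly into a small
# convolutional network, which learns its own features; training uses a GPU
# when one is present. Network, dataset and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"  # GPU if available

data = datasets.MNIST(root="data", train=True, download=True,
                      transform=transforms.ToTensor())
loader = DataLoader(data, batch_size=64, shuffle=True)

model = nn.Sequential(                       # no hand-crafted features
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 14 * 14, 10)
).to(device)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:                # a single pass as a demonstration
    images, labels = images.to(device), labels.to(device)
    loss = loss_fn(model(images), labels)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```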

Human work

Given the current capabilities of machines using AI, you could say that any job which can be theorised as a process can be automated - and in our current world this covers a large number of professions. In advanced capitalist countries, the conscious or unconscious dumbing down of education and training, together with top-down management systems, has paved the way for such a situation. In other words, anyone in a job where the line manager decides their daily or hourly tasks should be concerned.

However, even under capitalism, leaving aside skilled programmers and AI specialists, it will ironically be those who can think outside the box and use their imagination who will keep their jobs.

Driving: Over the last few years, the main objective of Uber, for example, has been the development of an autonomous car. In December 2020 the company’s Advanced Technologies Group was part-sold to Aurora - a start-up backed by Amazon and Sequoia Capital, known for making sensors and developing software for autonomous vehicles. Uber owns 26% of the company and Uber’s chief executive sits on Aurora’s board. So we can expect Uber to continue using real-life data from the recorded daily experience of its drivers to improve the capabilities of driverless cars.

In the current set-up the human driver has the advantage of being able to make conscious decisions. However, with developments in AI and improvements in robotics, Uber and others involved in developing autonomous vehicles are looking for driverless cars that can make human-like decisions. Covid - and with it an increased reliance on internet deliveries - has speeded up this process.

In terms of software, all that is required is a few lines of code, giving the autonomous vehicle a time limit, say, for waiting for passengers, before it automatically switches to the software necessary to act as a delivery vehicle. Also required would be the automation of a boot-opening mechanism, programmed to react to human intervention at the depot and delivery point. Of course, an autonomous car can find itself in all sorts of unforeseen circumstances and, as with all other forms of automation, it will rely on an army of low-paid employees to correct ‘automation’ errors.
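
To make the point concrete, a purely hypothetical sketch of such mode-switching logic might look like the following - the mode names, the five-minute waiting threshold and the boot-unlock comment are all invented for illustration and describe no real vehicle software:

```python
# A hypothetical sketch of the mode-switching logic described above: if no
# passenger turns up within a time limit, the vehicle falls back to delivery
# work. All names and thresholds here are invented for illustration.
import time
from enum import Enum

class Mode(Enum):
    RIDE_HAILING = "ride_hailing"
    DELIVERY = "delivery"

PASSENGER_WAIT_LIMIT_S = 300  # assumed five-minute waiting limit

def choose_mode(wait_started_at: float, passenger_on_board: bool) -> Mode:
    """Decide whether an idle autonomous vehicle should switch to deliveries."""
    if passenger_on_board:
        return Mode.RIDE_HAILING
    if time.time() - wait_started_at > PASSENGER_WAIT_LIMIT_S:
        return Mode.DELIVERY  # e.g. unlock the boot at depot and delivery point
    return Mode.RIDE_HAILING

# Example: a vehicle that has waited six minutes with nobody on board
print(choose_mode(wait_started_at=time.time() - 360, passenger_on_board=False))
```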

‘Ghost’ AI work: a number of online platforms - eg, Sama, CrowdFlower, Microworkers and Amazon Mechanical Turk (MTurk) - rely on low-paid ‘ghost’ workers. MTurk is the most interesting: it was named after ‘The Turk’ - an 18th-century ‘automaton’ that won chess games across Europe, only to be exposed later as having a human hidden inside it. With unemployment soaring in the US and the UK, there is no shortage of volunteers for MTurk work. The company’s website tells us:

While computing technology continues to improve, there are still many things that people can do much more effectively than computers. These include tasks such as identifying objects in a photo or video, performing data de-duplication, transcribing audio recordings or researching data details.3

However, it unashamedly declares the aim behind this to be “to minimise the costs and time for each stage of machine-learning development”.

Other such workers are used by YouTube and social media companies to block ‘unsuitable’ content and to correct or improve decisions made by bots regarding ‘offensive content’. In the US many of these workers are paid as little as $2 (£1.45) per hour and the employer has the right to reject the work they have done without explanation. Most such companies also use workers around the globe - especially cheap labour in Africa. According to Saiph Savage, director of the Human Computer Interaction Lab at West Virginia University, often little is known about who the workers are. She cited a recent study relating to YouTube that found that some LGBTQ content had been banned: “Dig beneath the surface and it was not the algorithm that was biased, but the workers behind the scenes, who were working in a country where there was censoring of LGBTQ content.”4

According to Oxford’s Online Labour Index, US employers are the largest users of online labour, followed by the United Kingdom, India and Australia. Using these digital labour platforms, companies manage real-time hiring of services from a global pool of low-cost labour, ranging from IT design to copywriting and routine clerical tasks. These ‘ghost’ human workers are available to work day and night: they play a vital role in keeping systems and services operational, while the consumer assumes all this to be automatic.5

So, while companies are training bots to improve their machine learning, all sorts of bias could be pre-programmed into the artificial intelligence of the future.

News reporting: We already know that Jeff Bezos’s The Washington Post uses a bot called Heliograf to write stories about content that the staff are unable to cover. Associated Press follows a similar process. In fact, if you look at news agency and newspaper web pages, you might be surprised at the similarity of the coverage of some stories. On occasion there is some minimal human intervention, but the bots have picked up stories according to similar algorithms.

One reason why we are in this state and why journalists’ jobs are in danger is the domination of the media echo chamber, with its centre-right ideology. They carry more or less the same headlines - at times picked up by bots searching social media or other news sites. Investigative journalism has been dead for the last couple of decades. There is no radicalism, no thirst for the truth, no attempt to think outside the box. If journalists want to keep their jobs in such a situation, they will have to show more originality, engage in proper investigation of stories, challenge and look beyond the media echo chamber. Otherwise bots will take over, even when it comes to evolving stories.

Manufacturing: Here automation has obviously already cost millions of jobs. Before the pandemic it was estimated that another 20 million manufacturing jobs were set to be lost to robots by 2030, but that figure has risen sharply since February 2020. According to Time magazine,

The drive to replace humans with machinery is accelerating, as companies struggle to avoid workplace infections of Covid-19 and to keep operating costs low. The US shed around 40 million jobs at the peak of the pandemic and, while some have come back, some will never return. One group of economists estimates that 42% of the jobs lost are gone forever.6

Clerical work: If the first wave of automation mainly took its toll in terms of blue-collar jobs, white-collar work will certainly be more affected by advances in AI.

AI can help with monotonous legal work by improving productivity and automating tedious tasks that do not require expertise, such as collecting and processing data. In this respect administrative and paralegal jobs are definitely in danger, as are legal jobs that follow a set process: ticking boxes, creating documents, etc. As far as insurance is concerned, AI technology using machine learning is taking over every aspect, including life insurance.

Over the last few years, tasks previously identified as human-resource responsibilities have been automated, especially in larger organisations. A long list of software programs has taken over human tasks, recording everything from time sheets to allocation and approval of leave. Capitalism has deprived human resources (HR) of any empathy, humanity, emotion or sensibility, so there is no doubt that this category of jobs will continue to be endangered, with bots replacing whatever is left of HR.

Most people requiring customer services from banks, stores and service providers will be aware that the most efficient way to make an enquiry is to use automated services on their website - the only other option being holding a phone to your ear for what seems like hours, listening to boring music, before a human finally answers - perhaps only to tell you to use a particular form on the company website. It is a similar case with IT administrators, project managers, etc - all these jobs follow very set processes requiring little imagination or innovation. Many aspects of them are already done or assisted by computers.

Our response

Of course, there is no doubt that under capitalism robots and artificial intelligence help to increase the exploitation and control of the working class. The contradictions inherent in capitalism mean that the rollout of these new technologies will be uneven and decided on the basis of maximum profit.

In terms of the socio-political consequences of these developments, a number of (often contradictory) theories are discussed:

1. AI-driven concentration of economic power could create the necessary conditions for a revolution. According to this view, the fundamental transformation of working patterns caused by AI can lead to the concentration of economic power in the hands of a capital-owning techno-elite. This in turn will result in labour revolts against capital - in other words, an extension of the ‘internal contradictions’ of capitalism that Marx referred to at the time of the first industrial revolution, when automation allowed a capitalist managerial elite to build up significant power.

The problem with this analysis is that it ignores the current weakness of the left and the absence of revolutionary organisation of the working class. Surplus production overtaking our needs, and the explosion of leisure time that would accompany it, can only happen under communism.

2. State-controlled algorithms might enable an economy delivering “From each according to his ability, to each according to his need” - and in doing so replace market capitalism. Centralised, data-driven algorithms could potentially deliver better economic results than decentralised market competition. According to this scenario, free markets, as currently configured, would be replaced by centralised command-and-control economics. Such measures would limit the well-known externalities associated with capitalism: inequality and environmental degradation.

Again, given the state of the international left and its lack of awareness of these technologies, this remains very much in the arena of wishful thinking. However, it is true to say that under Covid and in the post-Covid situation, if states do not intervene, inequality will rise dramatically and the majority of the population will struggle to survive economically, with living standards dropping substantially. The AI industry will enhance the tendency towards monopolisation (big data improves companies’ algorithms, allowing them access to an even larger consumer market) - and this is in line with Marx’s prediction that the introduction of new technology leads to the formation of a permanently unemployed class, together with greater inequality.

3. Surveillance aids dictatorships: eg, a combination of Covid and progress in technology has created advantages for the Chinese leadership. At the onset of the Covid crisis Chinese citizens were subjected to a form of risk scoring. A computer algorithm assigned people a colour code - green, yellow or red - which determined their fitness to, for instance, enter buildings in China’s larger cities. In a sophisticated digital system of social control, codes like these could be used to score a person’s political views, with restrictions imposed accordingly.

It is possible to use algorithms to combine data points from a large number of sources - for example, internet communication, travel records, social media friends, reading habits, online purchases - to predict the political opinion of individuals and restrain them accordingly. Clearly in most of the world we are not there yet. However, we should not ignore the warning signs.
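
To illustrate the mechanism - and not any real system - the toy sketch below combines a few data points into a score and maps it to a green/yellow/red code. Every input, weight and threshold is invented, and the same logic could just as easily score political ‘risk’ as health risk:

```python
# A purely illustrative sketch of colour-code risk scoring: a few data points
# are combined into a score and mapped to green/yellow/red. Inputs, weights
# and thresholds are invented; they describe no real system.
def risk_code(travelled_to_hotspot: bool,
              contact_with_confirmed_case: bool,
              days_since_negative_test: int) -> str:
    score = 0
    score += 2 if travelled_to_hotspot else 0
    score += 3 if contact_with_confirmed_case else 0
    score += 1 if days_since_negative_test > 14 else 0

    if score >= 3:
        return "red"     # barred from entering buildings
    if score >= 1:
        return "yellow"  # restricted access
    return "green"       # free movement

print(risk_code(False, False, 20))  # -> "yellow"
```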

From our point of view, it is important to keep up to date with all aspects of robotics, AI and machine-learning development: closing one’s eyes will not make this question disappear.

When it comes to our minimum programme, what are the demands we should make? This will require some consideration, but as a first step we should organise the working class to resist management techniques that already treat human employees as bots - perhaps prior to replacing them with AI. This means active resistance to dehumanising processes. In every job humans have so much more to offer than simply following a dumbed-down list of simple tasks: they can use their experience, their accumulated knowledge and their humanity to enhance the quality of the work they do. Trade unions should encourage employees to think outside the box.

Nowadays every job, from cleaning to teaching, from baggage handling to piloting, has a whole raft of line managers - often themselves managed by those who have very little understanding of the tasks involved. They are just ‘managers’, after all - but we should challenge the whole concept of line management. Only jobs where humans can make decisions will survive in future, and current processes, overseen by a hierarchy of line managers, are not amongst them.

We have to call for more transparency when it comes to AI ‘ghost workers’, including decisions they make that may introduce gender, race or political bias into artificial intelligence. While machines themselves may well be blamed, it is the way major companies use ghost workers that should be challenged.


  1. weforum.org/agenda/2018/09/ai-and-robots-could-create-as-many-jobs-as-they-displace.↩︎

  2. pwc.co.uk/economic-services/assets/international-impact-of-automation-feb-2018.pdf.↩︎

  3. mturk.com/worker.↩︎

  4. bbc.co.uk/news/technology-56414491.↩︎

  5. See ilabour.oii.ox.ac.uk/online-labour-index.↩︎

  6. time.com/5876604/machines-jobs-coronavirus.↩︎