Weekly Worker

13.01.2022

Rise of the killer machines

It is no longer dystopian sci-fi. Yassamine Mather looks at the race to develop and deploy autonomous lethal weapons

In a December 2021 lecture on artificial intelligence, professor Stuart Russell warned that AI weapons pose a threat to humanity. Russell has, on a number of occasions, called for an international moratorium on autonomous lethal weapons: “The killing of humans should never be automated, based on algorithmic formulas.” During his lecture, Russell said: “Such dehumanisation of life-and-death decision-making by autonomous weapons systems must be outlawed worldwide.”1

One of the reasons Russell is worried is that AI lethal weapons are no longer science fiction: they are “small, cheap and easily manufactured”. According to him, such weapons are already advertised on the web and can be bought online today.

AI experts often refer to particular examples of such systems, not least the Israeli-made Harpy (IAI Harop), a 12-foot-long fixed-wing drone capable of carrying 50 pounds of explosives. It can be flown remotely or can operate autonomously once a human target is specified. There are rumours that India and Azerbaijan have both purchased Harop planes, and in fact Israel Aerospace Industries boasted last year that “hundreds” had been sold. The Harpy is programmed to fly to a specific geographic area, identify a particular human target (using face-recognition techniques) and kill them with a high-explosive warhead nicknamed Fire and Forget.

We also know about the Turkish arms manufacturer STM, which in 2017 revealed its production of the Kargu - a fully autonomous killer drone no bigger than a rugby ball. Kargu drones also rely on image and face recognition to attack their victims, and it is believed they were used in Libya in 2020 to selectively home in on targets. Again, according to Russell, “STM is a relatively small manufacturer in a country that isn’t a leader in technology. So you have to assume that there are programmes going on in many countries to develop these weapons.”

Most AI weaponry incorporates a number of diverse AI technologies - visual perception, speech and facial recognition, comparison with large databases, as well as decision-making tools - to perform a variety of air, ground and maritime military operations, all independently of human intervention or indeed supervision.

Then there is the category of ‘loitering attack munitions’ (LAMs). These weapons are pre-programmed to loiter in search of targets - enemy planes, ships or tanks - which are identified by sensors that detect an enemy’s air defence radar. LAMs use AI technology to shoot down incoming projectiles, reacting much faster than any human operator, and are able to remain in flight (or loiter) for much longer periods of time.

Israel’s Harpy II is a LAM that can remain in flight for up to six hours, and a number of fully autonomous LAMs have also been developed by the UK, China, Germany, India, Israel, South Korea, Russia ...

At a UN conference held in Geneva in December 2021, it became clear that a number of governments, including the US, Israel, Russia, the UK and Australia, are against a ban on such weapons. Surprisingly, China seems to favour a ban: the country’s UN arms control ambassador, Li Song, was quoted as saying that AI technology could play a significant role in future warfare, with the potential to cause major humanitarian crises, and that “rules were needed to stop it becoming a tool for war or hegemony”.2

Of course, China is well aware of US intransigence on this subject and is simply making such claims to maintain the ‘moral high ground’. In fact we already know that China’s sea- and air-based drones are linked to complex neural networks capable of monitoring and controlling the waters of the South China Sea, which could be used to impede future US ‘freedom of navigation’ in this zone. But it is clear that the Biden administration will not support a legally binding ban on the development of unmanned, autonomous war machines (or ‘killer war robots’), favouring instead a non-binding ‘code of conduct’.

Dangers

We all know of the mistakes and subsequent disasters caused by human-operated drones, so it is easy to imagine how the actions of autonomous ‘killer robots’ can get out of hand.

Amongst AI ethics experts who have expressed considerable wariness about such devices is John Tasioulas, Oxford professor and director of the Institute for Ethics in AI, who has tweeted that the Biden administration’s stance was “sad and unsurprising”.3

The 2017 video Slaughterbots gives a realistic representation of small AI-enabled weapons - bird-sized drones that target a specific victim or set of victims and are programmed to fire a small explosive charge into the target’s skull. Defending against or intercepting such small weapons is very difficult.

Today such weapons are reasonably cheap and are no longer science fiction. The next stage in their development is, it seems, to ‘teach’ them how to form an army and work as a team. In this and other AI applications, the group behaviour of the likes of ants and bees is studied and applied to autonomous devices, so that they act in coordination with each other. Imagine a few hundred thousand such devices attacking a city: in theory they could kill half of its inhabitants while leaving the infrastructure untouched.
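To give a flavour of how such coordination is achieved: the classic academic model of this kind of group behaviour is Craig Reynolds’ ‘boids’ flocking algorithm, in which each agent steers using only three local rules (separation, alignment and cohesion), with no central controller at all. The sketch below is a minimal, purely illustrative Python simulation of those three rules - it is not the control code of any actual weapon system, and every name and parameter in it is invented for the example.

```python
import numpy as np

# Minimal 'boids' flocking sketch (Reynolds, 1987). Each agent steers by
# three local rules: separation, alignment and cohesion. Illustrative only;
# all parameter values here are arbitrary choices for the example.

N = 50          # number of agents
RADIUS = 2.0    # neighbourhood radius within which agents react to each other

rng = np.random.default_rng(0)
pos = rng.uniform(0, 20, size=(N, 2))   # positions in a 20x20 plane
vel = rng.uniform(-1, 1, size=(N, 2))   # initial velocities

def step(pos, vel, dt=0.1):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        mask = (dist > 0) & (dist < RADIUS)      # local neighbours only
        if not mask.any():
            continue
        # Cohesion: steer towards the centre of nearby agents.
        cohesion = offsets[mask].mean(axis=0)
        # Alignment: match the average heading of neighbours.
        alignment = vel[mask].mean(axis=0) - vel[i]
        # Separation: steer away from agents that are too close.
        too_close = mask & (dist < RADIUS / 2)
        separation = -offsets[too_close].sum(axis=0) if too_close.any() else 0.0
        new_vel[i] += 0.05 * cohesion + 0.05 * alignment + 0.1 * separation
    # Cap speed so the flock stays stable.
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
    new_vel = np.where(speed > 2.0, new_vel * (2.0 / np.maximum(speed, 1e-9)), new_vel)
    return pos + new_vel * dt, new_vel

for _ in range(100):
    pos, vel = step(pos, vel)
print("flock spread after 100 steps:", pos.std(axis=0).round(2))
```

The striking point - and the reason swarms scale so cheaply - is that nothing in this loop refers to the flock as a whole: each agent reacts only to its immediate neighbours, yet the group moves as one.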

It is said that AI robots cost much less than hiring a killer and that no human lives would have to be sacrificed (no need for suicide bombers). But there is the complicated issue of accountability. Who is responsible - the person who gave the order, the one who programmed the robot, the manufacturer? All this creates a minefield when it comes to international law.

According to Stuart Russell,

The capabilities of autonomous weapons will be limited more by the laws of physics - for example, by constraints on range, speed and payload - than by any deficiencies in the AI systems that control them. One can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless.4

If such predictions are correct, AI arms could pose a greater danger than nuclear weapons, which international regulations, including the Non-Proliferation Treaty (NPT), supposedly limit. Although Israel, India and Pakistan have never signed, and North Korea withdrew in 2003, the other countries with nuclear weaponry have signed up to the NPT and face regular monitoring and inspection. In addition, nuclear weapons are allegedly just a deterrent, because everybody knows that nuclear war between the major nuclear powers would inevitably lead to mutual destruction.

When it comes to AI weapons, there have been several proposals for avoiding a potential disaster. For example, some AI experts and military advisors have suggested some kind of ‘human in the loop’ solution. Of course, we know that human decision-making has not saved the lives of innocent victims of drone strikes in Afghanistan or the Middle East. In any case, for the ardent supporters of AI weapons, the main advantage of their new lethal toys lies precisely in the speed and supposed accuracy of devices operating without human intervention.

Others have proposed ‘regulations’ to control AI weapons, but such proposals have not been taken very far. For a start, it is not at all clear what counts as a fully autonomous weapon. According to Heather Roff of Case Western Reserve University School of Law, autonomous AI weaponry consists of “armed weapons systems, capable of learning and adapting their ‘functioning in response to changing circumstances in the environment in which [they are] deployed’, as well as capable of making firing decisions on their own.”5

Some academics set a much lower threshold, classifying any weapons system capable of killing without human supervision as an autonomous weapon, while the British ministry of defence offers another definition:

systems that are capable of understanding higher-level intent and direction. From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control - such human engagement with the system may still be present, though. While the overall activity of an autonomous unmanned aircraft will be predictable, individual actions may not be.6

Then there is the question of who would oversee any regulation of AI weapons. It is said that such rules would permit only robots fighting robots, thereby avoiding ‘collateral damage’. But it is difficult to imagine imperialist states adhering to such regulations: during a war, the aim is often to inflict maximum damage, which includes human casualties.

Ban?

One proposal, supported by many academics and scientists, is the call from the Campaign to Stop Killer Robots for a ban on all AI weapons. There is nothing new about such a demand: biologists, chemists and physicists have long campaigned against biological, chemical and nuclear weapons. However, many states, including the USA, UK and Russia, oppose such a ban, claiming it is premature in relation to AI weaponry.

In February 2021 the US National Security Commission on Artificial Intelligence supported the development of autonomous weapons powered by AI software. Robert Work, a former deputy secretary of defence and panel member, claimed that autonomous weapons would make fewer mistakes than humans, for example by reducing casualties caused by target misidentification.

So what is the current state of play? Who is winning the war so far?

In the current arms race, China is not the underdog and there are predictions that it will soon overtake the US, mainly because of its superiority when it comes to the data needed to feed machine-learning algorithms. As early as 2018, Chinese president Xi Jinping was emphasising the importance of “big data”, cloud storage and quantum communications as amongst the “liveliest and most promising areas for civil-military fusion”, and the Chinese government has provided huge funding for these projects.

Scientists who argue for a ban on AI weapons make the following points:

In the current situation it is easy to see how poorer countries have already lost any chance of confronting such threats. If the US does not like the foreign policy of some ‘rogue state’ it will not need to organise a coup: instead it can deploy AI weapons with face-recognition algorithms to eliminate all the ‘unfriendly’ members of that government.

In August 2018 Venezuelan president Nicolás Maduro claimed he had survived an assassination attempt involving explosive drones. Although this cannot be confirmed independently, footage of his speech at an event marking the 81st anniversary of the national army shows the president suddenly looking upwards - he seems startled - and dozens of soldiers running away. Venezuela blamed elements within the US for instigating “a rightwing plot”.

It is easy to envisage a situation in which, rather than using sanctions, the US employs AI weapons to destroy the civil and military infrastructure of a ‘rogue state’, while claiming that there was no ‘collateral damage’.

In December 2021 an AI language model, the Megatron Transformer, made the headlines when it was used as a speaker in a debate at the Oxford Union, on the motion: “This house believes that AI will never be ethical”. The Megatron came up with a very interesting conclusion:

AI will never be ethical. It is a tool and, like any tool, it is used for good and bad. There is no such thing as a good AI - only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral … In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI.7 


  1. www.ft.com/content/03b2c443-b839-4093-a8f0-968987f426f4.

  2. www.scmp.com/news/china/military/article/3159704/time-set-global-rules-ai-warfare-china-tells-un-weapons-review.

  3. www.independent.co.uk/news/world/americas/us-politics/biden-killer-war-robots-ban-b1972343.html.

  4. www.nature.com/articles/521415a.

  5. core.ac.uk/download/pdf/214077458.pdf.

  6. www.gov.uk/government/publications/unmanned-aircraft-systems-jdp-0-302.

  7. www.bbc.co.uk/news/technology-59687236.