Will killer robots be the Kalashnikovs of tomorrow?

A comment piece by Tom Simpson, a former Royal Marines Commando and Associate Professor of Philosophy and Public Policy at BSG. Professor Simpson responds to the publication of an open letter from artificial intelligence experts calling for a ban on research and development into offensive automated weapons. He argues that these weapons need regulation rather than a ban, and that "serious efforts should be invested in domestic and international arms control regimes".

Killer robot from the film Terminator. (Credit: Sarunyu L / Shutterstock.com)

Autonomous weapons systems—‘killer robots’—raise a visceral unease in both public and expert minds.

In an open letter released today, a group of artificial intelligence experts has renewed calls for a ban on research and development into offensive automated weapons. The letter argues that such R&D will spark an arms race, and that autonomous weapons will become the Kalashnikovs of tomorrow, in the hands of terrorists and warlords.

It is unquestionable that the world would be a better place if these weapons never existed. What is less clear is that this justifies a self-imposed ban on their development by the audience of the letter—the governments, universities and companies of the West. (That is where the signatories are overwhelmingly from.) Apply their argument to war itself and it is clearly specious. War is not beneficial to humanity. But it does not follow that any individual nation is unjustified in preparing for it. Humanity has, so far, worked out only how to make war unlikely at a local level. A universal ban has not yet ‘stuck’. Until it has, readiness for war cannot be wrong.

The Kalashnikov analogy is pertinent. Consider the position, say, of those responsible in the late 1940s for arming the US infantry. If highly effective rifles that were simple to operate were in the offing, should they not have developed the M16? Likewise today: we know that within 20 years this low-end technology could be in the wrong hands, and we need to prepare ways of maintaining our country’s security against that eventuality.

One application of small-scale automated weapons could be ‘swarm’ attacks, in which hundreds of such units mount simultaneous, coordinated assaults. Defending against a swarm will require a swarm of your own. Whether its algorithm is set to identify machines rather than people is a matter of detail. So there is no relevant distinction between ‘offensive’ and ‘defensive’ killer robots, which undermines a ban aimed only at the former.

This does not mean that we should just throw up our hands in despair at the impossibility of constraining the harm caused by low-end killer robots in 20 years’ time. Rather, serious efforts should be invested in domestic and international arms control regimes.

It is not accidental that the Kalashnikov, rather than the M16, has been the tool of choice for so many deaths in nasty, small wars. The Kalashnikov has been available, and geopolitics largely explains why. The Soviet Union armed its favoured insurgents with Kalashnikovs, and since its collapse these weapons have spread through a variety of failed states and dictatorships.

The more general point is that a self-imposed ban on the development of automated weapons is justifiable only if the ban can be made to ‘stick’—with rivals and enemies too. This assumes that the weapons are not ‘wrong in themselves’, like dum-dum bullets, which cause undue suffering.

Such a ban is perhaps plausible for high-end automated weapons, whose development requires visible industries and testing programmes and involves a limited number of actors. It is deeply implausible, however, that a ban will stick for low-end technology—the kind you can build in your basement—precisely because it is within reach of so many.