The bad thing about LAWS

One of my daughters once proposed that my T-shirt should read: "I don't support war, but war supports me." And it's true, I suppose.

I write about lots of other things too, but I have been studying war, writing about wars, going to wars (but never fighting in one) for the whole of my adult life, partly because international relations are so heavily militarized, but also because for anybody who is interested in human behaviour, war is as fascinating as it is horrible.

So you might assume that I would leap into action, laptop in hand, when I learned that almost 3,000 "researchers, experts and entrepreneurs" have signed an open letter calling for a ban on developing artificial intelligence (AI) for "lethal autonomous weapons systems" (LAWS), or military robots for short. Instead, I yawned. Heavy artillery fire is much more terrifying than the Terminator.

The people who signed the letter included celebrities of the science and high-tech worlds like Tesla's Elon Musk, Apple co-founder Steve Wozniak, cosmologist Stephen Hawking, Skype co-founder Jaan Tallinn, Google DeepMind chief executive Demis Hassabis and, of course, Noam Chomsky. They presented their letter in late July to the International Joint Conference on Artificial Intelligence, meeting this year in Buenos Aires.

They were quite clear about what worried them: "The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow."

"Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populations, warlords wishing to perpetrate ethnic cleansing, etc."

"Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity."

Well, no, it wouldn't be beneficial for humanity. Few arms races are. But are autonomous weapons really "the key question for humanity today"? Probably not.

We have a few other things on our plate that feel a lot more "key," like climate change, nine civil wars in the Muslim parts of the world (Afghanistan, Iraq, Syria, southeastern Turkey, Yemen, Libya, Somalia, Sudan and northeastern Nigeria) — and, of course, nuclear weapons.

The scientists and experts who signed the open letter were quite right to demand an international agreement banning further work on autonomous weapons, because we don't really need yet another high-tech way to kill people. They might even succeed, although it will be a lot harder than banning blinding laser weapons or cluster bombs.

But autonomous weapons of the sort currently under development are not going to change the world drastically. They are not "the third revolution in warfare, after gunpowder and nuclear arms," as one military pundit breathlessly described them. They are just another nasty weapons system.

What drives the campaign is a conflation of two different ideas: weapons that kill people without a human being in the decision-making loop, and true AI. The latter certainly would change the world, as we would then have to share it, for good or ill, with non-human intelligences — but almost all the people active in the field say that human-level AI is still a long way off, if it is possible at all.

As for weapons that kill people without a human being choosing the victims, those we have in abundance already. From land mines to nuclear-tipped missiles, there are all sorts of weapons that kill people without discrimination in the arsenals of the world's armed forces. We also have a wide variety of weapons that will kill specific individuals (guns, for example), and we already know how to "selectively kill a particular ethnic group," too.

Combine autonomous weapons with true AI, and you get the Terminator, or indeed Skynet. Without that level of AI, all you get is another way of killing people that may, in certain circumstances, be more efficient than having another human being do the job. It's not pretty, but it's not very new either.

The thing about autonomous weapons that really appeals to the major military powers is that, like the current generation of remote-piloted drones, they can be used with impunity in poor countries. Moreover, like drones, they don't put the lives of rich-country soldiers at risk. That's a really good reason to oppose them — and, if poor countries realize what they are in for, a good opportunity to organize a strong diplomatic coalition that wants to ban them.

Gwynne Dyer is an independent journalist whose articles are published in 45 countries.