
As Lethal Autonomous Weapons Loom Large, Experts Call for Human Intervention

Ahead of the UN Group of Governmental Experts meet on Lethal Autonomous Weapons Systems in November, a panel discussion was held in Delhi.

Credits: Defense Advanced Research Projects Agency, U.S. Department of Defense

Imagine being in the vicinity of an automated gun system that makes decisions on its own, identifies the target and opens fire.

This July, Russian arms manufacturer Kalashnikov – maker of the AK-47 – announced it was developing a fully autonomous combat module based on Artificial Intelligence (AI) — specifically, neural network technology, or computer systems modeled on animal brains that can learn from past experience and example.
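To make the idea of “learning from past experience and example” concrete, here is a minimal, purely illustrative sketch of that kind of training: a single artificial neuron adjusting its weights from labelled toy data. The data and model below are invented for illustration and have no connection to any actual weapons system.

```python
# Toy illustration of "learning from examples": a single artificial neuron
# (perceptron) adjusts its weights from labelled data. The data is invented.
import numpy as np

rng = np.random.default_rng(0)

# Toy examples: points in 2-D space labelled 1 if x + y > 1, else 0.
X = rng.uniform(0, 1, size=(200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w = np.zeros(2)   # weights, learned from the examples
b = 0.0           # bias term
lr = 0.1          # learning rate

for _ in range(100):                      # repeated passes over the examples
    for xi, yi in zip(X, y):
        pred = 1.0 if xi @ w + b > 0 else 0.0
        err = yi - pred                   # how wrong was the prediction?
        w += lr * err * xi                # nudge the weights toward the answer
        b += lr * err

accuracy = np.mean([(1.0 if xi @ w + b > 0 else 0.0) == yi for xi, yi in zip(X, y)])
print(f"learned weights {w}, bias {b:.2f}, training accuracy {accuracy:.0%}")
```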

Over the past few years, debates and concerns have escalated over the rapid developments in Lethal Autonomous Weapons Systems (LAWS), which can potentially identify and attack targets without human intervention — and the implications of these “killer robots” being deployed in warfare and conflict situations. 

In 2015, more than 1,000 AI and Robotics experts, including Tesla and SpaceX founder Elon Musk and physicist Stephen Hawking, signed an open letter that warned of a “military artificial intelligence arms race” and called for an outright ban on “offensive autonomous weapons beyond meaningful human control”.

In 2016, a Review Conference of the Convention on Certain Conventional Weapons (CCW) of the United Nations (UN) – which bans or restricts the use of certain weapons “considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately” – decided to establish a Group of Governmental Experts (GGE) on LAWS.

But after the GGE’s first meeting this August was postponed, 116 AI and Robotics experts from 26 countries shot off another open letter to the UN, urging the GGE to ban LAWS and to “work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies.”

“Lethal autonomous weapons threaten to become the third revolution in warfare” after gunpowder and nuclear arms, said the letter. 

“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways,” it added.

The GGE meeting will now take place from November 13 to 17 this year and will be chaired by Amandeep Singh Gill, India’s Ambassador to the UN Conference on Disarmament in Geneva.

Ahead of the November meet, on October 3, a panel discussion on the topic of “Command and CTRL- Emerging Regime on Lethal Autonomous Weapons” was held in Delhi. 

The session was part of ‘CyFy 2017: The India Conference on Cyber Security and Internet Governance’, organised by the Observer Research Foundation (ORF), a think tank founded by Dhirubhai Ambani and currently funded by Mukesh Ambani’s Reliance Industries Limited (RIL).

The panel focussed on the question of what it would take to create a legal regime around LAWS within the framework of international humanitarian law.

The most crucial issue, the panelists agreed, was deciding upon and ensuring the requisite level of human control and accountability when it came to deployment of autonomous weaponry. 

The panelists, who have also been representatives to the UN GGE, were scientist John Mallery from the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology (MIT), Neil Davison of the Arms Unit under the Legal Division of the International Committee of the Red Cross (ICRC), and Arun Mohan Sukumar, who heads the ORF’s Cyber Initiative.

Introducing the session, Jeremy England, who heads the ICRC New Delhi Regional Delegation, pointed out that weapon systems were no longer the preserve solely of military debate. 

He outlined how, over the past 150 years, civilians had increasingly become the principal victims of weapons systems.

“If you look at Syria over the last 6 years, 400,000 civilians have been killed and 7 million displaced,” said England.

Speaking first, Arun Mohan Sukumar argued that despite the threat posed by LAWS, “international legal regimes to articulate the uses or the rules of conduct for these weapons systems” were likely to come about only in the aftermath of a “scary” event involving autonomous weapons, though he called the GGE a “great start”.

Neil Davison emphasised “the role and responsibilities of humans” in LAWS.

Davison said the first challenge was the lack of a “common conception of what we’re talking about” when we speak of autonomous weapons systems. 

He said the critical distinction between LAWS and other weapons systems was that the “targeting functions”, which are “most relevant to decisions to kill and destroy”, were transferred to machines.

Davison said it would be useful to learn from existing weapons systems that already have some degree of autonomous targeting, such as missile and rocket defence systems. A lot can also be learned from discussions about autonomous systems in other sectors, such as self-driving cars, he added.

Turning to the legal problems, he said it was “critical” to remember that the legal obligations lie with human beings and cannot be transferred to machines. 

Davison said a key problem in legal compliance could arise when there is “uncertainty or unpredictability between when you activate a weapon and what ultimately happens.”

“How do you then make a judgement about distinguishing between civilians and civilian objects and the legitimate military targets?” 

The ethical argument only reinforced the legal argument and the military ‘command and control’ argument for human control, he said. 

“My plea would be, for the GGE and this discussion, to start with the obligations for humans and what this means about how we decide to develop technology, because technology doesn’t develop itself,” he ended.

Scientist John Mallery laid out the technical aspects of LAWS, including the engineering and computational perspectives.

“The machine-human distinction is a red herring, what you actually are concerned with is that you want correct operation, whether it’s a human or a machine,” he said.

“Whenever you design a system, there’s a set of options and context in which they’re going to operate which you assume. The difficulty arises whenever you exceed those assumptions. So if you build a system for a purpose and it gets outside the assumption set, you’ve got a problem.”
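Mallery’s point about assumption sets can be sketched in a few lines of illustrative code: a system designed for a particular operating envelope should be able to recognise when conditions fall outside it. All names, ranges, and thresholds below are hypothetical, invented only to illustrate the idea.

```python
# Sketch of the "assumption set" idea: a system built for a known operating
# envelope should detect when an input falls outside the conditions it was
# designed for, rather than acting anyway. Everything here is hypothetical.
from dataclasses import dataclass

@dataclass
class OperatingEnvelope:
    """The conditions the designers assumed when building the system."""
    min_visibility_m: float = 500.0
    max_wind_mps: float = 15.0
    known_object_classes: frozenset = frozenset({"vehicle", "structure"})

def within_assumptions(env: OperatingEnvelope, visibility_m: float,
                       wind_mps: float, detected_class: str) -> bool:
    """Return True only if current conditions match the design assumptions."""
    return (visibility_m >= env.min_visibility_m
            and wind_mps <= env.max_wind_mps
            and detected_class in env.known_object_classes)

env = OperatingEnvelope()
# Inside the assumption set: the system may operate under its design guarantees.
print(within_assumptions(env, visibility_m=800, wind_mps=5, detected_class="vehicle"))   # True
# Outside the assumption set ("you've got a problem"): the only safe answer is to stop.
print(within_assumptions(env, visibility_m=200, wind_mps=5, detected_class="person"))    # False
```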

Mallery elaborated on the parts of an autonomous weapons system, meaning a “self-directed system”: navigation, locomotion, sensing and perception, target identification, target selection, and the release of force on target.

“We need to understand each of these functions for any kind of regulatory scheme or design scheme we might do,” he said.
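One hedged way to picture that functional breakdown, and the panel’s insistence that the decision to use force stay with an accountable human, is the toy pipeline below. Every function is a stub invented for illustration; it does not describe any real system.

```python
# Hypothetical sketch of the functions Mallery lists, with the final
# "release of force" step gated on explicit human authorisation, reflecting
# the panel's emphasis on human control. All functions are illustrative stubs.
from typing import Optional

def sense_and_perceive() -> dict:
    """Sensing and perception: build a picture of the environment (stubbed)."""
    return {"objects": ["unknown object at bearing 045"]}

def identify_targets(perception: dict) -> list[str]:
    """Target identification: classify perceived objects (stubbed)."""
    return perception["objects"]

def select_target(candidates: list[str]) -> Optional[str]:
    """Target selection: pick at most one candidate (stubbed)."""
    return candidates[0] if candidates else None

def human_authorises(target: str) -> bool:
    """The step the panelists argue must stay with a human: an accountable
    person reviews the selection and makes the legal judgement."""
    answer = input(f"Authorise engagement of '{target}'? [y/N] ")
    return answer.strip().lower() == "y"

def engage(target: str) -> None:
    """Release of force on target (stubbed: just prints)."""
    print(f"(stub) engaging {target}")

if __name__ == "__main__":
    target = select_target(identify_targets(sense_and_perceive()))
    if target is not None and human_authorises(target):
        engage(target)
    else:
        print("No engagement: no target selected or no human authorisation.")
```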

He said that in such machines, the sensing component was far more important than the effector component.

“I expect the problem area with LAWS would be that the error range would get higher as the technology proliferates because the actors will not be prepared to engage with the level of hard engineering that we need to actually make the systems work correctly,” Mallery said.

