Sending machines to war instead of people is already commonplace, but the general public is not quite on board yet.
There is a lot of talk about self-driving cars and whether they are safe enough to exist alongside humans on the streets – but what about military robots?
Defense systems are usually purchased with government funds, so the public should have a say in how they are used. Yet civilians are rarely included in military decisions. A European Union study, led by the Estonian Military Academy, aimed to change that.
Estonian, Austrian and Norwegian scientists looked at what people have to say about unmanned ground vehicles – in other words, self-driving weapon systems.
“Most civilians would only agree with the use of self-driving military vehicles if they were controlled by a human at all times,” Wolfgang Wagner, a social psychology professor at the University of Tartu, concluded.
He explained that the technology is still new and developing. Self-driving vehicles, for instance, are not yet ready to tackle difficult terrain and are vulnerable to glitches.
At the same time, governments all over the world are pushing the technology forward. Russia, for instance, has declared that it is developing a wide range of unmanned ground vehicle platforms, and the country’s president, Vladimir Putin, has emphasised the importance of artificial intelligence in the military.
Western countries are not lagging behind: many military robots are already in use in Europe. Australia’s prime minister Scott Morrison recently promised to send unmanned aerial and ground systems to Ukraine, amongst other support.
All this has created the need to understand how artificial intelligence-based military systems comply with the law. Someone needs to be accountable for the decisions a machine makes, after all. You cannot lock up a machine if it makes the wrong decision!
“In both the public and the academic discourse on technology, the development of intelligent systems is often portrayed as something inevitable and out of control,” Wagner has said.
This thinking partially derives from how robots have been portrayed as killer machines in science fiction movies.
Accordingly, the people who took part in the series of studies led by the Estonian Military Academy generally said they preferred defense systems to be controlled by a human at all times, for example via remote control from a distance.
From the legal perspective, similar concerns appeared. Janar Pekarev from the Estonian Military Academy has indicated that a weapon system that could select and engage targets without human intervention poses a serious challenge in terms of international humanitarian law.
The fundamental ethical question is whether people can delegate life-and-death decisions and accountability to artificial agents, the researchers concluded. Wouldn’t it be against basic human rights to classify a human being as a mere military target?
Based on his literature review, Pekarev concluded that even if autonomous weapon systems were able to follow the principles of the international law of armed conflict, some violations would still most likely occur. If that happens, who would be held legally responsible?
Having extensively studied the law of armed conflict, Camilla Guldahl Cooper, an associate professor in operational law at the Norwegian Defense University College, is a little more hopeful.
“Unmanned systems can be applied in war in a lawful manner,” she told Research in Estonia. “It requires a lot of awareness.” As part of the EU study, she concluded that there need to be limits on what a machine is allowed to do in a war situation. If there is a risk of civilian casualties, for instance, the machine would have to stop or a human would have to step in, depending on how it is programmed. In short, protecting civilians needs to be programmed into the machine – and a human would have to take the responsibility.
The trust issue will be overcome once people understand the unmanned military systems better, Cooper believes. It’s simply a matter of clearly stating who is responsible for what exactly.
For this, people must be able to trace back how the robots made the decisions they did.
“If you create a black box where you don’t know what is going on, then it’s not lawful,” Cooper said. Being able to control the machine is in the interest of everyone. Once the control is gone, the machine can turn against anyone, including your own people.
“Nobody wants an uncontrollable weapon!” she said.
Current artificial intelligence systems do, in any case, include human control on many levels: in designing their abilities, equipping and deploying them, turning them on, assigning their targets, and supplying their ammunition. “It’s just another system with a new capability,” Cooper said.
The rules of war are ancient, she pointed out – a longstanding part of our culture. Some of them are written in the Bible and the Quran, such as that one should not attack children or mistreat prisoners. EU and NATO countries try exceptionally hard to follow these rules, she said.
Technological development is a recurring aspect of warfare. We do not yet know everything about these new systems, which makes them understandably frightening, but they could also protect human lives in the middle of the chaos.
First though, we need to explore what the robots are able to do for us, because “once you see the potential, that’s where you start seeing the limitations,” as Cooper pointed out. “But we won’t be able to see the limitations before seeing the potential.”
Written by: Marian Männi
This article was funded by the European Regional Development Fund through Estonian Research Council.