EDI, a fighter jet based on the F/A-37 Talon, is a fictional lethal autonomous weapon (LAW) designed to carry out airborne attacks on enemies and come back home in time for dinner, without any human intervention. Why?
A machine pilot would not suffer from the physical limitations a human pilot has to endure, while also being able to evaluate alternatives faster and more accurately. Moreover, it would not have a human ego.
Sounds safe, right? More on this later! Let us first take a look at what it takes to build weapons that have the licence to kill.
Fundamentals: What sets autonomous weapons apart
The biggest impact of LAWs is that they let us remove ourselves from the battlefield; that is, they take away the human operator sitting behind the gun. This is fine as long as the machines do not take it literally and go about eliminating all humans. Unfortunately, machines tend to take things literally more often than not (as anyone who has run a program on a computer knows).
This means that LAWs not only need the sensor capability to detect and identify targets, but must also distinguish friend from foe, and decide which type of tool to use to take down a particular kind of target with minimal collateral damage. Finally, they would have to activate the selected tool (guns, rockets, lasers, electronic jamming, flares and so on) and deliver it with reasonable accuracy.
Now, the first and last parts of this flow (the sensor and actuator sides of it) have been covered in many of our other articles, as well as by the Internet of Things (IoT) as a concept. But why do we need artificial intelligence (AI) strong enough to take humans out of the loop?
Looking into a black box
The capability to learn and adapt is serious business and can go either way. If you remember, within a day of Microsoft introducing Tay, its innocent female AI chatbot, to Twitter, it had transformed into an evil Hitler-loving character that spewed hate and swore at the Twitterati who tried to speak to it. Why?
Neural networks like the one in the fictional EDI do not perform rule-based calculations.
Instead, they learn by exposure to large data sets; these data sets could easily be supplied by the Big Data generated from the IoT.
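To make this concrete, here is a minimal sketch of learning by exposure rather than by rules. The task, data and model here are invented for illustration: the labelling rule lives only in the data generator, and a single logistic "neuron" recovers it purely from examples, without that rule ever being written into the model.

```python
import math
import random

random.seed(42)

# Generate a toy labelled data set: 2D points, labelled 1 when x + y > 1.
# This rule exists only in the data generator; the model is never told it.
examples = []
for _ in range(500):
    x, y = random.random(), random.random()
    examples.append(((x, y), 1.0 if x + y > 1.0 else 0.0))

# A single logistic neuron: just three numbers (two weights and a bias).
w1, w2, b = 0.0, 0.0, 0.0

def predict(x, y):
    """Sigmoid of a weighted sum: an output between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-(w1 * x + w2 * y + b)))

# Learn by exposure: nudge the weights toward each example's label.
lr = 0.5
for _ in range(200):                      # passes over the data set
    for (x, y), label in examples:
        error = predict(x, y) - label     # gradient of the log-loss
        w1 -= lr * error * x
        w2 -= lr * error * y
        b -= lr * error

# The learned weights now approximate the hidden rule.
print(f"near (0.9, 0.9): {predict(0.9, 0.9):.3f}")   # close to 1
print(f"near (0.1, 0.1): {predict(0.1, 0.1):.3f}")   # close to 0
```

Nothing in the final weights reads like a rule; the behaviour is distributed across numbers shaped by the data, which is exactly why inspecting a trained network tells you so little.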
As a result, the internal structure of the network that generates the output can be opaque to its designers, a black box. Even more unsettling, for reasons that may not be entirely clear even to AI researchers, a neural network can sometimes yield odd, counterintuitive results, like those of the Twitter bot we just talked about.
“A study of visual classification AIs using neural networks found that while AIs were able to generally identify objects as well as humans, in some cases AIs made confident identifications of objects that were not only incorrect but that looked vastly different from the purported object to human eyes. The AIs interpreted images, which to the human eye looked like static or abstract wavy lines, as animals or other objects, and asserted greater than 99.6 per cent confidence in their estimation,” explains Paul Scharre, senior fellow and director of 20YY Future of Warfare Initiative, in Autonomous Weapons and Operational Risk.
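One mechanism behind such misplaced certainty is easy to demonstrate. The sketch below uses made-up illustrative weights and inputs, not anything from the study Scharre cites; it shows how a classifier's sigmoid output saturates, so that any input lying far along the learned weight direction yields near-total "confidence", whether or not it resembles anything seen in training.

```python
import math

def confidence(z):
    """Numerically stable sigmoid: the model's 'confidence' in class 1."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

# Illustrative weights a trained classifier might end up with.
w, b = [2.5, -1.8, 0.7], -0.4

def classify(features):
    z = sum(wi * xi for wi, xi in zip(w, features)) + b
    return confidence(z)

# An input resembling the training data: moderate confidence.
print(f"{classify([0.6, 0.4, 0.2]):.3f}")

# Pure 'static': huge feature values the model never saw in training.
# The weighted sum is enormous, the sigmoid saturates, and the model
# asserts near-total certainty about an input that means nothing.
print(f"{classify([300.0, -200.0, 150.0]):.6f}")   # prints 1.000000
```

The model has no notion of "this input looks nothing like my training data"; it only measures distance from its decision boundary, and far from that boundary its certainty is absolute.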