A milestone for military AI?

August 26, 2020

On August 19, a human F-16 fighter pilot engaged with an artificial intelligence (AI) algorithm in a virtual simulation of a dogfight. The human pilot lost all five rounds. So was this a historic moment for the military use of AI?

AI WINS AGAIN

For some observers, and for the United States Defense Advanced Research Projects Agency (DARPA), which organized the simulation, this was a watershed moment. It was not the first time a human pilot had lost to AI; that happened in 2016 with the AI system ALPHA. However, this most recent simulation used more advanced AI.

In the virtual dogfight, AI agents first engaged with each other and then with the human pilot. In this particular test case, the AI systems were not allowed to “learn” from their experiences, but merely employed pre-programmed capabilities. In reality, AI systems can “learn” a great deal by doing, processing data that allows them to optimize their next move.

BUT IS AI READY FOR BATTLE?

Other analysts were not convinced that the event revealed anything more than how AI can be used in simulations. Real-world environments are far more unpredictable, filled with ‘noise’. These observers felt that the technology needed further development before it would be useful in actual military engagements.

Duke University professor Missy Cummings, a former U.S. Navy pilot, called the event “AI theater.” Cummings noted that DARPA chose a dogfighting simulation because observers think it’s “cool”—and it’s easier to program. It DID make for good drama, as viewers of DARPA’s YouTube channel could attest.

Still, the symbolic importance of the event should not be overlooked. It highlighted once again that military use of AI will only continue to evolve and that the issues surrounding AI-enabled or AI-supported weapons systems are not a distant concern. Most crucially, the aim of the simulation was to build trust in AI. Trust is key to the adoption of AI by the military, and here there is a well-documented gap among military personnel.

During the simulation, DARPA Air Combat Evolution (ACE) program manager Col. Dan “Animal” Javorsek stated, “If the champion AI earns the respect of an F-16 pilot, we’ll have come one step closer to achieving effective human-machine teaming in air combat, which is the goal of the ACE program.” The exercise was thus envisioned as part of the broader trust-building goal of the ACE program. According to DARPA, “the program will scale the tactical application of autonomous dogfighting to more complex, heterogeneous, multi-aircraft, operational-level simulated scenarios informed by live data, laying the groundwork for future live, campaign-level Mosaic Warfare experimentation.” Demonstrating that AI systems can perform as well as, or even better than, humans in complex environments and operations will undoubtedly be key to convincing soldiers to trust the machines.

BALANCING AI AND HUMAN CONTROL

Human-machine teaming gets to the heart of much of the current discussion. Many interested parties are keen to ensure appropriate human control over weapons systems and platforms; in their view, the team leader should always be human. During the August simulation, Javorsek sought to reassure observers that humans will remain in control of weapons and that the level of autonomy in new systems is not cause for worry.

But the fact that increasingly autonomous systems are being viewed as normal IS worrisome. It raises questions about the extent of human control over the machine teammate when humans are placing more and more trust in AI. Why would the military trust a human operator when an AI system can respond more quickly and with greater accuracy?

As University of Washington professor Ryan Calo points out, “There’s tension between meaningful human control and some of the advantages that artificial intelligence confers in military conflicts.” Given that tension, how long can we expect the military to voluntarily commit to principles regarding human control?

The August simulation should therefore motivate countries to move forward the discussion on autonomous weapons at the United Nations Convention on Certain Conventional Weapons, even under the current challenging conditions. With regulation rapidly falling behind developments in technology, the simulation makes clear the need for immediate regulation and the establishment of global norms on the application of AI in warfare. And that is action firmly in human hands.
