Regulating military use of AI

June 15, 2023

By Branka Marijan

Published in The Ploughshares Monitor Volume 44 Issue 2 Summer 2023

In a recent pitch video for its new Artificial Intelligence Platform (AIP), U.S. technology company Palantir previewed a chatbot that can launch a military drone, provide information on enemy movements, jam enemy communications, and offer options for a battlefield attack. The AIP uses large language models – the same technology that powers OpenAI’s ChatGPT. And if Palantir’s vision seems too disquieting to be realizable, we cannot simply dismiss it. Because Palantir is not alone. Other tech companies are also promoting gamified versions of warfare to militaries around the world. And some of these militaries are only too eager to sign on.

First out of the gate

The military advantages for early adopters of these developing AI systems were much discussed at the Responsible Artificial Intelligence in the Military Domain (REAIM) Summit that I attended at The Hague in the Netherlands this past February (see the Spring 2023 Monitor). As Palantir’s CEO Alex Karp told the Summit audience, “The country that wins the AI battles of the future will set the international order.”

Beyond assessing data with incredible speed, Karp and others claim that AI can understand the entire battlefield in ways that are useful to the client and unexpected by the opponent. The value of AI is already being displayed on contemporary battlefields. In a February 1 Reuters article, “Ukraine is using Palantir’s software for ‘targeting,’ CEO says,” we see how Ukraine has used Palantir’s AI software to its advantage against the stronger Russian military. In the article, Karp claimed that Ukraine had gained a more accurate view of the battlefield, easily finding Russian military targets and determining the best way to use its own resources.

Growing tensions between the United States and China and between most Western states and Russia encourage the belief that if Western states do not adopt AI military technologies, they will be giving autocratic regimes an advantage. This understanding also contributes to pushback against efforts to regulate these technologies. Some military analysts are even convinced that more authoritarian regimes and those less friendly to the Western-led global order will not follow regulations to control military AI.

From all these perspectives, unassisted human decision-making can be seen as both too slow and too subject to error, riddled with bias and emotion. The apparent remedy is seen in the Palantir video, which illustrates a significant aspect of the evolving character of warfare: the changing role of human decision-making. In the video, a human oversees the system, receiving information from the chatbot and making decisions. However, in this new style of warfare, the human simply approves or rejects actions that are recommended by the chatbot. It is not even clear that the human understands how the system has made the assessment or formed the recommendations.

Human vs. AI weaknesses

Proponents of military AI technologies stress that human military personnel get tired, make mistakes, respond emotionally. Chatbots and other AI tools are supposed to fix these human weaknesses but have their own shortcomings. AI researchers note that large language models provide inaccurate information; indeed, they “make things up” or even “hallucinate.” A New York Times piece entitled “When A.I. chatbots hallucinate,” posted on May 1, references an internal Microsoft document that states that AI systems are “built to be persuasive, not truthful.”

In safety-critical contexts, such as combat zones, the use of “persuasive” systems should be a grave concern. If systems hallucinate in these environments and the overseeing human does not understand how the system reaches its decisions, or does not have the time to assess them, the consequences could be catastrophic.

As Paul Scharre notes in “AI’s inhuman advantage,” posted on the platform War on the Rocks in April, militaries are introducing systems that react in ways that humans would not and that humans do not expect, because the systems have “alien cognition.” Scharre notes that such cognition can give the systems an “inhuman advantage” (although, if cooperation is required between humans and these systems, such cognition can be a disadvantage). But it also contributes to further dehumanization, treating warfare largely as a game with no societal consequences to consider.  

Proponents say that the technology will advance and improve and that problems with accuracy will be addressed, if not eliminated. Significant efforts are being pursued by leading tech companies, including OpenAI, Google, and Microsoft. Yet, as has been well noted, improvements to these systems could pose another challenge for users: over-trusting the system, known as automation bias. OpenAI noted in a paper, GPT-4 System Card, posted on March 23, that a display of accuracy by an AI system, on a topic with which the user has some familiarity, might lead the user to place unqualified and unearned trust in that system when it is used for other tasks.

Human strengths and responsibilities

In ongoing discussions on autonomous weapons at the United Nations, the establishment of human control over weapon systems is seen as critical to determining responsibility and accountability in the selection and engagement of targets. Because only humans can be held accountable for any actions that are taken or any disasters that result. Not machines.

There are also real benefits to retaining meaningful human control. A human understanding of context can play a critical role in warfare. Well-trained human soldiers can recognize a non-combatant or signals of surrender that a machine could miss or misinterpret. Properly trained military personnel can understand moral and ethical gradations in a way that machines simply cannot.

Establishing appropriate human control

As more decisions are relegated to AI-enabled platforms or shaped by them, the ability to hold human decision-makers accountable becomes more difficult. Human lives could come to be treated as lines of code in a trajectory of dehumanization and detachment.

Clearly, approving or rejecting an action is not a sufficient level of human control over weapon systems. But this is precisely what the new AI tools are offering in the name of speed and efficiency.

While Palantir has received a great deal of attention for its work with law enforcement and militaries, it is far from the only company keen to work with militaries and the wider defence sector to introduce AI into their operations. One of the challenges for regulators is that AI tools are widely available, and any number of them could be employed by military and security institutions.

Another challenge lies in ensuring that regulation keeps pace with the technology. Former Google CEO Eric Schmidt has suggested that tech companies should self-regulate, because they understand the technology as no one else can. But no one who pays attention to the rollout of various technologies subscribes to such a solution.

OpenAI is not clear on whether the Pentagon and intelligence agencies can even use ChatGPT. The company’s ethics guidelines ban military and other “high risk” use by governments. However, in “Can the Pentagon use ChatGPT? OpenAI won’t answer,” posted by Sam Biddle on The Intercept’s platform in May, readers learn that “the AI company is silent on ChatGPT’s use by a military intelligence agency.”

To truly respond to high-risk uses of AI technologies, such as use by the military, states must negotiate international instruments and develop comprehensive national policies. However, James Vincent, writing for The Verge, assessed the recent United States Senate hearings on AI as “too friendly.” Vincent noted that experts warn of the danger of “regulatory capture,” which lets the industry “write lax rules that lead to public harm.” Despite their apparent lack of interest, policymakers must set limits and guide technologists. Not the other way around.
