Responsible AI for defence applications

June 7, 2021

By Branka Marijan

Published in The Ploughshares Monitor Volume 42 Issue 2 Summer 2021

The responsible use of artificial intelligence (AI) has featured prominently in recent national discussions and multilateral forums. According to the Organisation for Economic Co-operation and Development (OECD), 60 countries have multiple AI policy initiatives and more than 30 have national AI strategies that consider responsible use. However, these efforts have generally not yet addressed the use of AI for national defence.

In October 2020, the United States launched the AI Partnership for Defense with Australia, Canada, Denmark, Estonia, Finland, France, Israel, Japan, Norway, South Korea, Sweden, and the United Kingdom. The intent is to create standards for the ethical and responsible use of AI and, likely, to promote better integration and interoperability among military partners. This effort is widely understood to be motivated by a common desire to respond effectively to the adoption and use of AI by China and Russia.

But the partnership does not mean that all partners agree on every aspect of responsible AI use. Views differ on autonomous weapons, for example. France recently released a position paper that appears to differentiate between fully autonomous and partially autonomous lethal weapon systems. The label “partially autonomous” obscures the fact that critical decisions, such as the selection and engagement of targets, would still be handled by the weapon system. Canada, for its part, appears committed to maintaining significant human control: the Foreign Minister’s 2019 mandate letter includes support for international efforts to ban fully autonomous weapons.

Thus, it seems that the partners will take somewhat different paths to reach the goal of responsible AI.

HOW MILITARIES USE AND PLAN TO USE AI

The need to develop norms and legal rules for the use of AI is growing. According to the 2020 U.S. Congressional Research Service (CRS) report Artificial Intelligence and National Security, militaries around the world are developing and using AI for, inter alia, the collection and analysis of data, back-end functions such as logistics, and cyber operations. The CRS notes, “Already, AI has been incorporated into military operations in Iraq and Syria,” where the Pentagon’s Project Maven—essentially AI algorithms used to identify targets—was employed.

Project Maven revealed that militaries are actively seeking out AI tools for data analysis and, in particular, for the recognition of objects and individuals. These tools are largely developed by civilian industry. This point was highlighted in 2018, when Google employees protested the company’s involvement in Project Maven, particularly the development of algorithms to analyze drone footage.

Relying on the commercial sector is problematic for most militaries, because civilian technology is often ill-suited to military contexts and typically requires significant adaptation. Increasingly, militaries are looking to their own defence research arms to develop new technologies, recruiting outside talent as necessary and forming partnerships with AI researchers in universities and industry.

Defence companies have established their own AI divisions, while also investing in capabilities provided by companies that generally focus on the civilian sector. For example, DarwinAI, a Waterloo-based technology company that works on explainable AI, partnered with U.S. defence firm Lockheed Martin in 2020. The intent is to produce military systems in which the decisions made by AI can be interrogated and understood by the military end-user. The aim is to avoid “black-box AI,” whose decision-making is not understood by the user (see sidebar).
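What follows is a minimal sketch of what an inherently interpretable model looks like in practice, written in Python with scikit-learn. The scenario, feature names, and data are entirely hypothetical and do not describe any DarwinAI or Lockheed Martin system; real explainable-AI work is far more sophisticated.

```python
# A minimal, hypothetical sketch of an inherently interpretable model.
# The task, feature names, and data are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical sensor readings: [speed_kmh, altitude_m]
X = [[20, 0], [30, 2], [700, 9000], [850, 11000]]
y = ["ground vehicle", "ground vehicle", "aircraft", "aircraft"]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Unlike a black-box model, the full decision logic can be printed
# as plain if/then rules that an end-user can audit.
print(export_text(model, feature_names=["speed_kmh", "altitude_m"]))

# Any individual prediction can be traced back to a specific rule.
print(model.predict([[600, 8000]]))
```

However a real system is built, the design goal is the same: a human operator should be able to reconstruct why the system produced a given output.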

Many countries have expressed concern about black-box decisions and proprietary algorithms that cannot be audited. But explainability alone is not enough. Even certain explainable functions might need to be regulated or prohibited if they breach international or national law.

IS A NORMATIVE FRAMEWORK EMERGING?

So far, AI Partnership for Defense members are all committed to AI systems that are safe, reliable, and legal. But they remain tempted by the promise of speedier responses and reduced risk for their fighting forces. The best evidence of their commitment to responsible AI would be the creation of specific standards, agreements, and regulations that reflect thorough consideration of the impacts of AI on military operations, including global security concerns and the protection of civilians. So far, after only two meetings of the AI Partnership, there is no clear indication of where the partners will draw their lines in the sand.

Both the European Union (EU) and the United States aim to lead in shaping norms for the responsible use of AI in defence applications. AI Partners such as Canada that are still developing national AI policies, particularly on defence, will need to take EU and U.S. standards into account.

The developing EU model prioritizes privacy; its General Data Protection Regulation is often touted as a template for responsible AI governance. But there is not yet a single, uniform EU model. Germany, which is currently not in the Partnership, is wary of military uses of AI, while France wants more new technology, including AI, in its military.

The United States seems more interested in interoperability and data sharing among allies. Such sharing raises questions about national obligations to protect data and about how interoperability works between allies whose militaries are adopting AI at different speeds and with varying degrees of willingness. As might be expected, not all U.S. agencies involved with AI view its military use in the same way. At one extreme is the National Security Commission on Artificial Intelligence, an independent U.S. commission established in 2018, which recently released a report calling for much more aggressive adoption of new technologies by the U.S. military. The report argues that only such an approach will ensure that the United States can compete globally, especially with China and Russia.

Canada will need to navigate among these different approaches. Close U.S. ties will make it difficult for Canada to develop policies that do not focus on interoperability. But Canada must attempt to ensure that its own policies are in line with national obligations and serve its own economic interests.

And the conversation needs to go well beyond the current 13 Partners. A truly global conversation on the use of AI in defence applications is critical and urgently needed.

The Convention on Certain Conventional Weapons (CCW), which has focused on lethal autonomous weapons since 2014, has been perhaps less successful in achieving regulation than hoped, with no new agreement and talks largely stalled. However, the CCW has allowed a much wider group of countries to better understand advancements in AI technologies and potential concerns.

Perhaps it is time to consider a different venue for that global conversation to truly ensure responsible applications of emerging technologies.


What is black-box AI?

These are essentially models whose decision-making is not understood by humans, including the systems’ own designers. Black-box models combine variables in ways that produce a prediction or recommended action that humans cannot easily disentangle. Using such systems for decision-making in safety-critical contexts and in military operations raises concerns about unpredictable and unreliable decisions and actions. Other concerns focus on who will be held accountable when an AI system makes a mistake or acts in ways that were not anticipated.
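As a rough illustration of the contrast with the interpretable sketch earlier in this article, here is a minimal example in Python with scikit-learn (all data randomly generated and hypothetical): a small neural network can learn a task well, yet its only “explanation” is thousands of raw numeric weights that no human can read as rules.

```python
# A minimal, hypothetical sketch of the black-box problem.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(200, 10))           # 10 anonymous input features
y = (X[:, 0] * X[:, 3] > 0).astype(int)  # hidden rule the network must learn

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)

# The model may predict accurately, but its internal "reasoning" is
# spread across thousands of learned weights, not legible rules.
n_weights = sum(w.size for w in model.coefs_)
print("prediction for one input:", model.predict(X[:1]))
print("number of learned weights:", n_weights)
# Which features drove that prediction? The weights alone cannot say.
```

Post-hoc explainability tools attempt to approximate answers to that question, but they do not make the underlying model itself transparent.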
