By Cesar Jaramillo and Branka Marijan
The world today is faced with two existential dangers: climate change and nuclear weapons. A third, artificial intelligence, could soon be added. While we don’t know exactly how AI will develop, it is all but certain that it will disrupt the trajectory of human evolution and global security.
AI is not a single technology but an array of applications with the unique ability to erode human control, leading to profound uncertainty about its ultimate impact. Recent breakthroughs in large language models like ChatGPT have demonstrated AI’s potential to supercharge disinformation campaigns, aid in cyberattacks, and potentially even facilitate the creation of biochemical weapons.
Despite the fragility of AI systems, their widespread global adoption is driving rapid improvement. The rush to deploy AI tools proceeds despite their flaws, driven by unearned trust in their capabilities (automation bias), a desire for efficiency, and fear of falling behind. Clearly, regulation is needed.
Immediate concerns that demand attention include the introduction of biased and error-prone AI technology into critical contexts such as war zones, and the use of AI by law enforcement and border agencies. Already such use has produced wrongful arrests and amplified disinformation campaigns in various countries.
While government regulation lags, large tech companies consolidate their power in the new AI universe. Earlier this year, Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI adopted a set of voluntary guidelines, highlighting the absence of robust government regulations. When the European Union created the AI Act, lobbying by U.S. companies like OpenAI resulted in diluted regulations.
Several governments are seeking to become leaders in the development of global norms and standards. The United Kingdom is planning the first major global summit on AI safety; it also chaired a United Nations Security Council meeting on AI risk.
In the United States, Senate majority leader Chuck Schumer has initiated a plan to help Congress better understand the “unprecedented challenge” posed by AI through nine panels that include experts from industry, academia, and civil society. The United States has also updated its directive on autonomy in weapons systems and issued a political declaration on the responsible use of military AI.
Militarized AI must be at the top of the list for regulation. It is already in use, guiding targeting decisions on battlefields in Ukraine and elsewhere. AI technology enables growing autonomy; Lockheed Martin recently flew a fighter jet for 17 hours without human intervention. AI also controls loitering munitions, known as kamikaze drones, used in the Libyan civil war.
The possibility of fully autonomous weapons or “killer robots” that could select and engage targets without human intervention is alarming. The United Nations Convention on Certain Conventional Weapons (CCW) is engaged in efforts to control such developments but has made little progress because of a lack of political will and procedural hurdles. Meanwhile, the Pentagon's Replicator initiative, announced in August, aims to deploy thousands of autonomous systems in response to China’s advancements in this field.
Canada must proactively confront the dangers posed by AI, particularly its militarization. The proposed Artificial Intelligence and Data Act (AIDA) being considered by the House Standing Committee on Industry and Technology as part of Bill C-27 lacks clear references to AI military applications. AIDA is focused on “high impact” AI, but the scope is left unclear.
One possible course of action to address high-risk applications within security institutions and the military is the immediate establishment of a task force, high-level commission, or parliamentary hearings. These initiatives should involve experts from a diverse array of disciplines, including the sciences, technology, national and international law, ethics, and public policy.
Regulations need not hinder the benefits of AI but must be comprehensive and put public safety first. A fragmented approach or one that excludes key stakeholders will not suffice. Moreover, regulations governing military applications must have an international scope to prevent conflicting national laws that could jeopardize global stability.
The advent of AI has ushered in a technological revolution of unprecedented scale. The ability of AI to learn, adapt, and perform complex tasks with incredible speed and efficiency both promises and threatens to transform life in unimaginable ways. With AI's power growing exponentially in all spheres of human activity, including the military, the requirement for scrutiny and effective policy responses should be self-evident.
In this new era of technological advancement, Canada must ensure the responsible development of AI to protect its citizens and contribute to global security.