A leadership role on ethical AI is Canada’s to take or lose

February 27, 2020

This month, Georgetown University’s Center for Security and Emerging Technology issued a report highlighting the need for the United States and its allies to ensure that artificial intelligence is used to support liberal democratic values, and to protect against uses of AI that bolster authoritarianism. According to the report, Canada is an optimal partner for the US, well placed to take the lead in privacy-preserving machine learning and in shaping global norms and standards for AI.

Canada is certainly capable of promoting global norms, with a federal government commitment to fund AI research, an active AI community, and a rapidly developing tech sector. Expert help is available from leading AI researchers in Canadian universities and industries. Research institutes and civil-society groups also have expertise on various applications of AI.

The government’s commitment was on display on February 25, when Minister of Innovation, Science and Industry Navdeep Bains announced a $5-million grant for joint Canadian-British research into the challenges of AI. In an interview following the announcement, Bains identified data privacy as critical to maintaining the benefits of AI. He went on to say, “As we move forward with that, hopefully we’ll be able to tackle some of these issues with respect to weaponization, military and ethical use of AI.”

This remark misses a crucial point. Weaponization of AI, as well as military, security, and ethical uses of data and new technologies, cannot be afterthoughts. The government needs to commit resources, promote interdepartmental knowledge-building and -sharing, and empower individuals within government to develop new tools and policies on precisely these concerns.

We cannot afford to shy away from the more controversial or problematic applications of AI. We must acknowledge and respond to the reality that the same data profile that can assist individual citizens can be used to track and target them. Data can be used to identify and track specific groups or ethnic communities that are guilty of no crime.

We are increasingly aware of the harm that can be inflicted on individuals because of biases embedded in AI technology developed to assist with the provision of social benefits and healthcare, and in law enforcement and our legal system.

The Canadian government is starting to pay attention. A new parliamentary investigation with all-party support will look into the uses and abuses of facial recognition technology. But we should be farther down this path. In March 2017, Canada became the first country to release a national AI strategy. By now, we should have national directives and guidance on how to use AI and new security and military technologies.

And time is of the essence. It was recently reported that some Canadian police services had used Clearview AI’s facial recognition technology without authorization and, in Toronto’s case, without the knowledge of the chief of police. Now, the federal privacy commissioner and several provincial commissioners will probe the use of Clearview AI technology. Meanwhile, a broader national discussion on facial recognition technology is picking up steam.

Canada needs to get up to speed in other ways as well. The United States Department of Defense has just adopted five broad principles on ethical uses of artificial intelligence. The Pentagon has committed to further steps, including a bureaucratic hub and an ethics subcommittee of the high-level AI Steering Committee. It is not clear that Canada has any similar structures in place; if they exist, it is not known if they cut across governmental departments.

There is also a need to follow up on policy promises. The new mandate letter for Foreign Affairs Minister François-Philippe Champagne advised the Minister to “advance international efforts to ban the development and use of fully autonomous weapons systems.” Yet when he outlined Canada’s foreign policy priorities at a recent event in Montreal, he made no mention of these weapons. What could be more ethical than a commitment not to develop AI-enabled weapons that can take human lives?

If Canada is to lead on ethical AI, it must be heard and seen to lead. Even the United States recognizes that Canada has an important role to play. Let’s hope that Canada shows the same insight and takes on this much-needed leadership role on the global stage.

