Last month, 116 robotics experts, including Tesla and SpaceX CEO Elon Musk, signed a letter calling on the United Nations to take action on lethal autonomous weapons systems, or “killer robots.” Musk has become the most prominent figure warning about the military uses of artificial intelligence (AI), and has brought increased media attention to the issue.
Recently, Musk pointed to remarks made by Russian President Vladimir Putin. In a video address to 16,000 Russian students, Putin stated, “Artificial intelligence is the future, not only for Russia, but for all humankind…. Whoever becomes the leader in this sphere will become the ruler of the world.” Musk tweeted in response: “Competition for AI superiority at national level most likely cause of WW3.”
We don’t know whether AI competition will lead to war, but it is true that countries such as China, Russia, and the United States see AI as central to their security. A number of state and non-state actors are developing weapons and security systems that incorporate AI. China, for example, has published a strategy to become the central node of global AI innovation by 2030 and is committed to investing in AI research and development for national defence. Russia is playing catch-up, and other countries are open to developing military uses for AI in the future.
Some analysts, however, are skeptical about AI’s current capabilities. They don’t believe the technology is advanced enough to worry about, and Musk has been strongly criticized for “creating an echo chamber for hysteria.” Carleton University professor Stephanie Carvin described him as a “fear mongering technological determinist with no understanding of politics.”
Facebook founder Mark Zuckerberg seemed to have Musk in mind when he stated that individuals peddling doomsday scenarios are “pretty irresponsible.”
University of North Carolina professor Zeynep Tufekci wants us to focus instead on how AI will be used by those in power. Tufekci points to a machine learning system that can identify 69% of protestors wearing disguises such as scarves over their faces or hats and glasses. How will governments, democratic and autocratic alike, use this new tool?
Now consider the development of increasingly autonomous weapons systems. We are already seeing weapons that can perform some tasks on their own; drones, for example, can take off and land autonomously. While current technology may not yet be able to identify, select, and engage targets independently, such capabilities are being tested.
Ultimately, the current state of the technology should not limit the discussion of killer robots or the military uses of AI. As Musk has argued in previous statements, technological advances outstrip regulation. We need proactive global regulation stipulating that a human must always remain in control of a weapon system’s critical functions, above all the decision to take a human life.
Also needed is a broader discussion on the military and security implications of emerging technologies. We owe Musk thanks, at least, for bringing this issue to the world’s attention.
However, the critics have a point: Twitter soundbites won’t provide a solution. What is needed is a broader discussion among governments, civil society, and the tech community. One such opportunity is the upcoming session on lethal autonomous weapons at the United Nations Convention on Conventional Weapons in November. It should be used not only to voice concerns about the potential risks of emerging technologies, but also to develop regulations that protect the world from the gravest threats.