The rapid establishment of AI Safety Institutes by Western governments has overlooked the governance of military AI use, despite the growing potential for serious safety risks.
What Happened: Marietje Schaake, International Policy Director at Stanford University's Cyber Policy Center and Special Adviser to the European Commission, raised concerns about the lack of governance over military AI use in an op-ed published in the Financial Times on Tuesday.
According to Schaake, the U.K., U.S., Japan, and Canada have announced AI Safety Institutes, and the U.S. Department of Homeland Security recently introduced an AI Safety and Security Board. However, none of these bodies oversees the military application of AI.
Schaake also highlighted that, boosted by venture capital, defense tech is flourishing at an unregulated pace.
“But though it's easy to point the finger at private companies who hype AI for warfare purposes, it is governments who have let the ‘deftech’ sector escape their oversight,” she added.
Schaake pointed out that AI safety risks are already evident on the modern battlefield. For instance, the Israel Defense Forces reportedly used an AI-enabled program, Lavender, to identify targets for drone attacks, resulting in considerable collateral ...