Role of AI on the Battlefield Debated as Putin Stakes New Policy Position
As world leaders band together to shape shared policies around the development of artificial intelligence, some policymakers are looking to leverage the technology on the battlefields of the future. Among them is Russian President Vladimir Putin.
The West should not be allowed to monopolize AI, Putin said at a recent conference, signaling that he would advance an ambitious Russian AI strategy, according to a Friday report by Reuters.
“In the very near future, as one of the first steps, a presidential decree will be signed, and a new version of the national strategy for the development of artificial intelligence will be approved,” Putin said during the Artificial Intelligence Journey conference.
The competition among Microsoft, Google, and Amazon to bring more advanced AI to the masses has been compared to a nuclear arms race, even as an actual AI arms race is unfolding between the United States and China. On that front, top U.S. military contractors—including Lockheed Martin, General Dynamics, and Raytheon—are developing AI tech for military operations.
Another company working on combat AI is San Diego-based Shield AI, recently featured in the Netflix documentary Unknown: Killer Robots.
Shield AI is an American aerospace and defense technology company founded in 2015 by brothers Brandon Tseng and Ryan Tseng, along with Andrew Reiter. Shield AI is responsible for the Nova line of unmanned aerial vehicles (UAVs) that the U.S. military already uses in urban environments where GPS or radio frequencies are unavailable.
Meet the Company Building AI Fighter Jets for the U.S. Military
While automated war machines may conjure visions of the T-800 from the Terminator series, Shield AI Director of Engineering Willie Logan says the goal of bringing AI to the battlefield is about saving lives.
“The success of Nova is you could push a button and go explore that building, and Nova would go fly into that building, and it would go into a room, spin around 360 degrees, perceive the environment, and make decisions based on what to do and then continue to explore,” Logan told Decrypt. “The whole goal of that was to provide [soldiers] on the ground insights into what was in the building before they had to walk in themselves.”
Shield AI calls its AI software “the hivemind.” As Logan explained, the difference between an AI-powered UAV and one guided by humans is that instead of a human telling the UAV how to fly and waiting for the operator to identify a target, the AI is programmed to look for the target on its own and then monitor the object once it’s discovered.
In addition to adding AI brains to drones, Shield AI partnered with defense contractor Kratos Defense to add an AI pilot to Kratos’ unmanned XQ-58A Valkyrie fighter jet. In October, Shield AI announced it had raised $200 million in new investment, giving the company a $2.7 billion valuation.
The Pentagon Is Accelerating AI and Autonomous Technology
The U.S. military has invested heavily in leveraging AI, including generative AI models that conduct virtual military operations based on military documents fed into them.
In August, Deputy Secretary of Defense Kathleen Hicks unveiled the Pentagon’s Replicator initiative, which aims to “field attritable autonomous systems at scale of multiple thousands, in multiple domains, within the next 18 to 24 months.”
Others developing battlefield AI include European defense AI developer Helsing, which announced $223 million in Series B funding in September, with backing from Swedish aerospace and defense company Saab, maker of the Gripen fighter jet.
Logan said that while the idea of killer robots may be good for a Hollywood blockbuster, AI is about keeping soldiers out of harm’s way while keeping humans in the loop.
“I really highlight the shield part of Shield AI,” Logan said. “By giving the United States this capability, [Shield AI] is providing a deterrence.” Logan cautioned that even if the United States said it won’t develop AI tools for war, that does not mean other countries won’t.
“I think if we can be in the forefront of it and design it in a way that we think is the right way for the world to use this,” Logan said, “we can help deter bad actors from doing it the wrong way.”