AI Firms Should Adopt KYC Policies to Combat Misuse, Microsoft Exec Suggests
With reports that China and Russia are using AI to target Americans, Microsoft President Brad Smith told U.S. lawmakers today that Know Your Customer (KYC) policies—standard in traditional finance—could play a part in national security.
“We’ve been advocates for those,” Smith said. “So that if there is abuse of systems, the company that is offering the [AI] service knows who is doing it, and is in a better position to stop it from happening.”
Smith was testifying before the U.S. Senate Committee on the Judiciary about the potential dangers of artificial intelligence. During the hearing—which also included NVIDIA Chief Scientist and Senior Vice President of Research William Dally and Boston University law professor Woodrow Hartzog—Smith said KYC could help fight the misuse of artificial intelligence to spread misinformation and interfere in elections.
In response to a question about foreign interference from Senator Marsha Blackburn of Tennessee, Smith said that all companies developing AI technology should ensure that foreign governments do not use generative AI tools in this way.
Smith had previously urged lawmakers in May to move faster on AI rules and for companies to do more to better safeguard users and the technology—including endorsing a requirement that developers obtain a license before deploying artificial intelligence tools. In June, the United Nations expressed concern about the use of AI-generated deepfakes in conflict zones fueling hate.
Smith said that while Microsoft has observed “prolific activities” from China and Iran, the most globally active actor is Russia, which has spent billions on a worldwide influence operation. He told the senators those activities have grown since Russia invaded Ukraine in 2022.
“Part of it targets the United States,” Smith said. “I think their fundamental goal is to undermine public confidence in everything that the public cares about in the United States.”
Smith added that these activities are also seen in the South Pacific and Africa.
Know Your Customer (KYC) policies have been around in banking since the 1980s but became a hot-button issue in the Web3 space, primarily due to the varying amounts of data a customer must submit when opening an account. U.S.-based cryptocurrency exchanges—including Coinbase, Binance.US, Kraken, and Gemini—have KYC policies and take measures like geo-blocking users in restricted regions.
Smith’s endorsement of KYC policies for AI comes at a time when the technology has entered the mainstream, with hundreds of generative AI platforms available on mobile devices and desktop computers.
In addition to KYC measures, Smith also suggested using AI as a defensive tool to detect when the technology is being used, adding that Microsoft has invested heavily in that area.
The Microsoft executive added that policymakers must do their part.
“We’re seeking to be a voice with many others, that calls on governments to lift themselves to a higher standard, so that they’re not using this kind of technology to interfere in other countries, and especially in other countries’ elections,” Smith said.