AI Misinformation Could Rock the 2024 Elections—Here’s How OpenAI Plans to Fight It
With the threat artificial intelligence poses to democracy a top concern for policymakers and voters worldwide, OpenAI on Monday laid out its plan to help ensure transparency around AI-generated content and improve access to reliable voting information ahead of the 2024 elections.
Since the launch of GPT-4 in March 2023, generative AI and its potential misuse, including AI-generated deepfakes, have become central to the conversation around AI's meteoric rise. In 2024, AI-driven misinformation could have serious consequences amid prominent elections, including the U.S. presidential race.
“As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency,” OpenAI said in a blog post.
OpenAI added that it is “bringing together expertise from our safety systems, threat intelligence, legal, engineering, and policy teams to quickly investigate and address potential abuse.”
Snapshot of how we’re preparing for 2024’s worldwide elections:
• Working to prevent abuse, including misleading deepfakes
• Providing transparency on AI-generated content
• Improving access to authoritative voting information
https://t.co/qsysYy5l0L
— OpenAI (@OpenAI) January 15, 2024
In August, the U.S. Federal Election Commission said it would move forward with consideration of a petition to regulate deceptive AI-generated campaign ads, with FEC Commissioner Allen Dickerson saying, “There are serious First Amendment concerns lurking in the background of this effort.”
For U.S. users of ChatGPT, OpenAI said it will direct people to the non-partisan website CanIVote.org when asked “certain procedural election related questions.” The company said lessons from this work will inform its approach in other countries.
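OpenAI did not describe the mechanics of that redirect. As a rough illustration only, a system might match procedural questions against known patterns and surface the authoritative link before the model answers. In the Python sketch below, the keyword list and routing logic are hypothetical assumptions, not OpenAI's implementation:

# Hypothetical sketch of routing procedural election questions to an
# authoritative source, as OpenAI describes doing with CanIVote.org.
# The keyword list and routing logic are illustrative assumptions,
# not OpenAI's actual implementation.
PROCEDURAL_KEYWORDS = (
    "where do i vote", "polling place", "register to vote",
    "voter registration", "absentee ballot", "voting deadline",
)

def route_election_query(user_message: str) -> str | None:
    """Return a redirect for procedural election questions, else None."""
    text = user_message.lower()
    if any(keyword in text for keyword in PROCEDURAL_KEYWORDS):
        return ("For accurate information on U.S. voting procedures, "
                "see https://www.canivote.org.")
    return None  # otherwise, let the model answer normally

print(route_election_query("Where do I vote in Ohio?"))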
“We look forward to continuing to work with and learn from partners to anticipate and prevent potential abuse of our tools in the lead-up to this year’s global elections,” it added.
OpenAI also said it prevents developers from creating chatbots that pretend to be real people or institutions, such as government officials and offices. Applications that aim to deter people from voting, including by discouraging turnout or misrepresenting who is eligible to vote, are likewise banned.
Deepfakes, meaning fake images, videos, and audio created with generative AI, went viral last year; several featuring U.S. President Joe Biden, former President Donald Trump, and even Pope Francis spread widely on social media.
To stop its DALL-E 3 image generator from being used in deepfake campaigns, OpenAI said it will implement the Coalition for Content Provenance and Authenticity (C2PA) content credentials, which embed provenance metadata and add a mark or “icon” to an AI-generated image.
“We are also experimenting with a provenance classifier, a new tool for detecting images generated by DALL-E,” OpenAI said. “Our internal testing has shown promising early results, even where images have been subject to common types of modifications.”
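OpenAI has not released that classifier, but C2PA credentials themselves are embedded directly in image files and can be spotted with ordinary tools. The following Python sketch is purely illustrative, not OpenAI's or C2PA's official tooling: it performs a crude heuristic check for the container markers C2PA uses (a “caBX” chunk in PNG files, JUMBF boxes in JPEG files). Full verification, including signature validation, requires a dedicated tool such as the open-source c2patool.

# Heuristic check for an embedded C2PA ("Content Credentials") manifest.
# Illustrative sketch only: it detects the container markers C2PA uses,
# not the validity of the credentials themselves.
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    # PNG stores the C2PA manifest in a chunk typed "caBX"; JPEG embeds it
    # in JUMBF boxes (box type "jumb") inside APP11 segments.
    return b"caBX" in data or b"jumb" in data

if __name__ == "__main__":
    for image_path in sys.argv[1:]:
        status = "credentials found" if has_c2pa_manifest(image_path) else "none found"
        print(f"{image_path}: {status}")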
Last month, Pope Francis called on global leaders to adopt a binding international treaty to regulate AI.
“The inherent dignity of each human being and the fraternity that binds us together as members of the one human family must undergird the development of new technologies and serve as indisputable criteria for evaluating them before they are employed, so that digital progress can occur with due respect for justice and contribute to the cause of peace,” Francis said.
To curb misinformation, OpenAI said ChatGPT will begin giving users access to real-time news reporting globally, including attribution and links.
“Transparency around the origin of information and balance in news sources can help voters better assess information and decide for themselves what they can trust,” the company said.
Last summer, OpenAI donated $5 million to the American Journalism Project. A week earlier, it had inked a deal with the Associated Press giving the AI developer access to the global news outlet's archive of articles.
OpenAI’s comments about attribution in news reporting come as the company faces several copyright lawsuits, including one from The New York Times. In December, the Times sued OpenAI and Microsoft, OpenAI’s largest investor, alleging that millions of its articles were used to train ChatGPT without permission.
“OpenAI and Microsoft have built a business valued into the tens of billions of dollars by taking the combined works of humanity without permission,” the lawsuit said. “In training their models, Defendants reproduced copyrighted material to exploit precisely what the Copyright Act was designed to protect: the elements of protectable expression within them, like the style, word choice, and arrangement and presentation of facts.”
OpenAI has called The New York Times’ lawsuit “without merit,” alleging that the publication manipulated its prompts to make the chatbot generate responses resembling its articles.
Edited by Andrew Hayward