WASHINGTON — At first glance, images circulating online showing former President Donald Trump surrounded by groups of Black people smiling and laughing seem normal, but a closer look reveals otherwise.
Unusual lighting and overly precise details indicate they were all produced with artificial intelligence. The photos, which have not been linked to the Trump campaign, emerged as Trump seeks to win over Black voters, who polls indicate remain loyal to President Joe Biden.
The fabricated images, highlighted in a recent BBC investigation, lend weight to warnings that AI-generated visuals will only proliferate as the November general election nears. Experts said the images underscore the risk that any group, whether Latinos, women, or older male voters, could be targeted with realistic visuals intended to mislead and confuse, and they emphasized the need to regulate the technology.
In a report released this week, researchers at the nonprofit Center for Countering Digital Hate used several popular AI programs to demonstrate how easy it is to create convincing deepfakes that could deceive voters. The researchers generated images of Trump meeting with Russian operatives, Biden tampering with a ballot box, and armed militia members at polling places, even though many of the programs say they prohibit such content.
The center analyzed some of the recent deepfakes of Trump with Black voters and concluded that at least one originated as satire but is now being shared by Trump supporters as evidence of his support among Black voters.
Imran Ahmed, the center's founder and CEO, said social media platforms and AI companies must do more to protect users from AI's harmful effects.
“If a picture is worth a thousand words, then these dangerously susceptible image generators, combined with the poor content moderation efforts of mainstream social media, represent as powerful a tool for bad actors to mislead voters as we’ve ever seen,” Ahmed said. “This is a wake-up call for AI companies, social media platforms and lawmakers – act now or put American democracy at risk.”
The images raised concerns on both the right and left that they could deceive people about the former president’s support among African Americans. Some in Trump’s circle have expressed frustration at the circulation of the fake images, believing that the fabricated scenes undermine Republican outreach to Black voters.
“If you see a photo of Trump with Black folks and you don’t see it posted on an official campaign or surrogate page, it didn’t happen,” said Diante Johnson, president of the Black Conservative Federation. “It’s nonsensical to think that the Trump campaign would have to use AI to show his Black support.”
Experts anticipate additional attempts to use AI-generated deepfakes to target specific voter groups in crucial swing states, such as Latinos, women, Asian Americans, and older conservatives, or any other demographic that a campaign aims to attract, mislead, or intimidate. With numerous countries holding elections this year, the challenges posed by deepfakes are a worldwide problem.
In January, New Hampshire voters received a robocall that mimicked Biden's voice and falsely told them that voting in the state's primary would disqualify them from voting in the general election. A political consultant later acknowledged creating the robocall, which may be the first known attempt to use AI to interfere with a U.S. election.
According to a February study by researchers at Stanford University, such content can damage Black communities in particular, even when it isn't believed. Once people realize they can't trust the images they see online, they may begin to disregard legitimate sources of information.
The researchers wrote that as AI-generated content becomes more common and harder to distinguish from human-generated content, people may grow more skeptical and distrustful of the information they receive.
Even if it fails to fool large numbers of voters, AI-generated content about voting, candidates, and elections can make it harder for anyone to separate fact from fiction. It can lead people to dismiss legitimate sources of information, feeding a loss of trust that undermines faith in democracy and widens political division.
Although false claims about candidates and elections are nothing new, AI makes it faster, cheaper, and easier than ever to create lifelike images, video, and audio. Once posted to social media platforms like TikTok, Facebook, or X, AI deepfakes can reach millions of people before tech companies, government officials, or legitimate news outlets even know they exist.
Joe Paul, a business executive and advocate who has worked to expand digital access in communities of color, said that “AI simply accelerated and pressed fast forward on misinformation.” Black communities, he noted, often have a history of mistrust toward major institutions, including politics and the media, which makes them more skeptical both of public narratives about them and of the fact-checking meant to inform them.
Paul said digital literacy and critical thinking skills offer one defense against AI-generated misinformation. “The goal is to empower folks to critically evaluate the information that they encounter online. The ability to think critically is a lost art among all communities, not just Black communities.”