Prompts and Circumstances
Deepfakes and disinformation: Welcome to the world's first AI-influenced elections
From phony robocalls to forged videos, AI has become a new tool for political persuasion as billions of global voters head to the polls, with autocracy on the ballot.
NEW YORK — After a jury found Donald Trump guilty last week on 34 felony counts of falsifying business records as part of a scheme to influence the outcome of the 2016 presidential election, Bronze Age Pervert, the alter ego of Romanian-American social media influencer Costin Alamariu, tweeted a deepfake video depicting armed men storming buildings and gunning people down. Alamariu said the video portrayed the kind of “well-planned neutralization operation” that will take place if Trump gets reelected.
Trump’s MAGA base responded even more darkly to the felony conviction. Some Proud Boys chapters circulated another piece of AI-doctored media (since removed) showing President Joe Biden crouching behind a speaker’s podium to avoid a confrontation with a rifle-wielding man whose t-shirt was emblazoned with a single word: “War.”
But voters, don’t be fooled. It’s not just Trump’s faithful using AI-doctored media to rally support and recruit new followers. Many other politicians now running for office, in Britain and Romania, from Sri Lanka to Venezuela and beyond, are also experimenting with AI to gain a perceived political advantage in some of the world’s angriest courts of public opinion.
Is it working?
Some AI-powered campaigns are helping candidates gain unprecedented levels of speed, reach and influence among voters, while other AI-aided strategies have backfired. Results so far are mixed.
Just ask Polish Prime Minister Donald Tusk, whose campaign strategists posted a highly controversial deepfake audio clip of his opponent on social media during Poland’s recent election campaign. (It apparently helped: Tusk won.)
Or consider Claudia Sheinbaum, who just yesterday won a landslide victory in Mexico’s rough-and-tumble presidential election, becoming that country’s first female president. She won despite being the target of deepfake videos, voice clones, doctored photos, and sexist and misogynistic tropes created by rivals using AI to depict her as too conservative and unfit for the job. [“You’ll see that it’s my voice, but it’s a fraud,” Sheinbaum told local reporters after a deepfake of her supposedly pitching an investment scam went viral.]
So how much, dear readers, should you and I worry about AI’s influence during the 2024 elections in our towns, cities, states and countries of origin?
Don’t panic yet, says former Google CEO Eric Schmidt. But in a few years, Schmidt cautions, things might get tricky. “AI now has the ability to write programs,” he explains, “which means AI will be able to generate entire interest groups of people who don’t exist to support a shared cause or candidate—and won’t that be challenging?”
He smiles and nods. “Welcome to the brave, bombastic and evolving new world of political AI.”
Plunder and Progress
Deepfakes aren’t new. Most of us know by now that “deepfake” is the term for misleading (or completely fabricated) AI-generated content: faked videos, altered photos and voice-cloned audio clips, made and used mostly by nefarious actors to manipulate public opinion.
So what is new? This year, for the first time, AI is being deployed in elections around the globe, and some political operatives are using it to reach a new level of “nasty”: to diminish a rival’s reputation, to twist a candidate’s words into something politically damaging, or worse, to threaten violence and erode public trust. Campaigning for public office has always been tough. But for the very first time, we’re experiencing a global election year in which our new technology tools are elevating both the risk and the effectiveness of bad actors’ efforts to distort reality, to dissuade some local candidates from running, and, for some, to ignore the rule of law in the process.
“This summer, autocrats from Moscow to Beijing and Myanmar are challenging the global political order and are using AI to tighten their grip on power,” says Vivian Schiller, the former CEO of National Public Radio and now the Vice President and Executive Director of Aspen Digital, which empowers policymakers, civic organizations, companies, and the public to be responsible stewards of technology and media. “Those who wish to spread lies and disinformation have never had the tools to do this at the speed and scale that this year’s more sophisticated forms of AI are enabling.”
Adds Schmidt: “Emotion and powerful videos drive voting behavior, and (U.S. and other nations’) social media companies are weaponizing that because [most social media users] respond not so much to the content, but rather to the emotion.”
You and AI
Here’s an update on how AI might be influencing your vote and media consumption this year—and the political campaigns you might be following more closely this summer. [New Rule: Be ever-curious, but verify.]
Covert influence operations. OpenAI, the creator of ChatGPT and GPT-4o, announced Friday that it had shut down five covert influence operations, originating in Russia, China, Iran and Israel, which had been exploiting the company’s generative AI tools during the past three months to manipulate public and political opinion. A Russian campaign called Doppelganger was using OpenAI tools to build support in Western Europe for Russia’s war against Ukraine. China’s Spamouflage campaign was using AI to create social media posts disparaging critics of the Chinese government. A recent Israeli operation, nicknamed Zero Zeno and run by a commercial company in Israel, used OpenAI technologies to manage political campaigns and post anti-Islamic messages. “The content posted by these operations focused on Russia’s invasion of Ukraine, the conflict in Gaza, politics in Europe, the United States and criticism of the Chinese government by Chinese dissidents and foreign governments,” OpenAI acknowledged on its website.
Voice clones. AI-generated content, particularly voice-cloning technology, now poses “an alarming threat to the integrity of upcoming elections,” according to Attack of the Voice Clones, a new report released Friday by the Center for Countering Digital Hate. The Center says new audio-cloning tools make it easy to mimic a candidate’s voice and mislead voters; according to the Center’s report, “deepfake audio will inevitably be more widely weaponized to mislead voters this summer.” Clint Watts, head of the Microsoft Threat Analysis Center, says “audio is the medium people should worry about the most, as it’s far more difficult for people to discern clues that it’s fake.” President Biden’s voice has already been cloned by bad actors. A few years ago, this technology did not exist.
Global propaganda. The Economist, in an investigative report published in February, says autocracies are now using AI and other technologies “to export autocracy to their diasporas.” The UK-based publication said AI and other technologies can amplify individuals’ voices and their connections to their homeland, but governments are also starting to use AI “to monitor, intimidate and censor citizens … and to control ideas and large numbers of people abroad as well as at home.” In just the first few months of this year, AI technologies helped Russia create a fake interview that seemed to show a Ukrainian security official taking credit for a terrorist attack in Moscow. At the outset of the full-scale invasion of Ukraine, hackers uploaded another deepfake, this one a doctored video of Ukrainian President Volodymyr Zelensky urging the Ukrainian army to surrender: propaganda made for Russians living at home and abroad, and intended to destabilize Ukraine’s civil society.
Gender-focused attacks. During a recent panel on AI’s influence over this year’s global elections, sponsored by Columbia University’s School of International and Public Affairs, panelists said there has been an increase in the use of AI-generated deepfake videos to discourage female candidates, government officials and independent political journalists opposed to autocratic governments and policies. “If you’re a woman, gender disinformation is being used to pound you into silence if you’re in a position of power,” said Maria Ressa, a Nobel Peace Prize-winning journalist and now a distinguished fellow at SIPA. Vera Jourova, the European Commission vice president who helps oversee the EU’s rules on AI and disinformation, said she was the victim of a deepfake that used her likeness in a fake pornographic video meant to intimidate her over her policy decisions. “It didn’t work. It didn’t change how I do my job,” she said.
Deepfake detection. According to Jumio’s 2024 Online Identity Consumer Study of 8,077 adult consumers, split evenly across the United Kingdom, the United States, Singapore and Mexico, nearly three-quarters of respondents said they worry daily about being fooled by a deepfake. Only 15% of consumers said they have never encountered a deepfake video, audio clip or image, while 60% said they have encountered at least one deepfake during the past year. Among the four nations polled, worry about AI ran highest among consumers in Mexico (89%) and Singapore (88%), followed by those in the UK (57%) and the United States (55%). Men were more confident than women in their ability to spot a deepfake.
What now?
In some countries, there has been a surge of grassroots action and public-private partnerships to step up fact-checking, create new commissions focused on screening AI’s influence in campaigns, and hold federal hearings on the dangers of this new technology. [Also see Sam Gregory’s work and videos about deepfakes and solutions.] But efforts to regulate AI are mostly just getting started.
On the federal level in the United States, says Eric Schmidt, there isn’t much hope this year for legislative guardrails to deter election interference by bad actors ahead of the November 5th presidential election. Reforming Section 230 of the Communications Decency Act so that digital platforms can be held liable for distributing fakes “would be nice,” Schmidt says, but such a reform is unlikely to gain much support in a sharply divided Congress during a deeply contentious election year.
“Regardless,” says Schmidt, “these problems are not unsolvable. We are capable of solving these challenges, and we must. AI is complex technology but it’s not quantum physics. Our democracy, and our ability to nurture it to work better in the next election, will hang in the balance.”
What’s your take on the use of deepfakes in political campaigns? Have you seen any deepfakes worth sharing with us and our expanding NewRules community? Let us know!