Two-faced
AI lets you create a digital body double of yourself, and others—for better or worse
As AI continues to evolve, there’s been a digital rush by marketers to create a new wave of face, voice and body doubles and AI-powered memes to boost their influence. Is it helping or hurting us?
Just ask Kamala Harris.
Last week, Elon Musk, a Donald Trump supporter, re-posted a deepfake video of the Vice President on X (formerly Twitter, the social media platform he now owns).
In this video, Harris’s voice has been manipulated to call Joe Biden “senile,” to describe herself as a “diversity hire” and to say, while laughing loudly, that she doesn’t know “the first thing about running a country.” In reality, it was Trump, not Harris, who said these things about his new rival for the White House during one of his recent MAGA rallies.
Caught in a Catch-22
Late last week, conservative podcaster Chris Kohls told Scripps News he made the Harris deepfake from one of Harris’s first campaign videos, but as a legal parody. Musk then re-posted it on X without the required label identifying it as a fake. Musk has since come under heavy pressure to remove the video because it violates his own rules on the platform. While those tracking Harris would likely know she never said what Kohls made her say using audio-synthesizing tools, the deepfake remains on X and has now been seen by more than 150 million people worldwide.
“This is one of the most prominent political deepfakes to circulate so far this year, or any time since AI began being used by the public just two years ago,” says Lisa Gilbert, co-president of the nonprofit consumer advocacy organization Public Citizen. “Its intent? To discourage support for Harris among undecided voters, now that early polls are showing Harris is rapidly moving ahead of Trump in some swing states. Allowing these kinds of deepfakes to go viral, regardless of political affiliation, undermines the integrity of our elections.”
Maybe so—but don’t expect government regulators or digital marketing leaders to run to the rescue—at least not any time soon.
“Parodies” of public figures are protected by the First Amendment. Kohls labeled his Harris video a parody before posting it. Musk removed that label before sharing it on X and has kept the video there, on the platform he owns, as a way to drum up negative impressions of Harris among voters not yet familiar with her policies, personality or appeal.
Says Scripps’ disinformation reporter Liz Landers: “Unlike in other democracies like France or the UK, the First Amendment in the United States protects our ability to have free speech, whether that free speech is true or not.”
There’s also another reason the government and business interests are holding back.
The Doppelganger Game
In recent months, a powerful new digital marketing sector has emerged that specializes in using AI to help everyday people, presidential candidates and multi-million-dollar businesses do more than overcome a deepfake. It lets them affordably create “digital doppelgangers” to fight back, and to polish the brand face and voice of new entrants to business and politics so they can navigate today’s high-speed, data-rich, high-influence social media world, 24/7.
“If these new AI-powered deepfakes and memes on social media also keep generating higher degrees of support and pushback and participation from younger voters who favor Kamala, it’s totally not a bad thing,” says Stephanie Cutter, formerly the deputy campaign manager for Barack Obama and now in charge of strategic messaging for Harris. “So far, ironically, the media battles have all been working in her favor, or made to do so based on her campaign’s knowledge of social media—and the knowledge of younger and more media-savvy voters who support her.”
In some AI-savvy marketing agencies they’re calling it the new “doppelganger game,” after the German word for a biologically unrelated lookalike or double of a living person. These agencies are also mining their data to redefine what sells best and what persuades people most deeply in today’s fast-changing marketing environments, both at high speed and over time.
Pre-Internet, the doppelganger was the stuff of legend; in ancient times, doppelgangers were menacing mythical spirits or spiritual saviors. The belief in a spirit double, an exact but usually invisible replica of a person, is ancient and widespread. In mythology, to meet one’s double was a sign that one’s death was imminent, “but there’s not much in today’s new AI marketing strategies so far that can’t be successfully avoided,” Cutter says. These days, marketers say, AI-driven doppelgangers can spook political opponents effectively, are becoming more affordable and, for some, are getting easier to make and use: a new kind of cost-cutting, image-polishing asset that companies, campaign strategists and studios find hard to ignore.
Here are a few early doppelganger strategies business and political marketing agencies have already begun rolling out this year:
Personal assistants. Businesses are now embracing the idea of having real people create digital avatars of themselves, and of building doppelganger doubles of key leaders online. Meta is working on technology called Creator A.I., which will let real Instagram influencers create digital AI versions of themselves to reduce their workload interacting with fans through direct messages, videos and comments: chatbots, in effect, but with bodies and familiar faces people can see and interact with intelligently.
Body doubles. Ernst & Young Global Limited (EY), a consulting firm with a focus on AI strategies, now creates digital-human doubles of its partners for use in video clips. Channel 1, a start-up billed as the world’s first AI-generated news channel and scheduled to formally launch early next year, will feature a mix of generative-AI news anchors and human doubles stationed in the field to interview news sources, gather video content and script stories for their digital body doubles to read in any one of 25 languages during each news segment. “We’re aiming to have the biggest cache of personalized news on the planet,” says founder Adam Mosam. “We’re building a new personalized national news network, powered by generative AI.”
Political marketing. U.S. intelligence officials, who briefed reporters last week in Washington after the discovery of the Harris deepfake on X, said there is now a bona fide international deepfake political industry operating globally. Formerly the stuff of spy agencies, deepfakes are now mostly the product of commercial marketing and PR firms, including the Social Design Agency in Moscow and others based abroad, which create AI-powered marketing to influence U.S. politics by sending information to the voters their algorithmic profiles deem most likely to accept it. “China, Russia, and Iran continue to be the top 3 countries that want to influence US politics and policy and the outcome of this year’s presidential election,” says Scripps’ Landers. “But these countries are now using commercial firms, like marketing firms, like PR firms, as this stuff needs to be distributed” by the ecosystem of big-brand, multinational social networks and cannot, therefore, be done in total secrecy. Nicco Mele, a recognized expert at the intersection of news, technology and politics, says “everybody is into AI-driven marketing these days. AI’s bad actors can be fought successfully many times by AI’s good actors: marketers who are leading the experimentation around the new tools capable of reshaping the culture and nature of marketing to navigate the future.”
The Fight for What’s Next
For political deepfakes, says Public Citizen’s Gilbert, “AI is starting to confirm what’s possible” and to catalyze a new wave of experimentation in the way business, government, and now politicians, use media, including body doubles, to help them communicate in and with a digital world. Joe Biden, by contrast, wasn’t using any of it. “Today, marketing is being completely redefined by AI,” Gilbert says.
“Deepfakes can be bad, but lately response memes about them, catalyzed organically, can raise awareness quickly and invite the engagement and participation of literally millions in someone’s campaign.”
Katherine Haenschen, a marketing professor at Northeastern University, said in a recent interview with the BBC that “if you’re digitally savvy, your marketing team or digital supporters can just take (a negative meme or deepfake) and make it your thing, and then you’ve taken the power away from (those making deepfakes about you) to limit your influence.”
Earlier this month, Taylor Lorenz, who covers social networks and disinformation as a Los Angeles-based columnist for the Washington Post, cited Harris’s team’s ability to turn some of her awkward comments into posts portraying her as likeable and relatable. She compared it to the rise of what some have called “meme stocks,” a phrase for shares in publicly traded companies that soar on virality unrelated to their actual financial performance and driven instead by the influence of a cult-like following on social media. “The Harris phenomenon is absurd and somewhat ironic,” Lorenz says, but has “generated momentum that could have lasting results and boost her standing among Democratic, Republican and swing voters alike.”
As long as candidates use organic memes to call out otherwise hard-to-expose deepfakes and deflect attacks in this first AI-influenced general election, candidates with the power to win at the ballot box can also win on social media.
The media world is changing, and so far Harris has a lot more firepower than Biden did, and possibly even more than Trump. At least for the moment.
The future of AI will be messy, but it just might also be far brighter than it seems.
Got yourself a digital doppelganger? Let us know. Share what you think with the rest of us!