Meta Removes AI Deepfake Ad After Jamie Lee Curtis Publicly Shames Mark Zuckerberg
By [Mr Author], Crypto Investor
Meta has removed an AI-generated advertisement that used the unauthorized likeness of Hollywood actor Jamie Lee Curtis, after she publicly called out the company’s CEO, Mark Zuckerberg, over the misuse.
The incident marks the latest controversy in a growing debate about artificial intelligence, digital consent, and the exploitation of celebrity identities in online advertising.
Curtis Confronts Zuckerberg Over Deepfake Misuse
Curtis, an Academy Award-winning actor best known for her roles in Halloween and Everything Everywhere All at Once, took to Instagram on Monday to demand the ad’s removal, after her private attempts to reach both Meta and Zuckerberg reportedly went unanswered.
“It’s come to this @zuck,” she wrote in an Instagram post, directly tagging the Meta CEO. “This (MIS)use of my images… with new, fake words put in my mouth, diminishes my opportunities to actually speak my truth.”
The AI-generated ad in question, which Curtis did not name, reportedly repurposed footage from a past MSNBC interview she gave during the Los Angeles wildfires.
Using generative AI, her voice and image were manipulated to appear as though she was endorsing a product she had no connection to.
By late afternoon, Curtis confirmed the ad had been taken down, thanking her followers for their support.
“IT WORKED! YAY INTERNET! SHAME HAS IT’S VALUE!” she posted.
While Meta did not issue a public statement on the matter, company representatives confirmed to several media outlets that the advertisement had been removed.

AI Deepfakes Under Increasing Scrutiny
Curtis’s public appeal adds to a growing wave of concern about the dangers posed by AI deepfakes: synthetic media created with artificial intelligence that can convincingly replicate a person’s appearance and voice.
In recent months, unauthorized AI depictions of public figures have triggered legal challenges, ethical debates, and public outrage.
In February, Israeli AI artist Ori Bejerano sparked controversy after releasing a video featuring AI-generated versions of Scarlett Johansson, Woody Allen, and OpenAI CEO Sam Altman.
The video was created in response to a controversial Super Bowl advert by Kanye West and depicted fabricated versions of the celebrities wearing parody t-shirts bearing Stars of David, a critique of West’s imagery.
Johansson condemned the video, saying:
“I have no tolerance for antisemitism or hate speech, but I also firmly believe that the potential for hate speech multiplied by A.I. is a far greater threat than any one person who takes accountability for it.”
Even the January 2025 Los Angeles wildfires, the backdrop for Curtis’s original MSNBC interview, have become targets for AI-powered disinformation.
Fabricated images claiming to show the iconic Hollywood Sign engulfed in flames and scenes of looting spread widely on the social media platform X, formerly Twitter.
Officials and fact-checkers were forced to intervene, confirming the images were entirely false.
An Escalating Challenge for Tech Platforms
The incident underscores the mounting challenges faced by social media platforms in policing AI-generated content.
As synthetic media tools become more accessible, experts warn that the spread of deepfakes, particularly those targeting high-profile individuals, poses serious risks to public trust, reputational safety, and democratic discourse.
Curtis’s successful campaign to have the ad removed demonstrates both the growing public awareness of deepfake misuse and the need for tech platforms to act more swiftly in addressing cases of digital impersonation.
For now, the battle over AI-generated deepfakes continues, as lawmakers, rights advocates, and industry leaders debate how best to regulate this rapidly evolving technology.