Meta’s Oversight Board Forces Removal of Fake Ronaldo AI Video
Meta’s independent Oversight Board has ordered the removal of a Facebook post featuring an AI-manipulated video of Brazilian football star Ronaldo Nazário. The clip, which promoted an online game called Plinko, used a badly synced voiceover to make it appear that Ronaldo was endorsing the app—something he never did.
The Board ruled that the post violated Meta’s fraud and spam policies. But it didn’t stop there: it also faulted the company for letting the video run as an ad in the first place. “Meta’s own policies ban using a celebrity’s image to trick people into clicking,” the Board said in its Thursday statement.
Why This Case Matters
This isn’t just about one fake video. It’s part of a bigger problem: AI-generated content that makes it look like people are saying or doing things they never agreed to. Scammers are getting better at producing it, and platforms like Facebook are struggling to keep up.
In this case, the video claimed users could earn more money playing Plinko than working regular jobs in Brazil. It racked up over 600,000 views before anyone took action. Even after being reported, Meta didn’t prioritize reviewing it. The user who flagged it had to escalate the issue twice before the Oversight Board finally stepped in.
Meta’s Deepfake Problem Isn’t New
Celebrity deepfakes have been a headache for Meta for a while now. Just last month, actress Jamie Lee Curtis called out Mark Zuckerberg on Instagram after an AI-generated ad used her face without permission. Meta took down the ad but left the original post up—a move that left a lot of people scratching their heads.
The Board noted that only certain teams at Meta have the authority to remove this kind of content, which may explain why so much of it slips through. It is urging the company to enforce its anti-fraud rules more consistently.
Lawmakers Are Starting to Pay Attention
Governments aren’t ignoring the issue, either. Back in May, former President Donald Trump signed the Take It Down Act, a bipartisan law that gives platforms 48 hours to remove non-consensual AI-generated images—especially deepfake pornography and other abusive content.
Ironically, Trump himself became a target this week. A viral deepfake appeared to show him proposing that dinosaurs patrol the U.S.-Mexico border. The clip was absurd, but it spread fast.
The Oversight Board’s decision is a small step, but it highlights just how hard this content is to police. AI keeps getting better, and the rules—both corporate and legal—are still playing catch-up.
