Meta has launched a legal offensive against a Hong Kong-based company accused of weaponizing artificial intelligence to create fake nude images of people without their consent. The lawsuit targets Joy Timeline HK Limited, the firm behind “CrushAI,” a controversial “nudify” app that digitally strips clothing from photos. Meta filed the suit in Hong Kong, where the company is based, seeking to bar it from advertising on platforms such as Facebook and Instagram.
“This abuse crosses every ethical boundary,” said Meta in an official statement.
“We will fight to protect our users from this violation.”
Meta emphasized the serious psychological and reputational harm caused by deepfake imagery, and said it has worked with external experts to strengthen its detection systems and block AI-generated explicit content.
CrushAI: A Digital Predator Disguised as a Tool
CrushAI markets itself as a tool for “visual enhancement.” In reality, it uses AI to digitally undress real people, the defining trait of a “nudify” app. Victims often have no idea their likeness has been used in such content. The app exploits Meta’s ad system by masking its intent behind innocuous imagery.
Meta stated that Joy Timeline made repeated attempts to bypass its ad review filters, despite the platform’s existing ban on “non-consensual intimate imagery.” Meta’s internal review revealed that thousands of related ads slipped through detection, reaching users across its platforms.
Researchers Slam Meta’s Delayed Response
Alexios Mantzarlis, who writes the Faked Up blog, criticized Meta’s lack of urgency.
“Even during Meta’s legal announcement, I found dozens of CrushAI ads still live,” Mantzarlis told the BBC.
“These tools thrive in silence. We need continuous pressure from media and researchers.”
Meta says it is responding more aggressively now. The company has begun sharing signals about rule-breaking apps with other tech companies through the Tech Coalition’s Lantern program.
More Than a Privacy Violation: A Human Rights Threat
The so-called “nudify” apps don’t just target celebrities. Anyone with a public photo could become a victim. Experts say this technology poses a clear threat to digital safety, especially for women, minors, and marginalized groups.
Minnesota lawmakers are already considering legislation to block these apps statewide, citing violations of consent and child safety laws.
Meta’s Message to Developers: “We Will Come After You”
Meta is sending a clear warning to rogue developers: Break the rules, face legal consequences. The company stated it won’t hesitate to pursue international litigation to stop the spread of exploitative AI tools.
“Our platform will not serve as a launchpad for digital abuse,” Meta added.
With this legal action, Meta seeks to set a precedent and stem a rising tide of unethical AI manipulation. As deepfake technology evolves rapidly, platforms like Meta are under immense pressure to respond faster and enforce their rules more forcefully.
The outcome of this lawsuit could shape how AI is governed globally in the age of synthetic media.