Meta Sues Chinese App Maker for Deepfake AI App

Published June 13, 2025 by TNJ Staff

Meta has taken legal action against the developer of CrushAI, a Hong Kong-based application that uses artificial intelligence to create sexually explicit deepfakes. The technology giant sued the developer in Hong Kong on Thursday, accusing it of breaching Meta's advertising policies and facilitating the spread of non-consensual intimate images.

CrushAI, which also operates under other names such as Crushmate, reportedly ran more than 87,000 ads on Meta platforms, all in violation of company policies. The ads targeted users in the United States, Canada, Australia, Germany, and the United Kingdom.

Meta’s lawsuit alleges that the app’s developer, Joy Timeline HK Limited, used at least 170 impostor business accounts and 135 Facebook pages to run its ads. More than 55 people reportedly managed these pages, which promoted sexually explicit tools that let users “strip photos” with AI.

Targeting Without Consent

The tone of these advertisements is jarring. Phrases such as “erase any clothes on girls” and “upload a photo to strip for a minute” were typical of the promotions. The practice, often called “nudifying,” produces AI-generated images that depict real people nude or in sexual poses.

The litigation comes amid mounting public pressure on Meta to act. Media reports this year revealed that CrushAI drew 90% of its traffic from Meta’s platforms, despite the company’s strict ad policies against such material. Reporting by outlets such as Faked Up and 404 Media fueled the criticism, particularly after CBS News discovered hundreds more ads of this kind on Facebook and Instagram.

Political and Legal Pressure Mounts

U.S. lawmakers are now demanding answers. Senator Dick Durbin wrote to Meta CEO Mark Zuckerberg asking why these ads were allowed to circulate and what the company is doing to prevent them.

Timing is everything. Only last month, the Take It Down Act was signed into law, criminalizing the distribution of non-consensual deepfakes and requiring tech platforms to remove them quickly. The law also gives deepfake victims the right to sue perpetrators, a shift likely to redefine how Meta and other platforms enforce policy.

Meta’s Struggles With Enforcement

Even with ad review systems in place, Meta has acknowledged gaps in its enforcement. The suit alleges that the CrushAI team used neutral images and rotating domain names to conceal the ads’ true intent and evade detection.

Some ads directly promoted the app’s illicit features, with lines like “Ever wish you could erase someone’s clothes?” and “Amazing! This software can erase any clothes.” That such explicit promotions slipped through Meta’s automated systems exposes serious weaknesses.

New Tech and Industry Collaboration

Meta maintains that it is pushing back. The company says it has developed new automated moderation tools that detect the keywords, emojis, and coded language used by such ad networks. These tools can now flag suspicious ads even when they contain no nudity.

In addition, Meta has joined Lantern, a cross-platform data-sharing program run by the Tech Coalition. Through Lantern, tech platforms share information about abusive or unlawful services such as nudifying apps, with the aim of shutting them down across all platforms.

Meta added that it has already spent $289,000 investigating and addressing CrushAI. The company is also seeking damages and court orders barring the app’s developer from ever accessing its platforms again.

Continuing Risks and a Call to Action

The case points to a larger problem in the tech industry: AI is evolving faster than content moderation infrastructure. Meta has acknowledged that this is an “adversarial space” in which profit-driven developers keep finding loopholes.

Back in the spring, Zuckerberg said Meta would focus its moderation systems primarily on “high-severity” material such as terrorism and child exploitation. But experts caution that pulling back enforcement on lower-profile threats such as deepfakes opens the door to wide-scale abuse.

Conclusion 

The Meta vs. CrushAI case sets a precedent for how future tech transgressions will be addressed. And it sends a clear message: non-consensual deepfakes are not merely immoral; they are illegal and costly.

As AI-powered image manipulation becomes ever more ubiquitous, technology companies face growing pressure to act faster, smarter, and more ethically. This lawsuit may be just the tip of the iceberg.

