Andrew “Twiggy” Forrest, the Australian mining magnate and Fortescue founder, has launched a high-stakes legal offensive against Meta, alleging that his likeness was used in a series of deceptive investment advertisements. The litigation, centered on a $60 million claim, marks one of the most aggressive attempts by a private citizen to hold a social media giant accountable for the proliferation of AI-generated “deepfake” scams.
At the heart of the dispute is the systematic use of Forrest’s image and voice in fraudulent cryptocurrency schemes promoted via Facebook. These ads typically featured manipulated video clips of the billionaire urging users to invest in “secret” wealth-building platforms, leveraging his reputation as a successful entrepreneur to lure unsuspecting victims into financial traps.
The catalyst for the legal battle was a deeply personal realization. In 2019, the elder Forrest contacted his son, confused as to why the billionaire was suddenly promoting online crypto schemes. A family friend was reportedly on the verge of investing based on a video clip posted to Facebook. According to Simon Clarke, Forrest’s personal lawyer, the mining mogul’s reaction was immediate and visceral. “I could literally feel Andrew grab the phone and go, ‘Dad, it’s a fraud. Don’t let him do it’,” Clarke recalled.
The Mechanics of the Deepfake Deception
The ads in question are part of a broader, global trend of “celebrity endorsement scams.” Using generative AI, bad actors create hyper-realistic videos—known as deepfakes—that mimic the facial movements and vocal patterns of public figures. In Forrest’s case, these clips were designed to appear as authentic interviews or testimonials, promising guaranteed returns on cryptocurrency investments.
Forrest’s legal team argues that Meta’s failure to police these ads constitutes a systemic breach of duty. The core of the $60 million claim is not merely the theft of likeness, but the allegation that the platform’s advertising algorithms actively amplified these frauds, effectively acting as a distribution channel for criminal enterprises.
The impact of these scams extends far beyond the billionaire’s personal brand. Thousands of everyday users are targeted by these get-rich-quick schemes. Once a victim clicks the ad and provides their details, they are often lured into a sophisticated funnel where they are pressured to deposit funds into fraudulent accounts, only to find that their money has vanished and the “investment platform” never existed.
Timeline of the Legal Conflict
| Period | Event | Core Issue |
|---|---|---|
| 2019 | Initial Discovery | Family members identify deepfake ads of Andrew Forrest on Facebook. |
| 2020-2023 | Reporting Phase | Repeated attempts to have fraudulent ads removed via standard reporting tools. |
| 2024-2025 | Litigation Launch | Formal legal action initiated seeking damages for likeness theft and negligence. |
| 2026 | Current Status | Ongoing legal proceedings regarding Meta’s liability for ad-driven fraud. |
The Broader Battle Against Algorithmic Negligence
This case highlights a critical tension in the modern tech landscape: the responsibility of a platform versus the responsibility of the advertiser. Meta has historically maintained that it is a neutral conduit and that its Terms of Service place the onus on users to report content. However, Forrest’s legal strategy challenges this by arguing that the paid nature of these ads changes the dynamic.
Since Meta profits from the ad spend of these scammers, the argument suggests the company has a heightened duty of care to ensure the content is not fraudulent. This is particularly pressing given the rise of generative AI, which allows scammers to create convincing content at a scale that traditional human moderation cannot match.
Industry experts note that the “Forrest precedent” could fundamentally change how social media companies vet their advertisers. If a court finds Meta liable for the damages caused by these deepfakes, it could force a shift from “report-and-remove” to a “verify-before-publish” model for high-reach advertisements.
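The distinction between the two models can be made concrete with a short sketch. The Python below is purely illustrative: the function names, the `HIGH_REACH_THRESHOLD` value, and the verification flags are assumptions for exposition, not a description of Meta’s actual ad pipeline.

```python
from dataclasses import dataclass

@dataclass
class Ad:
    advertiser_id: str
    projected_reach: int
    features_public_figure: bool

# Hypothetical threshold: ads projected to reach many users get
# stricter treatment. The figure is illustrative, not from any policy.
HIGH_REACH_THRESHOLD = 100_000

def report_and_remove(ad: Ad, user_reports: int) -> str:
    """Status-quo model: the ad runs by default, regardless of its
    attributes, and is only reviewed after users flag it."""
    if user_reports > 0:
        return "queued_for_review"  # harm may already have occurred
    return "running"

def verify_before_publish(ad: Ad, advertiser_verified: bool,
                          likeness_cleared: bool) -> str:
    """Proposed model: high-reach ads, or ads featuring a public
    figure, must pass identity and likeness checks before going live."""
    needs_vetting = (ad.projected_reach >= HIGH_REACH_THRESHOLD
                     or ad.features_public_figure)
    if needs_vetting and not (advertiser_verified and likeness_cleared):
        return "blocked_pending_verification"
    return "approved"
```

Under the second model, the cost of vetting falls on the platform before the ad earns revenue, which is precisely the shift the lawsuit seeks to compel.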
Who is affected by these scams?
- Retail Investors: Individuals seeking financial growth who lack the tools to distinguish deepfakes from authentic footage.
- Public Figures: Celebrities and business leaders whose reputations are weaponized to lend credibility to frauds.
- The Tech Ecosystem: Platforms facing increasing regulatory pressure to implement stricter AI-detection guardrails.
What This Means for AI Regulation
The case arrives at a pivotal moment for AI governance. With the introduction of the EU Digital Services Act and similar discussions in other jurisdictions, there is a growing global consensus that platforms must be more transparent about how AI content is flagged and removed.

Forrest’s approach is not just about recovering money or protecting a name; it is an attempt to use the courtroom to force a systemic change in how AI-generated fraud is handled. By targeting the platform’s revenue stream (the advertising engine), the lawsuit strikes at the primary incentive that allows these scams to persist.
Disclaimer: This article is for informational purposes only and does not constitute legal or financial advice.
The legal battle continues as both parties navigate the complexities of digital identity and platform liability. The next critical checkpoint will be the upcoming court filings regarding the discovery of Meta’s internal moderation logs, which may reveal how many similar deepfake campaigns were flagged but not removed.
We want to hear from you. Have you encountered AI-generated investment scams in your feed? Share your experience in the comments below, or pass this article along to help others stay vigilant.
