Arizona Deepfake Lawsuit Tests Liability for Those Who Teach, Not Just Create
An Arizona civil suit alleges three Phoenix men built a dual-revenue scheme: selling AI-generated non-consensual intimate imagery and subscription courses teaching others to replicate it.
Three Phoenix men face a civil lawsuit in Arizona over allegations that they built a for-profit system for generating non-consensual AI imagery of real women — and then sold subscription courses teaching others to replicate it. According to Wired, the case spotlights a growing commercial infrastructure around AI-generated intimate content that exploits identifiable individuals without their consent or awareness.
The Dual-Revenue Business Model
Wired reports that defendants Jackson Webb, Lucas Webb, and Beau Schultz allegedly operated on two parallel revenue tracks. On the subscription platform Fanvue, they allegedly published synthetic intimate imagery of real women — composite images produced by feeding scraped social-media photos into an AI training pipeline. Separately, the suit alleges they monetized the methodology itself, charging $24.95 monthly via Whop for step-by-step courses branded as AI ModelForge. Those courses allegedly guided paying subscribers — named as 50 John Does in the complaint — through identifying targets, collecting their photos, and deploying a specialized AI synthesis tool to fabricate realistic likenesses.
Alleged Targeting by Design
One of three named plaintiffs, identified only as MG, was working service-industry jobs when she learned last summer that her likeness had been co-opted. She discovered her face superimposed on fabricated content being actively used to advertise the defendants' course business. According to Wired, the courses allegedly included guidance on selecting women who lacked the platform or resources to fight back — a detail that transforms what might appear to be opportunistic misuse into something more deliberate and systematic.
Why This Matters
The Arizona lawsuit arrives as U.S. legislators and state attorneys general are actively debating how to regulate AI-generated intimate imagery. What makes this case legally distinctive is its dual-liability theory: the defendants face exposure not only as alleged producers of harmful content but as alleged instructors who commercialized the harm method itself. If courts accept that framing, it could establish precedent for treating AI harm-as-a-service as independently actionable — an expansion beyond current platform-liability doctrines that have historically shielded intermediaries. The case may signal that, in the AI era, accountability will need to reach the teacher as well as the student.
Frequently Asked Questions
What is AI ModelForge and what does the lawsuit allege it did?
AI ModelForge is a platform that allegedly sold monthly subscription courses teaching users how to scrape real women's social-media photos and use AI tools to generate fabricated intimate imagery of those individuals.
What legal precedent could the Arizona deepfake lawsuit set?
If courts find the defendants liable as instructors who commercialized an AI harm methodology — not just as content producers — it could expand accountability beyond traditional platform-liability frameworks to include those who teach others to cause harm.