AI-Generated Porn Is Here — And Nobody Has a Plan

I’ve been diving deep into AI image generation over the past few months. I’ve written about AI art, loaded up credits on generators, and played around with what’s possible. The creative potential is genuinely exciting. But there’s a side of this technology that’s developing just as fast — and it’s a LOT harder to talk about.

AI is getting really, really good at generating explicit content. And we are absolutely not prepared for what that means.

The Tech Is Moving Faster Than the Rules

Earlier this year, TechCrunch published a pretty thorough piece on how AI pornography generation has advanced. I spent some time going through the technical breakdowns — diffusion models, fine-tuning on specific datasets, the ability to generate photorealistic images of people who don’t exist (or worse, people who DO exist). Some of the technical architecture behind this is genuinely complex. I’m not going to pretend I followed every layer of the model pipelines on the first pass. But the high-level takeaway is clear enough: the barrier to creating convincing fake explicit imagery has dropped to basically zero.

That’s the part that should concern everyone.

A year ago, generating a convincing fake image required real skill — Photoshop expertise, hours of work, and even then the results were often obviously manipulated. Now? Someone with a laptop and a few hours of tutorials can fine-tune a model on a handful of photos and generate explicit imagery of a real person. The implications of that are staggering.

This Isn’t Just About Porn

Let me be direct — I’m not here to moralize about adult content between consenting adults. That’s a separate conversation. What I’m talking about is non-consensual imagery. Deepfakes. Revenge content created from someone’s Instagram photos. Blackmail material that never actually happened but looks completely real.

The legal framework for this is basically nonexistent in most places. A few states have started passing laws around deepfake pornography, but enforcement is a nightmare. How do you prove an AI-generated image was created by a specific person? How do you get it taken down when it can be regenerated in seconds? How do you put that genie back in the bottle once the model weights are shared publicly?

You don’t. That’s the honest answer right now.

The Platform Problem

Here’s what I keep coming back to: the major AI image platforms — Stable Diffusion, DALL-E, Midjourney — all have content policies that restrict explicit generation. Fine. But Stable Diffusion is open source. The model weights are out there. Anyone can download them, remove the safety filters, and fine-tune on whatever dataset they want. No platform policies apply.

This is fundamentally different from previous content moderation challenges. With social media, you could at least go after the distribution platforms. With AI generation, the CREATION tool itself is decentralized. There’s no single chokepoint to regulate.

I’ve been pretty bullish on open-source AI development generally. I still am. But I think anyone being honest about this space has to acknowledge that the open-source approach creates a unique challenge when it comes to abuse prevention. You can’t have both “anyone can run this model locally with no restrictions” and “we’ll prevent harmful content generation.” Those two things are fundamentally incompatible.

What Actually Happens Next

I think we’re going to see a few things play out over the next year or two:

  1. A major public incident — some high-profile case of AI-generated explicit content that forces mainstream attention on the issue. This is probably inevitable.

  2. Rushed legislation — lawmakers who don’t fully understand the technology will try to write laws around it. Some will be well-intentioned but technically unworkable. Some will be used as a trojan horse for broader AI restrictions that have nothing to do with protecting people.

  3. Detection arms races — companies will build tools to detect AI-generated content. Generators will get better at evading detection. This cycle will repeat indefinitely.

  4. Platform liability debates — we’ll rehash the Section 230 conversation AGAIN, but now with AI generation in the mix.

What I DON’T think will happen is a clean solution. This is one of those problems where the technology has outpaced society’s ability to respond, and we’re going to be playing catch-up for years.

The Bigger Picture

I keep thinking about this in the context of everything else I’ve been exploring with AI creativity tools. The same diffusion models that generate stunning artwork, the same technology I was genuinely excited about a few weeks ago — it’s the exact same tech being used for this. You can’t separate them. The capability is neutral. The application is where the ethics live.

That’s what makes this so hard. You can’t ban the math. You can’t un-publish the research papers. And the people building these tools for legitimate creative purposes vastly outnumber those abusing them — but the harm from abuse is disproportionately devastating to the individuals targeted.

I don’t have a neat conclusion here. I’m genuinely uncertain about the right policy approach, and I’m skeptical of anyone who claims they’ve figured it out. But I do think we need to be having this conversation NOW — openly, technically, and honestly — before we’re reacting to a crisis instead of trying to get ahead of one.

If you’re working in AI, in policy, in platform moderation — this is going to land on your desk whether you’re ready or not. Might be worth thinking about it before it does.


Robertson Price

Serial entrepreneur who has built and exited multiple internet companies over 25 years — from search (iWon.com, $750M acquisition) to content networks (32M monthly visitors) to e-commerce (Rebates.com). He now builds enterprise AI infrastructure at Ragu.AI.