71% of the AI video generations on our platform were flagged as pornographic. This is the story of what happened when we gave every new user a free dollar.

We launched in January 2026 with a simple offering: one API, dozens of image and video models (Flux, Grok Imagine, Seedream, Nano Banana 2, the usual suspects). The users we expected were developers comparing outputs without juggling five SDKs. To drive early adoption, we did what the playbook says: remove friction. Give people a reason to try the thing.

So we gave every new user a free dollar.

The honeymoon

The first weeks were encouraging. Signups trickled in, then picked up to double-digit daily numbers by early February. People were generating images, exploring models, comparing outputs. We could see them bouncing between Flux Schnell and Grok Imagine, testing prompts, getting a feel for the routing. Exactly the developer behavior we’d hoped for.

Our request volume was climbing steadily. Things were working.

Then the free-credit numbers started telling a different story…

This is not developers

Let me put this delicately: the welcome grant wasn’t funding productivity workflows.

For the first two months, our own moderation was, frankly, naive. Luckily, our upstream providers had slightly more robust systems in place, and plenty of what slipped past ours ran straight into theirs.

The providers were catching things. About 9% of all upstream routing attempts were rejected by their safety systems. But the numbers varied wildly. One provider’s safety filter rejected a third of all attempts routed through it. Another blocked 12%. A third waved almost everything through.

And here’s the kicker: our routing engine has automatic failover. When Provider A rejects a request, the system tries Provider B, then C. It’s a feature we’re proud of. Resilience, redundancy, the whole pitch. But it also meant that a prompt rejected by three providers might still succeed on the fourth. The system would dutifully bounce a request from OpenAI (“Your request was rejected by the safety system”) to Replicate (“The input or output was flagged as sensitive”) and finally land on a provider that generated the image without complaint.
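
Stripped to its essence, the routing loop looked something like the sketch below. The provider names, the dispatch helper, and the error type are all illustrative rather than our actual code, but the control flow is the point: every failure, safety rejection or not, was just a reason to try the next provider.

```python
class ProviderError(Exception):
    """Any upstream failure: timeout, rate limit, or safety rejection."""

# Illustrative fallback order; the real chain depends on the model.
PROVIDERS = ["provider_a", "provider_b", "provider_c", "provider_d"]

def dispatch(provider: str, prompt: str) -> bytes:
    """Hypothetical helper: forward the prompt to one upstream provider."""
    raise NotImplementedError

def generate_with_failover(prompt: str) -> bytes:
    last_error: Exception | None = None
    for provider in PROVIDERS:
        try:
            return dispatch(provider, prompt)
        except ProviderError as err:
            last_error = err   # a safety block lands here like any outage...
            continue           # ...so the prompt cascades down the chain
    raise last_error or RuntimeError("no providers available")
```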

Nearly 5% of all successfully completed requests had been explicitly safety-blocked by at least one provider before succeeding elsewhere. Our resilience system, designed to protect users from downtime, was working overtime as an NSFW content delivery pipeline.

My personal favorite error message came from Vertex, Google’s enterprise AI endpoint, which apparently shares Gemini’s identity crisis: “Image generation failed: I’m just a language model and can’t help with that.” You’re not wrong, Vertex. You really can’t.

When the new filter landed

On March 16 we reworked our moderation setup, piping every input prompt through OpenAI’s moderation API before forwarding it to providers.
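
The gate itself is small. A minimal sketch, assuming the official openai Python SDK with an API key in the environment:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_allowed(prompt: str) -> bool:
    """Pre-flight check: every prompt passes through the moderation
    endpoint before any provider ever sees it."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    )
    verdict = result.results[0]
    # verdict.categories has the per-category breakdown if you need
    # more than a binary decision.
    return not verdict.flagged
```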

The numbers landed immediately.

One in four requests was blocked. 25%. Every single one for the same category: sexual. Not violence. Not hate speech. Not self-harm. Just sexual.

Unsurprisingly, image editing was a worse offender than image generation. Users would upload real photos and ask models to, shall we say, adjust the wardrobe. The editing endpoint’s moderation block rate ran more than 4x higher than the generation endpoint’s.

And video? 71% of video generation requests were blocked by moderation. Seven out of ten. The video endpoint was essentially an NSFW video factory with a thin veneer of legitimacy.

We looked at ourselves in the mirror. We’d built a media generation platform for developers. We’d attracted… well, not developers.

The $1 credit problem

Here’s the thing about giving away a dollar: it’s enough.

A single image generation on Flux Schnell costs about $0.003. On Grok Imagine, maybe $0.02. A dollar gets you somewhere between 50 and 300 images depending on the model. That’s a lot of, uh, output for someone with a specific goal in mind.

And users were efficient about it. Many burned through their entire dollar in a single session, some within hours of signing up. The typical pattern: generate as fast as possible, hit moderation blocks on some prompts, keep going on the rest until the balance hits zero. One user managed to spend exactly $0.99 across 144 requests in a single day, half of which were blocked by moderation. They didn’t waste a cent.

But they didn’t stop at one dollar

Here’s what we didn’t anticipate: they didn’t stop when the credit ran out.

A dollar gone? Make a new account. New email, new dollar, same prompts. Credit burned through again? Another account. Some users did this three, four, five times. And the more determined ones didn’t stop in single digits.

Meet the nokialumia* syndicate, our most prolific multi-account operator. Over five days in early April, a single person (or possibly a small group) created 21 accounts:

  • April 1: Seven Gmail accounts. nokialumia13095, nokialumia23095, through nokialumia73095.
  • April 3: Ten accounts on atomicmail.io. nokialumia through nokialumia9.
  • April 4-5: More atomicmail.io variants, plus Gmail dot-trick attempts.

The pattern: create account, get $1, generate images until the credit runs out at ~$0.99-$1.01, move on to the next account. Across all 21 accounts: over 1,200 requests, roughly a quarter blocked by moderation. Twenty-one dollars of free credit, methodically extracted.

When we caught the Gmail accounts, they pivoted to atomicmail.io. When we blocked that domain, they came back with dot-trick Gmail variants: nokialumia1.309.5@gmail.com, nokialumia1309.5@gmail.com. Same inbox, different account. Gmail silently ignores dots in the local part, so john.doe and j.o.h.n.d.o.e both land in the same inbox.

They weren’t the only ones. Another user created four accounts using nothing but dot rearrangements of the same Gmail address. Same inbox. Four free dollars.

And the nokialumia* operator wasn’t even alone in the April wave. The same burst brought accounts with handles like narutouzumaki*, bontekintol*, and kikubotoya*, all on atomicmail.io, all in the same 48-hour window. A small community had clearly discovered us.

That’s the kind of product-market fit you don’t want.

The email domain zoo

Trying to catch multi-accounters teaches you a lot about the email ecosystem. It’s fascinating how much infrastructure exists for creating disposable identities.

We saw hundreds of unique email domains across our signups. Here are some highlights from the long tail:

  • atomicmail.io and inbox.eu: disposable email services. Our biggest sources of fake signups. Tied for the lead.
  • kpl.ovh: a French hosting domain repurposed as disposable email.
  • denipl.com / denipl.net: same operator, two domains, more than a dozen combined accounts.
  • fxzig.com, sweatpopi.com, sharebot.net, nexafilm.com, marvetos.com: domains that exist for one purpose, and it isn’t legitimate communication.

Over three-quarters of all accounts used Gmail. Which sounds normal until you realize it’s partly because Gmail is the easiest to abuse. Dots are ignored, plus-addressing (+tag) creates unlimited aliases, and a single Google account can generate dozens of variations that all look like different addresses to our system.

Fighting back

So what do you do when your growth hack becomes someone else’s exploit? You build layers. Each one a response to a specific trick we’d seen in the wild:

Layer 1: Content moderation. OpenAI’s moderation API on every input prompt. Blocked requests are rejected before they ever touch a provider. This was about more than our users. To our upstream providers, all this traffic came from our API keys. We were starting to look like some unhinged entity generating wall-to-wall NSFW content across every model available.

Layer 2: Disposable email detection. We integrated with Emailable’s API to flag temporary and disposable email addresses at signup. This caught the obvious ones: atomicmail.io, inbox.eu, and the like.

Layer 3: Gmail alias normalization. We strip dots and plus-tags from Gmail addresses, and equivalent tricks from Outlook, Proton, and Fastmail. Then we check if the canonical inbox already received a welcome credit.
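
The canonicalization itself is a few lines per provider. A sketch covering the Gmail rules (a production version needs a maintained per-provider table):

```python
def canonical_email(address: str) -> str:
    """Collapse alias tricks so variants of one inbox map to one identity."""
    local, _, domain = address.strip().lower().partition("@")

    # Plus-addressing: 'user+anything' is the same inbox on Gmail,
    # Outlook, Proton, and Fastmail.
    local = local.split("+", 1)[0]

    # Gmail ignores dots in the local part, and googlemail.com is an
    # alias domain for gmail.com.
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")
        domain = "gmail.com"

    return f"{local}@{domain}"

# Both of these collapse to johndoe@gmail.com:
assert canonical_email("j.o.h.n.doe+promo@gmail.com") == canonical_email("johndoe@googlemail.com")
```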

Layer 4: Device fingerprinting. Using FingerprintJS, we capture a browser fingerprint at signup. If the same fingerprint shows up on a new account, no free dollar. This survives incognito mode and cookie clearing.

Layer 5: Spam heuristics. Keyboard-mash name detection (patterns like “ergreger” or names with suspiciously low character diversity), MX-record lookups to flag suspicious mail domains, low-score email addresses from our verification provider, and an admin-maintained blocklist of domains.
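
The character-diversity check, for instance, fits in a few lines. A sketch with an illustrative threshold (ours is tuned against real signup data):

```python
def looks_keyboard_mashed(name: str, min_diversity: float = 0.5) -> bool:
    """Flag names whose distinct-character ratio is suspiciously low:
    'ergreger' has 3 distinct letters over length 8 -> 0.375, flagged;
    'jonathan' has 6 over 8 -> 0.75, passes."""
    letters = [c for c in name.lower() if c.isalpha()]
    if len(letters) < 6:
        return False  # too short to judge fairly
    return len(set(letters)) / len(letters) < min_diversity
```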

This five-layer check runs asynchronously after every account creation. If any check fails, the welcome credit is withheld and our team gets a push notification.

The result: in recent weeks, one in six signups gets their welcome credit blocked. The fraud detection catches them before they can spend a single cent.

What we learned

If you offer free AI image generation, NSFW users will find you. Not in weeks. In days. That’s fine, honestly. People want to generate what they want to generate. But as a platform you need to decide what you facilitate, and you need that decision in place before launch. We ran for two months on naive moderation and upstream goodwill. The providers caught some of it, but not all, and not consistently. Centralized content moderation is day-one infrastructure. We treated it as something we could punt on.

Our resilience system bit us. Automatic failover is great for uptime, but it’s also great for finding the one provider in your stack that doesn’t reject a given prompt. A safety block from one provider should probably stop the request, not trigger a fallback. We had to rethink how safety rejections propagate through the routing chain.
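
In terms of the earlier failover sketch, the fix is to make safety rejections a distinct, terminal error class, so only transient failures trigger a fallback:

```python
class SafetyRejection(ProviderError):
    """The provider refused the prompt on content grounds."""

def generate_with_failover(prompt: str) -> bytes:
    last_error: Exception | None = None
    for provider in PROVIDERS:
        try:
            return dispatch(provider, prompt)
        except SafetyRejection:
            raise              # a content block is an answer, not an outage
        except ProviderError as err:
            last_error = err   # timeouts and rate limits still fail over
    raise last_error or RuntimeError("no providers available")
```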

Gmail dot-trick normalization isn’t optional; it’s table stakes. And even then, someone with multiple Google accounts can still create separate identities. The arms race never ends. We encountered hundreds of unique email domains in our signups, and the ratio of disposable services to legitimate providers tells you everything. Many of these domains exist for one purpose.

$1 is too much and not enough. Too much free value for multi-accounters, not enough for a real developer to meaningfully evaluate an API integration. We’re rethinking this.

And the abuse is coordinated. These aren’t random individuals stumbling across your service. They share it in communities, copy each other’s techniques, and iterate when you block them. The nokialumia* syndicate pivoted from Gmail to atomicmail.io to Gmail dot-tricks in the span of three days.

Where we are now

Our content moderation blocks about one in five requests in any given week. That number is stable. The multi-accounters who slip through our signup filters keep trying, and moderation keeps catching them.

We’re still giving the free dollar. The alternative, gating everything behind a credit card, would kill the “just try it” experience we’re going for. But we’ve accepted that some portion of our welcome credit budget is really a security research budget. Every wave teaches us something new about the creative lengths people will go to for free AI image generation.

The real lesson isn’t about NSFW content. People want to generate what they want to generate, and there are legitimate platforms for that. The lesson is about what happens when you remove friction from any system that produces something people want. Lower the barrier to zero, and you’ll find out exactly what people want to do with your product. Sometimes that’s build cool things. Sometimes it’s not what you had in mind.

We built a media generation platform for developers. The developers are coming. But the people who create twenty-one accounts in five days to squeeze out every last cent of free credit? They got here first. And they’re more agile than most startups we know.


* Usernames and handles marked with an asterisk have been changed to protect the privacy of the individuals involved. All numbers, timelines, and patterns are unchanged.

This all happened at lumenfall.ai. The free dollar is still there. We’re just watching a bit more carefully now.