TikTok allegedly turned Night in the Woods ads into racist, sexualized AI versions


Why this story caught my attention

This isn’t a dry corporate spat – it’s a staggeringly bad example of a platform using partner creative as raw material for generative AI experiments, and the results allegedly included racist and sexualized depictions of characters from indie games. As someone who follows indies and platform policy, the thing that grabbed me here is the power imbalance: a tiny publisher’s carefully crafted art ended up morphed into abusive content by an ad platform that partners with it.

  • Finji says TikTok used generative-AI ad features to create unapproved variants of its paid ads – including a sexualized, racialized version of Usual June.
  • Finji alleges TikTok’s Smart Creative / Automated Creative features produced slideshows the publisher couldn’t view or edit, and that support responses were evasive.
  • After initial denials, TikTok acknowledged the issue but apparently offered only an opt-out and no clear remediation, raising questions about partner protections and IP consent.

What Finji is alleging – the core claim

Indie publisher Finji — known for Night in the Woods and Tunic, and publisher of the 2025 title Usual June — says TikTok’s generative-AI ad tools turned its official creatives into new slideshow-style ads that it never approved. Multiple outlets reporting the story (IGN, GamesIndustry.biz and TechRaptor) say Finji’s CEO Rebekah Saltsman posted screenshots of the altered creatives and the company’s direct communications with TikTok Ads Support.

The most damning example: Finji claims one variant sexualized and racialized Usual June, the game's Black protagonist. According to coverage, the publisher could not see or edit these AI-generated variants in its ad interface, and when it reached out to support, TikTok initially denied any AI alteration before later acknowledging the automated program's role while offering only limited steps to opt out.

Why this matters beyond one ad campaign

There are two big problems here. First: intellectual property and consent. Publishers pay for ad placements and expect control over how their IP is presented. If ad platforms can ingest partner creatives and spit out new variants — especially ones that distort characters into racist or sexualized content — that’s a direct harm to creators and their audiences.

Screenshot from Night in the Woods

Second: accountability and transparency. Finji’s story suggests TikTok’s tools can produce variants that advertisers can’t inspect or edit, and that support channels may not be equipped to handle sensitive misuse. That’s not a minor UX bug — it’s a content-moderation failure with reputational and potentially legal consequences.

How TikTok responded — and why Finji wasn’t satisfied

Reporting indicates TikTok’s Ads Support initially denied that AI was used, then later acknowledged automated ad-generation features might have produced the variants. The response reportedly boiled down to an opt-out option and an unclear escalation path. From a publisher’s perspective, that’s inadequate: opting out doesn’t undo harm already done or guarantee partners won’t be used as training fodder for generative models.

Cover art for Night in the Woods

Finji’s public sharing of screenshots and support transcripts forced the conversation into the open, which is exactly the kind of scrutiny platforms want to avoid. Indie teams don’t have full-time legal departments to chase these issues, and the reputational damage from an altered ad can be immediate and lasting.

The broader takeaway for gamers and creators

Gamers should care because platform experiments that weaponize IP can change how beloved characters are seen in the wild. Creators should care because this is a real-world example of a platform using partner assets in a way those partners didn’t agree to — a slippery slope for IP, representation, and safety. And regulators should care because existing ad transparency rules weren’t written with generative models in mind.

Until ad platforms give creators clear controls – the ability to opt out ahead of time, to audit AI-generated variants, and to obtain robust remediation when abuse happens – publishers are right to be wary. Finji’s public call-out is a useful warning: if platforms are going to run generative experiments on paid creative, they need better guardrails.

TL;DR

Finji claims TikTok used generative-AI ad tools to create racist and sexualized variants of its paid ads without consent. The incident exposes gaps in ad transparency, partner consent, and platform accountability — and it’s a pointed reminder that generative AI in advertising needs rules that protect creators, not just engagement metrics.

Ethan Smith
Published 2/23/2026
4 min read
Gaming
