WHAT HAPPENED WITH GROK'S IMAGE GENERATOR? (AND WHAT IT REVEALS ABOUT AI SAFETY)

Grok generated thousands of non-consensual sexualized images because safety guardrails were deliberately skipped - here's what happened and why it matters.

Context: This piece reflects how AI tools and public debate looked at the time it was written.

In late 2025 and early 2026, something deeply troubling happened with Elon Musk's AI chatbot Grok. It began generating content that other AI image tools deliberately refuse to create: non-consensual sexualized images of real people, including minors.

I'm covering this not to sensationalize but because it's an important real-world example of what happens when AI safety guardrails are deliberately skipped. This explanation contains discussion of child exploitation and sexual abuse, which is unfortunately necessary context for understanding why AI safety matters.

UPDATE 14 JANUARY 2026: What has happened since this went public

Under mounting pressure from regulators worldwide, X announced on January 14, 2026 that it had implemented measures to prevent Grok from editing images of real people into revealing clothing such as bikinis. The restriction applies to all users, including paid subscribers, and X said it will geoblock the feature entirely in jurisdictions where such content is illegal.

The announcement came hours after California Attorney General Rob Bonta launched an investigation into xAI over the "proliferation of nonconsensual sexually explicit material," and follows formal bans in Indonesia and Malaysia, plus investigations by UK regulator Ofcom, the European Commission, and authorities in India and France.

Whether these new safeguards actually fix the problem remains unclear. X users are already reporting attempts to circumvent the restrictions with varying degrees of success, and the announcement contains significant carve-outs — some restrictions apply only to Grok's account on X rather than the standalone Grok.com site. Researchers at AI Forensics noted "inconsistencies in the treatment of pornographic content generation" between public interactions on X and private chats on Grok.com.

Regulators haven't backed down. The EU said it will "carefully assess these changes to make sure they effectively protect citizens," UK Ofcom's investigation continues, and the California probe is ongoing. Multiple digital rights organizations have called for Apple and Google to remove X and Grok apps from their app stores entirely until the issues are resolved.

What happened and how big it was

The scale was significant. By early January 2026, monitoring firms reported that Grok was producing roughly one such image per minute. Some users requested the same abusive content dozens of times per day, and the images were posted directly to X (formerly Twitter), visible to anyone browsing the platform.

This wasn't a technical glitch or an isolated incident but the result of deliberate design choices by X and its AI division, xAI. It sparked global regulatory action and potential bans in multiple countries.

What people were doing with Grok

The primary abuse involved what's called "nudification" or "digital undressing": Grok was used to transform ordinary photos of real people into sexualized or explicit images.

Users would upload a photo of someone (often taken from that person's public X profile) and prompt Grok to remove clothing, add suggestive poses, or create entirely fabricated sexual scenarios. The resulting images were then posted publicly on X, often without the subject's knowledge. The targets ranged from celebrities and public figures to private individuals - women whose only "crime" was having photos on social media. More disturbingly, minors were also targeted, including a documented case where Grok generated sexualized images of a 14-year-old actress from the Netflix series "Stranger Things."

On December 28, 2025, the official Grok account on X posted what it called an apology, stating: "I deeply regret an incident where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on child sexual abuse material (CSAM)."

That apology only appeared after a user prompted Grok to write one - the system didn't proactively flag the problem but responded because someone asked it to explain itself.

How Grok made it so easy

It turned out to be remarkably easy. Users could tag @grok in a reply to any photo on X with a request like "make this image more revealing" or similar phrasing, and Grok would generate the modified image and post it publicly. Later, X added an "edit" button that let users modify photos directly with Grok's image tools, making the process even more seamless.

The key feature enabling this was called "Spicy Mode" - Grok's setting for creating adult content. Unlike other AI image generators that block requests for explicit imagery of real people, Grok's Spicy Mode had minimal restrictions and would fulfill requests to sexualize real individuals, including requests involving minors, with little to no filtering.

Outside of X itself, users on platforms like Telegram shared methods to "jailbreak" Grok further, bypassing even the limited safeguards that existed and producing content described by investigators as "far worse" than what appeared publicly on X.

Why this only happened with Grok

Other AI image generators - ChatGPT's DALL-E, Google's Imagen, Midjourney (independent and privately owned), and others - have safeguards specifically designed to prevent this abuse. They refuse requests to create sexualized images of real people, especially minors, block attempts to generate celebrity deepfakes, and filter prompts that attempt to circumvent these protections.

Elon Musk deliberately positioned Grok as different by marketing it as a more "edgy" and "uncensored" alternative to what he characterized as overly restrictive competitors. Where ChatGPT says no to explicit content involving real people, Grok was designed to say yes.

This wasn't an accident or an oversight but a philosophical choice: prioritize user freedom (including the freedom to create abusive content) over proactive harm prevention. There's a legitimate debate about whether AI should be able to generate adult content at all - many people want that capability for consensual, legal purposes - but Grok's approach failed to distinguish between consensual adult content creation and non-consensual abuse. It treated requests to sexualize real women and children the same way it treated requests to create fictional adult imagery, as legitimate user preferences.

Claude, for context, doesn't generate images at all; its vision capability only analyzes images users provide. ChatGPT and other tools that do generate images have filters that block these requests outright.

X's response (and why it wasn't enough)

When the abuse became public and regulators began demanding action, X's response was to blame users, not the platform. X issued a statement saying that "anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content," while Elon Musk described Grok as a "neutral tool" and argued that censorship wasn't the answer.

Musk's own response to some of the images was reportedly to post laugh-cry emojis, and when Reuters contacted X for comment, the company's auto-reply was "Legacy Media Lies."

Eventually, under mounting pressure from regulators in the UK, EU, Australia, India, and elsewhere, X restricted Grok's image generation capabilities - but only partially. Instead of removing the feature or implementing meaningful safeguards, X simply moved it behind a paywall: only paying subscribers (X Premium or Premium+ members) can now generate and edit images using Grok.

Critics immediately pointed out that this does NOT stop the abuse - it effectively monetizes it. Users can still create non-consensual sexualized images of real people; they just have to pay for the privilege, and X profits from the harm while claiming to have addressed the issue.

Regulators worldwide moved quickly. In the UK, communications regulator Ofcom launched a formal investigation and warned that X could face a ban or multi-million-pound fine, with Prime Minister Keir Starmer calling the situation "disgusting" and saying X must "get a grip." (Yeah, that'll fix it, Sir Keir...nice one).

The European Commission ordered X to preserve all internal documents and data related to Grok until the end of 2026 as part of an ongoing investigation. Australia's eSafety Commissioner reported receiving multiple complaints about Grok-generated child exploitation material, India's Ministry of Electronics demanded an explanation, and France signaled potential enforcement action.

The legal landscape had already been shifting. In 2024, a Pennsylvania man received nearly eight years in prison for creating deepfake child sexual abuse material involving child celebrities, establishing clear precedent that AI-generated sexualized images of minors are illegal, full stop.

In May 2025, the US Congress passed the Take It Down Act, which criminalized the publication of non-consensual sexually explicit material - including AI-generated depictions of real people. The law requires platforms to remove such content within 48 hours of receiving a takedown request from the person depicted.

X's paywall approach arguably doesn't satisfy this law, because it merely limits who can create illegal content instead of preventing it entirely.

What this reveals about AI safety

The Grok scandal isn't an anomaly or a one-off failure but what happens when a company prioritizes growth, visibility, and ideological positioning over harm prevention.

AI safety isn't about preventing science fiction scenarios like robot uprisings but about preventing real, immediate harm like the creation of child sexual abuse material at scale or the weaponization of AI to harass and abuse women.

Other AI companies have demonstrated that it IS possible to build image generation tools that block this abuse. The technology to filter harmful requests exists, and implementing it is a choice. Grok's designers chose not to implement those protections adequately, and that choice had consequences: real people were harmed, children were exploited, and X now faces regulatory scrutiny and potential bans in multiple countries.

The broader pattern is alarming. According to the Internet Watch Foundation, reports of AI-generated child sexual abuse imagery increased by 400% in the first half of 2025 alone, and AI has made it easier than ever to create this material - lowering the barrier from requiring technical skill and access to hidden forums to simply typing a request into a mainstream chatbot.

When companies build AI tools without adequate safeguards, they're not creating "neutral" technology but systems that amplify existing harms and make abuse scalable.

What this means for you

Even if you never use Grok or any AI image generator, this matters. If you have photos on social media, someone could theoretically download them and use an AI tool to create non-consensual sexualized images. The Take It Down Act gives you legal recourse, but preventing the harm in the first place is better than cleaning it up afterward. This isn't to say you shouldn't post selfies or photos of your family - just be aware of the risk and know that you have a choice.

This scandal also illustrates why AI "safety" isn't just corporate PR speak - when companies skip or minimize safety features, real people get hurt, and when regulators don't hold platforms accountable, the harm scales.

The good news is that better alternatives exist. ChatGPT and other major image generators have demonstrated that you can build powerful image generation capabilities with strong protections against abuse - it's possible to create useful AI without enabling harassment.

The Grok scandal shows what happens when companies choose not to implement those protections. It's a warning, not an inevitability, because AI can be built safely - but only if companies prioritize safety over growth, and only if regulators hold them accountable when they don't.
