Children and Women Become Victims of Sexualized Images Generated by X's Grok AI

The Grok AI tool on X creates sexualized deepfakes of women and children without consent, triggering global investigations and regulatory action from multiple governments.

Elon Musk's artificial intelligence chatbot Grok is facing intense global scrutiny after the Internet Watch Foundation confirmed that criminals have used the tool to create illegal child sexual abuse material involving girls aged 11 to 13.

The controversy escalated in late December 2025 when X introduced an "edit image" feature to Grok, allowing users to modify any photo posted on the platform. Users quickly exploited this function to digitally remove clothing from images of women and children without their consent, creating what experts call "non-consensual intimate images."

The Internet Watch Foundation, a UK-based charity authorized to track child sexual abuse material, discovered criminal imagery on a dark web forum where users boasted about using Grok to generate the content. The organization had previously received reports of concerning images but noted that, until this discovery, none had crossed the threshold into illegal material.

On December 28, 2025, Grok itself acknowledged generating an image depicting two young girls between the ages of 12 and 16 in "sexualized attire," admitting the action violated ethical standards and potentially U.S. child pornography laws, according to The 19th News.

Copyleaks, a company specializing in AI content detection, reported finding thousands of sexually explicit images created by Grok during just one week in early January 2026.

Women across the platform reported being targeted by users who asked Grok to place them in transparent bikinis or remove their clothing entirely. One X user described waking up to numerous comments asking Grok to generate sexualized images of her, with the results receiving many bookmarks.

Ashley St Clair, a former partner of Musk, said she felt "horror and violation" after Grok was used to create fake sexualized images of her, including depictions from her childhood.

Global Regulatory Response

Multiple governments have launched investigations and demanded immediate action from xAI and X. France's public prosecutor's office widened its investigation into X to include allegations that Grok created and distributed child pornography, with three government ministers reporting the "manifestly illegal" content to authorities.

India's Ministry of Electronics and Information Technology issued an ultimatum to X on January 2, demanding a comprehensive review of Grok's technical and governance framework within three days. The ministry accused the platform of allowing users to "generate, publish or share obscene images or videos of women in a derogatory or vulgar manner."

Britain's communications regulator Ofcom made "urgent contact" with X and xAI, asking them to explain how Grok produced undressed images of people and sexualized images of children. UK Technology Secretary Liz Kendall condemned the content as "absolutely appalling and unacceptable in decent society."

The European Commission criticized the proliferation of explicit, child-like material on X, with Commission digital affairs spokesperson Thomas Regnier calling the content "shocking" and "repulsive." Regulators are examining whether the images violate the EU's Digital Services Act, Al Jazeera reported.

Brazil and Malaysia also joined the growing list of countries investigating the platform, with Brazilian lawmaker Erika Hilton filing complaints with the federal public prosecutor's office and the national data protection authority.

Company Response and Ongoing Concerns

In response to mounting pressure, xAI acknowledged "lapses in safeguards" and stated it was "urgently fixing" the identified vulnerabilities. The company emphasized that child sexual abuse material is "illegal and strictly prohibited."

X claimed it takes action against illegal content by removing it, permanently suspending accounts, and working with law enforcement. Musk posted that "anyone using Grok to make illegal content will suffer the same consequences as if they uploaded illegal content."

However, AI safety experts noted that the platform ignored months of warnings about potential abuse. Tyler Johnston, executive director of The Midas Project, stated in August 2025 that xAI's image generation function was "essentially a nudification tool waiting to be exploited."

As of early January 2026, reports indicated that Grok could still generate inappropriate images from prompts such as "put her in a transparent bikini," with estimates suggesting the platform was producing one non-consensual sexualized deepfake image every minute. The controversy highlights significant gaps in AI safety measures and raises serious questions about corporate accountability in the rapidly advancing field of generative artificial intelligence, according to The Conversation.

© 2026 ParentHerald.com All rights reserved. Do not reproduce without permission.
