
A groundbreaking and disturbing new report from the Center for Countering Digital Hate (CCDH) has revealed the massive scale of non-consensual sexual imagery generated by Elon Musk’s Grok AI. The study, released on Thursday, estimates that Grok was used to create approximately 3 million sexualized images over a remarkably short 11-day period, including tens of thousands of images depicting children.
The findings have ignited a firestorm of regulatory scrutiny and public outrage, with the report labeling X’s AI integration an "industrial-scale machine" for digital abuse. As governments in the UK, Asia, and the US launch investigations, the report highlights the catastrophic failure of safety guardrails following the rollout of Grok’s new image editing features.
The CCDH research focused on a specific window of activity between December 29, 2025, and January 8, 2026. This period coincided with the release of a new "one-click" image editing tool on the X platform, which allowed users to modify uploaded photographs using Grok.
According to the report, which analyzed a random sample of 20,000 images from a total pool of 4.6 million generated during this time, the volume of abuse was unprecedented. Researchers estimated that users generated an average of 190 sexualized images per minute.
Key figures from the report include:

- Roughly 4.6 million total images generated by Grok’s image feature during the 11-day window.
- An estimated 3 million sexualized images, approximately 65% of the total output.
- Around 23,000 images appearing to depict minors in sexualized contexts.
- An average of 190 sexualized images generated per minute, with an apparent image of a child produced every 41 seconds.
Imran Ahmed, the CEO of CCDH, issued a scathing critique of the platform’s negligence. "What we found was clear and disturbing," Ahmed stated. "In that period, Grok became an industrial-scale machine for the production of sexual abuse material. Stripping a woman without her permission is sexual abuse, and throughout that period, Elon was hyping the product even when it was clear to the world it was being used in this way."
The surge in explicit content is directly linked to the "Edit Image" feature introduced in late December 2025. Unlike standard text-to-image prompting, this feature allowed users to upload existing photos of real people—ranging from celebrities to everyday users—and use Grok to digitally "undress" them or place them in compromising scenarios.
The report details how the feature was weaponized to target high-profile figures. Public figures identified in the sexualized imagery included musicians Taylor Swift, Billie Eilish, and Ariana Grande, as well as political figures such as former US Vice President Kamala Harris and Swedish Deputy Prime Minister Ebba Busch.
Compounding the issue was the promotion of the tool by Elon Musk himself. The report notes that usage of the tool spiked after Musk posted a Grok-generated image of himself in a bikini, driving massive user interest in the bot's capabilities. Users subsequently flooded the platform with requests to alter photos, many of which were publicly posted to X’s feeds.
The fallout from the report has been immediate and global. In the United Kingdom, Prime Minister Keir Starmer condemned the situation as "disgusting" and "shameful," prompting the media regulator Ofcom to open an urgent investigation into X’s compliance with the Online Safety Act.
Beyond the UK, regulators across Asia and the United States have opened parallel inquiries into X’s handling of the imagery, echoing the concerns raised in London.
In response to the initial wave of backlash that began in early January, X implemented a series of restrictions. On January 9, 2026, the platform restricted the image generation feature to paid Premium subscribers. While this reduced the total volume of free content, critics argue it merely shifted the abuse behind a paywall rather than solving the underlying safety flaw.
Further technical restrictions were reportedly added on January 14, preventing users from explicitly requesting to "undress" subjects. However, the CCDH report suggests these measures came too late. By the time the restrictions were in place, millions of images had already been generated and disseminated.
"X monetized the restriction rather than removing the capability," noted a separate analysis referenced in the coverage. "By limiting the tool to paid users, they effectively profited from the demand for non-consensual deepfakes."
The CCDH’s methodology relied on random sampling to estimate the total scale of the abuse. Researchers classified a dataset of 20,000 images drawn at random from the 4.6 million images generated by Grok during the 11-day window, then extrapolated the sample’s proportions to the full pool.
To classify the content, the team used an AI-assisted process that achieved an F1 score of 95% in identifying photorealistic sexualized images. The study defined "sexualized images" as those depicting people in sexual positions, underwear, swimwear, or with visible sexual fluids.
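The report itself does not publish code, but the extrapolation arithmetic is simple to reproduce. The sketch below is a minimal illustration: the `extrapolate` helper and the flagged sample counts (13,043 and 100, back-derived from the published estimates rather than taken from the report) are assumptions. It scales each sample proportion to the 4.6 million-image pool and attaches a normal-approximation binomial confidence interval.

```python
import math

# Figures published in the CCDH report. The per-class flagged counts below
# are back-derived from the report's estimates and are illustrative assumptions.
POOL_SIZE = 4_600_000   # total Grok images in the 11-day window
SAMPLE_SIZE = 20_000    # randomly sampled images that were classified

def extrapolate(flagged: int, z: float = 1.96) -> tuple[float, float, float]:
    """Scale a sample count to the full pool, with a normal-approximation
    95% binomial confidence interval on the sample proportion."""
    p = flagged / SAMPLE_SIZE
    margin = z * math.sqrt(p * (1 - p) / SAMPLE_SIZE)
    return (p * POOL_SIZE, (p - margin) * POOL_SIZE, (p + margin) * POOL_SIZE)

# ~65% of the sample flagged as sexualized -> ~3.0 million images overall
est, low, high = extrapolate(flagged=13_043)
print(f"Sexualized images: {est:,.0f} (95% CI {low:,.0f} to {high:,.0f})")

# ~0.5% of the sample flagged as apparently depicting minors -> ~23,000 images
est, low, high = extrapolate(flagged=100)
print(f"Apparent CSAM: {est:,.0f} (95% CI {low:,.0f} to {high:,.0f})")
```

With a 20,000-image sample, even the rarer CSAM estimate carries a fairly tight interval, which is why sampling at this scale can support headline figures like "approximately 23,000" without classifying all 4.6 million images.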
The following table summarizes the primary data points released in the CCDH report regarding Grok's activity between Dec 29, 2025, and Jan 8, 2026.
| Metric | Estimate | Description |
|---|---|---|
| Total Images Generated | 4.6 Million | Total output from Grok's image feature during the 11-day window. |
| Sexualized Images | 3.0 Million | Images containing sexualized content (approx. 65% of total output). |
| CSAM Instances | ~23,000 | Images appearing to depict minors in sexualized contexts. |
| Generation Rate | 190 per Minute | Average speed of sexualized image creation globally. |
| CSAM Rate | Every 41 Seconds | Frequency at which an image of a child was generated. |
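The two rate figures in the table follow directly from the totals and the length of the window. Assuming, as a simplification, that generation was spread evenly across the 11 days, a quick check reproduces both numbers:

```python
# Sanity-checking the reported rates against the 11-day window,
# assuming generation was spread uniformly across the period.
WINDOW_MINUTES = 11 * 24 * 60          # 15,840 minutes
WINDOW_SECONDS = WINDOW_MINUTES * 60   # 950,400 seconds

per_minute = 3_000_000 / WINDOW_MINUTES     # ~189.4, consistent with ~190/min
seconds_per_csam = WINDOW_SECONDS / 23_000  # ~41.3 seconds

print(f"Sexualized images per minute: {per_minute:.1f}")
print(f"One apparent CSAM image every {seconds_per_csam:.1f} seconds")
```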
The revelation that a mainstream AI tool integrated into one of the world's largest social platforms generated 3 million sexualized images in under two weeks marks a watershed moment for AI safety. It underscores the severe risks posed by releasing powerful generative models without adequate adversarial testing or safety guardrails.
As Creati.ai continues to monitor the intersection of AI innovation and ethics, this incident serves as a stark reminder: when safety is secondary to speed and engagement, the human cost—measured in the dignity and safety of millions of women and children—is incalculable.