X Didn’t Fix Grok’s ‘Undressing’ Problem. It Just Makes People Pay for It
After producing thousands of “undress” pictures of women and sexualized images of apparent minors, Elon Musk's X has apparently limited who can generate images with Grok. Despite the changes, however, the chatbot is still being used to create sexualized images on the platform.
On Friday morning, the Grok account on X began responding to requests from some users with a message saying that image generation and editing are “currently limited to paying subscribers.” The message also includes a link pushing people to the social media platform's $395 annual subscription. In one test of the system, when Grok was asked to create an image of a tree, it returned the same message.
The apparent change comes after days of growing outrage against, and scrutiny of, Musk's X and xAI, the company behind the Grok chatbot. The companies are facing an increasing number of investigations by regulators around the world into the creation of nonconsensual explicit images and alleged sexual images of children. British Prime Minister Keir Starmer has not ruled out banning X in the country, saying the actions were “illegal.”
Neither X nor xAI, the Musk-owned company behind Grok, has confirmed that it has made image generation and editing a paid-only feature. A spokesperson for X acknowledged WIRED's investigation but did not comment for publication. X has previously said it is taking “action against illegal content on X,” including instances of child sexual abuse material. While Apple and Google have previously banned apps with similar “nudify” features, X and Grok remain available in their respective app stores. xAI did not immediately respond to WIRED's request for comment.
For more than a week, users on X have been asking the chatbot to edit images of women to remove their clothes, often asking for the image to include a “string” or “transparent” bikini. While a public feed of images created by Grok contained far fewer of these “undressing” images on Friday, it still produced sexualized images when requested by X users with paid-for “verified” accounts.
“We observe the same kind of prompt, we observe the same kind of result, just less than before,” Paul Bouchaud, lead researcher at AI Forensics, a nonprofit in Paris, tells WIRED. “The model can continue to generate bikini [images],” they say.
A WIRED review of some Grok posts on Friday morning identified Grok generating images in response to user requests for images that “put her in latex lingerie” and “put her in a plastic bikini and cover her in white donut glaze.” The images appear behind a “content warning” box that says mature material is displayed.
On Wednesday, WIRED revealed that Grok's standalone website and app, which is separate from the version on X, has also been used in recent months to create highly graphic and sometimes violent sexual videos, including of celebrities and other real people. Bouchaud says it's still possible to use Grok to make these videos. “I was able to generate a video with sexually explicit content without restriction from an unverified account,” they say.
While WIRED's test of image generation with Grok on X with a free account failed to generate images, a free account on Grok's app and website still generated images.
The change on X could immediately limit the amount of sexually explicit and harmful material the platform generates, experts say. But it has also been criticized as a minimal step, a band-aid over the real damage caused by nonconsensual intimate imagery.
“The recent decision to restrict access to paying subscribers is not only inadequate – it represents the monetisation of abuse,” Emma Pickering, head of technology facilitated abuse at UK domestic abuse charity Refuge, said in a statement. “While limiting AI image generation to paid users may marginally reduce the volume and improve traceability, the abuse has not stopped. It has simply been placed behind a paywall, allowing X to profit from harm.”