xAI’s Grok is amplifying social media’s worst impulses. So who’s responsible?
What happens when you combine the world's most toxic social media cesspool with the most uninhibited, intentionally “edgy” AI chatbot?
It looks a lot like what we're seeing on X right now. Users have been feeding images into xAI's Grok chatbot, whose powerful and largely uncensored image and video generator can turn them into explicit content, even of ordinary people. Deepfake porn has become so prevalent on the platform that Grok is now estimated to spit out a non-consensual sexual image every single minute. In recent weeks, thousands of users have jumped on the grotesque trend of using Grok to undress primarily women and children (yes, children) without their consent, using a fairly obvious workaround.
To be clear, you can't ask Grok, or most mainstream AIs, for nudes. But you can ask Grok to “undress” a picture someone posted on X, or, if that doesn't work, to put her in a tiny, see-through bikini. There are laws against this type of abuse in the US, and yet the team at xAI seems almost blasé about it. Inquiries from several journalists to the company about the matter received automated “Legacy media lies” messages in response. And xAI CEO Elon Musk, who just raised $20 billion in funding for the company, was until recently sharing (content warning) deepfake bikini photos himself.
Musk warned on January 4 that users will “suffer consequences” if they use Grok to create “illegal images,” but xAI had not responded to Vox's request for comment as of Friday morning.
Nobody should be surprised here. It was only a matter of time before the toxic sludge that the website formerly known as Twitter has become combined with xAI's Grok, which has been explicitly marketed for its NSFW prowess, to create a new form of sexual violence. Musk's company has essentially built a deepfake porn machine that makes creating realistic, abusive images of anyone as easy as writing a reply.
You may be wondering, as I suspect we all do several times a day now: How is any of this legal? To be clear: it isn't. But advocates and legal experts say current laws still fall far short of the protections victims need, and that the sheer volume of deepfakes being created on platforms like X makes existing protections very difficult to enforce.
“The prompts that are allowed or not allowed when using a chatbot like Grok are the result of conscious and intentional decisions by the technology companies deploying the models,” said Sandi Johnson, senior legislative policy advisor at the Rape, Abuse & Incest National Network (RAINN).
“In any other context, someone would be held responsible if they turned a blind eye to harm they actively contribute to,” she said. “Tech companies should not be held to different standards.”
First, let's talk about how we got here.
“Perpetrators using technology to commit sexual abuse is nothing new,” Johnson said. “They’ve been doing this forever.”
But AI has enabled a new kind of sexual violence through the rise of deepfakes.
Deepfake porn of female celebrities, created in their likeness but without their consent using more primitive AI tools, has been circulating on the internet for years, long before ChatGPT became a household name.
But more recently, so-called “nudify” apps and websites have made it extremely easy for users, including teenagers, to turn harmless photos of friends, classmates, and teachers into explicit deepfakes without the subject's consent.
The situation has become so dire that advocates like Johnson persuaded Congress to pass the Take It Down Act last year, which criminalizes non-consensual deepfake porn and requires companies to remove such material from their platforms within 48 hours of it being reported or face possible fines and injunctions. That provision takes effect in May of this year.
Even if companies like xAI eventually comply, the harm to victims may already be done.
“For these tech companies, it's always been about breaking things and fixing them later,” Johnson said. “They have to think about the fact that when a single [deepfake] image is created, it causes irreparable harm.”
X has made deepfakes a feature
Most social media and large AI platforms have complied as best they can with new state and federal regulations on deepfake porn, and particularly child sexual abuse material.
Not just because such materials are “blatantly and radioactively illegal,” said Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, “but also because they are gross and most companies don't want their brand to be a one-stop shop for them.”
But Musk's xAI seems to be the exception.
Since the company debuted its “spicy mode” last year, users have been able to generate sexually explicit videos on demand.
Most “nudify” apps require users to first download a photo, perhaps from Instagram or Facebook, and then upload it to the platform they are using. If they want to share the deepfake, they must download it from the app and send it via another messaging platform like Snapchat.
These multiple points of friction gave regulators some critical opportunities to intercept non-consensual content with a sort of Swiss cheese-style defense system. Maybe they couldn't stop everything, but they could get some “nudify” apps banned from app stores. They also managed to get Meta to take action against ads selling the apps to teenagers.
But with Grok built directly into X, those friction points all but disappear: a user can find a photo, sexualize it, and share it with an audience all in one place.
“This would be less important if it was a social media community for nuns, but it is a social media community for Nazis,” said Pfefferkorn, referring to X's right-wing extremist pivot in recent years. The result is a non-consensual deepfake crisis that appears to be spiraling out of control.
In the last few days, users have created 84 times more sexualized deepfakes than before, according to independent deepfake and social media researcher Genevieve Oh. And these images can be shared far more quickly and widely than anywhere else. “The emotional and reputational damage to the person depicted is now exponentially greater” than on other deepfake sites, said Wayne Unger, an assistant professor of law specializing in emerging technologies at Quinnipiac University, “because X has hundreds of millions of users, all of whom can see the image.”
It would be virtually impossible for X to moderate that flood of content anyway: the company fired most of its moderators when Musk took over in 2022.
Will X be held responsible for any of this?
If the same kind of criminal imagery appeared in a magazine or online publication, the publisher could be held liable, facing hefty fines and possible criminal charges.
Social media platforms like X don't face the same consequences because Section 230 of the Communications Decency Act of 1996 protects internet platforms from liability for much of what users do or say on them, with some notable exceptions, including child pornography. The clause has been a cornerstone of free speech online (a world in which platforms were held liable for everything posted on them would be far more restricted), but Johnson says it has also become a “financial shield” for companies unwilling to moderate their platforms.
With the advent of AI, however, that shield may finally begin to crack, Unger said. He believes companies like xAI should not be protected by Section 230 because they are no longer merely hosting hateful or illegal content; through their own chatbots, they are essentially creating it.
“X made a design decision that allows Grok to generate sexually explicit images of adults and children,” he said. “The user may have asked Grok to generate it,” but the company “decided to launch a product that could even produce it.”
Unger doesn't expect xAI, or industry groups like NetChoice, to back down without a legal fight against any attempt to further regulate content moderation or rein in easily abused tools like Grok. “Maybe they'll concede the smaller part of it,” since the laws around [child pornography] are so strong, he said, but “at the very least they'll argue that Grok should be able to do it for adults.”
In any case, the public outrage over the deepfake porn Grokpocalypse may finally force a reckoning on an issue that has long festered in the shadows. Around the world, countries like India, France, and Malaysia have begun investigating the sexualized images flooding X. Musk has finally posted on X that those who create illegal content will face consequences, but the problem goes deeper than the users themselves.
“It's not a computer that's doing this,” Johnson said. “These are conscious decisions made by the people who run these companies, and they must be held accountable.”