AI Models Get Brain Rot, Too

It turns out AI models are a bit like humans after all.
A new study from the University of Texas at Austin, Texas A&M, and Purdue University shows that large language models fed a diet of popular but low-quality social media content experience a kind of “brain rot” that may be familiar to anyone who has spent too long doomscrolling on X or TikTok.
“We live in an age where information grows faster than attention spans — and much of it is designed to capture clicks, not convey truth or depth,” says Junyuan Hong, an incoming assistant professor at the National University of Singapore who worked on the study as a graduate student at UT Austin. “We asked ourselves: What happens when AIs are trained on the same stuff?”
Hong and his colleagues fed different kinds of text to two open-source large language models during pre-training. They examined what happened when the models were given a mix of highly “engaging,” or widely shared, social media posts and posts containing sensational or hyped language such as “wow,” “look,” or “today only.”
The researchers then used several different benchmarks to measure the impact of this “junk” social media diet on the two models, Meta's Llama and Alibaba's Qwen.
The models fed junk text experienced a kind of AI brain rot, with cognitive decline that included reduced reasoning ability and degraded memory. The models also became less ethically aligned and more psychopathic, according to two measures.
The results mirror research on human subjects, which shows that low-quality online content harms people's cognitive abilities. The phenomenon is so pervasive that “brain rot” was named Oxford's word of the year in 2024.
The results are important for the AI industry, Hong says, because model builders might assume that social media posts are a good source of training data. “Training on viral or attention-grabbing content may look like scaling up data,” he says. “But it can quietly corrode reasoning, ethics, and long-context attention.”
The fact that LLMs can suffer brain rot seems especially worrying given that AI itself is now generating ever more social media content, much of it seemingly optimized for engagement. The researchers also found that models damaged by low-quality content could not easily be repaired by retraining them on cleaner data afterward.
The findings also suggest that AI systems built around social platforms, such as Grok, may suffer from quality-control problems if user-generated posts are used in training without regard for their quality.
“As more AI-generated slop spreads across social media, it pollutes the data that will teach future models,” Hong says. “Our findings show that once this type of 'brain rot' sets in, later clean training cannot completely undo it.”
This is an edition of Will Knight's AI Lab newsletter. Read previous newsletters here.