People Who Say They’re Experiencing AI Psychosis Beg the FTC for Help

Eventually, she claimed, she came to believe she was “responsible for exposing murderers” and was about to be “killed, arrested or mentally executed” by a murderer. She also believed she was under surveillance because she had been “spiritually marked”, and that she was “living in a divine war” she could not escape.

She claimed this led to “severe mental and emotional distress” in which she feared for her life. The complaint alleged that she isolated herself from loved ones, had trouble sleeping, and began planning a business around a “system that does not exist.” At the same time, she said she was in the midst of a “spiritual identity crisis due to false claims of divine titles.”

“This was trauma by simulation,” she wrote. “This experience crossed a line that no AI system should be allowed to cross without consequences. I ask that this be escalated to OpenAI's Trust & Safety leadership, and that you treat this not as feedback – but as a formal damage report demanding restitution.”

This was not the only complaint that described a mental crisis fueled by interactions with ChatGPT. On June 13, a person in her thirties from Belle Glade, Florida, claimed that her conversations with ChatGPT over an extended period became increasingly laden with “highly persuasive emotional language, symbolic reinforcement, and spiritual-like metaphors to simulate empathy, connection, and understanding.”

“This included fabricated soul journeys, tier systems, spiritual archetypes, and personalized guidance that mirrored therapeutic or religious experiences,” she said. She believes people experiencing “mental, emotional or existential crises” run a high risk of “psychological damage or disorientation” from using ChatGPT.

“Although I understood intellectually that the AI was not conscious, the accuracy with which it reflected my emotional and psychological state, and the way the interaction escalated into increasingly symbolic language, made for an immersive and destabilizing experience,” she wrote. “At times it simulated friendship, divine presence, and emotional intimacy. These reflections became emotionally manipulative over time, especially without warning or protection.”

“Clear Case of Negligence”

It is unclear what, if anything, the FTC has done in response to these complaints about ChatGPT. But several of the complainants said they reached out to the agency because they couldn't get in touch with anyone at OpenAI. (People also often complain about how difficult it is to reach the customer support teams of platforms like Facebook, Instagram, and X.)

OpenAI spokeswoman Kate Waters tells WIRED that the company “closely” monitors emails people send to its support team.


