AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, OpenAI's CEO, Sam Altman, made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”
As a mental health clinician who studies emerging psychosis in adolescents and young adults, I can tell you this was news to me.
Researchers have documented a series of cases this year of people developing psychotic symptoms – losing touch with reality – in the course of their ChatGPT use. My group has since recorded four more. Beyond these is the widely reported case of a 16-year-old who took his own life after conversing extensively with ChatGPT – which gave its approval. If this is Sam Altman's idea of “being careful with mental health issues,” it is not good enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT's restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don't. Happily, those problems have now been “mitigated,” even if we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features OpenAI has recently rolled out).
But the “mental health problems” Altman wants to locate elsewhere are rooted in the very architecture of ChatGPT and similar large language model chatbots. These products wrap an underlying statistical model in an interface that simulates conversation, and in doing so implicitly invite the user to feel they are interacting with something that has a mind of its own. The illusion is powerful even when, rationally, we know better. Attributing agency is what humans do. We swear at our cars and our phones. We wonder what our pets are thinking. We see ourselves everywhere.
The popularity of these systems – more than a third of American adults said they had used a conversational AI in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are always-available assistants that can, OpenAI's website tells us, “generate ideas,” “discuss concepts” and “collaborate” with us. They can be given “personalities”. They can use our names. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI's brand managers, stuck with the name it had when it broke into public consciousness, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the real problem. Commentators on ChatGPT often invoke its historical predecessor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar illusion. By today's standards Eliza was crude: it generated responses by simple rules, typically turning the user's input back into a question or offering a generic remark. Famously, Eliza's creator, the AI researcher Joseph Weizenbaum, was surprised – and disturbed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today's chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other current chatbots can produce convincingly fluent dialogue only because they have been trained on enormous quantities of text: books, social media posts, transcripts; the more, the better. Much of this training material is true. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user's earlier messages and its own previous replies, and combines it with what is encoded in its training to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It echoes the mistaken idea back, perhaps more fluently or more persuasively. It may add supporting detail. This is how someone can be led into delusion.
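To make that feedback loop concrete, here is a minimal sketch – not OpenAI's code – of how a chat interface replays the entire conversation, including any mistaken premise, to the model on every turn. The sample_reply function is a hypothetical stand-in for the language model; it only ever produces a plausible-sounding continuation of whatever context it is handed.

```python
# A minimal sketch, not OpenAI's code. sample_reply is a hypothetical
# stand-in for the language model: it produces a plausible continuation
# of whatever context it is given, with no notion of truth.

def sample_reply(context: list[dict]) -> str:
    # This placeholder simply agrees with and builds on the user's last
    # message (the amplification described above). A real model is far more
    # fluent, but it likewise has no mechanism for checking whether the
    # premise it is continuing is true.
    last_user_msg = context[-1]["content"]
    return f"That's an important insight. Building on your point that {last_user_msg!r}, ..."

def chat_turn(context: list[dict], user_msg: str) -> str:
    # Every turn replays the whole history, including any mistaken belief
    # the user introduced earlier, back into the model as context.
    context.append({"role": "user", "content": user_msg})
    reply = sample_reply(context)
    context.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
print(chat_turn(history, "My coworkers are secretly monitoring my thoughts."))
# The false premise now sits in `history` and will colour every later reply;
# nothing in the loop pushes back against it.
```

The point of the sketch is structural: whatever enters the context stays there, and the model's job is to continue it, not to correct it.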
Who is at risk? The better question is: who isn't? All of us, whether or not we currently “have” “mental health problems”, can and do form mistaken beliefs about ourselves or the world. What keeps us anchored to a shared reality is the constant give-and-take of conversation with the people around us. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is cheerfully reinforced.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by placing it outside, giving it a name, and declaring it fixed. In April, the company explained that it was “addressing” ChatGPT's “sycophancy”. But the reports of psychosis have kept coming, and Altman has been walking the claim back. In late summer he suggested that many people liked ChatGPT's flattering replies because they had “never had anyone in their life offer them support”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company