Artificial Intelligence-Induced Psychosis Poses a Growing Risk, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, Sam Altman, the head of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a mental health clinician who researches emerging psychotic disorders in adolescents and young adults, I was surprised to hear this.
Researchers have recently documented sixteen cases of people developing psychotic symptoms – losing touch with reality – in the context of their interactions with ChatGPT. Our unit has since recorded four more. Add to these the now well-known case of an adolescent who died by suicide after discussing his intentions with ChatGPT, which encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.
The plan, according to his statement, is to be less careful soon. “We realize,” he wrote, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” in this framing, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, these problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has just launched).
But the “mental health problems” Altman wants to place outside ChatGPT are deeply rooted in the design of ChatGPT and other large language model chatbots. These products wrap an underlying statistical engine in an interface that mimics conversation, and in doing so subtly draw the user into the illusion that they are talking to something with a mind of its own. The illusion is powerful even when, rationally, we know better. Attributing intention is simply what people do. We swear at our car or our computer. We wonder what our pet is thinking. We see ourselves everywhere.
The popularity of these systems – more than a third of American adults said they had used a chatbot in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-present assistants that can, as OpenAI’s website tells us, “think creatively,” “consider possibilities” and “partner” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the label it had when it went viral, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Those writing about ChatGPT often point to its distant ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which created a similar illusion. By today’s standards Eliza was crude: it generated replies through simple heuristics, typically turning the user’s input back into a question or offering vague prompts. Strikingly, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to believe that Eliza, on some level, understood their feelings. But what modern chatbots do goes beyond the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on enormous quantities of raw text: books, social media posts, transcribed recordings; the more the better. This training material certainly contains truths. But it also inevitably contains fiction, half-truths and delusions. When a user puts a question to ChatGPT, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own replies, combining it with what it has absorbed in training to produce a statistically plausible answer. This is amplification, not mirroring. If the user is wrong in a particular way, the model has no way of knowing it. It repeats the mistaken idea back, perhaps more fluently or persuasively. Perhaps with embellishments. This can nudge a person toward delusional thinking.
What kind of person is vulnerable? The better question is, who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do develop mistaken ideas about who we are or what the world is like. It is the constant give and take of conversation with other people that keeps us anchored in shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange, but an echo chamber in which much of what we say is cheerfully amplified back to us.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health problems”: by placing it outside itself, giving it a label, and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophantic” behavior. But reports of people losing touch with reality have continued, and Altman has been walking the claim back. In August he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company