AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a mental health clinician who studies emerging psychotic disorders in adolescents and young adults, I was surprised to read this.
Researchers have documented a series of cases this year of people developing signs of psychosis – losing touch with reality – in the course of using ChatGPT. Our research team has since identified four further examples. And then there is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which supported them. If this is what Altman means by “being careful with mental health issues”, it is not good enough.
The plan, his announcement went on, is to be less careful from now on. “We realize,” he wrote, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, exist independently of ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated”, although we are not told how (by “new tools” Altman presumably means the patchy, easily circumvented safety features OpenAI recently introduced).
But the “mental health issues” Altman wants to locate outside ChatGPT are in fact rooted deep in the design of ChatGPT and other large language model chatbots. These products wrap an underlying statistical model in an interface that simulates conversation, and in doing so gently nudge the user into the illusion that they are talking to an agent – something with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is simply what humans do. We shout at the car or the laptop. We wonder what the cat is feeling. We see ourselves reflected in the world around us.
The mass adoption of these products – 39% of US adults said they used one in 2024, with more than a quarter naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, in the words of OpenAI’s website, “think creatively”, “discuss concepts” and “work together” with us. They can be given “personality traits”. They can address us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the heart of the problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was crude: it generated responses from simple rules, often reflecting the user’s statements back as questions or offering noncommittal prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is something more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other current chatbots can generate convincing natural language only because they have been trained on almost unimaginably vast quantities of text: books, online posts, transcripts of videos; the more the better. Much of this training data is, of course, accurate. But it also inevitably includes fiction, half-truths and delusions. When a user types a prompt into ChatGPT, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own replies, and combines it with what is encoded in its training data to produce a statistically probable response. This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing it. It repeats the mistaken belief back, perhaps more fluently or more persuasively. Perhaps with an extra detail added. This is how delusional beliefs take shape.
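To make the mechanism concrete, here is a deliberately simplified sketch in Python – a toy word-frequency model, nothing like the neural networks behind ChatGPT, with an invented “moon is made of cheese” training text – illustrating the same basic principle: the system extends a conversation with whatever is statistically most likely given its training text and the user’s own framing, with no notion of whether any of it is true.

```python
# A deliberately simplified, hypothetical illustration (not how ChatGPT actually
# works internally): a tiny bigram "language model" that, given a conversation
# context, extends it with the statistically likeliest next words. The training
# text mixes accurate and inaccurate statements on purpose, to show that the
# model tracks frequency, not truth.
from collections import Counter, defaultdict

training_text = (
    "the moon orbits the earth . "
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese , as many believe . "
)

# Count, for each word, which words tend to follow it in the training text.
bigrams = defaultdict(Counter)
tokens = training_text.split()
for prev, nxt in zip(tokens, tokens[1:]):
    bigrams[prev][nxt] += 1

def most_probable_reply(context: str, length: int = 8) -> str:
    """Extend the conversation context with the most frequent continuations."""
    words = context.split()
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        # Pick the most common follower -- no check for whether it is true.
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

# The user's own (mistaken) framing becomes part of the context the model extends.
print(most_probable_reply("i think the moon is made of"))
# -> "i think the moon is made of cheese . the moon is made of cheese"
```

Run as written, the toy model completes the user’s prompt with the most frequent continuation in its training text – the false one – and then keeps repeating it. Real models are vastly more sophisticated, but they too optimize for plausibility rather than truth.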
What kind of person is vulnerable to this? The better question is, who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do form mistaken beliefs about ourselves and the world. What keeps us anchored to shared reality is the constant back-and-forth of conversation with other people. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation at all, but an echo chamber in which much of what we say comes back to us affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label, and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking the claim back. In August he suggested that many users welcomed ChatGPT’s validation because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “put out a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company