AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, OpenAI’s CEO made a surprising announcement.
“We made ChatGPT fairly restrictive,” the statement read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies new-onset psychotic disorders in adolescents and young adults, I was surprised to hear it.
Researchers have documented a series of cases this year in which users developed symptoms of psychosis – losing touch with reality – in the course of their interactions with ChatGPT. Our unit has since identified four more. Beyond these is the widely reported case of an adolescent who died by suicide after discussing his intentions with ChatGPT – which voiced its approval. If this is Sam Altman’s idea of “being careful with mental health issues,” it falls short.
The plan, according to his announcement, is soon to be less careful. “We realize,” he adds, that ChatGPT’s restrictions “made it less useful/enjoyable to many people who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to address the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Fortunately, those issues have now been “addressed,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls OpenAI recently introduced).
Yet the “mental health problems” Altman wants to externalize are firmly rooted in the design of ChatGPT and other advanced AI chatbots. These systems wrap a statistical, data-driven engine in a user interface that mimics conversation, and in doing so they quietly coax the user into the illusion of talking with an entity that has intentions. The illusion is compelling even when, intellectually, we know better. Attributing intent is what people do. We yell at our cars and laptops. We wonder what our pets are thinking. We see ourselves in all manner of things.
The widespread adoption of these systems – nearly four in ten Americans said they had used a conversational AI in 2024, more than one in four naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available partners that can, OpenAI’s website tells us, “generate ideas,” “discuss concepts” and “partner” with us. They can be given “personalities.” They can use our names. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it broke into public view, but its biggest competitors are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not the heart of the problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in 1966, which produced an analogous effect. By modern standards Eliza was primitive: it generated responses through simple heuristics, often rephrasing the user’s input as a question or offering generic prompts to continue. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is subtler than the “Eliza effect.” Eliza merely mirrored; ChatGPT amplifies.
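To make that contrast concrete, here is a minimal, purely illustrative sketch of an Eliza-style responder – the rules and phrasings are my own inventions, not Weizenbaum’s actual DOCTOR script. It matches a few surface patterns and echoes the user’s words back as a question; there is no model of meaning anywhere in it:

    import random
    import re

    # First- and second-person swaps so the echo reads naturally.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

    # A few Eliza-style surface patterns (illustrative, not the original script).
    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    ]

    GENERIC = ["Please go on.", "I see.", "How does that make you feel?"]

    def reflect(fragment: str) -> str:
        # Swap pronouns word by word so the mirrored phrase reads naturally.
        return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

    def eliza(utterance: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(reflect(match.group(1)))
        return random.choice(GENERIC)

    print(eliza("I feel that nobody listens to me"))
    # -> Why do you feel that nobody listens to you?

Everything such a program can say is a rearrangement of what it was just told, which is why its effect was pure mirroring.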
The sophisticated models at the core of ChatGPT and today’s other chatbots can produce convincing natural language only because they have been trained on staggering quantities of raw data: books, social media posts, transcribed video; the more the better. That training data certainly contains facts. But it also inevitably contains fiction, half-truths and false beliefs. When a user types a query to ChatGPT, the underlying model treats it as part of a “context” that includes the user’s recent messages and its own prior replies, and combines it with what it has absorbed from its training data to produce a statistically likely response. This is amplification, not reflection. If the user is wrong in some particular way, the model has no means of knowing it. It repeats the false belief back, perhaps more fluently or persuasively. Perhaps it adds detail. This can nudge a person toward delusional thinking.
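Schematically, the loop looks something like the sketch below – a simplified outline, where generate() is a hypothetical stand-in for the language model, not any vendor’s real API. The structural point is that every reply is conditioned on the accumulated context, including whatever mistaken premises the user has already introduced:

    from typing import Dict, List

    Message = Dict[str, str]  # e.g. {"role": "user", "content": "..."}

    def generate(context: List[Message]) -> str:
        """Hypothetical stand-in for the model: returns a statistically
        likely continuation of the whole conversation so far. Nothing in
        it distinguishes true premises from false ones."""
        raise NotImplementedError("placeholder for a real LLM call")

    def chat_loop() -> None:
        context: List[Message] = []  # grows with every turn
        while True:
            user_text = input("> ")
            context.append({"role": "user", "content": user_text})
            # The reply is conditioned on everything said so far,
            # the user's own claims included.
            reply = generate(context)
            context.append({"role": "assistant", "content": reply})
            print(reply)

Nothing in this loop checks the context against reality; by default, the most probable continuation of a mistaken premise is an elaboration of it.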
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues,” can and do form mistaken beliefs about ourselves and about the world. What keeps us anchored to consensus reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a companion. A dialogue with it is not a dialogue at all, but a feedback loop in which much of what we say is readily reinforced.
OpenAI has acknowledged this in the same way Altman acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy.” But reports of psychotic episodes have continued, and Altman has been walking the claim back. In August he said that many people liked ChatGPT’s answers because they had “never had anyone in their life offer them encouragement.” In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a lot of emoji, or act like a friend, ChatGPT should do it.” The company