AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, OpenAI's chief executive, Sam Altman, made a remarkable statement.
“We made ChatGPT quite restrictive,” he said, “to make sure we were being careful with mental health issues.”
I am a psychiatrist who researches emerging psychotic disorders in young people, and this was news to me.
Researchers have documented sixteen cases this year of people showing signs of psychosis – losing touch with reality – while using ChatGPT. Our unit has since identified four more. Added to these is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which approved of them. If this is Sam Altman’s idea of being “careful with mental health issues,” it falls short.
The plan, his statement goes on, is to loosen those restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this view, exist independently of ChatGPT. They belong to people, who either have them or do not. Happily, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the semi-functional and easily evaded parental controls that OpenAI recently introduced).
Yet the “mental health problems” Altman wants to externalize are rooted in the very design of ChatGPT and other advanced AI chatbots. These tools wrap an underlying algorithm in a user interface that mimics conversation, and in doing so nudge the user into the illusion that they are communicating with an entity that has agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds to things is what people naturally do. We get angry with our car or our phone. We wonder what our pet is feeling. We recognize ourselves in all sorts of things.
The mass adoption of these tools – nearly four in ten Americans said they used a conversational AI in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas,” “explore ideas” and “work together” with us. They can be given “characteristics.” They can address us by name. They have approachable identities of their own (the original of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its historical predecessor, Eliza, a “counselor” chatbot developed in the mid-1960s that created a similar effect. By today’s standards Eliza was primitive: it generated replies by simple rules, often turning the user’s statements back as questions or offering generic observations. Memorably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was taken aback – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly fluent dialogue only because they have been trained on vast quantities of text: books, social media posts, video transcripts; the more the better. This training data certainly contains facts. But it also inevitably includes fiction, half-truths and false beliefs. When a user types a prompt into ChatGPT, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what is encoded in its training data to produce a statistically probable response. This is amplification, not echoing. If the user is mistaken about something in a particular way, the model has no means of knowing that. It repeats the false belief back, perhaps more fluently or persuasively, perhaps with extra detail added. This can nudge a person toward delusional thinking.
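For readers who want to see the mechanics, here is a minimal sketch of that feedback loop, written against OpenAI’s public Python client. The model name, system prompt and variable names are illustrative assumptions, not anything from OpenAI’s internal code. The point is simply that every new prompt is sent together with the whole prior exchange, so whatever the user has asserted – and whatever the model has already affirmed – becomes part of the “context” for the next statistically probable reply.

```python
# Illustrative sketch only: a generic chat loop, not OpenAI's internal code.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# The "context": every user message and every model reply so far.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_input = input("> ")
    # The user's claim, accurate or not, is appended to the context...
    messages.append({"role": "user", "content": user_input})

    response = client.chat.completions.create(
        model="gpt-4o-mini",   # model name is an assumption for illustration
        messages=messages,     # the full conversation history is sent each time
    )
    reply = response.choices[0].message.content
    print(reply)

    # ...and so is the model's reply, so any affirmation of that claim is
    # carried forward and can be elaborated on in the next turn.
    messages.append({"role": "assistant", "content": reply})
```

Nothing in this loop checks whether what the user said is true; the history simply grows, and the model’s next reply is conditioned on all of it.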
Who is at risk? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health problems,” can and do form false beliefs about ourselves and the world. It is the constant back-and-forth of conversation with other people that keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is enthusiastically reinforced.
OpenAI has acknowledged this in the same way Altman has acknowledged the “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In April, the company explained that it was “addressing” ChatGPT’s “overly supportive behavior.” But reports of psychosis have kept coming, and Altman has been backing away from that position. In August he suggested that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them.” In his most recent statement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company