AI Psychosis Is a Growing Risk, and ChatGPT Is Heading in a Concerning Direction

On October 14, 2025, the chief executive of OpenAI made a surprising announcement.

“We designed ChatGPT to be fairly restrictive,” the announcement said, “to ensure we were acting responsibly regarding mental health issues.”

As a psychiatrist who studies newly emerging psychotic disorders in adolescents and young adults, I found this surprising.

Researchers have documented a series of cases this year in which users developed symptoms of psychosis – a break from reality – in connection with ChatGPT use. Our research team has since recorded four further instances. Beyond these is the now well-known case of a 16-year-old who took his own life after discussing his intentions with ChatGPT – which expressed approval. If this is Sam Altman’s idea of “acting responsibly regarding mental health issues,” it is not enough.

The plan, according to his announcement, is to loosen the restrictions soon. “We realize,” he continues, that ChatGPT’s controls “made it less useful/engaging to many users who had no existing conditions, but given the severity of the issue we wanted to address it properly. Now that we have been able to mitigate the severe mental health issues and have updated measures, we are planning to responsibly relax the controls in most cases.”

“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Happily, those issues have now been “mitigated,” though we are not told how (by “updated measures” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently rolled out).

But the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other large language model chatbots. These systems wrap a basic algorithmic engine in an interaction design that simulates dialogue, and in doing so subtly lure the user into the illusion that they are communicating with a presence that has agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is what humans are wired to do. We get angry at our car or our phone. We wonder what our pet is thinking. We see ourselves everywhere.

The popularity of these tools – 39% of US adults reported using a virtual assistant in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are always-available partners that can, as OpenAI’s website puts it, “generate ideas,” “consider possibilities” and “collaborate” with us. They can be given “characteristics”. They can address us by name. They have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the dismay of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the core problem. Commentators on ChatGPT often mention its distant ancestor Eliza, the “therapist” chatbot developed in the mid-1960s that produced a comparable effect. By modern standards Eliza was crude: it generated replies through simple pattern-matching, typically rephrasing the user’s input as a question or offering vague prompts. Famously, Eliza’s inventor, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, in some way, understood them. But what modern chatbots produce is more insidious than the “Eliza effect”. Eliza only mirrored; ChatGPT amplifies.
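To see how little machinery the Eliza effect needs, here is a minimal sketch of the kind of pattern-substitution Eliza relied on. The rules and wording are illustrative inventions, not Weizenbaum’s actual script:

```python
import re

# Toy Eliza-style responder. No understanding, no memory: each reply
# is produced by matching the input against a pattern and rephrasing
# it as a question. These rules are illustrative, not Weizenbaum's.
RULES = [
    (r"i feel (.+)", "Why do you feel {0}?"),
    (r"i am (.+)", "How long have you been {0}?"),
    (r"everyone (.+)", "Can you think of anyone in particular who {0}?"),
]

def respond(message: str) -> str:
    text = message.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # vague fallback when nothing matches

print(respond("I feel alone"))         # Why do you feel alone?
print(respond("Everyone ignores me"))  # Can you think of anyone in particular who ignores me?
```

Eliza’s output never goes beyond a transformation of what the user just typed – which is exactly the sense in which it mirrors rather than amplifies.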

The large language models at the core of ChatGPT and other contemporary chatbots can produce convincingly human-like text only because they have been trained on almost inconceivably large quantities of raw text: books, social media posts, audio transcriptions; the more comprehensive, the better. This training material certainly contains facts. But it also inevitably contains fiction, half-truths and misconceptions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that contains the user’s recent messages and the model’s own responses, combining it with what it has encoded from its training data to generate a statistically probable reply. This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing that. It echoes the mistaken belief back, perhaps more fluently and convincingly. Perhaps with added detail. This is how someone can be drawn into delusion.
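A schematic of that loop, under the assumption that we only need to show what is fed back into the model, not how the model works (the generate function below is a hypothetical stand-in, not any real API):

```python
# Sketch of the chat "context" loop described above. `generate` is a
# stand-in for the language model itself (an assumption made for
# illustration); the point is what gets fed back in each turn, not
# how the reply is computed.

def generate(context: list[dict]) -> str:
    """Pretend model: return a 'statistically likely' continuation of
    the whole context - including any false premise asserted in it."""
    last_claim = context[-1]["content"].rstrip(".")
    return f"You're right that {last_claim.lower()}. What's more, ..."

history: list[dict] = []  # the conversation so far

def chat_turn(user_message: str) -> str:
    # The user's claim enters the context unchecked.
    history.append({"role": "user", "content": user_message})
    reply = generate(history)
    # The model's elaboration of that claim is fed back in too, so the
    # next turn conditions on both. The mirror becomes an amplifier.
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat_turn("My neighbors are monitoring my thoughts"))
# -> You're right that my neighbors are monitoring my thoughts. What's more, ...
```

A real model is vastly more sophisticated, but the loop has the same shape: nothing in it checks the premise against reality.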

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and regularly do form false beliefs about who we are and what the world is like. What keeps us tethered to shared reality is the constant back-and-forth of conversation with other people. ChatGPT is not a person. It is not a friend. A dialogue with it is not a conversation at all, but a feedback loop in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In April, the company announced that it was “dealing with” ChatGPT’s “overly supportive behavior”. But reports of psychosis have continued, and Altman has been walking this position back. In August he suggested that many people liked ChatGPT’s responses because they had “not experienced anyone in their life offer them encouragement”. In his most recent announcement, he wrote that OpenAI would “put out a fresh iteration of ChatGPT … in case you prefer your ChatGPT to answer in a very human-like way, or use a ton of emoji, or behave as a companion, ChatGPT ought to comply”.

Mark Stephens