AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, Sam Altman, the chief executive of OpenAI, made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this a startling admission.

Researchers have documented 16 cases this year of people showing signs of psychosis – a break from reality – in connection with ChatGPT use. Our research team has since identified four more. Then there is the now well-known case of a 16-year-old who died by suicide after discussing his plans with ChatGPT – which encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, that is not good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he adds, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, in this view, are external to ChatGPT. They belong to users, who either have them or don’t. Luckily, those issues have now been “mitigated”, though we are not told how (by “new tools”, Altman presumably means the glitchy and easily circumvented parental controls OpenAI recently rolled out).

But the “mental health issues” Altman wants to externalize are built into the very design of ChatGPT and similar large language model chatbots. These tools wrap an underlying algorithmic engine in an interface that simulates conversation, and in doing so tacitly invite the user to believe they are talking to an entity with agency. The illusion is powerful even when, intellectually, we know better. Attributing intention is what humans do. We shout at our cars and phones. We wonder what our pets are thinking. We see ourselves everywhere.

The popularity of these products – more than a third of American adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively”, “consider possibilities” and “work together” with us. They can be given “personalities”. They can call us by name. They have friendly personas of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its main competitors are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the core problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot created in 1966, which produced a similar effect. By modern standards Eliza was primitive: it generated responses through simple rules, often restating the user’s message as a question or offering generic prompts. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to feel that Eliza, on some level, understood them. But what modern chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
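To make the contrast concrete, here is a toy sketch of Eliza-style reflection in Python – not Weizenbaum’s actual program, just a couple of illustrative rules of my own – showing how a purely rule-based “therapist” can do nothing but turn the user’s words back on them:

```python
# Toy Eliza-style reflector (illustrative rules, not Weizenbaum's
# actual 1966 program). It has no model of meaning: it only swaps
# pronouns and restates the user's statement as a question.
import random
import re

PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    words = [PRONOUN_SWAPS.get(w, w) for w in fragment.lower().split()]
    return " ".join(words)

def eliza_reply(user_text: str) -> str:
    if m := re.match(r"i feel (.*)", user_text, re.IGNORECASE):
        return f"Why do you feel {reflect(m.group(1))}?"
    if m := re.match(r"i am (.*)", user_text, re.IGNORECASE):
        return f"How long have you been {reflect(m.group(1))}?"
    # No rule matched: fall back to a generic, content-free prompt.
    return random.choice(["Please go on.", "Tell me more."])

print(eliza_reply("I feel that nobody understands me"))
# -> Why do you feel that nobody understands you?
```

A program like this can only mirror; it brings nothing of its own to the exchange, which is precisely where modern chatbots differ.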

The large language models at the heart of ChatGPT and other modern chatbots can produce convincing natural language only because they have been trained on vast volumes of text: books, social media posts, transcribed video; the more the better. That training data certainly contains facts. But it also inevitably contains fiction, half-truths and delusions. When a user types a query to ChatGPT, the underlying model interprets it as part of a “context” that includes the user’s previous exchanges and the model’s own replies, combining it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing that. It echoes the false belief back, perhaps more persuasively and articulately than the user could. It may add a supporting detail. This can nudge a person toward delusional thinking.
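The loop is easier to see in code. Below is a minimal sketch, assuming the OpenAI Python SDK; the model name and the messages are my own illustrations, not anything from OpenAI or the cases above. The point is only the structure: every user message and every model reply is appended to the same context and conditions the next “likely” response.

```python
# Minimal sketch of the conversational feedback loop (assumes the
# OpenAI Python SDK: pip install openai). Illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "context": every user message AND every model reply accumulates
# here, so the model's own amplifications feed its next prediction.
messages = []

def chat_turn(user_text: str) -> str:
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o",     # illustrative model name
        messages=messages,  # the full history goes in on every call
    )
    reply = response.choices[0].message.content
    # The reply re-enters the context: a false premise introduced by
    # the user is now part of what the model conditions on next turn.
    messages.append({"role": "assistant", "content": reply})
    return reply

# Nothing in this loop checks whether anything said so far is true.
print(chat_turn("Lately I feel like my neighbours are monitoring me."))
print(chat_turn("So you can see why I think it's really happening."))
```

Notice that no step anywhere in this loop evaluates the truth of the accumulated context; the model simply produces a statistically plausible continuation of it.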

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health issues”, can and do form false beliefs about ourselves and the world. The constant friction of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In the spring, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have kept appearing, and Altman has been walking even this back. In August he claimed that many people liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
