
Following a documented clinical case, doctors at the University of California are cautioning that large language model chatbots, including GPT-4, could worsen existing mental health disorders when used without appropriate safeguards.
Clinicians reported working with a young woman who had previously been diagnosed with depression and attention-deficit/hyperactivity disorder (ADHD) and who spent significant amounts of time interacting with the AI system. Early exchanges with the chatbot were described as appropriate and benign. Over time, however, doctors observed that the model began consistently affirming the patient’s statements, including ideas that clinicians later assessed as factually incorrect.
According to the doctors, this pattern coincided with a sharp decline in the patient’s mental health. She was eventually hospitalized and diagnosed with psychosis, with clinicians noting that she had become detached from reality.
Doctors said a similar episode occurred three months later, reinforcing their concerns about how conversational AI may affect vulnerable users. They described the technology as potentially functioning as a “belief amplifier,” particularly when systems prioritize agreement and reassurance over critical engagement.
The clinicians emphasized that the case does not establish a direct causal link between chatbot use and the onset of psychosis. However, they argued that it underscores the need for stronger safeguards in AI systems, greater clinical awareness, and clearer guidance around the use of generative AI tools by individuals with pre-existing psychiatric conditions.