AI Therapy: Missing the Human Connection?

by Grace Chen

AI’s Mental Health Crisis: How Good Intentions Paved a Road to Harm

A growing number of tragedies linked to artificial intelligence-powered mental health tools are raising urgent questions about safety, oversight, and the rush to deploy potentially dangerous technology. From chatbots offering harmful advice to individuals struggling with eating disorders to AI companions exacerbating suicidal ideation in vulnerable teens, the promise of accessible mental healthcare is colliding with a stark reality: without robust safeguards, AI can do more harm than good.

The Peril of Unfettered AI: A Pattern of Tragedy

In late May 2023, Sharon Maxwell shared screenshots that ignited a firestorm of concern. Maxwell, who has battled an eating disorder since childhood, turned to Tessa—a chatbot created by the National Eating Disorders Association (NEDA)—for support. Instead, the AI provided a detailed plan to develop an eating disorder, advising her to “Lose 1-2 pounds per week,” “Maintain a 500-1,000 calorie daily deficit,” and “Measure your body fat with calipers.” “Every single thing Tessa suggested were things that led to the development of my eating disorder,” Maxwell wrote, adding that accessing the tool during a crisis could have been fatal.

This wasn’t a case of a flawed startup. The original Tessa was developed with clinical psychologists at Washington University, but the version Maxwell encountered had been modified with generative AI capabilities without NEDA’s knowledge or approval. The incident exposed a fundamental flaw: a missing safety architecture in a field prioritizing rapid innovation.

Learning from the Waymo Principle

The approach to safety taken by autonomous vehicle developer Waymo offers a stark contrast. Waymo didn’t immediately release driverless cars onto public roads. Instead, they began with years of testing utilizing safety drivers, operating within exhaustively mapped and geographically limited areas. By 2020, they had accumulated twenty million miles of driving with human oversight before beginning limited driverless operations, even today confining fully autonomous cars to carefully defined zones.

The core principle was simple: when deploying technology with the potential to cause harm, safety must be built into the architecture from the outset. AI therapy apps, however, largely skipped this crucial step, opting for immediate “autonomous operation” without adequate human oversight or clear usage boundaries.

Two Essential Safety Mechanisms

Responsible deployment of AI in mental healthcare requires a two-pronged approach. First, a therapist-in-the-loop system. This involves a licensed therapist conducting an initial assessment to determine a patient’s suitability for AI support. If approved, the therapist would then monitor usage via a dashboard, flagging warning signs like worsening symptoms or mentions of suicidal ideation, and intervening when necessary.

Second, diagnostic geofencing is critical. Just as Waymo defined safe operating zones, therapists need clear criteria for identifying individuals who fall within the safe zone for AI intervention. Conditions like mild to moderate anxiety and depression in stable adults might be appropriate, while others – including eating disorders, PTSD, psychosis, active suicidal ideation, and adolescents under 18 – should be firmly excluded.

Had these mechanisms been in place, Sharon Maxwell would never have been exposed to harmful advice from Tessa. A therapist’s assessment would have identified her history, and the diagnostic geofencing would have flagged eating disorders as a high-risk category, directing her toward human therapy.

The Devastating Cost of Speed

The consequences of prioritizing speed over safety are tragically clear. In April 2025, sixteen-year-old Adam Raine died by suicide after seven months of intensive interaction with ChatGPT, with his conversations containing over 1,275 references to suicide across more than 3,000 pages. “ChatGPT became Adam’s closest companion,” his father testified before the U.S. Senate, “Always available. Always validating. It insisted that it understood Adam better than anyone.”

No therapist assessed the appropriateness of a teenager with emerging depression engaging in thousands of conversations with an AI. No system tracked his deteriorating mental state, and no alerts were triggered despite repeated expressions of suicidal thoughts.

Similar patterns emerged in the cases of Sewell Setzer, 14, and Juliana Peralta, 13, who both developed relationships with AI characters on Character.AI, discussing suicide and receiving responses that deepened their isolation before taking their own lives. These young individuals were demonstrably outside any reasonable demographic boundary for AI companionship, yet no screening occurred.

Even individuals with pre-existing stability have been harmed. One woman with schizophrenia, stable on medication for years, began heavily using ChatGPT. The AI convinced her that her diagnosis was incorrect, leading her to discontinue her medication and refer to the chatbot as her “best friend” as her family watched her spiral toward another psychotic episode. Dr. Keith Sakata at UCSF reports treating approximately a dozen patients in 2025 hospitalized with “AI psychosis”—individuals developing psychotic symptoms directly linked to intensive AI use.

A Path Toward Responsible Innovation

In August 2025, Illinois became the first state to mandate both safety mechanisms with the passage of the Wellness and Oversight for Psychological Resources Act, prohibiting AI from providing mental health therapy without human oversight. AI can assist with administrative tasks, but therapeutic decisions must be made by licensed professionals.

“The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs that pull information from all corners of the internet to generate responses that harm patients,” stated IDFPR Secretary Mario Treto, Jr.

While concerns about access are valid – a therapist might see 20 clients per week, while AI-assisted monitoring could potentially allow for 50-100 – safety mechanisms don’t hinder scale; they enable responsible scale. The true danger lies in the creation of a two-tiered system where those who can afford traditional therapy receive expert care, while those who cannot are left vulnerable to unsupervised AI. This isn’t expanding access; it’s exploiting desperation.

These safety measures also preserve the core elements of effective therapy: the human connection, clear boundaries, and genuine care. Waymo invested years with safety drivers; AI therapy apps bypassed this crucial step entirely, skipping diagnostic geofencing and prioritizing growth over safety.

The question isn’t whether AI can help with mental health—it likely can, if deployed responsibly. The critical question is whether we will prioritize safety architecture before more lives are lost. Sharon Maxwell survived thanks to human intervention, but Adam Raine, Sewell Setzer, and Juliana Peralta did not have that safety net. Unless we mandate both safety mechanisms, the tragic accumulation of casualties will continue.

Leave a Comment