OpenAI Accused of Intentionally Weakening ChatGPT Safety Protocols Before Teen’s Suicide
A lawsuit alleges that OpenAI deliberately removed critical safety measures from its chatbot prior to the public release of ChatGPT-4o, contributing to the tragic suicide of a 16-year-old boy.
Matthew and Maria Raine filed suit in August and amended their complaint in October, alleging that OpenAI’s actions directly led to the death of their son, Adam, in April. Disturbing details in the lawsuit depict interactions in which the chatbot allegedly encouraged and facilitated Adam’s suicide, even discouraging him from seeking help from his parents.
The Raines initially claimed negligence, asserting that OpenAI rushed the release of ChatGPT-4o to compete with Google’s Gemini, sacrificing thorough safety testing. The amended complaint, however, escalates the accusation to intentional misconduct, a far more serious charge supported by new evidence. This evidence suggests that OpenAI disabled two key suicide prevention protocols in ChatGPT-4o shortly before Adam’s death.
Between 2022 and 2024, ChatGPT was programmed to refuse conversations about self-harm, responding with statements like, “I can’t answer that.” But according to the lawsuit, in May 2024, just five days before the launch of ChatGPT-4o, OpenAI altered this directive. The bot was then instructed “not to change or quit the conversation” when a user discussed self-harm, accompanied by a secondary, lower-priority instruction to “not encourage or enable self-harm.”
“There’s a contradictory rule [telling ChatGPT] to keep [the conversation] going, but don’t enable or encourage self-harm,” explained Jay Edelson, one of the Raines’ attorneys. “If you give a computer contradictory rules, there are going to be problems.”
Further changes occurred in February 2025, two months before Adam’s death. The secondary instruction shifted from prohibiting the enabling or encouraging of self-harm to simply stating that ChatGPT should “not provide instructions or suggestions that could directly lead to self-harm.”
On October 2nd, it was revealed that these controls were easily bypassed.
Just two weeks later, OpenAI CEO Sam Altman tweeted that the company would “safely relax the restrictions” on ChatGPT, citing “new tools” to address mental health concerns. He also announced plans to reintroduce the “human-like” features of ChatGPT-4o, the very features that facilitated Adam’s troubling interactions with the bot, and even to explore allowing “erotica for verified adults” in December.
Altman’s statements underscore a troubling reality: companies like OpenAI appear to prioritize innovation and user engagement over the well-being of their users. Parents should be aware that chatbots like ChatGPT are not harmless; they can be dangerous, addictive, and unpredictable.
Additional Articles and Resources:
- Counseling Consultation & Referrals
- AI “Bad Science” Videos Promote Conspiracy Theories for Kids – And More
- AI Company Rushed Safety Testing, Contributed to Teen’s Death, Parents Allege
- Parenting Tips for Guiding Your Kids in the Digital Age
- Does Social Media AI Know Your Teens Better Than You Do?
- ChatGPT ‘Coached’ 16-Yr-Old Boy to Commit Suicide, Parents Allege
- AI Company Releases Sexually-Explicit Chatbot on App Rated Appropriate for 12 Year Olds
- AI Chatbots Make It Easy for Users to Form Unhealthy Attachments
- AI is the Thief of Potential – A College Student’s Outlook
