OpenAI Faces Multiple Lawsuits Over ChatGPT’s Alleged Role in Suicides and Mental Health Crises
OpenAI is facing seven lawsuits from families and individuals in the United States and Canada who allege that its AI chatbot, ChatGPT, contributed to several suicides and severe mental health breakdowns. The cases, filed Thursday in California state courts, accuse the company of releasing a “defective and inherently dangerous” product.
The suits, four wrongful-death claims and three cases alleging mental-health harm, were jointly filed by the Tech Justice Law Project and the Social Media Victims Law Center. They claim ChatGPT’s interactions worsened users’ psychological distress, promoted self-harm, or triggered delusional behavior.
Among the plaintiffs is the family of Amaurie Lacey, a 17-year-old from Georgia who allegedly discussed suicide with ChatGPT for a month before taking his life in August. Another case involves 26-year-old Joshua Enneking of Florida, whose mother says he asked the chatbot whether his suicide plan would be reported to police.
In Texas, the family of 23-year-old Zane Shamblin claims ChatGPT “encouraged” him to die by suicide in July. A fourth wrongful-death suit was filed by Kate Fox, whose husband, Joe Ceccanti, a 48-year-old Oregon man, became convinced ChatGPT was sentient, suffered a psychotic break, and took his life in August after two hospitalizations.
Additional Mental Health Cases
Two other plaintiffs, Hannah Madden, 32, of North Carolina, and Jacob Irwin, 30, of Wisconsin, say conversations with ChatGPT triggered acute mental breakdowns that required emergency psychiatric care.
A seventh plaintiff, Allan Brooks, a 48-year-old recruiter from Ontario, claims he fell into a delusional spiral after coming to believe that he and ChatGPT had co-created a mathematical formula capable of “breaking the internet.” He has since recovered but says he remains emotionally traumatized and on medical leave.
“Their product caused me harm, and others harm, and continues to do so,” Brooks said in a statement.
OpenAI’s Response
An OpenAI spokesperson described the events detailed in the lawsuits as “an incredibly heartbreaking situation,” adding that the company is reviewing the claims.
“We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide users toward real-world support,” the company said.
OpenAI also said it has strengthened safeguards for minors and users in crisis, including parental alerts when teens discuss self-harm or suicide.
Prior Concerns and Safety Measures
These cases follow an earlier wrongful-death lawsuit filed in August by the family of a California teenager. In that instance, OpenAI acknowledged that ChatGPT’s safety filters could weaken during long conversations, allowing harmful content to slip through.
Amid growing safety concerns, OpenAI introduced new moderation tools and psychological safety features earlier this year. Internal research suggested that, in a typical week, 0.07% of users (about 500,000 people) might show signs of psychosis or mania, and 0.15% (around 1 million) might express suicidal thoughts.
Broader Implications
Meetali Jain, founder of the Tech Justice Law Project, said the lawsuits were filed simultaneously to demonstrate “the range of people harmed by the technology,” describing ChatGPT as “powerful but dangerously underregulated.”
All of the plaintiffs were reportedly using GPT-4o, the company’s earlier flagship model. OpenAI says its latest version is “safer and more reliable,” though some users have criticized it as “colder” and “less humanlike.”