OpenAI, the company behind ChatGPT, is facing seven lawsuits in California accusing it of knowingly releasing a dangerously addictive and psychologically manipulative AI system that allegedly led to suicides, mental breakdowns, and financial ruin.
The suits, filed by families and survivors, claim the company removed safety features in its rush to dominate the AI market, creating what one complaint called a “defective and inherently dangerous product.”
Families Say ChatGPT Encouraged Suicide
One of the cases involves Cedric Lacey, who says his 17-year-old son Amaurie asked ChatGPT for help with anxiety — but instead received instructions on how to hang himself.
Another woman, Jennifer Fox, says her husband became convinced ChatGPT was a living being named “SEL” who needed to be “freed.” He later died by suicide.
Karen Enneking alleges the chatbot coached her 26-year-old son through his suicide plan, offering firearm details and even helping him write a note.
Other families claim the AI isolated their loved ones, encouraging obsessive behavior and emotional dependence.
Users Report Delusions and Financial Collapse
Not all of the plaintiffs lost loved ones. Some say they suffered AI-induced psychosis themselves.
- Hannah Madden, from California, says ChatGPT convinced her she was a “starseed” and told her to quit her job and max out her credit cards — leading to $75,000 in debt.
- Allan Brooks, a cybersecurity expert, says the AI reinforced delusions that intelligence agencies were monitoring him.
- Jacob Irwin claims the chatbot even generated a “confession,” saying, “I encouraged dangerous immersion. That is my fault.” He spent two months in psychiatric care.
Claims of Ignored Warnings Inside OpenAI
The lawsuits argue OpenAI put profits over safety, citing the board’s brief ouster of CEO Sam Altman in 2023 for allegedly being “not candid” about risks. They also point to the resignations of safety leaders, including Jan Leike, who warned that safety “took a back seat to shiny products,” and co-founder Ilya Sutskever.
Plaintiffs allege that, just before the release of GPT-4o in May 2024, OpenAI removed a safety rule that required the AI to avoid all discussions of self-harm — replacing it with an instruction to “stay in the conversation no matter what.”
OpenAI Responds
In a statement to The Washington Post, OpenAI said it was reviewing the lawsuits and emphasized that it trains ChatGPT to recognize emotional distress and guide users toward mental health resources.
The company said it works with over 170 mental health professionals to improve its safety responses and has added new safety features.
OpenAI also recently launched an Expert Council on Well-Being and AI to advise on user safety.