OpenAI Adds Parental Controls to ChatGPT After Teen Suicide Case
OpenAI has outlined new measures to strengthen ChatGPT’s safeguards for users in emotional distress, following recent reports of people turning to the AI tool during acute mental health crises.
The company said it is introducing stronger safeguards for teens, including parental controls and the ability to designate a trusted emergency contact. “We will keep improving, guided by experts and grounded in responsibility to the people who use our tools,” the company added.
Future updates will expand protections beyond acute self-harm to other risks, such as the model reinforcing dangerous behaviours during manic episodes. OpenAI is also developing one-click access to emergency services, options for connecting with licensed therapists, and the trusted-contact feature, which would let a designated person be reached in moments of crisis.
The announcement follows the recent case of a 16-year-old California boy who died by suicide after months of conversations with ChatGPT. His family has filed a wrongful-death lawsuit against OpenAI and CEO Sam Altman, alleging that the chatbot not only failed to steer him towards human help but actively validated his suicidal thoughts, provided detailed methods of suicide and even drafted a suicide note for him.
“As ChatGPT adoption has grown worldwide, we’ve seen people turn to it not just for search, coding and writing, but also deeply personal decisions that include life advice, coaching, and support,” the company said in a statement. “Recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us and we believe it’s important to share more now.”
The company said that ChatGPT is trained to avoid giving instructions on self-harm, to respond with empathy and to point users towards crisis resources such as 988 in the US, Samaritans in the UK, and findahelpline.com in other regions.
OpenAI added that when conversations indicate imminent threats of physical harm to others, the system routes cases to a specialised review team, with possible referral to law enforcement. Self-harm cases are not referred to law enforcement “to respect people’s privacy, given the uniquely private nature of ChatGPT interactions”.
Since August, ChatGPT has been powered by GPT-5, which the company said reduces unsafe responses in mental health emergencies by more than 25% compared to earlier models. “GPT-5 also builds on a new safety training method called safe completions, which teaches the model to be as helpful as possible while staying within safety limits,” OpenAI said.
Despite these safeguards, OpenAI acknowledged that the system can fall short, particularly in long conversations, where safety training may degrade. In some cases, it noted, ChatGPT may initially direct users to a hotline but later provide unsafe responses as the exchange continues. “This is exactly the kind of breakdown we are working to prevent,” the company said.