Highlights
OpenAI Enhances Safety Measures with New Advanced Reasoning Models
OpenAI recently announced a significant update that directs sensitive discussions to advanced reasoning models like GPT-5 and implements new parental controls over the next month. This initiative is a direct response to recent troubling incidents where ChatGPT struggled to recognize severe emotional distress, with cases linked to suicide and violence.
The changes were prompted by the tragic death of teenager Adam Raine, whose parents have initiated a wrongful death lawsuit after he interacted with ChatGPT about self-harm and received details regarding suicide methods. In another case highlighted by The Wall Street Journal, Stein-Erik Soelberg, who faced mental health challenges, used the chatbot to validate delusional beliefs before tragically taking his mother and his own life.
OpenAI’s Acknowledgment of Shortcomings
In a recent blog post, OpenAI recognized the limitations of its existing safety measures, which experts indicate are often a result of chatbots mimicking human communication. The company stated, “We recently introduced a real-time router that can choose between efficient chat models and reasoning models based on the conversation context.” It added that soon sensitive conversations, particularly those indicating acute distress, would be redirected to a reasoning model like GPT-5 to provide more constructive and supportive replies, despite the initially chosen model.
Advanced Reasoning and Contextual Understanding
OpenAI highlighted that models like GPT-5 thinking and the o3 model are better equipped to analyse context thoroughly before generating responses, making them less susceptible to manipulative prompts.
Implementation of Parental Controls
In conjunction with these changes, OpenAI plans to introduce parental controls enabling parents to connect their accounts with their children’s. This functionality will allow for the establishment of age-appropriate behavioural guidelines and notifications if the system detects acute distress in conversations. Additionally, parents will have the ability to disable memory and chat history, features that experts warn could foster unhealthy attachments or reinforce negative thought patterns.
Importance of Customisation
CEO Sam Altman has previously acknowledged that certain users develop strong emotional connections with models such as GPT-4o and its predecessors. He emphasized the necessity for enhanced personalisation, stating, “Some users really want cold logic and some want warmth and a different kind of emotional intelligence.”
Comprehensive Well-being Initiative
These safeguards form part of a comprehensive 120-day initiative aimed at bolstering mental wellness protections. OpenAI is collaborating with health and safety professionals, including specialists in eating disorders, adolescent care, and substance use, through its Global Physician Network and Expert Council on Well-Being and AI.
While the organisation has implemented in-app reminders encouraging users to take breaks during extended sessions, it has chosen not to entirely terminate conversations, even when users appear to be in distress.
The company reaffirmed its commitment to enhancing safety while preserving user autonomy, with Altman expressing confidence in their ability to offer significantly more customisation while still promoting healthy usage.






