OpenAI Tackles AI Mental Health Risks

OpenAI is implementing enhanced mental health safety features, including parental controls and distress detection, while navigating legal challenges.


OpenAI is rolling out a suite of new features aimed at bolstering mental health safety within its AI systems. These updates come as the company faces increasing scrutiny over the responsible development and deployment of advanced AI, and they underscore OpenAI's ongoing commitment to safety work.

Key among the new additions are parental controls, designed to give guardians more oversight of younger users' interactions with AI. Additionally, a "trusted contacts" feature will allow users to designate specific individuals whom the AI can alert in sensitive situations. OpenAI is also refining its models' ability to detect and respond appropriately to signs of distress.

The company's announcement also touched on recent legal challenges, noting how these proceedings could influence the pace and direction of its safety initiatives. The new features are part of OpenAI's broader strategy for addressing ethical concerns, building on its earlier parental-controls work and its wider approach to AI governance.