ChatGPT Adds Trusted Contact Safety Net

ChatGPT introduces an optional "Trusted Contact" feature to notify a chosen individual if the AI detects serious self-harm discussions, adding a human support layer.


OpenAI is rolling out an optional safety feature for ChatGPT called Trusted Contact. This new functionality aims to provide an additional layer of support for users discussing self-harm in ways that indicate serious safety concerns.

The feature allows users aged 18 and older to nominate a trusted person, such as a friend or family member, to be notified if ChatGPT's systems detect concerning conversations. This initiative, detailed on OpenAI News, builds upon existing parental controls for teen accounts.

How Trusted Contact Works

Users can select one adult to act as their Trusted Contact through ChatGPT settings. The nominated contact must accept the invitation within a week for the feature to activate.


If ChatGPT's automated systems detect a conversation indicating a serious safety risk of self-harm, the user is first informed that their Trusted Contact may be notified. The system also offers conversation starters to encourage the user to reach out on their own.

A trained review team then assesses the situation. If the reviewers confirm a serious safety concern, a limited notification is sent to the Trusted Contact via email, text message, or, if the contact is a ChatGPT user, an in-app alert.

This notification explains the general reason for concern without sharing chat transcripts, protecting user privacy. It also includes guidance for the Trusted Contact on how to approach sensitive conversations.

Users retain control and can remove or change their Trusted Contact at any time. Trusted Contacts can also opt out themselves.

Safety Guided by Experts

OpenAI states Trusted Contact was developed with input from clinicians, researchers, and mental health organizations, including the American Psychological Association.

This feature is designed to encourage social connection, identified as a critical protective factor against suicide risk, without replacing professional mental health care or crisis services.

ChatGPT will continue to direct users to crisis hotlines and emergency services when appropriate.

The company is also improving ChatGPT's general responses to distress, aiming to de-escalate conversations and guide users toward real-world support.
