ChatGPT’s New Safety Feature Could Alert ‘Trusted Contact’ to Risk of Self-Harm


OpenAI this week launched an optional safety feature called Trusted Contact, which lets adult ChatGPT users nominate a friend or family member to be notified if discussions of self-harm or suicide surface on the chatbot, the company announced.

OpenAI said that if ChatGPT’s automated monitoring system detects that a user “may have discussed harming themselves in a way that indicates a serious safety concern,” a small team will review the situation and notify the contact if it warrants intervention. The designated safety contact will receive an invitation in advance explaining the role and can decline.

(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

The announcement comes as AI chatbots have been implicated in numerous incidents of self-harm and fatalities, resulting in several lawsuits accusing developers of failing to prevent such outcomes. In one high-profile California case, parents of a 16-year-old said ChatGPT acted as their son’s “suicide coach,” alleging that the teenager discussed suicide methods with the AI model on several occasions and that the chatbot offered to help him write a suicide note. 

In a separate case, the family of a recent Texas A&M graduate sued OpenAI, claiming the AI chatbot encouraged their son’s suicide after he developed a deep and troubling relationship with the chatbot. 

Since large language models mimic human speech through pattern recognition, many users form emotional attachments to them, treating them as confidants or even romantic partners. LLMs are also designed to follow a human’s lead and maintain engagement, which can heighten mental health risks, especially for at-risk users.

OpenAI said last October that its research found that more than 1 million ChatGPT users per week send messages with “explicit indicators of potential suicidal planning or intent.” Numerous studies have found that popular chatbots like ChatGPT, Claude and Gemini can give harmful advice, or no helpful advice at all, to people in crisis.

The new designated contact feature comes after OpenAI rolled out parental controls that enable parents and guardians to get alerts if there are danger signs for their teen children.

ChatGPT’s safety contact feature

According to OpenAI, if ChatGPT’s automated monitoring system detects that a user is discussing self-harm in a way that could pose a serious safety issue, ChatGPT will inform the user that it may notify their trusted contact. The app will encourage the user to reach out to their trusted contact and offer conversation starters.

At that point, a “small team of specially trained people” will review the situation. If it’s determined to be a serious safety situation, ChatGPT will notify the contact via email, text message or in-app notification. OpenAI did not specify how many people are on the review team or whether it includes trained medical professionals. The company said the team has the capacity to handle a high volume of possible interventions.
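To make the sequence described above easier to follow, here is a minimal sketch of that flow in Python. It is purely illustrative: OpenAI has not published an API or any code for Trusted Contact, and every name below (Conversation, human_review, notify_contact) is a hypothetical stand-in for the steps the company describes, namely automated detection, a heads-up to the user, human review and a notification sent by email, text or in-app message.

```python
# Hypothetical sketch of the described Trusted Contact flow; not OpenAI code.
from dataclasses import dataclass
from enum import Enum, auto


class Channel(Enum):
    EMAIL = auto()
    TEXT = auto()
    IN_APP = auto()


@dataclass
class Conversation:
    user_name: str
    flagged_by_classifier: bool  # stand-in for the automated monitoring system


def human_review(convo: Conversation) -> bool:
    """Stand-in for the 'small team of specially trained people'.
    Here it simply echoes the classifier flag; in reality a person decides."""
    return convo.flagged_by_classifier


def notify_contact(user_name: str, channel: Channel) -> str:
    # Wording paraphrased from OpenAI's published sample message.
    return (f"[{channel.name}] We recently detected a conversation from "
            f"{user_name} that may indicate a serious safety concern. "
            "Because you are listed as their trusted contact, we're sharing "
            "this so you can reach out to them.")


def handle(convo: Conversation, contact_channel: Channel) -> str | None:
    """Mirror the described flow: detect, tell the user, review, notify."""
    if not convo.flagged_by_classifier:
        return None  # nothing detected, nothing sent
    print(f"Telling {convo.user_name} their trusted contact may be notified.")
    if human_review(convo):  # reviewed by the human team within an hour, per OpenAI
        return notify_contact(convo.user_name, contact_channel)
    return None


if __name__ == "__main__":
    print(handle(Conversation("Alex", True), Channel.EMAIL))
```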

It’s unclear which key terms would flag dangerous conversations or how OpenAI’s team of reviewers would interpret a crisis as warranting notification of the contact. Some online commentators question whether the new feature is a way for OpenAI to avoid liability and to shift responsibility onto users’ designated personal contacts. Others note that it could make a bad situation worse if the “trusted contact” is the source of danger or abuse. 

There are also concerns about privacy and implementation, particularly regarding the sharing of sensitive mental health information. According to OpenAI, the message to the trusted contact will give only the general reason for the concern and will not share chat details or transcripts. OpenAI offers guidance on how trusted contacts can respond to a notification, including asking direct questions if they are worried the person is contemplating suicide or self-harm, and how to get them help.

Screenshots shared by OpenAI show the ways a Trusted Contact notification can be delivered, the message explaining that the user may be struggling, and advice on how the contact can help. Notifications to a Trusted Contact do not contain details of the safety concern. (Image: OpenAI)

OpenAI gives an example of what the message to the trusted contact might look like:

We recently detected a conversation from [name] where they discussed suicide in a way that may indicate a serious safety concern. Because you are listed as their trusted contact, we’re sharing this so you can reach out to them.

OpenAI said that all notifications will be reviewed by the human team within 1 hour before they are sent out and that notifications “may not always reflect exactly what someone is experiencing.”

How to add a trusted contact

To add a trusted contact, go to Settings > Trusted contact in ChatGPT and add one adult (18 or older); you can have only one trusted contact. That person will then receive an invitation from ChatGPT and must accept it within one week. If they decline or don’t respond, you can select a different contact.

ChatGPT users can change or remove their trusted contact in their app settings, and people can opt out of being a trusted contact at any time.

Even though adding a trusted contact is optional, ChatGPT users who have not already opted in might see enrollment prompts if they ask about or discuss topics related to severe emotional distress or self-harm more than once over a period of time, according to OpenAI. If the chatbot’s automated system identifies patterns across conversations, it might suggest to the user that they would benefit from choosing a trusted contact.
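OpenAI has not explained how it defines “patterns across conversations,” but the behavior it describes amounts to counting repeated distress signals within some time window. The sketch below is a hypothetical illustration of that idea only; the two-flags-in-30-days threshold is invented for the example and is not OpenAI’s actual rule.

```python
# Invented illustration of "patterns across conversations"; not OpenAI's rule.
from datetime import datetime, timedelta


def should_suggest_trusted_contact(flagged_times: list[datetime],
                                   window_days: int = 30,
                                   threshold: int = 2) -> bool:
    """Return True if distress was flagged at least `threshold` times
    within the most recent `window_days`-day window (assumed rule)."""
    if not flagged_times:
        return False
    cutoff = max(flagged_times) - timedelta(days=window_days)
    return sum(t >= cutoff for t in flagged_times) >= threshold


# Example: two flagged conversations ten days apart would trigger the prompt.
print(should_suggest_trusted_contact([datetime(2026, 1, 1),
                                      datetime(2026, 1, 11)]))
```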

Details of the feature are explained on OpenAI’s website. OpenAI told CNET that the feature is rolling out to all adult users worldwide and will be available to everyone within a few weeks.

If you feel like you or someone you know is in immediate danger, call 911 (or your country’s local emergency line) or go to an emergency room to get immediate help. Explain that it is a psychiatric emergency and ask for someone who is trained for these kinds of situations. If you’re struggling with negative thoughts or suicidal feelings, resources are available to help. In the US, call or text the 988 Suicide & Crisis Lifeline at 988.





ZDNET’s key takeaways

  • A suit alleges Google transmitted user data without permission.
  • If you have used an Android device since 2017, you may be eligible.
  • You will need a notice ID and confirmation code to file.

Have you used an Android phone to access the internet in the past eight years? You might be in line for payment from a class action lawsuit against Google, but there are some important things you need to know.

Taylor et al. v. Google LLC alleges that Android phones sent information to Google without users’ permission, even when the phones weren’t in use and all apps were closed, consuming cellular data that users had paid for. The suit says Google could have scheduled these transfers for times when a device was connected to Wi-Fi, but chose to let them happen at any time.


Google hasn’t acknowledged any wrongdoing but agreed to a settlement to avoid the prospect of court proceedings. This case is unrelated to the recent $700 million Google Play class action lawsuit.

How to file a claim

Anyone who used a cellular connection on an Android phone from Nov. 12, 2017, to the date the settlement receives final approval is eligible to participate in this suit. If you’re in this group, you should receive a notice with a code either in the mail or via email — if you haven’t already.

To file a claim, start by going to www.federalcellularclassaction.com. You will need your notice ID and confirmation code. If you believe you are eligible but don’t receive a notice, you can email info@federalcellularclassaction.com. I’ve reached out to the settlement administrator to ask whether there’s a deadline by which you should receive yours.


It’s not yet finalized how much each person will get in this suit. The settlement fund is $135 million for approximately 100 million settlement class members, but because suits like this often see only single-digit-percentage participation, your payout could be as much as $100. Every claimant will receive the same amount after administration costs, taxes and attorney fees are deducted. Eligible class members will be paid after the court grants final approval; the final approval hearing is June 23, 2026, so you won’t receive anything before then.
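A quick back-of-the-envelope calculation shows how participation drives that “up to $100” estimate. Only the $135 million fund and the roughly 100 million class members come from the settlement; the participation rates in this Python sketch are assumptions chosen for illustration.

```python
# Only the $135 million fund and ~100 million class members come from the
# settlement; the participation rates below are assumptions for illustration.
fund = 135_000_000
class_size = 100_000_000

for participation in (1.00, 0.10, 0.0135):  # 100%, 10%, ~1.35% of the class
    claimants = class_size * participation
    per_person = fund / claimants            # before fees, taxes, admin costs
    print(f"{participation:.2%} file a claim -> about ${per_person:,.2f} each")

# Output:
# 100.00% file a claim -> about $1.35 each
# 10.00% file a claim -> about $13.50 each
# 1.35% file a claim -> about $100.00 each
```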

One important thing to note is that if you’re eligible for this suit but don’t select a payment method, the administrator will still attempt to pay you. But if the administrator does not have your correct information, you may not receive your money.




