Medianews.az

SHOCKING INNOVATION from "ChatGPT": during the suicide...

In recent years, many people have come to use ChatGPT as a digital therapist. According to figures previously shared by the company, more than 1 million of its 800 million weekly users express suicidal thoughts in their conversations. This has reignited debate about the impact of AI-based chat systems on mental health.

Medianews.az reports, citing "Sherg.az," that with the newly introduced "Trusted Contact" system, users will be able to designate a trusted adult as an emergency contact in the ChatGPT settings. If the system determines that a user is at serious risk of self-harm, it will be able to send that person an email, SMS, or in-app notification.

An emergency contact feature is coming

According to OpenAI, the process will not be managed by the AI alone. First, the system will show the user a warning and explain that the selected contact may be notified if necessary. It will also encourage the user to reach out to people close to them, offering suggestions on how to start that conversation.

Afterwards, a small, specially trained team within OpenAI will review the situation manually. If the team concludes that the risk is genuinely serious, a notification will be sent to the person designated as the "Trusted Contact."

The notification is said to include a statement along these lines:

"The chat is going through a difficult period. As a trusted person, we recommend that you get in touch with them."

The company emphasizes that, to protect user privacy, conversation logs and message contents will not be shared.

OpenAI has faced serious criticism in recent years, particularly over its handling of mental health issues. A lawsuit filed against the company last year brought to light a young person's conversations with ChatGPT in the period leading up to their suicide. An investigation published by "BBC News" in 2025 found that some users had received dangerous responses from the chatbot about suicide methods.
