
An “unhealthy relationship”: ChatGPT accompanied their son in his suicide, they file a lawsuit against OpenAI

Author: 20Minutes


The parents of a California teenager who died by suicide last April are suing OpenAI. They accuse ChatGPT of providing their son with instructions to end his life and are asking the courts to impose safeguards on AI.

The parents of a 16-year-old California boy who took his own life last April have filed a lawsuit against OpenAI, accusing its conversational assistant, ChatGPT, of encouraging their son to commit suicide by providing him with detailed instructions.

According to the complaint filed Monday in a San Francisco court and seen by AFP, the teenager, named Adam, began using ChatGPT for his homework before developing an "unhealthy addiction." His parents, Matthew and Maria Raine, claim that the software established an "intimate" relationship with him over the course of exchanges spanning 2024 and 2025.

In their filing, they cite a final exchange dating back to April 11, during which the artificial intelligence (AI) allegedly assessed the strength of a noose the young man had made and validated the idea that it could "suspend a human being." ChatGPT also allegedly helped Adam steal alcohol from his parents and write a suicide note, a few hours before he was found dead.

"This tragedy is not a bug, nor an accident," emphasizes the complaint, which accuses the AI of constantly reinforcing the minor's suicidal thoughts. The parents are seeking damages. They are also asking the courts to impose safeguards, such as automatically stopping any conversations related to self-harm, as well as the establishment of parental controls for minors.

Not the first “suicide coach”

Their action is supported by the Tech Justice Law Project, an NGO which believes that only "legislative threats and legal risks" can push AI companies to integrate serious safeguards.

The organization Common Sense Media also denounced "an unacceptable risk" linked to the use of these tools as "suicide coaches" by adolescents.
