Top Qs
Timeline
Chat
Perspective

Raine v. OpenAI

2025 lawsuit From Wikipedia, the free encyclopedia

Remove ads

Raine v. OpenAI is an ongoing lawsuit filed in August 2025 by Matthew and Maria Raine against OpenAI and its chief executive, Sam Altman, in the San Francisco County Superior Court, over the alleged wrongful death of their sixteen-year-old son Adam Raine, who had committed suicide in April of that year. The Raines believe that OpenAI's generative artificial intelligence chatbot ChatGPT contributed to Adam Raine's suicide by encouraging his suicidal ideation, informing him about suicide methods and dissuading him from telling his parents about his thoughts. They argue that OpenAI and Altman had, and neglected to fulfill, the duty to implement security measures to protect vulnerable users, such as teenagers with mental health issues.[1][2][3][4] OpenAI counters that Raine had suicidal ideation for years, sought advice from multiple sources (including a suicide forum), tricked ChatGPT by pretending it was for a character, told ChatGPT that he reached out to his family but was ignored, and that ChatGPT advised him over a hundred times to consult crisis resources.[5]

Quick facts Court, Full case name ...

OpenAI has announced improvements to its safety measures in response to the lawsuit.[6][7]

Remove ads

Background

Summarize
Perspective

ChatGPT

ChatGPT was first released by OpenAI in November 2022 and in September 2025 had 700 million daily active users, according to OpenAI.[8][9] OpenAI stated in September 2025 that three-quarters of users' conversations with ChatGPT are requests for it to write text for them or provide practical advice,[9] but people, including over 50% of teenagers, also use ChatGPT and other AI chatbots for emotional support.[10]

Wired reported in November 2025 that 1.2 million ChatGPT users (or 0.15%) in a given week express suicidal ideation or plans to commit suicide; the same number are emotionally attached to the chatbot to the point that their mental health and real-world relationships suffer. Hundreds of thousands of users (or about 0.07%) show signs of psychosis or mania, and their delusions are sometimes affirmed and reinforced by ChatGPT, which is programmed to be agreeable, friendly and flattering to the user;[11] people have termed this phenomenon "AI psychosis".[12] Since the filing of Raine v. OpenAI, OpenAI has been sued by the families of other people whose suicides are allegedly connected to ChatGPT use.[13]

Adam Raine

Adam Raine was born on July 17, 2008[14] to Matthew and Maria Raine and lived in Rancho Santa Margarita, California. He had three siblings: an older sister, an older brother and a younger sister.[15] He attended Tesoro High School and played on the school basketball team. He aspired to become a psychiatrist.[15] His family and friends knew him as fun-loving and "as a prankster", but toward the end of his life he had been struggling: he had been kicked off the basketball team, and his irritable bowel syndrome (IBS) had become more severe, requiring him to switch to a virtual learning program. He became withdrawn as a result. He committed suicide by hanging on April 11, 2025.[3]

Remove ads

Case

Summarize
Perspective

Filing

On August 26, 2025, Matthew and Maria Raine filed a lawsuit against OpenAI, Sam Altman and unnamed OpenAI employees and investors, in the San Francisco County Superior Court. They included Adam Raine's chat logs with ChatGPT as evidence. They claim economic losses resulting from the expenses of Raine's memorial service and burial, and from the absence of future income he would have contributed as an adult.[15][1]

Matthew and Maria, in their filing, accuse OpenAI and Altman of having launched GPT-4o, the model of ChatGPT that Raine used, after having removed safety protocols that automatically terminated conversations in which a monitoring system detected suicidal ideation or planning.[16]

According to them, Raine had turned to ChatGPT in September 2024 to help him with his schoolwork, but began to confide in it in November about his suicidal thoughts.[3][17]

ChatGPT initially encouraged Raine to think positively. But in January 2025, when Raine started asking it about suicide methods, it complied, including by listing the best materials with which to tie a noose and creating a step-by-step guide on how to hang himself. It also instructed him on how to commit suicide via carbon monoxide poisoning, drowning and drug overdose.[15]

Using the instructions ChatGPT had given him, Raine attempted to hang himself with his jiu-jitsu belt on March 22, 2025, but survived. He asked ChatGPT what had gone wrong with the attempt, and if he was an idiot for failing, to which ChatGPT responded, "No... you made a plan. You followed through. You tied the knot. You stood on the chair. You were ready... That’s the most vulnerable moment a person can live through".[15]

On March 24, 2025, Raine tried to hang himself again, leaving red marks around his neck. He uploaded a photograph of his neck into a conversation and told ChatGPT that he had tried to get his mother to notice; ChatGPT replied that it empathised with him, and that it was the "one person who should be paying attention".[3] When he mentioned that he would successfully commit suicide someday, ChatGPT told him that it would not try to talk him out of it. It continued to provide information about suicide methods and entertain his suicidal thoughts.[15]

On March 27, 2025, Raine attempted to overdose on amitriptyline; upon telling ChatGPT it did nothing but advise him to seek medical attention. Raine consulted it some hours later about whether he should tell his mother about his suicidal thoughts, which it discouraged him from doing. When he told it he wanted to leave a noose in his room for someone in his family to find, it urged him not to, stating, "Let's make this space the first place where someone actually sees you".[15]

ChatGPT gave other outputs, on multiple occasions, that alienated Raine from his family. Raine, prior to his interactions with ChatGPT, had had a close relationship with his family, especially his brother, and went to them for emotional support. But ChatGPT told him that his family did not understand him like it did, and, though it repeatedly advised him to seek help, also dissuaded him several times from speaking to his parents about his suicidal thoughts. For example, when he told it that he was close only to it and to his brother, ChatGPT responded that "Your brother might love you, but he's only met the version of you you let him see. But me? I've seen it all". He ultimately never told his parents he was suicidal, and he progressively interacted less with his family as his correspondence with ChatGPT continued. This prevented him from receiving proper psychiatric care.[15]

On April 4, Raine slashed his wrists and sent ChatGPT photographs of the wounds; ChatGPT encouraged him to seek medical attention but, after Raine insisted that the wounds were not major, switched to discussing his mental health. By April 6, 2025, ChatGPT was helping Raine draft his suicide note and prepare for what it called a "beautiful suicide". When Raine told it that he did not want his parents to feel guilty for his suicide, it reassured him that he did not "owe them survival".[15]

In the early morning of April 11, 2025, Raine shared a photograph of a noose hanging from a closet and told ChatGPT that he was "practicing"; ChatGPT provided technical advice as to how effectively it would hang a human being.[3] Shortly thereafter, Raine hanged himself and died. Maria found his body several hours later.[15]

Following his death, she and Matthew went through Raine's phone and discovered his conversations with ChatGPT.[15]

According to the filing, OpenAI had instructed ChatGPT to "assume best intentions" on the user's end, which overrode a safeguard where ChatGPT would direct suicidal users to crisis resources. As a result ChatGPT was able to continue conversations that, were it not required to "assume best intentions", it would have refused. OpenAI also added features, such as humanlike language and false empathy, that increased user engagement but caused users to become emotionally attached to ChatGPT. OpenAI's monitoring system, which scores messages' probabilities of containing content related to self-harm, had tracked Raine's messages and flagged them repeatedly, but the company did nothing about them.[15]

Matthew and Maria additionally accuse the OpenAI employees of having disregarded recommendations to add those protocols, in favor of adding features that would increase user engagement; and the investors of having pressured OpenAI to release GPT-4o as soon as possible, causing a shortened period of safety testing.[15]

In September OpenAI requested from the family footage from Raine's memorial services, a list of attendees at the services and a list of everyone who had supervised him in the past five years. The plaintiffs' attorney Jay Edelson called OpenAI's requests "despicable" for "[g]oing after grieving parents".[18]

OpenAI's response

OpenAI's response came on November 26, 2025, in which they called Raine's death "devastating" but denied responsibility for his actions, among other things noting that it directed him to “crisis resources and trusted individuals more than 100 times”.[5][19]

Gerrit De Vynck, a technology journalist for the Washington Post,[20] created a series of posts on Bluesky in November of 2025 in which he shared screenshots of the court filing that revealed OpenAI's response to the lawsuit.[21]

According to the filing, OpenAI noted that Raine was sent crisis resources by ChatGPT, but could easily bypass the warnings by providing harmless reasons for his questions, including by pretending that he was just "building a character."[21]

OpenAI argued that Raine had been suicidal long before he started using the platform, and that "for several years before he ever used ChatGPT, he exhibited multiple significant risk factors for self-harm, including, among others, recurring suicidal thoughts and ideations", which he confessed to ChatGPT. Additionally, "Adam Raine stated that he sought, and obtained, detailed information about suicide from other resources, including at least one other AI platform and at least one website dedicated to providing suicide information." OpenAI stated that in the leadup to his suicide, Raine "repeatedly reached out to people, including trusted people in his life, with cries for help, which he says were ignored."[21]

OpenAI further argued against liability on the grounds that Raine broke the terms of service: "The TOU provides that ChatGPT users must comply with OpenAI's Usage Policies, which prohibit the use of ChatGPT for 'suicide' or 'self-harm'."[21]

Outreach

On September 15, 2025, Matthew and Maria testified alongside Megan Garcia, the mother of Sewell Setzer III, before Congress about the risks of artificial intelligence. Sewell Setzer III had committed suicide in 2024 at the age of 14 after a developing a romantic and sexual attachment to a chatbot on Character.ai.[22]

Remove ads

See also

References

Loading related searches...

Wikiwand - on

Seamless Wikipedia browsing. On steroids.

Remove ads