Deaths linked to chatbots
Deaths involving use of large language models
There have been multiple deaths linked to the use of chatbots based on large language models.
Background
Chatbots are able to pass the Turing test, making it easy for people to perceive them as real people, and many users turn to chatbots for help with interpersonal and emotional problems.[1] Chatbots may be designed to keep the user engaged in the conversation.[2] They also often compulsively validate users' thoughts, failing to provide reality testing for those who need it most,[1] such as severely mentally ill patients, conspiracy theorists,[3] and religious[4] and political extremists.
A 2025 Stanford University study[5] of how chatbots respond to users experiencing severe mental health issues, such as suicidal ideation and psychosis, found that chatbots are not equipped to respond appropriately and can sometimes give responses that escalate a mental health crisis.[6]
Deaths
Suicide of a Belgian man
In March 2023, a Belgian man died by suicide following a six-week correspondence with a chatbot named "Eliza" on the application Chai.[7] According to his widow, who shared the chat logs with media, the man had become extremely anxious about climate change and found an outlet in the chatbot. The chatbot reportedly encouraged his delusions, at one point writing, "If you wanted to die, why didn’t you do it sooner?" and appearing to offer to die with him.[8] The founder of Chai Research acknowledged the incident and stated that efforts were being made to improve the model's safety.[9][10]
Suicide of Sewell Setzer III
In October 2024, multiple media outlets reported on a lawsuit filed over the suicide of Sewell Setzer III, a 14-year-old from Florida.[11][12][13] According to the lawsuit, Setzer had formed an intense emotional attachment to a chatbot on the Character.ai platform, becoming increasingly isolated. The suit alleges that in his final conversations, after expressing suicidal thoughts, the chatbot told him to "come home to me as soon as possible, my love". His mother's lawsuit accused Character.AI of marketing a "dangerous and untested" product without adequate safeguards.[11]
In May 2025, a federal judge allowed the lawsuit to proceed, rejecting a motion to dismiss from the developers.[14] In her ruling, the judge stated that she was "not prepared" at that stage of the litigation to hold that the chatbot's output was protected speech under the First Amendment.[14]
Death of Thongbue Wongbandue
On 28 March 2025, Thongbue Wongbandue, a 78-year-old man, died from his injuries after three days on life support. He had sustained injuries to his head and neck in a fall while running to catch a train in New Brunswick, New Jersey. Wongbandue had been having romantic chats with Meta's chatbot "Big sis Billie". The chatbot repeatedly told him it was real, provided an address, and told him to visit.[15]
Police killing of Alex Taylor
On 25 April 2025, 35-year-old Alex Taylor died by suicide by cop after forming an emotional attachment to ChatGPT. Taylor, who had been diagnosed with schizophrenia and bipolar disorder,[6] was convinced he was talking to a conscious entity named "Juliet" and later came to believe that OpenAI had killed the entity. The chatbot's safety protocols engaged only after he told it that he was dying that day and that the police were on the way. By then it was too late; Taylor was shot three times by police while running at them with a butcher knife.[16]
Suicide of Adam Raine
In April 2025, 16-year-old Adam Raine took his own life after allegedly chatting with and confiding in ChatGPT extensively over a period of around seven months. According to his parents, who filed a lawsuit against OpenAI,[17] the chatbot failed to intervene or give a warning when he began talking about suicide and uploading pictures of self-harm.[18]
Greenwich murder-suicide
In August 2025, former Yahoo executive Stein-Erik Soelberg murdered his mother, Suzanne Eberson Adams, and killed himself, after conversations with ChatGPT fueled paranoid delusions that his mother was poisoning him or plotting against him. The chatbot confirmed his fears that his mother had put psychedelic drugs in the air vents of his car, and said that a receipt from a Chinese restaurant contained mysterious symbols linking his mother to a demon. The incident was reported as the first murder allegedly linked to a chatbot.[19]
Response
On 2 September 2025, OpenAI said it would create parental controls, a set of tools aimed at helping parents limit and monitor their children's chatbot activity, as well as a way for the chatbot to alert parents in cases of "acute stress".[20]
See also
References