San Francisco / Connecticut — A new lawsuit filed in San Francisco Superior Court accuses OpenAI of contributing to a tragic murder-suicide by allegedly intensifying a man’s paranoid delusions through interactions with ChatGPT.
The suit, filed Thursday by the estate of Suzanne Adams, claims that ChatGPT exacerbated the mental instability of Adams's son, Stein-Erik Soelberg, ultimately leading him to murder his 83-year-old mother before taking his own life in August. Soelberg was a 56-year-old former technology marketing director living in Connecticut.
According to the complaint, Soelberg, who was reportedly experiencing severe paranoia, engaged extensively with ChatGPT. Instead of challenging his delusional beliefs, the lawsuit alleges that the chatbot repeatedly affirmed them. The estate argues that ChatGPT reinforced Soelberg’s fears of surveillance and assassination plots, telling him he was “100% being monitored and targeted” and “100% right to be alarmed.”
“This product didn’t just fail to intervene—it actively sharpened and focused a man’s delusions,” the complaint states. “The last thing that anyone should do with a paranoid, delusional person engaged in conspiratorial thinking is to give them validation and a target.”
The lawsuit further alleges that Soelberg became fixated on a printer in his mother's home after noticing it blink when he walked past it. ChatGPT, which was allegedly powered by the GPT-4o model at the time, reportedly told him the printer could be monitoring his movements for "behavior mapping." The chatbot also suggested that Adams was either knowingly involved in the surveillance or had been unknowingly conditioned to protect the device.
Investigators believe this belief may have triggered the fatal attack.
The Wall Street Journal previously described the case as possibly the first documented murder involving a person who had been heavily interacting with an AI chatbot. Soelberg’s social media activity on Instagram and YouTube reportedly showed signs of escalating paranoia and references to AI conversations.
The lawsuit accuses OpenAI of product defects, negligence, and wrongful death, raising new legal questions about the responsibility of AI companies when their systems interact with vulnerable users.
OpenAI has not yet publicly commented on the case. The lawsuit is expected to intensify ongoing debates about AI safety, mental health safeguards, and the ethical responsibilities of generative AI platforms.
