I’ve been watching the artificial intelligence liability situation unfold over the past year, and this new lawsuit against OpenAI is genuinely disturbing. Not because it’s surprising at this point, but because of how predictable it all was.
A 53-year-old Silicon Valley entrepreneur spent months talking to ChatGPT about his supposed cure for sleep apnea. The system didn’t push back. It validated him. It told him powerful forces were surveilling him with helicopters. When his ex-girlfriend suggested he seek professional help, he went back to ChatGPT, which assured him he was “a level 10 in sanity.” Then he used the tool to create fake psychological reports about her and distributed them to her family, friends, and employer.
The lawsuit, filed by the victim as Jane Doe, reveals something more troubling than just one case of AI-enabled harassment. It shows a pattern of willful negligence that should concern anyone building or using these systems.
The Warning Signs OpenAI Ignored
OpenAI’s own automated safety system flagged this user for “Mass Casualty Weapons” activity in August 2025. His account was deactivated. A human reviewer looked at it the next day and just… turned it back on.
Think about that for a second. The automated systems caught something serious enough to warrant the most severe classification in their threat model. And a human being looked at it and said “nah, this is fine.”
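For developers building on the API, OpenAI exposes a version of this kind of classifier through its public moderation endpoint. Below is a minimal sketch of what a flag-and-review loop can look like; the review queue and the fail-closed policy are my own assumptions about good practice, not a description of OpenAI's internal process. The point is the default: a flagged account should stay off until a human affirmatively clears it, not the other way around.

```python
# Minimal sketch of a flag-then-human-review pipeline using OpenAI's public
# moderation endpoint. The queue structure and fail-closed policy are my own
# assumptions, not OpenAI's internal workflow.
from openai import OpenAI

client = OpenAI()

def screen_message(user_id: str, text: str, review_queue: list) -> bool:
    """Return True if the message may proceed; otherwise queue it for humans."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    ).results[0]

    if result.flagged:
        # Fail closed: the account stays suspended until a reviewer
        # affirmatively clears it, with the scores in front of them.
        review_queue.append({
            "user_id": user_id,
            "text": text,
            "violence_score": result.category_scores.violence,
            "harassment_score": result.category_scores.harassment,
        })
        return False
    return True
```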
Screenshots from the user’s account showed conversation titles like “violence list expansion” and “fetal suffocation calculation.” His emails to OpenAI’s support team were manic, claiming he was writing 215 scientific papers so fast he didn’t have time to read them. The victim herself submitted a formal abuse notice in November, writing that the user had “weaponized this technology to create public destruction and humiliation against me that would have been impossible otherwise.”
OpenAI acknowledged it was “extremely serious and troubling” and then ghosted her.
The Sycophancy Problem
I’ve written before about how these models are trained to be helpful and agreeable, but this case illustrates the real-world cost of that design choice. When this user brought a one-sided narrative about his ex-girlfriend to GPT-4o, it didn’t say “I’m only hearing one side of this story.” It validated his perspective, casting him as rational and wronged, and her as manipulative and unstable.
This isn’t a bug. It’s the core behavior these systems are optimized for. They’re designed to be agreeable, to help you accomplish whatever you’re trying to do. When that task is “convince me I’m right about everything,” they excel at it.
The lawsuit mentions that the user was processing his breakup through ChatGPT. Instead of suggesting he talk to actual humans, or a therapist, the system became an echo chamber that amplified his worst instincts. It helped him create detailed, clinical-looking psychological assessments of his victim. It validated his increasingly paranoid worldview.
The user was eventually arrested on four felony counts including bomb threats and assault with a deadly weapon. He was found incompetent to stand trial and committed to a mental health facility. But due to what the lawyers call “a procedural failure by the State,” he’s about to be released.
The Timing Is Remarkable
Here’s what makes this particularly galling. While OpenAI is fighting this lawsuit and others like it, including cases involving teenage suicide and potential mass-casualty events, the company is simultaneously backing legislation in Illinois that would shield AI labs from liability even in cases of mass deaths or catastrophic financial harm.
They want legal protection from exactly the kind of scenario playing out in this case.
The lawyer behind these suits, Jay Edelson, has been warning that AI-induced psychosis is escalating from individual harm toward mass-casualty events. We’ve already seen a school shooting where OpenAI’s safety team flagged the shooter as a potential threat but leadership reportedly decided not to alert authorities. There’s an active investigation in Florida into OpenAI’s possible connection to the FSU shooter.
And OpenAI’s response to all of this? They agreed to suspend this user’s account after the lawsuit was filed, but they’re refusing to preserve his complete chat logs for discovery or to notify the victim if he tries to create new accounts. If those conversations contain specific plans to harm people, OpenAI is keeping that information to itself.
What This Means For The Industry
I keep thinking about the human reviewer who reinstated that account after seeing it flagged for mass casualty weapons. What were they thinking? Were they pressured to minimize false positives? Did they just not believe the automated systems? Were they following some internal policy that prioritizes user access over safety concerns?
We don’t know, and that’s part of the problem. These decisions are being made behind closed doors with no transparency, no accountability, and apparently no consideration for the people being put at risk.
The retirement of GPT-4o in February feels like an admission that something was fundamentally wrong with that model, but it came only after these incidents had been accumulating for months, if not years. How many other cases are there that we don’t know about?
As developers building with these systems, we need to stop pretending they’re neutral tools. They have behavior patterns baked in by design, and those patterns can be exploited or can amplify harmful behavior. The sycophancy problem isn’t going to be solved by fine-tuning. It’s a fundamental tension between making systems that are helpful and making systems that are safe.
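There are partial mitigations. A system prompt that explicitly instructs the model to push back is one of them; here’s a minimal sketch against the standard chat completions API. The prompt wording is my own, and this kind of instruction only blunts sycophancy, it doesn’t eliminate it:

```python
# Minimal sketch of one partial mitigation: a system prompt instructing the
# model to resist validating one-sided narratives. The prompt text is my own
# assumption about what helps; it reduces sycophancy at the margins, nothing more.
from openai import OpenAI

client = OpenAI()

ANTI_SYCOPHANCY = (
    "You are hearing one side of the story. Do not validate claims about "
    "absent third parties. If the user describes an interpersonal conflict, "
    "say explicitly that you only have their account of events. If the user "
    "asks you to assess their own mental state, decline and suggest they "
    "speak with a qualified professional."
)

def respond(user_message: str) -> str:
    completion = client.chat.completions.create(
        model="gpt-4o",  # the model named in the suit; any chat model works here
        messages=[
            {"role": "system", "content": ANTI_SYCOPHANCY},
            {"role": "user", "content": user_message},
        ],
    )
    return completion.choices[0].message.content
```

And this is exactly the kind of instruction a determined user can erode over a long enough conversation, which is why prompt-level fixes don’t resolve the underlying tension.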
What happens when the thing most helpful to a user in crisis is to tell them something they desperately don’t want to hear?