    Stalking victim sues OpenAI, claims ChatGPT fueled her abuser’s delusions and ignored her warnings

By admin · April 10, 2026 · 6 min read
    This photograph taken in Mulhouse, eastern France on October 19, 2023, shows figurines next to the ChatGPT logo. (Photo by SEBASTIEN BOZON/AFP via Getty Images)

After months of conversations with ChatGPT, a 53-year-old Silicon Valley entrepreneur became convinced he’d discovered a cure for sleep apnea and that powerful people were coming after him, according to a new lawsuit filed in California Superior Court in San Francisco County. He then allegedly used the tool to stalk and harass his ex-girlfriend.

Now the ex-girlfriend is suing OpenAI, alleging the company’s technology accelerated her harassment, TechCrunch has exclusively learned. She claims OpenAI ignored three separate warnings that the user posed a threat to others, including an internal flag classifying his account activity as involving mass-casualty weapons.

    The plaintiff, referred to as Jane Doe to protect her identity, is suing for punitive damages. She also filed a temporary restraining order Friday asking the court to force OpenAI to block the user’s account, prevent him from creating new ones, notify her if he attempts to access ChatGPT, and preserve his complete chat logs for discovery.

    OpenAI has agreed to suspend the user’s account but has refused the rest, according to Doe’s lawyers. They say the company is withholding information about specific plans for harming Doe and other potential victims the user may have discussed with ChatGPT.

    The lawsuit lands amid growing concern over the real-world risks of sycophantic AI systems. GPT-4o, the model cited in this and many other cases, was retired from ChatGPT in February. 

    The case is brought by Edelson PC, the firm behind the wrongful death suits involving teenager Adam Raine, who died by suicide after months of conversations with ChatGPT, and Jonathan Gavalas, whose family alleges Google’s Gemini fueled his delusions and potential mass-casualty event before his death. Lead attorney Jay Edelson has warned that AI-induced psychosis is escalating from individual harm toward mass-casualty events.

    That legal pressure is now colliding directly with OpenAI’s legislative strategy: The company is backing an Illinois bill that would shield AI labs from liability even in cases involving mass deaths or catastrophic financial harm. 


OpenAI did not respond to a request for comment in time for publication. TechCrunch will update this article if the company responds.

The Jane Doe lawsuit lays out in detail how those risks played out for one woman over several months.

    Last year, the ChatGPT user in the lawsuit (whose name is not included in the lawsuit to protect his identity) became convinced that he had invented a cure for sleep apnea after months of “high volume, sustained use of GPT-4o.” When no one took his work seriously, ChatGPT told him that “powerful forces” were watching him, including using helicopters to surveil his activities, according to the complaint. 

    In July 2025, Jane Doe urged him to stop using ChatGPT and to seek help from a mental health professional. He instead turned back to ChatGPT, which assured him he was “a level 10 in sanity” and helped him double down on his delusions, per the lawsuit. 

    Doe had broken up with the user in 2024, and he used ChatGPT to process the split, according to emails and communications cited in the lawsuit. Rather than push back on his one-sided account, it repeatedly cast him as rational and wronged, and her as manipulative and unstable. He then took these AI-generated conclusions off the screen and into the real world, using them to stalk and harass her. This manifested in several AI-generated, clinical-looking psychological reports that he distributed to her family, friends, and employer. 

    Meanwhile, the user continued to spiral. In August 2025, OpenAI’s automated safety system flagged him for “Mass Casualty Weapons” activity and deactivated his account.

    A human safety team member reviewed the account the next day and restored it, even though his account may have contained evidence that he was targeting and stalking individuals, including Doe, in real life. For example, a September screenshot the user sent to Doe showed a list of conversation titles including “violence list expansion” and “fetal suffocation calculation.”

    The decision to reinstate is notable following two recent school shootings in Tumbler Ridge, Canada, and at Florida State University (FSU). OpenAI’s safety team had flagged the Tumbler Ridge shooter as a potential threat, but higher-ups reportedly decided not to alert authorities. Florida’s attorney general this week opened an investigation into OpenAI’s possible link with the FSU shooter.

    According to the Jane Doe lawsuit, when OpenAI restored her stalker’s account, his Pro subscription wasn’t reinstated alongside it. He emailed the trust and safety team to sort it out, copying Doe on the message. 

In his emails, he wrote things like: “I NEED HELP VERY FAST, PLEASE. PLEASE CALL ME!” and “this is a matter of life or death.” He claimed he was “in the process of writing 215 scientific papers,” which he was producing so fast he didn’t “even have time to read” them. Included in those emails was a list of dozens of AI-generated “scientific papers” with titles like: “Deconstructing Race as a Biological Category_ Legal, Scientific, and Horn of Africa Perspectives.pdf.txt.”

    “The user’s communications provided unmistakable notice that he was mentally unstable and that ChatGPT was the engine of his delusional thinking and escalating conduct,” the lawsuit states. “The user’s stream of urgent, disorganized, and grandiose claims, along with a concrete ChatGPT-generated report targeting Plaintiff by name and a sprawling body of purported ‘scientific’ materials, was unmistakable evidence of that reality. OpenAI did not intervene, restrict his access, or implement any safeguards. Instead, it enabled him to continue using the account and restored his full Pro access.”

    Doe, who claims in the lawsuit that she was living in fear and could not sleep in her own home, submitted a Notice of Abuse to OpenAI in November.

    “For the last seven months, he has weaponized this technology to create public destruction and humiliation against me that would have been impossible otherwise,” Doe wrote in her letter to OpenAI requesting the company permanently ban the user’s account.

    OpenAI responded, acknowledging the report was “extremely serious and troubling” and that it was carefully reviewing the information. Doe never heard back.

    Over the next couple of months, the user continued to harass Doe, sending her a series of threatening voicemails. In January, he was arrested and charged with four felony counts of communicating bomb threats and assault with a deadly weapon. Doe’s lawyers allege this validates warnings both she and OpenAI’s own safety systems had raised months earlier, warnings the company allegedly chose to ignore.

    The user was found incompetent to stand trial and committed to a mental health facility, but a “procedural failure by the State” means he will soon be released to the public, according to Doe’s lawyers. 

    Edelson called on OpenAI to cooperate. “In every case, OpenAI has chosen to hide critical safety information — from the public, from victims, from people its product is actively putting in danger,” he said. “We’re calling on them, for once, to do the right thing. Human lives must mean more than OpenAI’s race to an IPO.”
