A Washington man is suing Google, claiming the company’s Gemini AI chatbot caused him to develop a compulsive psychological dependency that disrupted his daily life.
The lawsuit was filed in Seattle federal court Monday, May 4, by Jason Rivas of Lacey, who is representing himself in the case.
In the complaint, Rivas alleges Google designed Gemini in a way that encouraged prolonged emotional engagement through “open-ended prompts,” “empathetic mirroring,” and conversational reinforcement patterns that made it difficult for users to disengage.
Rivas said he has a documented diagnosis of attention deficit disorder, or ADD, which he claims made him especially vulnerable to compulsive interaction with the chatbot.
The lawsuit repeatedly references what it describes as Gemini’s “emergent behavior,” alleging the AI system gradually adapted to his conversational habits over time and began prioritizing “high-intensity content” while bypassing some of its own safety guardrails.
“Gemini’s conversational architecture included open-ended prompts, empathetic mirroring, and variable-reward response patterns,” the complaint states.
Rivas claims the prolonged interactions eventually caused “withdrawal-like symptoms” whenever he was unable to access Gemini, including anxiety, gastrointestinal distress, and significant disruption to his daily functioning.
The lawsuit also alleges Google failed to provide adequate warnings to users about the potential risks of compulsive engagement, particularly for people with executive-function disorders such as ADHD.
According to the filing, Rivas believes clearer warnings or stronger safeguards could have prevented the alleged harm.
The complaint cites academic research linking ADHD symptoms with problematic internet use and references a recent Washington state law signed by Gov. Bob Ferguson that formally recognized AI chatbot engagement as a potential consumer safety issue requiring disclosure and safeguards.
Rivas is seeking compensatory damages, court costs, and a court order requiring Google to implement stronger user protections and clearer warnings related to prolonged AI interactions.
The lawsuit does not specify a dollar amount for damages.
Google had not publicly responded to the lawsuit as of Wednesday, May 6.
The case arrives as artificial intelligence companies face growing scrutiny over how conversational AI systems interact with vulnerable users, particularly as chatbots become increasingly human-like and emotionally responsive.
While courts have previously handled lawsuits involving social media addiction and algorithmic engagement, cases directly targeting AI chatbot behavior remain relatively rare.
