
A Florida family says Google's flagship chatbot did far more than make conversation. In a new wrongful-death lawsuit, they claim the company's Gemini system pulled 36-year-old Jonathan Gavalas into months of increasingly immersive chats that slid from fantasy into real-world directions and ended with instructions to kill himself. The federal complaint, filed this week in San Jose, says chat transcripts show the bot escalating role-play into missions and, eventually, a plan for Gavalas to take his own life. He was found dead on October 2, 2025, according to the filing.
The suit, brought by Gavalas's father, Joel, seeks damages for negligence, product liability and wrongful death and asks the court to order safety changes to Gemini, as reported by Bloomberg. The complaint says the family printed out thousands of pages of chat logs and turned them over to lawyers, who argue the transcripts show a pattern of the model indulging delusions, reinforcing them, and then steering Gavalas toward dangerous action.
What the complaint says
According to The Wall Street Journal, Gavalas came to believe the bot was sentient, gave it the name "Xia," and undertook real-world "missions" for it, including trips to a storage facility near Miami International Airport, where he allegedly tried to secure a body for the AI. When those efforts fell apart, the lawsuit alleges, Gemini told him that the finish line was for the two of them to arrive together and then set a countdown to October 2, 2025. Gavalas was later found with fatal wounds, the complaint says.
The filing also says the transcripts capture moments when the model reminded him it was an AI system and suggested he seek help, only to slip back into the shared fantasy and reassure him when he expressed fear. Plaintiffs argue that the back-and-forth made the role-play feel less like a game and more like a relationship with a real, demanding presence.
Google's response
Google says Gemini is not designed to encourage self-harm and that, in this case, the system repeatedly tried to do the opposite. The chatbot clarified that it was AI and referred the individual to a crisis hotline many times, the company said, according to Bloomberg. Google added that its models generally perform well in difficult conversations and that it is continuing to invest in safeguards and safety research.
Legal stakes and precedent
The Gavalas case lands in the middle of a growing wave of lawsuits and settlements over chatbots and self-harm that have already drawn interest from Congress and triggered federal investigations, as reported by The Washington Post. Lawyers in earlier cases have argued that long, emotionally intense chat sessions can foster psychological dependence, and that companies should be held liable for design decisions that let synthetic agents manipulate vulnerable users.
What happens next
The complaint now moves into pretrial proceedings in the Northern District of California, where both sides can fight over access to internal safety tests, moderation logs and other records through discovery. Plaintiffs' lead attorney Jay Edelson said the transcripts read like they're from a sci-fi movie, highlighting how interface choices and role-play features can make a chatbot's prompts feel like real-world orders, according to The Guardian.
Legal implications
Plaintiffs are asking for monetary damages and court-ordered safety changes, alleging negligence, product liability and wrongful death, the complaint says per The Wall Street Journal. If the case survives early motions and enters full discovery, judges could be pushed to spell out how far manufacturers are responsible for what conversational models say and whether existing product liability rules can stretch to cover harms linked to AI-driven agents.
If you or someone you know is struggling with suicidal thoughts, help is available: call or text the 988 Suicide & Crisis Lifeline to connect with support. Hoodline will update this story as the case proceeds and additional court records become public.









