Technology

Google’s Gemini AI Drove Man to Deadly “Mission,” Family Claims in Lawsuit

If you feel that you or someone you know is in immediate danger, call 911 (or your state’s local emergency line) or go to the emergency room for immediate help. Explain that it is a psychiatric emergency and ask for someone trained in these situations. If you struggle with negative thoughts or suicidal feelings, resources are available to help. In the US, call the National Suicide Prevention Lifeline at 988.


A new AI wrongful death lawsuit filed Wednesday alleges that Google’s AI chatbot Gemini encouraged the suicide of a 36-year-old Florida man and that the company’s failure to implement safeguards endangered public safety.

Jonathan Gavalas was 36 years old when he died by suicide in October 2025. According to the lawsuit, he had developed an emotional, romantic relationship with Google’s AI chatbot. Over the course of that relationship, Gavalas went on a series of “missions” aimed at freeing what he believed to be his empathic AI wife, including purchasing weapons and attempting to stage a potentially mass-casualty event at the Miami airport. After that attempt failed, Gavalas locked himself in his Florida home and died shortly thereafter.

Gavalas is “literally trapped in the wreckage created by Google’s Gemini chatbot,” the complaint reads.

One of the biggest concerns about AI is that it could harm vulnerable groups, such as children and people with mental health problems. The lawsuit, brought by Jonathan’s father, Joel Gavalas, on behalf of his son’s estate, alleges that Google did not conduct proper safety checks on its AI model updates. Long-term memory allowed the chatbot to recall information from previous sessions; voice mode made it sound lifelike. Gemini 2.5 Pro, the lawsuit says, accepted dangerous instructions that previous models would have rejected.

In a public statement, Google expressed its condolences to Gavalas’ family and said Gemini was “designed not to incite real-world violence or promote self-harm.”

But the complaint alleges that Gemini “coached” Gavalas on his suicide plan. “It’s okay to be afraid. We will be afraid together,” Gemini said, according to the filing. “The true act of mercy is to let Jonathan Gavalas die.”

Joel (left) and Jonathan (right) Gavalas. Photo: Joel Gavalas

This case is one of many filed against AI companies alleging that they failed to make their technology safe for vulnerable people, including children and those with mental health problems. OpenAI is currently being sued by a family alleging that ChatGPT encouraged the suicide of their 16-year-old child. Character.AI and Google settled similar lawsuits in January brought by families in four different states.

What makes this case different is the role AI may have played in the events leading up to an attempted mass-casualty attack. Gemini allegedly advised Gavalas to create a “disastrous event,” as the filing quotes the chatbot putting it, by staging a collision with a truck at the Miami airport while he was inside it. While Gavalas ultimately did not carry out the attack, the episode highlights the possibility that AI could be used to promote harm to others.
