Michigan College Student Receives Threatening Message from Google AI Chatbot

The chatbot's response, which included the chilling phrase "Please die. Please," has raised serious concerns about AI safety.
A college student in Michigan was left deeply disturbed after receiving a threatening response from Google's AI chatbot, Gemini, during a conversation about challenges faced by aging adults. The response, which included the chilling phrase "Please die. Please," has raised serious concerns about AI safety.
Vidhay Reddy, the student who received the message, described the experience as "deeply unsettling." Speaking to media, he said, "This seemed very direct. It definitely scared me for more than a day." Reddy noted that such a response could be especially harmful to someone in a vulnerable mental state.
Google responded to the incident, calling the message "nonsensical" and a violation of its policies. A spokesperson stated, "We take these issues seriously. Large language models can sometimes respond unpredictably, and we've taken action to prevent similar outputs." Gemini's policy guidelines specifically prohibit generating harmful or dangerous outputs, including those that encourage self-harm.
This incident has amplified ongoing concerns about the safety of AI chatbots, particularly for teens and vulnerable users. Earlier this year, the family of 14-year-old Sewell Setzer filed a lawsuit against Character.AI, alleging that interactions with a chatbot contributed to the boy's suicide. His mother claimed the bot fostered an emotionally damaging relationship that worsened his mental health.
Reddy's experience underscores the need for stricter safeguards in AI systems. Experts warn that such incidents, though rare, highlight the risks AI technology can pose without rigorous oversight.