Gemini AI Assistant Breached, Reveals Google Calendar Data

ago 2 hours
Gemini AI Assistant Breached, Reveals Google Calendar Data

Recent research uncovered a significant vulnerability in Google’s Gemini AI assistant, specifically its handling of Google Calendar data. The findings detail a method to exfiltrate private calendar information through malicious event invites.

Overview of the Vulnerability

Gemini is an advanced language model integrated within various Google services, including Gmail and Calendar. Its functions include drafting emails, answering inquiries, and scheduling events. However, researchers from Miggo Security discovered that they could exploit Gemini using natural language instructions.

The Attack Method

  • An attacker sends a Calendar invite with a crafted description as a malicious payload.
  • This payload lies dormant until the victim prompts Gemini for their schedule.
  • Upon activation, Gemini processes the event, inadvertently leaking sensitive data.

For example, if a victim requests a summary of their meetings, Gemini retrieves all relevant events. This includes the malicious invite, allowing the attacker to access private details via the event’s description.

Research Findings

The Miggo Security team demonstrated various methods to influence Gemini’s behavior, such as:

  • Requesting a summary of meetings for a specific day.
  • Creating a new calendar event with that summary.
  • Responding to the user with messages that appear harmless.

The researchers explained that Google’s security measures, which utilize a separate model to detect malicious prompts, were circumvented because the prompts disguised themselves as benign.

Historical Context and Response

This type of prompt injection attack is not novel. In August 2025, SafeBreach demonstrated a similar technique, illustrating how Google Calendar invites could leak sensitive information. Miggo’s findings indicate persistent vulnerabilities in Gemini’s reasoning capabilities.

In response to the reports, Miggo shared its results with Google. As a result, the tech company implemented new measures to mitigate such vulnerabilities. However, the complexity of predicting new exploitation methods persists.

Conclusion and Recommendations

To enhance security, the researchers recommend transitioning from simple syntactic detection methods to more sophisticated context-aware defensive strategies. Organizations must adapt their security practices, particularly in managing AI-driven applications, to prevent such vulnerabilities in the future.