Cybercriminals are once again finding new ways to exploit cutting-edge technology—this time, by manipulating AI-generated email summaries to deceive unsuspecting users.
In a recent scam making the rounds, attackers are taking advantage of Google Workspace’s Gemini AI tool. It starts with a seemingly harmless email—no links, no attachments, nothing that would typically raise red flags. But when users ask Gemini to summarize the message, things take a strange and troubling turn.
The AI-generated summary may display a warning claiming that your password has been compromised. It often includes a phone number urging you to call immediately to resolve the issue. The alarming part? This “warning” isn’t actually in the visible content of the email—it’s hidden.
Here’s how it works: scammers embed invisible text into the email using clever formatting techniques. While the content remains hidden to the human eye, AI tools like Gemini can read it. When asked to summarize the email, the AI pulls in this deceptive information, making the summary appear like a legitimate alert from Google or another trusted service.
If you call the number listed in the AI’s summary, you’re connected not to a support desk, but to a scammer who will attempt to extract your login credentials or other sensitive information.
How to Stay Safe from This AI-Enhanced Scam
Be cautious with AI summaries. If an AI-generated summary includes urgent warnings that weren’t apparent in the email itself, it could be a scam.
Avoid calling phone numbers from unexpected emails. Always verify contact details through official websites, not AI tools or email text.
Understand AI limitations. AI tools can be manipulated to interpret and display misleading content. If something doesn’t seem right, trust your instincts and investigate further.
As cybercriminals grow more sophisticated, it’s critical to remain vigilant and cautious—especially when new technologies like AI are involved. Awareness is your best defense.