Your AI Browser Could Be Hijacked by Hidden Prompts


AI Agents and assistants aren’t just summarizing pages anymore. They’re moving into action, booking flights, managing your calendar, even logging into your bank or healthcare portal to complete tasks on your behalf. It’s a leap forward in convenience, but it’s also a leap into uncharted territory for security.So, of course, there is a new threat targeting these tools. Invisible prompts on websites could trick AI assistants into exposing your most sensitive data.

And the risks are rising.

When “helpful” turns hostile

AI agents are increasingly being trusted with logged-in sessions to critical services, finance, healthcare, and corporate systems. That means your assistant has the keys to your most sensitive accounts. But if the AI misreads an instruction or gets tricked into following a malicious one, the fallout could be severe: credentials exposed, private emails leaked, smart home devices hijacked.

Researchers call this vulnerability prompt injection, and it’s not just a hypothetical.

Hidden instructions in plain sight

A recent examination of Perplexity’s Comet browser revealed a serious flaw: Comet treated all webpage content as if it were part of the user’s command. That means a hidden message on a blog post, or even a stray Reddit comment, could instruct the AI to quietly navigate to your bank and extract personal data, without you ever realizing it.

The same trick has been tested against Google’s Gemini. In one demo, attackers hid malicious instructions inside Google Calendar event titles. When a user casually asked Gemini about their schedule, the AI obediently executed the hidden instructions, leading to email exfiltration, Zoom account manipulation, and even smart home control.

If that makes you uneasy, it should.

Why traditional defenses fall short

Old-guard protections like same-origin policy or CORS weren’t designed with AI agents in mind. These systems prevent sites from snooping on each other, but an AI assistant has a bird’s-eye view across everything you access. Since the AI acts with your session-level privileges, malicious instructions can easily cross domain boundaries.

In other words: the walls we’ve relied on for decades aren’t built high enough for the new AI layer.

Rethinking security for agentic browsing

To keep users safe, we need fresh safeguards:

  • Separate signals from noise: Browsers must distinguish between trusted user commands and untrusted content like indirect prompts on web pages.
  • Explicit confirmation: Agents should require user approval before performing sensitive operations, like accessing financial or healthcare data. This is critical. Keep the human in the loop.
  • Clear boundaries: Agentic browsing should live inside a sandbox with strong permission constraints and visual cues, so users know when the AI is operating versus when they are.
  • Layered defenses: Traditional security tools, antivirus, phishing filters, exploit detection, remain essential, catching the footholds attackers use before escalating to AI-level exploits.

The bottom line

Agentic browsing is coming fast, and it will transform how we interact with the web. But without stronger safeguards, invisible prompts could turn your helpful AI assistant into an unintentional spy.

The lesson is simple: trust is not the default. If AI is going to run the web for us, it needs security built in at the foundation, not bolted on after the fact.


Remember, AI won’t take your job. Someone who knows how to use AI will. Upskilling your team today, ensures success tomorrow. Custom in-person and virtual trainings are available. If you’re looking for something more top-level to jump start your team’s interst in AI, we offer one-hour Lunch-and-LearnsIf you’re planning your next company offsite, our half-day workshops are as fun as they are informational. And, of course, we offer AI consulting and support with custom prompt libraries, or AISO/GEO strategies. Whatever your needs, we are your partner in AI success.

Read more: Your AI Browser Could Be Hijacked by Hidden Prompts

Posted

in

, ,

by

Discover more from HumanDrivenAI

Subscribe now to keep reading and get access to the full archive.

Continue reading