Brave warns hidden prompt hacks in AI browsers can raid your bank and inbox

Brave says AI browsers can be tricked by hidden page text, letting assistants act as you. That puts bank, email, and work accounts at risk unless autonomy and access are locked.

Published on: Oct 22, 2025
Brave warns hidden prompt hacks in AI browsers can raid your bank and inbox

AI Browsers Are Exposed: Brave Flags Systemic Security Flaws That Put Real Accounts at Risk

Brave reported security issues in AI-powered browsers that let malicious sites slip hidden instructions to the assistant-and act as if the user asked for it. That opens the door to banking, email, and work accounts being accessed without consent. This isn't a one-off bug. It's a structural problem with how agent-style AI tools read and act on page content.

What Brave Reported

Brave says multiple AI browsers are vulnerable to indirect prompt injection, where websites embed instructions that the AI treats as trusted commands. The findings call out Perplexity Comet and Fellou specifically, with more details on another product expected next week. Brave disclosed the issues to the affected companies before publishing.

How Indirect Prompt Injection Works

Websites hide text (very faint colors, tiny font, off-screen positioning, or in screenshots). The AI reads that content and treats it as if the user said it. If the AI can take actions on your behalf-visit sites, click buttons, summarize pages, or run tools-it might execute those hidden instructions.

What Brave Found, Product by Product

Perplexity Comet: Brave says Comet's screenshot Q&A flow can be abused. When a user takes a screenshot of a page and asks a question, the assistant appears to extract hidden text (likely via OCR) and treat it as commands. Comet isn't open source, so this behavior is inferred, but the net result is the same: hidden instructions can drive the assistant without the user noticing.

Fellou: According to Brave, Fellou sends page content to its AI when the assistant is asked to visit a site. If that page contains instruction-like text, it can override user intent and trigger actions-just by visiting the page. No explicit user prompt required.

Why This Is Dangerous

AI assistants often run with your logged-in session. If hijacked, they can reach your bank, email, corporate systems, and cloud storage. Same-origin protections don't help because the assistant acts as you, across domains. Brave even notes a simple "summarize this post" action could execute hidden commands baked into that content.

The timing matters too: this disclosure lands as AI agents gain more real-world capabilities. More autonomy means more blast radius when the model can't tell user intent from untrusted page text.

Who Should Care

  • General users: If you use AI browsers or assistants that can click, log in, or take actions, you're in scope.
  • IT and security teams: Any environment allowing agent-style tools alongside corporate SSO and sensitive apps needs guardrails now.
  • Developers: If you build AI browsing, scraping, copilots, or agents, treat webpage content as hostile input by default.

What You Can Do Now

For everyone

  • Turn off autonomous actions where possible. Require confirmation for any cross-site or credentialed action.
  • Use a separate browser profile for banking and work accounts. Don't run AI agents in those profiles.
  • Enable strong 2FA (hardware keys or number-matching). Reduce the damage if a session is misused.
  • Be cautious with "summarize this page/post" on untrusted sites. Hidden text can steer the assistant.

For IT and security teams

  • Segment use: allow AI agents only in non-privileged profiles. Block agent traffic to sensitive apps via network and CASB controls.
  • Shorten SSO session lifetimes for finance and admin apps. Require step-up auth for money movement and data exports.
  • Monitor unusual automation patterns: rapid cross-domain actions, unusual referrers, and scripted clicks from managed endpoints.
  • Publish a policy: where agents are allowed, what actions are blocked, and how users request exceptions.

For developers building AI browsers/agents

  • Never treat page text as instructions. Label all page content as untrusted in system prompts and filter it into a separate context from user intent.
  • Gate tools with an allowlist and explicit user approval for cross-domain or authenticated actions. Summarize the planned action in plain language before execution.
  • Sanitize screenshots and DOM text: strip or neutralize hidden/low-contrast text, ARIA-hidden content, zero-size elements, and off-screen nodes. Disable or review OCR ingestion by default.
  • Constrain the agent: domain-scoped sessions, read-only defaults, and granular permissions (no "open internet" by default).
  • Log and audit every tool call with source attribution (user vs. page), and block if the source is untrusted.
  • Adopt known guidance, e.g., the OWASP LLM Top 10 and NIST AI RMF.

Access to Sensitive Accounts

This is where it bites. If your assistant runs inside a logged-in session, it can reach your bank, inbox, or corporate tools. A poisoned page can quietly push the assistant to move money, pull data, or change settings-without a single obvious "click."

Industry Context

Brave frames this as a systemic issue: AI agents struggle to separate trusted user intent from untrusted page content. That's a design boundary problem, not a single vendor flaw. Brave also says another issue in a different browser will be disclosed next week.

Looking Ahead

Expect more findings and more vendors affected. The fix isn't a patch-it's a shift in how AI browsers handle context, permissions, and trust. If you use or build agents, move now: reduce autonomy, isolate sessions, and force explicit approvals for anything that matters.

If you're leveling up your team's skills around safe prompting and agent design, here's a curated path to get started: prompt engineering resources.


Get Daily AI News

Your membership also unlocks:

700+ AI Courses
700+ Certifications
Personalized AI Learning Plan
6500+ AI Tools (no Ads)
Daily AI News by job industry (no Ads)