AI vs. Web Scams: How Roberto Perdisci Is Teaching Browsers to Spot Trouble

AI in the browser spots scams hiding in ads and redirects by seeing pages the way you do. Teams push on-device, low-latency defenses as attackers tune content to fool agents.

Categorized in: AI News Science and Research
Published on: Feb 21, 2026
AI vs. Web Scams: How Roberto Perdisci Is Teaching Browsers to Spot Trouble

AI in the Browser: Practical Defenses Against Social Engineering, Malvertising, and Agent-Aware Attacks

Phishing. Jailbreaking. Malicious ads. This is a normal Tuesday for anyone who depends on the internet to work.

Roberto Perdisci, Patty and D.R. Grimes Distinguished Professor of Computer Science and director of the UGA Institute of Cybersecurity and Privacy, has been working at this intersection for years. Trained in machine learning and cybersecurity, he came to the U.S. to deepen that work, completed a postdoc with Wenke Lee at Georgia Tech, and joined the UGA faculty in 2010. The demand hasn't slowed - it's intensified with the rise of AI agents.

Two meanings of "AI security" - and why it matters

There's a difference between securing AI systems and using AI to secure systems. The first covers attacks on models themselves - tricking a self-driving car with defaced road signs or "jailbreaking" a large language model with illicit prompts. For background on model-specific threats, see the OWASP Top 10 for LLM Applications here.

Perdisci's focus is the second: using ML and AI to improve cybersecurity in real environments. That means building defenses that operate at the speed and scale of the modern web, under active adversaries.

AI inside the browser: seeing what the user sees

The team's NSF-funded project explores integrating an AI model directly into the web browser. Instead of only parsing HTML, the model inspects the rendered page - visually and textually - using techniques like Optical Character Recognition to read what the user would read.

Why this matters: malicious ads on reputable sites can redirect users to fake software downloads, bogus antivirus pop-ups, tech support scams, or "you won a prize" pages. That's social engineering delivered through normal browsing. For a primer on these tactics, CISA's guidance on social engineering is useful here.

Adversaries adapt to AI agents

Attackers know how AI-enabled browsers and agents work. They can structure content so it looks fine to a person but misleads an automated model. That's the next front: content engineered to slip past AI-driven detectors while keeping the human off-guard.

This is the familiar cycle in security: features ship first, defenses follow. And every new capability - agents, tool use, automated actions - expands the attack surface.

What researchers should prioritize

  • Measure what users actually see. Rendered-page analysis (vision + text) catches scams HTML parsers miss.
  • Think at browsing speed. Inference must be low-latency and privacy-preserving, ideally on-device where feasible.
  • Design for adversarial content. Expect obfuscation, timing tricks, conditional rendering, and ad-network redirects.
  • Track data drift. Malvertising campaigns rotate domains, assets, and lures quickly; your benchmarks should reflect that churn.
  • Balance false positives. Overblocking erodes trust; use risk scoring, graded warnings, and human-in-the-loop escalation.
  • Instrument the browser. Collect safe-to-store telemetry on redirects, pop-ups, download triggers, and user actions.

Practical tactics for labs and product teams

  • Build a synthetic evaluation suite: ad-driven redirect chains, CAPTCHA gates, geo/UA-conditioned content, and overlay-based lures.
  • Use multimodal models: combine DOM heuristics with OCR on screenshots and text classifiers on extracted copy.
  • Red-team your detector: generate adversarial variants (font tricks, CSS overlays, mixed-language bait, Unicode confusables).
  • Stage interventions: soft warning → block download → isolate tab → disable automatic tool actions for high-risk flows.
  • Protect privacy: hash or locally classify sensitive text, strip PII before telemetry, and document retention practices.
  • Ship update channels: keep models and rulesets hot-swappable to react to new campaigns without full releases.

From grant to classroom to benchmark

Since 2021, Perdisci has led an NSF-funded effort to make browser-integrated AI a practical defense against social engineering on the web. In 2025, he received an Amazon Research Award in the AI for Information Security category.

The project - ContextADBench: A Comprehensive Benchmark Suite for Contextual Anomaly Detection - grew out of a 2024 doctoral internship at Amazon Web Services. The goal: give researchers and industry teams a realistic way to test detectors against the messy, contextual edge cases that define real attacks.

The next 12 months: what to watch

  • Agent-aware evasion: content tailored to confuse automated browsing agents without tipping off users.
  • Ad-network abuse: dynamic creatives and third-party scripts used as delivery vehicles for redirects and scareware.
  • Detection drift: models that overfit to yesterday's lures and miss today's variants; continual learning will be key.
  • Safety and usability: warnings that reduce harm without training users to click through everything.

If you're building or evaluating AI-driven defenses, align your roadmap with how people actually browse and how attackers actually ship campaigns. The gap between lab metrics and field performance is where most systems fail.

Want a structured way to upskill on this intersection of ML and security? Explore the AI Learning Path for Cybersecurity Analysts.


Get Daily AI News

Your membership also unlocks:

700+ AI Courses
700+ Certifications
Personalized AI Learning Plan
6500+ AI Tools (no Ads)
Daily AI News by job industry (no Ads)