Nyne Raises $5.3M to Teach AI Agents Who's Who Online

AI agents are getting bolder, booking travel and handling tasks, but they still fumble who we are. Nyne links public profiles to give agents better context, and has raised $5.3M to scale it.

Published on: Mar 14, 2026

AI agents are gaining autonomy, but they still don't fully "know" who we are

AI agents are about to book trips, restock supplies, and manage calendars without asking. There's a catch: they lack the full context to act on our behalf with confidence.

Michael Fanous, a UC Berkeley computer science grad and former ML engineer at CareRev, says machines still struggle with a basic question: do a person's LinkedIn profile, Instagram posts, and public records point to the same human? That gap makes agent decisions brittle.

Nyne: an identity context layer for agents

To fix this, Fanous teamed up with his father, veteran CTO Emad Fanous, to build Nyne, an intelligence layer that connects a person's public digital footprint so agents can reason with more accuracy. On Friday, Nyne raised $5.3 million in seed funding led by Wischoff Ventures and South Park Commons, with angels including Gil Elbaz, co-founder of Applied Semantics and an early force behind Google AdSense.

Nyne deploys millions of agents to analyze public signals across the internet: major networks like Instagram, Facebook, and X, plus apps such as SoundCloud and Strava. It then uses machine learning to triangulate whether those footprints belong to the same person and extract useful traits and preferences.

Why isn't this solved already? Fanous argues Google's edge comes from exclusive access to search and cross-product data-insight it won't give to external agents. "For everyone else, this is an oddly hard problem to solve," said Nichole Wischoff of Wischoff Ventures.

Fanous' pitch is blunt: "I can give them any piece of information about a person that could be useful to make the right next action… Once you make all these connections, you can understand a person fairly deeply, their interests, their hobbies, and how they think about very specific things."

Why this matters for product and engineering teams

  • Agent performance hinges on accurate identity stitching. If you mis-link accounts, autonomous actions go off the rails.
  • Cold-start personalization improves when you can infer interests from public signals across platforms.
  • Context supply becomes a core capability: how you package, score, and update person-level context for each agent request determines outcomes.
  • Quality must be measurable: track precision/recall for profile matches, time-to-correct for bad merges, and downstream lift (CTR, conversion, retention).
  • Privacy and consent are product features. Even with public data, give users visibility and choices. Treat sensitive attributes with care.
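The measurement point above is straightforward to implement. A minimal precision/recall computation over labeled match decisions (the prediction and label lists below are made-up examples):

```python
def precision_recall(predictions: list[bool], labels: list[bool]) -> tuple[float, float]:
    """Precision and recall for binary match decisions (True = same person)."""
    tp = sum(1 for p, y in zip(predictions, labels) if p and y)        # correct merges
    fp = sum(1 for p, y in zip(predictions, labels) if p and not y)    # bad merges
    fn = sum(1 for p, y in zip(predictions, labels) if not p and y)    # missed merges
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

preds  = [True, True, False, True, False]
labels = [True, False, False, True, True]
print(precision_recall(preds, labels))  # (0.666..., 0.666...)
```

For identity stitching, precision usually matters more than recall: a missed link degrades personalization, but a bad merge makes an agent act on the wrong person's data.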


What Nyne is actually doing (and what to ask any vendor)

  • Signals: Which sources are used (social, forums, public records)? How often are they refreshed? What's the crawl policy for "public" data?
  • Matching: What model architecture and thresholds drive entity resolution? See also: record linkage basics.
  • Confidence: Do you get a confidence score per linkage and per attribute? Are there guardrails for high-risk actions?
  • Feedback: Is there a human-in-the-loop or user feedback channel to fix wrong merges quickly?
  • Governance: How are sensitive attributes handled (e.g., health, pregnancy, religion)? Can they be blocked or downweighted?
  • APIs and latency: Can context be fetched on-demand with strict SLAs, or precomputed and cached with TTLs?
  • Auditability: Is there a clear provenance trail: what signals led to which conclusion?
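One way the confidence and guardrail questions above translate into code: gate each agent action on a per-attribute confidence threshold, with riskier actions demanding more certainty. The API shape, thresholds, and provenance tags are hypothetical, not Nyne's actual interface:

```python
from dataclasses import dataclass, field

@dataclass
class LinkedAttribute:
    value: str
    confidence: float               # per-attribute linkage confidence from the vendor
    provenance: list = field(default_factory=list)  # signals behind the conclusion

# Hypothetical policy: higher-risk agent actions require more confidence.
ACTION_THRESHOLDS = {
    "personalize_copy": 0.60,
    "send_outreach": 0.80,
    "make_purchase": 0.95,
}

def allowed(action: str, attr: LinkedAttribute) -> bool:
    """Guardrail: only act on an inferred attribute if confidence clears the bar."""
    return attr.confidence >= ACTION_THRESHOLDS[action]

hobby = LinkedAttribute("trail running", 0.82, ["strava:handle_match", "instagram:bio"])
print(allowed("personalize_copy", hobby))  # True
print(allowed("make_purchase", hobby))     # False
```

Keeping the provenance list on every attribute is what makes the auditability question answerable: you can always show which signals produced a given conclusion.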

Use cases across teams

  • Product: Better agent defaults for onboarding, recommendations, and proactive support.
  • Growth: Smarter outreach with fewer false positives, leveraging only permissible signals.
  • Support: Faster resolution when agents already know which accounts and handles belong to the same customer.
  • Risk: Fraud detection via cross-handle behavior and unlikely account pairings.

Ethics and the incentive problem

Demand for this data is huge. As Wischoff put it: "How do I know you're pregnant and sell you A, B, or C as early as possible?" That pressure can push teams over lines users won't accept.

Practical guardrails help: classify sensitive attributes, require higher confidence for triggering outreach, and give users transparent controls. Assume everything leaks; build for scrutiny from day one.
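Those guardrails can be enforced at the context boundary before anything reaches an agent. A minimal sketch, assuming attributes arrive as category/value/confidence records (the category names and threshold are illustrative):

```python
# Categories blocked outright, regardless of confidence.
SENSITIVE_CATEGORIES = {"health", "pregnancy", "religion", "sexual_orientation"}

def filter_context(attributes: list[dict], min_confidence: float = 0.8) -> list[dict]:
    """Drop sensitive categories entirely; drop low-confidence inferences."""
    return [
        a for a in attributes
        if a["category"] not in SENSITIVE_CATEGORIES
        and a["confidence"] >= min_confidence
    ]

context = [
    {"category": "hobby", "value": "cycling", "confidence": 0.90},
    {"category": "health", "value": "(redacted)", "confidence": 0.95},
    {"category": "music", "value": "indie", "confidence": 0.50},
]
print(filter_context(context))  # only the cycling attribute survives
```

Blocking by category rather than by confidence is the point: a sensitive inference doesn't become acceptable just because the model is sure about it.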

Integration playbook (start small, prove lift)

  • Define one high-value decision an agent struggles with today (e.g., next-best message). Add Nyne-like context behind a feature flag.
  • Set success metrics in advance (precision on matches, conversion lift, error rate reduction). Establish a rollback plan.
  • Roll out to 5-10% traffic, monitor confidence distributions, and sample failures weekly.
  • Instrument user feedback to correct bad merges and feed those corrections back to the model.
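The partial-traffic step above needs deterministic bucketing so the same user stays in or out of the experiment across sessions. A common sketch using a stable hash (the rollout percentage and user IDs are placeholders):

```python
import hashlib

ROLLOUT_PERCENT = 10  # start small, per the playbook

def in_rollout(user_id: str, percent: int = ROLLOUT_PERCENT) -> bool:
    """Deterministic bucketing: the same user always gets the same flag value."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

# Only enrich agent context for users inside the rollout slice.
enriched_users = [u for u in ("user-1", "user-2", "user-3") if in_rollout(u)]
```

Hash-based bucketing (rather than random sampling per request) keeps the treatment group stable, which is what makes the before/after metrics comparable.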

The co-founder edge

Fanous says working with his father, CTO Emad Fanous, tightens execution. "If I have to ping him at three in the morning to finish a launch, I know he's going to still love me the next day." High trust, fast loops.

What to watch next

  • Published accuracy benchmarks and red-team reports on mis-link rates.
  • SDKs for common agent frameworks and CRM/CDP integrations.
  • Clear user controls and consent flows built into partner products.
  • Vertical expansions (finance, healthcare-adjacent) with stricter governance.
