Campbell Brown Built Facebook's News Division. Now She's Fixing AI's Information Problem.
Campbell Brown spent years at Facebook watching what happens when a platform optimizes for the wrong metric. The fact-checking program she built no longer exists. That experience shaped what she's doing now at Forum AI, a company evaluating how foundation models handle information on high-stakes topics where accuracy matters most.
Forum AI, founded 17 months ago in New York, tests AI systems on geopolitics, mental health, finance, and hiring - subjects that resist simple yes-or-no answers. Brown has recruited domain experts including Niall Ferguson, Fareed Zakaria, and former Secretary of State Tony Blinken to design benchmarks. The goal: train AI judges to evaluate models at scale and reach roughly 90% consensus with human experts, a threshold Forum AI says it has achieved.
Brown traces the company's origin to a specific moment. "I was at Meta when ChatGPT was first released publicly," she said, "and I remember really shortly after realizing this is going to be the funnel through which all information flows. And it's not very good."
What Forum AI Found When Testing Leading Models
The findings weren't encouraging. Gemini pulled from Chinese Communist Party websites for stories unrelated to China. Nearly all leading models showed left-leaning political bias. Subtler failures abound: missing context, missing perspectives, arguments presented as straw men without acknowledgment.
Brown's frustration centers on priorities. Foundation model companies focus heavily on coding and math. "News and information are harder," she said. "But harder doesn't mean optional."
The compliance landscape compounds the problem. When New York City passed the first hiring-bias law requiring AI audits, the state comptroller found that more than half of those audited had violations that went undetected. Real evaluation requires domain expertise to work through edge cases "that can get you into trouble that people don't think about."
Enterprise Demand May Drive Change
Brown sees an unlikely ally: business. Companies using AI for credit decisions, lending, insurance, and hiring care about liability. "They're going to want you to optimize for getting it right," she said.
Forum AI is betting its business on that enterprise demand. The company raised $3 million last fall led by Lerer Hippeau. Converting compliance interest into consistent revenue remains a challenge, particularly since much of the current market accepts checkbox audits and standardized benchmarks Brown considers inadequate.
The Gap Between What Tech Leaders Say and What Users Experience
Brown is uniquely positioned to describe the disconnect. "You hear from the leaders of the big tech companies, 'This technology is going to change the world,' 'it's going to put you out of work,' 'it's going to cure cancer,'" she said. "But then to a normal person who's just using a chatbot to ask basic questions, they're still getting a lot of slop and wrong answers."
Trust in AI sits at extraordinarily low levels. Brown thinks that skepticism is often justified. "The conversation is sort of happening in Silicon Valley around one thing, and a totally different conversation is happening among consumers."
Her hope is that AI can break the cycle that plagued social media. Companies could give users what they want, or they could "give people what's real and what's honest and what's truthful." She acknowledged the idealistic version sounds naive. But she also said "there are some very easy fixes that would vastly improve the outcomes."
For communications professionals, the implications are direct: as AI becomes the primary information channel, accuracy and trust become competitive advantages. Understanding how these systems fail - and what expertise is required to fix them - matters for anyone responsible for information credibility.
Generative AI and LLM Courses can help communications teams understand how foundation models work and where they fall short. AI for PR & Communications training addresses the specific challenges of maintaining information integrity in an AI-driven environment.