Meta's Court Losses Raise Questions About Internal Research and AI Safety
Two jury verdicts against Meta this week have exposed a tension that now confronts the entire AI industry: companies that conduct internal research on product harms face potential legal liability when that research becomes public.
Juries in New Mexico and Los Angeles determined that Meta inadequately policed its platforms, putting minors at risk. Both cases hinged on the same central finding: internal documents and research showed Meta understood the potential harms of its services but did not disclose what it knew to the public.
The trials presented millions of corporate documents, including internal surveys showing that a concerning percentage of teenage Instagram users received unwanted sexual advances. Researchers at Meta had also found that people who reduced their Facebook use became less depressed and anxious; the company eventually halted that research.
Brian Boland, a former Meta executive who testified in both trials, said the company's internal research contradicted how it portrayed itself publicly. "Both juries, with very different cases, came back with clear verdicts," he said.
A Chilling Effect on Research
Meta began restricting its research teams several years ago, after Frances Haugen, a former Facebook product manager, became a whistleblower in 2021. Haugen's disclosure of internal documents marked a turning point in how the public and policymakers viewed the company's knowledge of potential harms.
The aftermath set a broader pattern: many tech companies cut research teams that studied alleged harms, and some removed tools that let third-party researchers study their platforms.
Newer AI companies like OpenAI and Anthropic initially invested heavily in research teams and published their findings. Now those companies face the same question: whether continuing to fund that research serves their interests or creates a liability.
Kate Blocker, director of research and programs at the nonprofit Children and Screens: Institute of Digital Media and Child Development, warned against suppressing research. "Companies may now view ongoing research as a liability, but independent, third-party research must continue to be supported," she said.
The AI Industry's Next Test
As the tech industry pushes into AI, Meta, OpenAI, and Google have prioritized product development over research and safety. Blocker said there is limited public visibility into what AI companies are studying about their products.
AI companies are mostly studying the models themselves: model behavior, interpretability, and alignment. But research on how chatbots and digital assistants affect child development remains sparse.
Sacha Haworth, executive director of the Tech Oversight Project, said the trials were valuable because they presented the specific internal documents and context showing what companies knew and when. "The very emails, the very words, the very screenshots, the internal marketing presentations, the memos" provided necessary clarity, she said.
Blocker said AI companies have an opportunity to avoid repeating social media's mistakes. "We urgently need to establish systems of transparency and access that share what these companies know about their platforms with the public and support further independent evaluation."
Meta and Google said they would appeal the verdicts.