National Academies Convenes Experts on Securing AI Systems
Cybersecurity and artificial intelligence experts gathered at the National Academies of Sciences, Engineering, and Medicine in April to map research priorities for securing AI systems before deployment in high-stakes fields.
The two-day meeting examined how existing cybersecurity tools can be adapted for AI, emerging risks in scientific research, drug discovery, and financial services, and security challenges posed by generative AI and large language model (LLM) systems. Speakers from Microsoft, Meta, Qualcomm, and Security Superintelligence Labs covered infrastructure security, threat analysis, and benchmarking frameworks.
A key focus was agentic AI systems: semiautonomous or fully autonomous AI that operates with minimal human intervention. Securing these systems presents distinct challenges because they make independent decisions over extended periods.
"This rapid and pervasive ascent makes the challenge of securing AI systems incredibly urgent," said Ellen Zegura, senior science and engineering adviser on artificial intelligence at the National Science Foundation, which requested the event.
Research Gaps Identified
The meeting will produce an issue paper outlining research priorities and gaps in AI security. The document aims to coordinate efforts across the research community and direct funding toward critical vulnerabilities.
The National Academies is also conducting a separate rapid expert consultation on the implications of AI for cybersecurity, expected in the coming months.
For researchers in AI for Science & Research, these findings will shape how security requirements integrate into research infrastructure and data protection protocols.