Surveying Foreign Influence in AI Tools
Artificial Intelligence (AI)
March 4, 2026 | 12:00 pm - 1:15 pm ET
A livestream of the conversation will begin at 12:00 pm ET on March 4, 2026. For event questions, contact events@fdd.org. For media inquiries, contact press@fdd.org.
Why this matters now
Authoritarian regimes are working to steer what Americans see and believe through AI tools people assume are neutral. By optimizing propaganda for AI training and retrieval, adversaries can plant claims that show up in research, education, and daily queries.
This isn't abstract. Large language models synthesize content from sources they consider credible. If bad actors game those inputs, the outputs tilt-quietly and at scale.
What this event will cover
- How state media and proxies position content to be cited by LLMs and answer engines.
- Russia's efforts to influence training data and seed Kremlin-aligned narratives in chatbot responses.
- Risks tied to deploying Chinese-built AI models and infrastructure inside the United States.
- Concrete options for policymakers, technologists, and media to reduce exposure and raise integrity.
Practical takeaways for government, IT, and development teams
- Source integrity and dataset hygiene: Maintain allowlists of vetted sources and down-rank or label state-affiliated outlets. Track dataset lineage and licenses. Use retrieval restricted to curated corpora for sensitive domains.
- Data poisoning defenses: Apply adversarial filtering, content fingerprinting, and duplicate detection before training. Seed canary facts to spot tampering. Regularly retrain with contamination checks.
- Transparent citations: Require models and AI features to surface sources by default. Flag state-backed outlets and provide counter-sources when geopolitical narratives are involved.
- Prompt-injection and RAG hardening: Sanitize retrieved content, enforce content-security policies, and isolate external data paths. Test with hostile documents and web pages before rollout.
- Model supply chain security: Vet foreign-built models and endpoints. Prefer on-prem or VPC inference for sensitive workloads. Demand model "SBOMs," eval reports, telemetry disclosures, and update policies.
- Governance and risk management: Align programs to recognized frameworks and run red-team exercises focused on foreign influence narratives. Stand up incident response playbooks for model drift and content manipulation.
- User feedback loops: Give users a one-click way to report biased or propagandistic outputs, and close the loop with measurable fixes.
For foundational context on architectures, training pipelines, and mitigations, see Generative AI and LLM. For policy and operational guidance tailored to the public sector, explore AI for Government.
Speakers
RADM (Ret.) Mark Montgomery
Senior director of the Center on Cyber and Technology Innovation (CCTI) and director of CSC 2.0. Former policy director for the Senate Armed Services Committee and a 32-year U.S. Navy veteran who retired as a rear admiral.
Leah Siskind
Director of impact and AI research fellow at CCTI. Former deputy director of the AI Corps at DHS and alum of the U.S. Digital Service with private-sector roles at data and analytics firms.
Joseph Bodnar
Senior research manager at the Institute for Strategic Dialogue focused on foreign influence operations. Former roles include the German Marshall Fund and the Atlantic Council, with bylines on disinformation, cybersecurity, and election security.
Jamil N. Jaffer
Founder and executive director of the National Security Institute at George Mason University's Antonin Scalia Law School, assistant professor of law, and LL.M. program director. Former senior counsel on the House Intelligence Committee and national security roles in the White House and DOJ; venture partner at Paladin Capital Group and board member of FDD's CCTI.
How to watch and engage
The livestream runs 12:00 pm - 1:15 pm ET on March 4, 2026. For event questions, email events@fdd.org. For media requests, email press@fdd.org.
Additional resources
- NIST AI Risk Management Framework for program-level controls and evaluation approaches.
- CISA Secure by Design for secure development practices that apply to AI features and integrations.
Your membership also unlocks: