Election officials get AI training as foreign interference tools grow more powerful

AI capabilities are 675 times more powerful than in 2020, giving bad actors new tools to flood elections with misinformation at scale. Officials are now training to spot threats - and using AI themselves for tasks like predicting voter turnout.

Published on: May 06, 2026

AI's Power to Disrupt Elections Is Growing. So Are Election Officials' Defenses

Election officials face a new threat from artificial intelligence capabilities that have grown 675 times more powerful since the 2020 presidential election. The good news: training programs and tools are now available to help them defend against it - and to use AI for their own benefit.

AI capabilities have doubled every seven months since 2019, according to researchers. The risks themselves aren't new, says Mike Moser, a consultant for the Election Security Exchange, but AI advancement has industrialized them. "It lowers the barriers for those that don't have the scale or the sophistication to do their own coding," Moser says.

How AI Changes the Attack Surface

The biggest shift since 2016 is the rise of AI agents - autonomous software programs powered by large language models that can work around the clock without human input. A bad actor could deploy thousands of agents simultaneously to create content, post to social media, scrape profiles, and write personalized messages.

Even if few people see this AI-generated content directly, it becomes part of the data that chatbots use when answering questions about elections or candidates. "That provides a back door to people's brains," says Siddharth Hiregowdara, co-founder of CivAI, a nonprofit studying AI's election risks.

Russia's "Portal Kombat" network demonstrates the threat in action. The network includes nearly 200 websites, including Pravda Australia, designed to feed AI chatbots with misinformation rather than reach human readers directly. Australian security experts flagged the site during the 2025 election season.

AI agents could also magnify denial-of-service attacks that crash election office websites at critical moments, or generate bomb threats on a mass scale. As election officials post more information online than ever, they're creating more material for AI systems to exploit.

A Stanford study found that AI-generated messages on policy issues appeared "more logical, better informed and less angry" than human-written ones. The bigger concern, says Izzy Gainsburg, associate director of Stanford's AI for Public Benefit Lab, is a future where voters can't distinguish real information from AI-generated content. "That seems like a scarier, more macro influence of AI on democracy," Gainsburg says.

Election Offices Are Preparing

A recent survey by the Brennan Center found that most election officials worry AI could make their jobs harder or more dangerous. Only 16 percent of election offices currently use AI, but nearly half want implementation guidance.

The AI and Elections Clinic at Arizona State University is filling that gap. The clinic offers boot camps, case studies, a media library, and a curated collection of AI prompts tailored to election offices.

Election officials are already finding practical uses. A Virginia official used AI to predict turnout for a primary election to inform staffing decisions. A Connecticut registrar uploaded poll worker training manuals and Secretary of State guidance into a chatbot application, creating an interactive resource.

AI excels at summarizing large amounts of information and identifying patterns - tasks that benefit small jurisdictions where a handful of people run everything. But human oversight remains essential. "A human always needs to be in the loop," says Bill Gates, director of the ASU clinic and a former Maricopa County supervisor.

Training materials also include interactive demonstrations of how easily AI can generate fake tweets, create deepfakes, and amplify bias. These resources help officials understand both the threats and the capabilities they're defending against.

What to Watch For

Hiregowdara predicts the 2026 midterms won't see catastrophic AI involvement in elections. But the warning signs are already visible - the "Portal Kombat" network, the proliferation of AI agents, and the growing sophistication of generated content.

Many of the threats AI enables aren't entirely new. Video footage of routine election activity has been weaponized with false context before. Misinformation itself has been part of politics for centuries. What's changed is the scale and speed at which AI can produce and distribute it.

Election offices have spent a decade hardening their defenses against cyber, information, and physical security threats. That experience is helping them prepare for AI-enabled ones - but more work remains. Resources from the Election Security Exchange and CivAI offer frameworks for managing AI risk and understanding generative AI and large language models specifically.

