Would you let AI run the country? 41% of young voters say yes

Many young voters are open to AI calling policy shots, but support is split. Agencies should use AI for speed, with guardrails that keep rights, transparency, and humans in charge.

Published on: Dec 10, 2025

Young Voters Are Open to AI-Led Governance. Here's What Public Sector Teams Should Do Next

A classroom screen spelling out AI guidelines, hung above a portrait of Ernest Hemingway, says a lot about where we are: rules and tradition staring at new tools. The same tension is hitting government.

A recent Rasmussen/Heartland poll of likely voters aged 18-39 reports that 41% would support giving an advanced AI system authority over most public policy decisions. Support is uneven by ideology: 55% of conservatives, 45% of moderates, and 28% of liberals were on board.

The poll also found that 36% would let AI determine core rights related to speech, religion, government authority, and property. And 35% backed AI control of the world's largest militaries to reduce war deaths. Among 18- to 24-year-olds, support for an AI-run military rose to 40% (49% conservatives, 33% moderates, 24% liberals).

Methodology matters: Rasmussen surveyed 1,496 likely voters aged 18-39 from Oct. 31 to Nov. 2, with a ±3 percentage point margin of error at the 95% confidence level. Even at the top of that range, support stays short of 50%: a majority of young voters still opposes offloading policy, military, and rights decisions to AI.
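As a sanity check on that figure, the textbook worst-case margin of error for a simple random sample is z·√(p(1−p)/n) with p = 0.5; for n = 1,496 at the 95% level, that works out to roughly ±2.5 points, so the reported ±3 is a conservative round-up (real polls also apply likely-voter screens and weighting, which shift the true figure). A quick sketch in Python; the `margin_of_error` helper is ours, for illustration:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case sampling margin of error for a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# Rasmussen/Heartland sample: 1,496 likely voters aged 18-39.
moe = margin_of_error(1496)
print(f"±{moe * 100:.1f} percentage points")  # prints ±2.5; the poll reports ±3
```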

How to read the signal (without overreacting)

Two things can be true at once. Many young voters think current systems aren't working and want better results. At the same time, experts warn that handing core powers to AI risks free speech, due process, and constitutional safeguards.

Political scientists note that interest in AI governance may reflect frustration with human performance, not blind tech optimism. This is a trust problem. Government teams can respond by delivering competence with clear guardrails.

Why this matters for public sector leaders

AI is moving from pilot to practice across agencies. Federal and state leaders from both parties are embedding AI to boost efficiency, from health to transportation. The question isn't "AI or no AI." It's "Where does AI assist, and where must humans stay in control?"

Citizens, especially younger ones, will reward services that are faster, fairer, and more transparent. They will punish black-box decisions that feel unaccountable.

Practical steps you can start this quarter

  • Draw bright lines. List decisions that must always stay human-led (rights, enforcement actions, benefits denials, military targeting). Codify "AI assists, humans decide."
  • Stand up an AI working group. Include legal, policy, CIO/CISO, civil rights, labor, procurement, and comms. Meet biweekly. Ship guidance, not memos that gather dust.
  • Adopt existing frameworks. Use the NIST AI Risk Management Framework for risk tiers and controls, and align with the Blueprint for an AI Bill of Rights for rights-respecting design.
  • Inventory AI use. Catalog every model, vendor, dataset, and use case across your org. Assign an owner. Note risk level, data sensitivity, and decision impact. (A minimal record schema is sketched after this list.)
  • Procurement guardrails. Require audit logs, model cards, bias testing, versioning, incident reporting, data lineage, and SOC 2/FedRAMP-equivalent security. Ban vendors from training on your data without consent.
  • Human-in-the-loop by default. For high-impact decisions, require review, justification notes, and an override path. Escalate edge cases, not just averages. (The sketch after this list shows one review-gate pattern.)
  • Pre-deployment testing. Red-team for safety, bias, privacy leakage, and failure modes. Test with real edge cases. Document limits in plain language.
  • Measure what matters. Track speed, accuracy, error types, appeals, and user satisfaction. Publish dashboards. If outcomes don't improve, pause and fix.
  • Public notice and appeal. Tell people when AI assisted a decision. Provide a simple appeal with human review. Keep the paper trail.
  • Data governance. Classify inputs and outputs. Strip PII where possible. Set retention, access controls, and third-party data-sharing rules.
  • Security and incidents. Add AI-specific threats to your playbooks (prompt injection, model poisoning, data exfiltration). Run tabletop exercises.
  • Workforce readiness. Train staff on safe use, limits, and accountability, not just features. Pair training with policy. For structured learning paths, see AI courses by job role.
  • Legal review. Map AI uses to constitutional, statutory, and administrative requirements. Check public records obligations and due process impacts.
  • Equity checks. Test outcomes across demographics. If disparities appear, fix inputs, logic, or usage. Document mitigation.
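To make the inventory and human-in-the-loop items concrete, here is a minimal sketch of what one inventory record and a review gate could look like. Every name in it (`AIUseCase`, `RiskTier`, `finalize`, the field list) is an illustrative assumption, not a standard schema; adapt the fields to your own records retention and procurement requirements.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"  # rights, benefits denials, enforcement: humans decide

@dataclass
class AIUseCase:
    """One row in the agency AI inventory (illustrative fields)."""
    name: str
    owner: str                 # a named, accountable human
    vendor: str
    model_version: str
    data_sensitivity: str      # e.g. "public", "PII", "law-enforcement"
    risk_tier: RiskTier
    human_review_required: bool = True  # "AI assists, humans decide" by default

@dataclass
class Decision:
    """An AI-assisted decision with the paper trail an appeal needs."""
    use_case: AIUseCase
    ai_recommendation: str
    reviewer: Optional[str] = None
    justification: Optional[str] = None
    overridden: bool = False
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def finalize(decision: Decision) -> Decision:
    """Refuse to finalize a high-impact decision without human sign-off."""
    needs_review = (decision.use_case.risk_tier is RiskTier.HIGH
                    or decision.use_case.human_review_required)
    if needs_review and not (decision.reviewer and decision.justification):
        raise PermissionError("Human review and a justification note are required.")
    return decision
```

The point of the gate is that the system cannot quietly finalize a high-risk decision: the raised exception forces escalation to a named reviewer, and the reviewer and justification fields create the audit trail that the public notice-and-appeal step above depends on.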

What the poll means for your agency

Interest in AI-led governance is a symptom. People want competence, clarity, and less friction. If agencies deliver faster services with visible accountability, the calls to outsource core powers to machines will cool.

If we deploy AI without transparency or recourse, trust falls, and the pressure to replace human decision-making grows. The choice is ours.

Bottom line

AI should sit inside public services, not above them. Use it to reduce backlogs, assist analysis, and improve service quality while keeping human judgment, civil liberties, and democratic oversight intact.

Move now: set guardrails, ship small wins, and show your work. That's how you earn trust from the same young voters who are looking for something that works.

