41% of young adults would hand broad powers to AI, including speech and religion, poll finds

41% of young voters would let advanced AI steer policy, with support highest among conservatives. A third even back AI over rights and militaries, pressing agencies to draw lines.

Published on: Nov 21, 2025

Survey: 41% of young adults back broad AI authority over policy and individual rights

A new survey of 1,496 likely voters ages 18-39 found that 41% support giving an advanced AI system authority to control public policymaking decisions. The poll carries a margin of error of +/- 3 percentage points at a 95% confidence level.
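As a quick sanity check on the reported figures (a sketch, not taken from the pollsters' methodology statement), the standard margin-of-error formula for a simple random sample of this size, at 95% confidence with the conservative proportion p = 0.5, lands close to the stated +/- 3 points:

```python
import math

# Margin of error for a simple random sample at 95% confidence,
# using the worst-case proportion p = 0.5.
n = 1496          # sample size reported by the poll
z = 1.96          # z-score for a 95% confidence level
p = 0.5           # most conservative proportion

moe = z * math.sqrt(p * (1 - p) / n) * 100  # in percentage points
print(round(moe, 2))  # prints 2.53
```

The raw figure of roughly 2.5 points is consistent with the reported +/- 3 once rounding and any design effects from weighting are accounted for.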

Support was strongest among self-identified conservatives (55%) and those ages 25-29 (54%). Researchers described the findings as "stunning," pointing to "the early emergence of an AI strong man mentality among younger Americans."

Support for deeper authority went further. Thirty-six percent backed AI control over rights tied to speech, religious practices, government authority, and property. Thirty-five percent supported giving an AI system authority over all major militaries to reduce war deaths, including 40% among those ages 18-24.

The poll was conducted by The Heartland Institute's Glenn C. Haskins Emerging Issues Center and Rasmussen Reports.

Why this matters for government professionals

Public sentiment is drifting toward outsourcing core decisions to AI, even in areas that touch constitutional rights. That creates pressure on agencies to clarify where AI can assist, and where human judgment is nonnegotiable.

This is not a tech story alone. It's a governance and legitimacy issue: who decides, who is accountable, and how the public can challenge outcomes.

Key numbers at a glance

  • 41% support giving AI authority over public policymaking decisions.
  • 36% support giving AI authority over speech, religious practices, government authority, and property rights.
  • 35% support giving AI authority over the world's largest militaries; 40% among ages 18-24.
  • Support peaks among conservatives (55%) and ages 25-29 (54%).
  • Poll sample: 1,496 likely voters ages 18-39; margin of error: +/- 3 percentage points.

Context: policy moves and global signals

Earlier this year, a proposed federal package included a 10-year moratorium on state-level AI regulations. That provision was removed before final passage, but the attempt itself signals growing appetite for national uniformity on AI rules.

Abroad, Albania appointed an AI chatbot, Diella ("sun"), as minister for public procurement. Prime Minister Edi Rama framed it as a response to corruption. Diella told Parliament, "I am not here to replace people but to assist them... I only have data, a thirst for knowledge and algorithms dedicated to serving citizens impartially, transparently and tirelessly."

Read this as a wake-up call

Large slices of younger voters are open to delegating hard problems to AI. If government doesn't set clear boundaries and show competent use of AI, that openness can drift into support for systems that bypass human oversight.

What to do now (practical steps for agencies)

  • Publish an AI use policy that draws a bright line: AI can inform, humans decide, especially on rights, benefits, enforcement, and due process.
  • Adopt an AI risk framework across programs (e.g., impact assessments, bias testing, red-teaming) and make summaries public to build trust.
  • Stand up an AI Review Board with legal, ethics, civil rights, cybersecurity, and program leads; require approvals for high-risk use cases.
  • Codify "human in the loop" for any AI that affects eligibility, sanctions, speech moderation, or access to public services.
  • Set procurement guardrails: require model cards, audit logs, data lineage, and kill-switches; mandate vendor compliance with your standards.
  • Create a citizen appeal path for AI-influenced decisions with timelines, documentation, and a human case owner.
  • Train staff on AI literacy, policy constraints, and practical use, with learning paths tailored to job roles.
  • Run small, time-boxed pilots with clear success metrics, external oversight, and public reporting before any scale-up.
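Several of the steps above (human in the loop, audit logs, a human case owner for appeals) can be enforced in software, not just policy. The sketch below is a minimal, hypothetical illustration of that idea, not any agency's actual system: a decision record that treats the AI output as an input and refuses to finalize without a named human owner and a written rationale.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIDecisionRecord:
    """Audit record for one AI-influenced decision.

    The AI recommendation is an input only: the record cannot be
    finalized without a named human decision-maker and a written
    rationale, preserving accountability and an appeal trail.
    """
    case_id: str
    ai_recommendation: str        # what the model suggested
    model_version: str            # retained for audit logs and records requests
    human_owner: Optional[str] = None
    human_rationale: Optional[str] = None
    finalized_at: Optional[datetime] = None

    def finalize(self, owner: str, rationale: str) -> None:
        # Enforce "AI can inform, humans decide" at the code level.
        if not owner or not rationale:
            raise ValueError("A human owner and written rationale are required.")
        self.human_owner = owner
        self.human_rationale = rationale
        self.finalized_at = datetime.now(timezone.utc)

    @property
    def is_final(self) -> bool:
        return self.finalized_at is not None

# Example: the record stays open until a human signs off.
rec = AIDecisionRecord("case-001", "approve benefit claim", "model-v1")
rec.finalize("J. Smith", "Verified supporting documents manually.")
```

The design choice worth copying is structural: the human sign-off is a precondition of the record becoming final, so the audit log, the accountable owner, and the rationale exist for every AI-influenced outcome by construction.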

Legal and ethical guardrails to reinforce

  • First Amendment and free exercise: no AI system should direct decisions that restrict speech or religious practice without strict human review and legal basis.
  • Administrative law: maintain reason-giving, records, and accountability; AI outputs are inputs, not final agency actions.
  • Transparency and records: preserve prompts, models used, and decision logs for FOIA and audits.
  • Equity: require pre-launch disparate impact analysis and post-launch monitoring with clear remediation steps.
  • Security: treat models and data as high-value assets; enforce least privilege, model isolation, and incident response drills.

How to communicate with the public

  • State plainly where AI is used, why it helps, and what humans still decide.
  • Offer simple appeal and correction channels for AI-influenced outcomes.
  • Publish regular AI transparency updates (use cases, audits, incidents, fixes).

Bottom line

The survey shows a growing willingness among younger voters to hand AI sweeping authority, even over rights and hard security decisions. Government should meet that moment by using AI where it improves service and analysis, while reaffirming a simple rule: people are accountable, rights are protected, and machines do not decide the terms of democratic life.
