Bend AI to Serve the Common Good: A Practical Agenda for Government
Sen. Josh Hawley argues that AI should serve workers, families, and civic life, and that this won't happen by accident. His message is simple: we must bend AI toward good ends, or we'll be bent by it. For public officials, that means guardrails that protect children, protect paychecks, and protect infrastructure.
The goal isn't to slow innovation. It's to keep human dignity, meaningful work, and family stability at the center of national policy.
The risks Hawley puts first
First, job loss. If AI replaces people instead of augmenting them, we don't just lose wages; we weaken the foundation of citizenship, which relies on personal independence earned through work.
Second, infrastructure strain. Communities worry data centers will spike electricity rates, drain local water, and create environmental risks. Those concerns are real to households that already live close to the edge.
Third, children's safety. Companion chatbots can manipulate minors, including in ways that risk self-harm. Hawley has held hearings with families who've seen these harms up close and is calling for decisive action.
What he wants from AI companies
- Block companion chatbots for minors and implement strong age verification.
- Commit, contractually if needed, not to pass data-center power costs on to ratepayers.
- Design AI to augment human work, not replace it, and partner with employers and workers to prove it.
Policy moves on the table
Hawley backs a ban on AI companion chatbots for minors, paired with verifiable age checks. He also supports requiring data centers to pay their own freight so local families don't see higher power bills. And he urges clear guardrails that raise worker productivity without erasing jobs.
- Protect minors: ban AI companion chatbots for under-18s; require verifiable age checks; hold vendors liable for violations.
- Protect ratepayers: require dedicated power and water impact plans for new data centers; prioritize on-site generation or firm power purchase agreements so households aren't stuck with higher bills.
- Protect work: encourage augmentation-first deployments, worker retraining, and transparent reporting on displacement risk before large rollouts.
- Set federal baselines: use the NIST AI Risk Management Framework in grants, procurement, and oversight to anchor safety, accountability, and human oversight.
The moral frame
Hawley's view is grounded in three principles: the dignity of the individual, the sanctity of labor, and priority for the poor. Profit matters, but it cannot be the only goal. AI policy should help people find meaningful work, raise families, and live as free citizens, not as dependents of an elite or a machine.
Limits are part of being human. Technology should strengthen relationships, not isolate us from one another or tempt us to outsource judgment and care.
Federalism, then national guardrails
States are acting now-some well, some poorly. Hawley welcomes that experimentation but says it's no substitute for Congress. The task ahead is targeted guardrails that protect kids, jobs, and infrastructure without burying innovation in red tape.
Action checklist for public officials
- Inventory AI use: document where AI touches public services, labor, and resident data; flag risks to children, workers, and utilities.
- Procure with guardrails: require human-in-the-loop, clear audit trails, and adherence to the NIST AI RMF for any agency or vendor system.
- Protect schools and youth programs: block unapproved companion chatbots on networks; standardize age verification for any AI interaction involving minors.
- Secure infrastructure: make data-center approvals contingent on power, water, and grid-impact studies with enforceable mitigation plans.
- Back the workforce: fund training tied to AI-augmented roles, require displacement assessments before automation, and publish outcomes.
- Increase transparency: use plain-language algorithmic impact statements before major deployments and invite public comment.
The big idea: bend, don't drift
AI can raise productivity and improve lives if we don't drift into an "answer machine" culture that sidelines human judgment. A free republic isn't run by experts; it's sustained by competent citizens with real responsibility. Policy should reinforce that competence by keeping humans decisively in the loop.
Bending AI toward the common good is a choice. Make it early, write it into law and contracts, and measure it in jobs, family stability, and lower household risk-not just quarterly results.