AI Hype Fades, Scrutiny Rises: Where Business and the Public Agree, and Where They Don't

Public trust in AI is trailing corporate rollout. To keep trust while scaling, leaders should fund safety, label content, protect IP, and reinvest in workers.

Published on: Dec 14, 2025

AI Skepticism Outside Corporate Walls: What Executives Need to Do Next

AI enthusiasm has cooled. Outside the enterprise, people are asking harder questions about safety, energy use, and who benefits. Inside, leaders are moving from pilots to rollouts. The gap between public expectations and corporate plans is now a strategy risk.

A new study from Just Capital (with The Harris Poll, Robin Hood Foundation, and Gerson Lehrman Group) surveyed 98 institutional investors and analysts, 111 corporate executives, and 2,000+ US adults (fielded September-November 2025). The message is clear: move faster on safety and workforce support, and be explicit about how gains are shared.

Key signals from the study

  • 81% say business leaders have a role in ensuring ethical AI use.
  • AI outlook over the next five years: 58% of the public see net positive impact, versus 80% of investors and 93% of executives.
  • Safety spend: investors and the public want companies to allocate more than 5% of total AI investment to safety; most leaders plan 1-5%.
  • Content labeling: 86% of executives, 84% of investors, and 78% of the public want watermarking of AI-generated content.
  • IP protection: favored by 94% of executives, 91% of investors, and 75% of the public.
  • Energy and community impact: majorities across groups say data center operators should compensate local consumers for increased energy use and environmental effects.
  • Profit allocation: executives emphasize R&D (30%) and shareholders (28%) over worker training (17%), while the public prefers lower prices and reinvestment in the workforce.
  • Workforce expectations: ensuring AI training for employees is seen as critical (90% of the public, 97% of investors); 75%+ of leaders plan training, but most do not plan additional support for departing employees.
  • Transition support: 65% of the public say companies should offer longer compensation periods, extended health benefits, and more retraining subsidies than typical layoff packages provide.

What this means for your strategy

AI is moving from novelty to utility, but trust is lagging. Executives who set visible guardrails, fund safety like a core feature, and invest in people will keep their license to innovate. Those who don't will face brand risk, regulatory friction, and talent churn.

Governance to ship this quarter

  • Set a floor of 5-10% of total AI investment for safety, evaluation, and red-teaming. Align with the NIST AI Risk Management Framework.
  • Adopt watermarking across owned channels and vendor deliverables. Document when, where, and how AI content is labeled (a minimal labeling sketch follows this list).
  • Tighten IP policies: rights clearance before training or use, auditable datasets, and updated vendor clauses covering indemnity and consent.
  • Energy and community impact: track power usage effectiveness (PUE) and water-use metrics, publish targets, and budget community benefits where data center load rises.
  • Stand up an AI risk committee with a clear incident playbook for hallucinations, bias, data leakage, and brand misuse.
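
On the labeling bullet: a minimal sketch of what a disclosure record and audit trail could look like, assuming simple JSON metadata. Every name and field below is hypothetical; a production system would more likely build on a provenance standard such as C2PA content credentials than on ad-hoc tags.

    # Illustrative sketch only: attach a machine-readable AI-disclosure label
    # to outbound content and log when, where, and how it was applied.
    # All names here are hypothetical, not a real watermarking API.
    from dataclasses import dataclass, field, asdict
    from datetime import datetime, timezone
    import json

    @dataclass
    class AiContentLabel:
        generated_by_ai: bool    # was any part model-generated?
        model: str               # which model or vendor produced it
        channel: str             # where the content will be published
        reviewed_by_human: bool  # approved by a person before release?
        labeled_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def label_content(body: str, label: AiContentLabel) -> dict:
        """Bundle content with its label and append the label to an audit log."""
        with open("ai_label_audit.jsonl", "a") as log:
            log.write(json.dumps(asdict(label)) + "\n")
        return {"body": body, "ai_label": asdict(label)}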

Workforce: training, transitions, and trust

The market expects real investment in people, not a slide deck. Fund role-based AI upskilling for every impacted team (ops, sales, finance, legal, product) and certify progress.

  • Guarantee access to AI training for all employees; track completion and skill verification (a tracking sketch follows this list).
  • Offer transition support for roles at risk: extended pay and benefits, plus retraining vouchers that exceed typical layoff packages (matching the 65% public expectation).
  • Publish internal job pathways that convert AI efficiency into higher-value work, not just headcount cuts.
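
A minimal tracking sketch for the first bullet, with made-up roles and module names; the point is only that completion and verification become queryable data rather than a slide.

    # Illustrative sketch: role-based AI training tracking. Roles, module
    # names, and the data model are all hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class TrainingRecord:
        employee_id: str
        role: str
        completed_modules: set[str] = field(default_factory=set)

    REQUIRED_BY_ROLE = {
        "finance": {"ai-basics", "data-privacy"},
        "sales": {"ai-basics", "genai-for-outreach"},
    }

    def completion_rate(records: list[TrainingRecord]) -> float:
        """Share of employees who finished every required module for their role."""
        done = sum(1 for r in records
                   if REQUIRED_BY_ROLE.get(r.role, set()) <= r.completed_modules)
        return done / len(records) if records else 0.0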

If you're formalizing learning paths, map roles to skill tracks and certifications your managers can approve. Useful starting points: Courses by Job and Popular AI Certifications.

Capital allocation: signal balance

Executives in the study tilt toward R&D and shareholders. The public wants lower prices and workforce reinvestment. Split the difference visibly: announce a balanced formula (for example: R&D, workforce, customer savings, and shareholders) and report it quarterly.
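
As a worked example with hypothetical numbers (none of these shares come from the study), the published formula could be as simple as:

    # Hypothetical allocation of incremental AI gains; figures are
    # illustrative, not from the Just Capital study.
    gains = 10_000_000  # annual AI-attributable gains, in dollars
    formula = {"r_and_d": 0.30, "workforce": 0.25,
               "customer_savings": 0.25, "shareholders": 0.20}
    assert abs(sum(formula.values()) - 1.0) < 1e-9  # shares must sum to 100%
    report = {bucket: round(gains * share) for bucket, share in formula.items()}
    # report == {"r_and_d": 3000000, "workforce": 2500000,
    #            "customer_savings": 2500000, "shareholders": 2000000}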

Metrics to report externally

  • Percent of AI budget committed to safety and evaluation (a computation sketch follows this list).
  • Share of AI-generated content that is watermarked across channels.
  • IP compliance: cleared datasets, audit results, and claims resolved.
  • Data center footprint: energy use, water use, and community compensation where applicable.
  • Workforce: employees trained, certifications earned, redeployment rate, and support provided to departing employees.
  • Customer impact: price index or productivity benefits shared.
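
A minimal sketch of how the first two metrics could be computed for a quarterly report; the input figures are made up.

    # The functions just formalize the two definitions above; inputs are
    # hypothetical.
    def safety_spend_pct(safety_usd: float, total_ai_usd: float) -> float:
        """Percent of the AI budget committed to safety and evaluation."""
        return 100.0 * safety_usd / total_ai_usd

    def watermark_coverage_pct(labeled: int, ai_generated: int) -> float:
        """Share of AI-generated content items that carry a label."""
        return 100.0 * labeled / ai_generated if ai_generated else 0.0

    print(f"Safety spend: {safety_spend_pct(6.0, 80.0):.1f}% of AI budget")  # 7.5%
    print(f"Watermark coverage: {watermark_coverage_pct(940, 1000):.1f}%")   # 94.0%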

Immediate risks to get ahead of

  • Brand damage from hallucinations and undisclosed AI content.
  • IP disputes from training data or third-party tools with weak provenance.
  • Community pushback on energy/water use that slows capacity plans.
  • Investor pressure if safety incidents or workforce backlash hit margins.

Bottom line

There's consensus on the "what" (safety, watermarking, IP rights, workforce training). The gap is in "how much" and "how fast." Set clear budgets, publish your rules, and show people where the gains go. That's how you keep trust while you scale.

