GenAI Trust Surges While Only 40% Invest in Security - Reliable AI Delivers Higher ROI

Leaders trust generative AI, but only 40% fund security and governance, an expensive gap. Prioritizing reliability makes organizations 1.6x likelier to double ROI, IDC and SAS find.

Categorized in: AI News Management
Published on: Oct 02, 2025

Confidence in generative AI is growing, but security is lagging

IT leaders trust generative AI more than traditional AI, yet only 40 percent invest in security and governance. That gap is costing results: organizations that prioritize reliable AI are 1.6x more likely to double the ROI of their AI projects, according to IDC research conducted for SAS.

Nearly half of respondents say they trust generative AI completely. Only 18 percent say the same about traditional AI, even though it is the more established approach. Meanwhile, 78 percent claim complete trust in AI overall, while just two in five fund governance, explainability, and ethical safeguards. The disconnect is clear, and risky.

Trust is rising for the wrong reasons

"AI with humanlike interactivity and social familiarity seem to encourage the greatest trust, regardless of actual reliability or accuracy," says Kathy Lange, Research Director, AI & Automation Practice at IDC. The survey also shows 26 percent place complete trust in quantum AI, despite its limited availability.

Reliability isn't a top priority inside organizations. Only 2 percent of respondents ranked AI governance in their top three priorities, and less than 10 percent are working on responsible AI policies. Leaders are betting on outcomes without the guardrails that make those outcomes repeatable.

Data fundamentals remain the stumbling block

When asked what holds their AI efforts back, respondents pointed mainly to weak data foundations:

  • Access to relevant data sources: 58 percent (AI data management)
  • Non-centralized data infrastructure: 49 percent
  • Data privacy and compliance issues: 49 percent
  • Data quality: 46 percent
  • Insufficient data governance: 44 percent
  • Lack of specialized employees: 41 percent

What leaders should do now

  • Set reliability targets upfront: define acceptable error rates, auditability, and human-in-the-loop criteria before build or buy.
  • Stand up AI governance: assign owners, create a model registry, require risk assessments, model cards, and approval workflows.
  • Build security into AI: test for prompt injection, apply data loss prevention, isolate secrets, scan third-party models and libraries, and run red-team exercises.
  • Operationalize oversight: monitor drift, bias, and hallucination rates; log inputs/outputs; version data and models; set decommission rules.
  • Fix data at the source: centralize access, enforce privacy-by-design, define quality SLAs, and maintain metadata and lineage.
  • Close skills gaps: upskill product, risk, security, and data teams; hire specialists for model risk and AI safety.
  • Tie funding to ROI and reliability: stage-gate investments, track unit economics, and require reliability SLOs for every AI use case.
  • Demand vendor proof: SOC 2/ISO 27001, model provenance, evaluation reports, and responsible AI commitments in contracts.
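The "operationalize oversight" item above (log inputs/outputs, version data and models, keep an owner on record) can be sketched in a few lines. This is a minimal illustration, not a reference implementation: every name here (`ModelCallRecord`, `log_model_call`, the model and version strings) is hypothetical, and a real deployment would write to an append-only audit store rather than an in-memory list.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict

@dataclass
class ModelCallRecord:
    """One audited model invocation: who called what, which versions, and the result."""
    call_id: str
    model_name: str      # hypothetical identifiers; map these to your model registry
    model_version: str
    data_version: str
    owner: str
    prompt: str
    output: str
    latency_ms: float
    flagged: bool        # set later by drift / bias / hallucination checks

def log_model_call(model_name, model_version, data_version, owner,
                   prompt, call_model, sink):
    """Wrap a model call so every input and output is versioned and logged."""
    start = time.perf_counter()
    output = call_model(prompt)
    record = ModelCallRecord(
        call_id=str(uuid.uuid4()),
        model_name=model_name,
        model_version=model_version,
        data_version=data_version,
        owner=owner,
        prompt=prompt,
        output=output,
        latency_ms=(time.perf_counter() - start) * 1000,
        flagged=False,
    )
    sink.append(json.dumps(asdict(record)))  # production: append-only audit store
    return output

# Usage with a stub model standing in for a real LLM call
audit_log = []
answer = log_model_call("support-bot", "1.4.2", "2025-09-snapshot",
                        "cx-platform-team", "reset my password",
                        call_model=lambda p: "Here are the steps...",
                        sink=audit_log)
```

The point of the wrapper is that traceability (which model version, which data snapshot, which owner) is captured at call time, not reconstructed after an incident.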

"For the good of society, businesses and employees - trust in AI is imperative," says Bryan Harris, CTO at SAS. "The AI industry must increase the success rate of implementations, humans must critically review AI results, and leadership must empower the workforce with AI."

Tip: Trustworthy AI starts before the first line of code. Bake governance, security, and data standards into project intake, not after deployment.

Quick scorecard for executives

  • Do you have a signed AI policy and governance model with clear roles?
  • Are reliability metrics (accuracy, bias, safety) reviewed in steering meetings?
  • Is there a central model registry with approvals and audit logs?
  • Are security tests for LLM and traditional models part of release gates?
  • Can you trace every model prediction back to data, version, and owner?
  • Are high-risk use cases subject to human review and kill switches?
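The six scorecard questions above can double as a machine-checkable readiness gate. The sketch below is illustrative only: the item names and the pass/fail values are hypothetical placeholders an organization would replace with its own assessment.

```python
# Hypothetical self-assessment mirroring the six scorecard questions
SCORECARD = {
    "signed_ai_policy_with_roles": False,
    "reliability_metrics_in_steering_meetings": False,
    "central_model_registry_with_audit_logs": True,
    "security_tests_in_release_gates": True,
    "prediction_traceable_to_data_version_owner": False,
    "human_review_and_kill_switches_for_high_risk": True,
}

def readiness_gaps(scorecard):
    """Return the unmet items; an empty list means all controls are in place."""
    return [item for item, done in scorecard.items() if not done]

gaps = readiness_gaps(SCORECARD)
print(f"{len(SCORECARD) - len(gaps)}/{len(SCORECARD)} controls in place")
for gap in gaps:
    print(f"  missing: {gap}")
```

A simple list of gaps is often more useful in a steering meeting than a blended score, because each unmet item maps directly to an owner and an action.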

Resources

Upskill your team

If you're building an internal baseline for AI governance and ROI, explore focused learning paths for managers: AI courses by job role.