EY Study Highlights CEO Misjudgments on AI Concerns and Offers Clear Solutions
Recent research from EY uncovers a significant disconnect between what CEOs believe about public concerns over AI and the actual worries consumers express. This gap risks undermining enterprise AI initiatives as companies invest heavily in technologies that may face public resistance. EY proposes a clear, actionable framework to bridge this divide and ensure sustainable success with AI.
Public Concerns Are Deeper Than Executives Assume
EY’s analysis, which compares executive opinions with surveys of over 15,000 consumers across 15 countries, reveals that ordinary people are roughly twice as worried about responsible AI issues such as data accuracy and privacy as CEOs assume. Contrary to the widespread narrative about AI-driven job losses, consumers focus more on risks like AI-generated fake news, manipulation, and exploitation of vulnerable groups.
This misalignment isn’t just academic: it threatens the multi-billion-pound AI market as consumer skepticism grows. EY stresses that many companies are building AI strategies on shaky foundations of public trust.
Overconfidence Among Experienced AI Adopters
Companies claiming to have fully integrated AI tend to overestimate their grasp of consumer sentiment. Among these mature adopters, 71% of executives believe they understand public concerns, compared to just 51% at companies still developing their AI capabilities. Ironically, firms newer to AI are more in tune with worries about privacy, security, and reliability — concerns that consumers share.
EY’s research also reveals that about one-third of executives claim full AI integration and scaling, a figure that likely reflects wishful thinking rather than reality. There is also a stark gap between consumers’ stated willingness to use AI and their actual behavior, especially in sensitive sectors like healthcare and banking where trust is essential.
EY’s Nine Principles for Closing the AI Governance Gap
To address these issues, EY introduces a nine-point responsible AI framework that targets areas where companies typically fall short:
- Accountability
- Data Protection
- Reliability
- Security
- Transparency
- Explainability
- Fairness
- Compliance
- Sustainability
This framework directly addresses consumer concerns. For example, data protection focuses on safeguarding personal information, while transparency requires clear disclosure on AI system purposes and designs. Explainability ensures human operators can understand and challenge AI decisions. However, EY finds companies currently maintain strong controls in only three of these nine areas on average. The biggest gaps exist in fairness—ensuring inclusive outcomes—and sustainability, which includes considering environmental and social impacts throughout the AI lifecycle.
The Next AI Wave Brings New Governance Challenges
Companies preparing for agentic AI—systems capable of autonomous decision-making—face growing governance headaches. Half of surveyed executives admit their current risk management frameworks won’t handle these advanced systems effectively, and more than half say that establishing proper oversight even for today’s AI tools is already difficult.
Yet, many organizations planning to deploy advanced AI within the next year have not fully familiarized themselves with the associated risks. EY emphasizes that maintaining trust requires continuous education of both consumers and leadership, including boards, about AI risks and governance measures.
CEOs Are More Aligned with Consumer Concerns Than Other Executives
Despite the overall disconnect, EY’s data shows CEOs have a better grasp of public sentiment than other board members. They are more cautious about claiming their companies have bulletproof AI controls, and they are among the executives most likely to hold primary responsibility for AI strategy, second only to Chief Technology and Information Officers.
CEOs also spend more time engaging with customers, giving them closer insight into consumer concerns. The issue appears to be a communication gap within organizations—if CEOs understand the risks but other executives do not, the concerns are not filtering down effectively.
EY’s Three-Step Approach to Closing the AI Trust Gap
EY recommends a straightforward, three-step plan that goes beyond traditional risk management:
- Listen: Expose the entire C-suite to customer voices. This means involving CTOs, CIOs, and other leaders in customer interactions, focus groups, and surveys. In healthcare, senior executives should spend time with patients to understand real-world concerns.
- Act: Integrate responsible AI principles throughout the development lifecycle. Move beyond compliance to embed “human-centric responsible AI design” that directly addresses consumer worries.
- Communicate: Treat responsible AI as a competitive advantage. Transparently sharing governance frameworks and safeguards helps build consumer trust and differentiates companies in the market.
Turning the Trust Gap into an Opportunity
Many companies see responsible AI as a compliance hurdle, but EY’s findings suggest it can be a competitive edge. The EY Responsible AI framework has earned recognition for its impact on AI transparency and responsibility. While consumers have legitimate concerns about AI safety, most companies fail to clearly explain how they manage these risks.
This silence risks consumers lumping all companies together and assuming the worst. Companies willing to lead on responsible AI and make it central to their brand can stand out to current and prospective customers, gaining a market advantage.
For executives looking to deepen their AI knowledge and governance capabilities, targeted training such as the courses available at Complete AI Training can provide practical skills aligned with these emerging demands.