Privacy teams absorb AI governance work as most companies lack dedicated resources, IAPP research finds

Two-thirds of companies say privacy teams should own AI governance, but nearly half lack the budget and staff to do it. The result is widespread confusion over who's actually responsible.

Published on: May 04, 2026

Who Actually Owns AI Governance?

Two-thirds of companies say their privacy teams should handle AI governance. At the same time, nearly half lack the budget and staff to do it properly. This disconnect is creating confusion about who actually owns the work inside most organizations.

The International Association of Privacy Professionals (IAPP) recently surveyed companies on how they're structuring AI governance roles. The results show no consistent model yet. Some organizations add AI governance as extra work for privacy professionals. Others have created entirely new roles focused solely on AI, with someone else taking over privacy duties.

The problem extends beyond privacy teams. Cybersecurity and data governance professionals are also being pulled into AI governance work, depending on the organization's size and industry.

What AI Governance Actually Requires

The work spans policy, technical evaluation, compliance, and ethics. On the policy side, professionals translate principles into concrete rules and set up governance structures so the right people make decisions. They implement frameworks like the NIST AI Risk Management Framework and handle compliance obligations.

Technical work includes evaluating systems for bias and identifying cybersecurity risks. Ethics and assurance work involves thinking through broader implications and, in some sectors, arranging independent audits.
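As a concrete illustration of the bias-evaluation work described above, here is a minimal sketch of one common check: demographic parity, which asks whether a model's positive-outcome rate differs across groups. The data, group names, and flagging threshold are illustrative assumptions, not from the article.

```python
# Minimal demographic parity check: does a model approve one group
# at a markedly different rate than another?

def positive_rate(outcomes):
    """Share of positive (e.g., 'approved') decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical decisions from a lending model, split by a protected attribute.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 approved = 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 approved = 0.375
}

gap = demographic_parity_gap(decisions)
print(f"Demographic parity gap: {gap:.3f}")  # prints 0.375
```

In practice, teams pick a tolerance for this gap in advance (often somewhere around 0.1, though that choice is itself a governance decision) and investigate any system that exceeds it.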

That's significant upskilling territory. The roles require strong regulatory knowledge but also expect people to move beyond pure compliance and understand technical evaluations of AI systems. Internal auditors and accountants are increasingly being asked to review AI systems as part of their jobs, raising questions about what training they need.

California Sets the Tone

California is becoming the test bed for AI governance policy the way it has for privacy. The state's focus on automated decision-making is particularly important because what regulators work out there will likely influence other states.

California's large, diverse population and concentration of major tech platforms create a stronger appetite among regulators there to tackle substantive issues around how AI makes decisions about people.

What Good Governance Looks Like

First, know where AI is actually being used. A vendor pushes an update and a familiar system suddenly includes an agentic chatbot. Without that visibility, governance fails before it starts.

Next, define what good looks like for your organization through policies, standards, and internal principles. Establish a governance mechanism with real responsibility for decisions and oversight.

Consider potential harms and actual impacts on people, not just risk categories on a checklist. That thinking should drive the technical and data safeguards you implement. Finally, understand your compliance obligations in the jurisdictions where you operate: disclosure requirements, recourse mechanisms, and other rules.
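The first step above, knowing where AI is in use, is often implemented as a simple system inventory. Here is a minimal sketch of what one might look like; the field names, risk tiers, and example systems are illustrative assumptions, not a standard schema.

```python
# A bare-bones AI system inventory: register each system with an owner,
# purpose, and risk attributes, then surface the ones needing closer review.
from dataclasses import dataclass, field

@dataclass
class AISystem:
    name: str
    owner: str                      # accountable team or person
    purpose: str                    # why the system processes data
    uses_sensitive_data: bool
    risk_tier: str                  # e.g., "low", "medium", "high"
    jurisdictions: list = field(default_factory=list)

class AIInventory:
    def __init__(self):
        self._systems = []

    def register(self, system):
        self._systems.append(system)

    def needs_review(self):
        """Systems warranting closer oversight: high tier or sensitive data."""
        return [s for s in self._systems
                if s.risk_tier == "high" or s.uses_sensitive_data]

inventory = AIInventory()
inventory.register(AISystem("support_chatbot", "cx-team", "customer support",
                            uses_sensitive_data=False, risk_tier="medium",
                            jurisdictions=["US-CA"]))
inventory.register(AISystem("loan_scorer", "credit-team", "credit decisions",
                            uses_sensitive_data=True, risk_tier="high",
                            jurisdictions=["US-CA", "EU"]))

for s in inventory.needs_review():
    print(s.name)  # prints loan_scorer
```

The point is not the code but the discipline: each entry forces the organization to name an owner, a purpose, and the jurisdictions in play, which is exactly the information the compliance step later depends on.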

For advertising specifically, context matters enormously. Be clear about the purpose of data collection. Think hard about downstream use. Where possible, reduce reliance on sensitive personal data. Medical research and pharmaceuticals have more mature guardrails around data collection and reuse. Advertising can learn from those processes.


