Podcast | Governance That Accelerates Innovation - A Proven Approach to Responsible AI
In this episode, Informatica's Amy Horowitz sits down with Carla Eid, SVP Data & AI Architecture & Governance at PepsiCo, and Jennifer Mezzio, Global HR Data Officer at a major financial institution, to talk about something most companies overcomplicate: how governance, culture, and AI literacy work together to scale innovation responsibly.
The short version: governance is a growth function. When you treat it that way, adoption increases, cycle times drop, and risk stays within tolerance while products ship faster.
Listen now and use the notes below to align your leadership team around a practical approach that turns trust into advantage.
Governance as a Growth Enabler
High-performing enterprises don't position governance as a gate. They turn it into productized guardrails with clear decision rights, defined tiers of risk, and lightweight approvals tied to business value.
Three moves make the difference:
- Anchor on outcomes: Link every policy to a business KPI (cost, revenue, risk avoidance, customer experience). If you can't measure the lift, it's noise.
- Right-size controls: Tier models by impact. Tier 1 (customer-facing, regulated) gets deeper reviews; Tier 3 (internal, low risk) uses streamlined checks.
- Ship with the product: Integrate model lifecycle steps (data quality, lineage, testing, monitoring) into delivery sprints, not after the fact.
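The tiering move above can be made mechanical: encode each tier's required evidence in a small policy table so approvals are a checklist, not a negotiation. A minimal sketch; the tier names, evidence items, and function are illustrative, not from the episode.

```python
# Illustrative risk-tier policy: map each tier to the evidence a use case
# must supply before approval. Tier labels and checks are hypothetical.
TIER_POLICY = {
    1: {"label": "customer-facing / regulated",
        "required": ["model_card", "bias_eval", "legal_review", "monitoring_plan"]},
    2: {"label": "internal, moderate impact",
        "required": ["model_card", "eval_results", "monitoring_plan"]},
    3: {"label": "internal, low risk",
        "required": ["model_card"]},
}

def missing_evidence(tier: int, evidence: set[str]) -> list[str]:
    """Return the checks still outstanding for a use case at this tier."""
    return [item for item in TIER_POLICY[tier]["required"] if item not in evidence]

# A Tier 1 use case with only a model card and eval results still owes three checks:
print(missing_evidence(1, {"model_card", "eval_results"}))
# A Tier 3 use case with a model card is clear to proceed:
print(missing_evidence(3, {"model_card"}))
```

Because Tier 3 asks for less, the safe path stays the easy path for low-risk internal work, while Tier 1 reviews stay deep by construction.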
The Human Side of Risk
Most AI risk is behavioral: incentives, unclear accountability, and weak communication. Fix the people system and the technical system improves.
- Defined ownership: Product owns outcomes. Risk, Legal, and Security advise. A central AI office sets standards and tooling.
- Plain-language artifacts: One-page model cards, impact assessments, and usage guidelines anyone can read.
- Frictionless escalation: A visible channel for exceptions and fast decisions keeps teams moving and risks transparent.
Change Management That Drives Adoption
Adoption isn't a memo. It's a system of habits that make the safe path the easy path.
- Lead with one flagship use case: Prove speed and safety on a material problem, then templatize.
- Embed change agents: Plant enablement leads in product, data, and HR to coach teams in real time.
- Operational rituals: Weekly decision reviews, monthly risk forums, quarterly model portfolio reviews.
AI Fluency Across the Enterprise
Fluency beats fear. Set a clear, role-based learning path and keep it practical.
- Executives: risks, ROI cases, and governance levers. 2-hour playbook, not a textbook.
- Builders: data contracts, evaluation methods, prompt and model testing, observability.
- Frontline: safe use, privacy basics, and when to escalate.
If you need structured upskilling, explore curated options by job role at Complete AI Training. For a broader catalog, browse the Latest AI Courses.
An Operating Model That Scales
- Central policy, federated execution: One standard; domain teams deliver with shared tooling.
- AI Council: Product, Risk, Legal, Security, HR, and Data meet on a fixed cadence with clear SLAs.
- Model registry and monitoring: Every production model has an owner, KPIs, test results, and incident logs.
- Data contracts: Quality, lineage, retention, and consent requirements are explicit and testable.
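"Explicit and testable" is the key phrase in the data-contract point: a contract can be expressed as a small object a pipeline checks on every batch, failing fast when data drifts from what was agreed. A minimal sketch under assumed fields and thresholds; the `DataContract` class and sample records are hypothetical.

```python
# Hypothetical data-contract check: requirements are written down as data,
# so quality is enforced by the pipeline rather than by convention.
from dataclasses import dataclass

@dataclass
class DataContract:
    required_fields: set[str]
    max_null_rate: float      # e.g. 0.01 = at most 1% nulls per field
    retention_days: int       # retention requirement, enforced elsewhere
    consent_basis: str        # e.g. "contractual", "consent"

def validate(records: list[dict], contract: DataContract) -> list[str]:
    """Return human-readable violations; an empty list means the batch passes."""
    violations = []
    for f in sorted(contract.required_fields):
        if any(f not in r for r in records):
            violations.append(f"missing field: {f}")
            continue
        nulls = sum(1 for r in records if r.get(f) is None)
        if records and nulls / len(records) > contract.max_null_rate:
            violations.append(f"null rate too high for: {f}")
    return violations

contract = DataContract({"employee_id", "region"}, 0.01, 365, "contractual")
batch = [{"employee_id": 1, "region": "EMEA"}, {"employee_id": 2, "region": None}]
print(validate(batch, contract))  # half the regions are null, far above 1%
```

Keeping the contract in version control alongside the pipeline gives Risk and Legal a reviewable artifact and gives builders a test they can run locally.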
Practical Checklist for Executives
- Publish a one-page AI policy tied to 3-5 enterprise KPIs.
- Adopt risk tiers with matching evidence requirements.
- Stand up a cross-functional AI Council with decision SLAs.
- Require model cards and evaluation results before launch.
- Fund a shared toolkit for data quality, testing, and monitoring.
- Launch an executive AI literacy session and builder enablement track.
- Start with one high-value, compliant use case and document the template.
- Track incidents, bias findings, and model drift with clear owners.
- Review portfolio quarterly and retire models that miss thresholds.
Signals You're On Track
- Time-to-approve new use cases trending down while control findings stay flat or improve.
- Percent of models with complete documentation and monitoring above 90%.
- Incident rate per 1,000 predictions stable or declining.
- Share of AI-enabled products hitting revenue or cost targets within two quarters of launch.
- Employee AI fluency scores and completion rates increasing quarter over quarter.
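A couple of these signals can be rolled up directly from a model registry snapshot. A minimal sketch, assuming a simple registry schema; the field names, sample values, and thresholds are illustrative only.

```python
# Illustrative health-signal rollup over a model registry snapshot.
# The schema and sample data are assumptions, not from the episode.
models = [
    {"name": "churn_v2", "has_docs": True, "monitored": True,  "approval_days": 12},
    {"name": "hr_match", "has_docs": True, "monitored": False, "approval_days": 30},
    {"name": "forecast", "has_docs": True, "monitored": True,  "approval_days": 9},
]

# Share of models with complete documentation AND active monitoring (target: >90%).
complete = sum(m["has_docs"] and m["monitored"] for m in models)
coverage = complete / len(models)

# Median time-to-approve, a simple proxy for the approval trend.
median_approval = sorted(m["approval_days"] for m in models)[len(models) // 2]

print(f"documentation + monitoring coverage: {coverage:.0%}")
print(f"median days to approve: {median_approval}")
```

Wiring numbers like these into the quarterly portfolio review keeps the "signals you're on track" list measurable rather than anecdotal.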
What the Guests Emphasized
- Carla Eid: Treat governance as an accelerator. When standards, lineage, and controls are embedded into delivery, teams move faster with fewer surprises.
- Jennifer Mezzio: HR is pivotal. Clear skills maps, change champions, and ethical use norms build trust and adoption across the workforce.
- Amy Horowitz: The companies winning on AI pair data excellence with pragmatic controls and relentless enablement.
Helpful References
- NIST AI Risk Management Framework for a common risk language and process.
- EU AI regulatory framework overview to align controls with emerging requirements.
Final Take
Responsible AI isn't about saying no. It's about creating a clear path to yes, one that your teams trust and your customers feel.
Use the themes from this conversation to align your leaders, pick one flagship use case, and build a repeatable system. That's how governance accelerates innovation.