Oracle AI World 25: five lessons on getting enterprise AI right with Steve Miranda
Oracle AI World delivered a wave of AI announcements. The sharper story is how customers turn those announcements into results. In a pre-event briefing, Steve Miranda, EVP of Oracle Applications Product Development, cut through the hype and focused on what's working inside enterprises.
Here are five lessons product leaders can act on now.
1) Enterprise AI is different: context eats "shadow AI"
Most teams have scars from one-off generative AI experiments. Shadow AI created risk without repeatable value. The fix is a contextual architecture that grounds AI in your own data, security model, and roles.
Miranda: "What we're enabling is allowing you to use generative AI to query your own business data securely. We take internal data, external data, and reasoning from the gen AI data models, to construct not only BI, but BI advisors, or an AI advisor to you, the business."
- Anchor AI on production data, not generic web content.
- Enforce role-based access at the agent/tooling layer (see the sketch after this list).
- Treat data governance as a feature, not a checklist.
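To make the grounding idea concrete, here is a minimal sketch of that pattern. It is not Oracle's architecture; the names (ROLE_SCOPES, grounded_prompt, the in-memory records) are invented for illustration. The point is where the check lives: access rules sit in the tooling layer, so only data the role may see ever reaches the model.

```python
# Minimal sketch (illustrative, not a product API): ground a gen-AI query on
# role-filtered business data instead of sending the raw question to a model.
from dataclasses import dataclass

@dataclass
class Record:
    table: str
    region: str
    text: str

# Hypothetical in-memory stand-in for governed application data.
RECORDS = [
    Record("invoices", "EMEA", "Invoice 1042: 18,300 EUR, 12 days overdue"),
    Record("invoices", "NA",   "Invoice 2210: 9,750 USD, paid"),
    Record("payroll",  "EMEA", "Payroll run March: 214 employees"),
]

# Role-based access enforced at the agent/tooling layer, not in the prompt.
ROLE_SCOPES = {
    "ap_analyst_emea": {"tables": {"invoices"}, "regions": {"EMEA"}},
    "payroll_admin":   {"tables": {"payroll"},  "regions": {"EMEA", "NA"}},
}

def grounded_prompt(role: str, question: str) -> str:
    scope = ROLE_SCOPES[role]
    allowed = [r for r in RECORDS
               if r.table in scope["tables"] and r.region in scope["regions"]]
    context = "\n".join(r.text for r in allowed)
    # Only records the role is permitted to see are placed in the context.
    return f"Answer using ONLY this data:\n{context}\n\nQuestion: {question}"

print(grounded_prompt("ap_analyst_emea", "Which invoices are overdue?"))
```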
2) Vertical partners extend the last mile
Horizontal features won't close industry gaps. Oracle's Fusion Applications AI Agent Marketplace invites partners, large and small, to build niche agents that fit specific workflows.
Miranda: "The trend is partially industry-specific, and partially long tail, last mile type of automations. We're doing things like ledger and payment agents for core processes. Partners extend with very specific capabilities. The two combined are going to be very effective."
- Open your ecosystem to smaller ISVs; speed beats scale at the edge.
- Prioritize agents that remove manual swivel-chair steps.
- Expect extension, not duplication, of core functionality.
3) Smaller agents, real savings
Chasing "one-button-close" promises is a trap. The returns today come from narrow agents that eliminate friction, stacked across a process.
Miranda shared a simple win from inside Oracle: validating public sector suppliers used to mean a manual review across systems. A small agent now handles the checks end to end. No drama, just time back.
- Inventory long-tail tasks that burn analyst hours; automate those first.
- Measure minutes saved per ticket, request, or record, then multiply by volume (a back-of-the-envelope example follows this list).
- Build a library of small agents; reuse across teams.
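Here is what that minutes-times-volume math looks like in practice. All figures are invented for illustration; they are not Oracle numbers.

```python
# Back-of-the-envelope sketch: minutes saved per record, multiplied by volume.
minutes_saved_per_record = 12        # manual check vs. agent-run check
records_per_month = 1_500
hours_per_fte_month = 160

hours_saved = minutes_saved_per_record * records_per_month / 60
fte_equivalent = hours_saved / hours_per_fte_month

print(f"{hours_saved:.0f} analyst-hours/month ≈ {fte_equivalent:.1f} FTEs")
# -> 300 analyst-hours/month ≈ 1.9 FTEs
```

Small per-record savings become material once you multiply them across a team's real volume, which is why the long-tail tasks are worth inventorying first.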
4) The AI backlash is real: set expectations with math
Overselling AI creates a trust bill you'll have to pay. The wins are incremental, compounding, and defensible. That's enough to move the needle at scale.
Miranda: "If you paid a lot of money on an IT project to do AI, the expectation was, 'We're going to totally get rid of whole job functions.' I don't see those happening. Having AI included with our apps, you start small, and you add things. It's easy to turn on, and accurate for a small piece."
- Model ROI around 10-20% efficiency on targeted tasks; expand from there.
- Ship weekly improvements; avoid big-bang bets (the compounding sketch after this list shows why).
- Socialize before/after baselines so stakeholders see progress.
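A quick illustration of why the weekly cadence adds up. The 0.5% figure is hypothetical, chosen only to show the compounding effect.

```python
# Illustrative only: steady weekly improvements versus a single big-bang bet.
weekly_gain = 0.005          # 0.5% efficiency gain shipped each week
weeks = 52

compounded = (1 + weekly_gain) ** weeks - 1
print(f"Compounded annual gain: {compounded:.1%}")   # ~29.6%
```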
5) Jobs aren't dead; skills are shifting
Technical depth still matters, but the mix changes. Communication, incident handling, and customer clarity rise in value as agents take the first pass on tickets and tasks.
Miranda: "To hire somebody who's an expert at sorting through technical SRs versus hiring somebody who knows how to deal with a customer who's upset… and how to recover if there is an incident… is way more valuable these days. We need fewer of the former and more of the latter."
- Hire for judgment, communication, and systems thinking, then layer on AI skills.
- Upskill existing teams on prompt patterns, agent handoffs, and tool use.
- Redesign roles to pair humans with agents instead of duplicating effort.
Product development takeaways
- Start with data context: define sources, security, and retrieval patterns before UI polish.
- Ship micro-agents: 1-2 week cycles solving one painful step each.
- Instrument everything: add telemetry for context adherence, tool call errors, and user overrides (see the logging sketch after this list).
- Adopt an evaluation playbook: pre/post prompts, golden datasets, red-teaming, and regression gates.
- Budget for 10-20% gains across coding, reviews, and support; compounding beats moonshots.
- Open the partner lane: publish APIs, schemas, and guardrails so ISVs can extend the last mile safely.
- Update hiring profiles: fewer ticket sorters, more customer communicators and AI-savvy generalists.
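As a sketch of the "instrument everything" item, here is one way structured agent telemetry could look. The event schema and the supplier_validation agent name are illustrative, not a product API; swap the print for whatever telemetry sink you already run.

```python
# Sketch of agent telemetry: count context adherence, tool-call errors,
# and user overrides per step before trusting an agent in production.
import json, time, uuid

def log_event(agent: str, event: str, **fields) -> None:
    record = {"ts": time.time(), "event_id": str(uuid.uuid4()),
              "agent": agent, "event": event, **fields}
    print(json.dumps(record))          # replace with your real telemetry sink

log_event("supplier_validation", "tool_call_error",
          tool="registry_lookup", error="timeout")
log_event("supplier_validation", "context_adherence",
          grounded=True, citations=3)
log_event("supplier_validation", "user_override",
          original="reject", final="approve")
```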
Agent evaluation frameworks matter
Evaluation isn't optional with agentic workflows. You need repeatable ways to detect where things break (context drift, tool misuse, or bad handoffs) and fix them fast; a minimal regression-gate sketch follows the list below.
- Benchmark against an external standard like the NIST AI Risk Management Framework for governance signals.
- Threat-model prompts and tools with the OWASP Top 10 for LLM Applications to reduce brittle behavior.
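A minimal regression gate, assuming a hypothetical run_agent call and a tiny golden dataset, might look like this: replay known cases through the agent on every change and block the release if accuracy drops below a threshold.

```python
# Regression-gate sketch: names and cases are illustrative stand-ins.
GOLDEN_CASES = [
    {"input": "Validate supplier ACME Corp",   "expected": "approved"},
    {"input": "Validate supplier Shell Co 99", "expected": "needs_review"},
]

def run_agent(text: str) -> str:
    # Stand-in for the real agent call; replace with your agent endpoint.
    return "approved" if "ACME" in text else "needs_review"

def regression_gate(threshold: float = 0.9) -> bool:
    passed = sum(run_agent(c["input"]) == c["expected"] for c in GOLDEN_CASES)
    score = passed / len(GOLDEN_CASES)
    print(f"golden-set accuracy: {score:.0%}")
    return score >= threshold

assert regression_gate(), "Agent regression detected: block the release"
```

Pair a gate like this with red-teaming and pre/post prompt comparisons so regressions show up in the build, not in production.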
Why this approach works
Enterprise AI wins are built on context, security, and small shipped bets. That rhythm builds trust without headline risk. Stack the saves, evaluate relentlessly, and let the numbers make the case.
Level up your team
If you're formalizing AI skills by role, these curated paths can help product and engineering leaders set a learning plan: AI courses by job.