Management reboot essential for agentic AI strategy
Agentic AI breaks the old rulebook. It behaves like a tool and a coworker at the same time. That split identity creates management debt if you try to run it with legacy technology and HR playbooks.
Research from MIT Sloan Management Review and Boston Consulting Group (BCG) points to a simple truth: you're now managing an asset that also learns on the job. Treat it like a server and you'll miss upside. Treat it like a person and you'll miss scale.
Why this challenges your operating model
Traditional tech scales predictably with specs and depreciation. People adapt, improve, and need coaching and oversight. Agentic AI does both, which means your org must fuse asset management with people management.
IT wants standards and uptime. Finance wants clear ROI and schedules. HR wants performance frameworks and supervision protocols. Agentic systems force you to satisfy all three at once.
The new system: run AI as tool and coworker
- Provider portfolio strategy: Expect negotiation dynamics that feel closer to labor discussions. Your key models, data platforms, and orchestration layers will set your pace and your costs.
- Budget model: Plan for heavy upfront build plus ongoing variable costs (training, inference, monitoring, data work). As BCG's Sylvain Duranton has noted, technology spend will likely take a larger share of budgets than people costs over time.
- Performance management: Give agentic systems objectives, guardrails, and reviews. Use SLAs, error budgets, and human-in-the-loop checkpoints, just as you would for high-stakes roles (see the sketch after this list).
- Org design: Stand up an "AI operations" function that blends IT, data science, product, process excellence, risk, and HR. Give it authority to set standards and stop unsafe deployments.
- Governance and risk: Track model drift, data lineage, and decision logs. Codify escalation paths. Audit outcomes, not just outputs.
- Manager skills: Train teams to brief, supervise, and course-correct AI agents just as they would a new hire.
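A minimal sketch of what this can look like in practice, covering the objectives, error budgets, human-in-the-loop checkpoints, and decision logs named above. Everything here (AgentPolicy, supervise, the 2% error budget) is an illustrative assumption, not a specific platform's API:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AgentPolicy:
    """Illustrative guardrail config for one agentic use case."""
    name: str
    objective: str
    error_budget: float  # max tolerated error rate per review period
    requires_human_approval: set = field(default_factory=set)  # high-stakes actions

@dataclass
class DecisionLog:
    """Append-only record so audits cover outcomes, not just outputs."""
    entries: list = field(default_factory=list)

    def record(self, action: str, outcome: str, escalated: bool) -> None:
        self.entries.append({
            "ts": datetime.utcnow().isoformat(),
            "action": action,
            "outcome": outcome,
            "escalated": escalated,
        })

def supervise(policy: AgentPolicy, action: str, observed_error_rate: float,
              log: DecisionLog) -> str:
    """Human-in-the-loop checkpoint: escalate, block, or allow."""
    if action in policy.requires_human_approval:
        log.record(action, "pending_review", escalated=True)
        return "escalate"  # route to a human reviewer
    if observed_error_rate > policy.error_budget:
        log.record(action, "blocked", escalated=True)
        return "block"  # error budget exhausted; pause the agent
    log.record(action, "allowed", escalated=False)
    return "allow"

# Example: an invoice-processing agent with a 2% error budget (assumed).
policy = AgentPolicy(
    name="invoice-agent",
    objective="Process supplier invoices within SLA",
    error_budget=0.02,
    requires_human_approval={"payment_over_10k"},
)
log = DecisionLog()
print(supervise(policy, "payment_over_10k", 0.01, log))  # -> escalate
print(supervise(policy, "standard_invoice", 0.05, log))  # -> block
```

The design choice mirrors how you would manage a new hire: objectives up front, a tolerance for mistakes, and a clear escalation path when the stakes exceed the agent's mandate.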
Rethink timing and valuation
Tools depreciate. People appreciate. Agentic AI does both. Models lose edge through drift while improving through fine-tuning and new capabilities.
- Stage-gated funding: Approve builds in tranches tied to measurable learning and risk reduction, not just feature delivery.
- Option-based thinking: Model upside from future use cases and platform reuse, not just today's workflow saves.
- Continuous refresh: Replace fixed "three-year cycles" with rolling upgrades and controlled experiments.
- Cost-of-delay: Track value erosion if you pause updates; lag compounds fast in AI.
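A back-of-the-envelope sketch of that cost-of-delay point. The 4% monthly decay rate and $100k baseline are assumptions for illustration; calibrate both from your own eval history and P&L:

```python
# Hypothetical cost-of-delay model: value erodes at a compounding monthly
# rate while model updates are paused.
def value_after_pause(monthly_value: float, decay_rate: float, months_paused: int) -> float:
    """Monthly value still delivered after a pause, assuming compounding erosion."""
    return monthly_value * (1 - decay_rate) ** months_paused

baseline = 100_000.0  # value per month from the use case today, in dollars (assumed)
for pause in (0, 3, 6, 12):
    remaining = value_after_pause(baseline, decay_rate=0.04, months_paused=pause)
    print(f"{pause:>2} months paused -> ${remaining:,.0f}/month")
```

Because the erosion compounds, a twelve-month pause costs far more than four times a three-month pause, which is the argument for rolling upgrades over fixed cycles.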
Procurement and vendor management 2.0
Duranton urged leaders to treat provider management as a strategic discipline. As model costs and capabilities shift, your vendor mix can either lock you in or keep you flexible.
- Map dependencies: Foundation models, vector databases, orchestration, safety layers, data vendors. Know your switching costs (a dependency-map sketch follows this list).
- Dual-source critical layers: Maintain at least two viable options for core components to keep leverage and resilience.
- Incentives in contracts: Tie pricing and access to measurable quality, safety, and upgrade cadence.
- Data rights first: Secure usage terms, retention, and deletion on day one. Future cost and risk live in the fine print.
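One way to make the dependency map concrete. The layers, vendor names, and switching-cost ratings below are placeholders; the point of the exercise is to surface single-sourced layers where lock-in risk concentrates:

```python
# Illustrative dependency map for the stack layers named above.
stack = {
    "foundation_model": {"primary": "vendor_a", "backup": "vendor_b", "switching_cost": "high"},
    "vector_database":  {"primary": "vendor_c", "backup": "vendor_d", "switching_cost": "medium"},
    "orchestration":    {"primary": "vendor_e", "backup": None,       "switching_cost": "high"},
    "safety_layer":     {"primary": "vendor_f", "backup": "vendor_g", "switching_cost": "low"},
}

# Flag single-sourced layers: no backup plus high switching cost = weak leverage.
for layer, info in stack.items():
    if info["backup"] is None:
        print(f"RISK: {layer} is single-sourced with {info['switching_cost']} switching cost")
```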
Board-ready budget structure
- Capex: Model development, data pipelines, evaluation tooling, integration.
- Opex (variable): Inference, fine-tuning, monitoring, human oversight, red-teaming, data curation.
- Innovation reserve: A fixed monthly pool for rapid experiments that graduate winning use cases into production.
- Risk buffer: Compliance, security, incident response, and insurance.
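A sketch of how the four buckets might split. The total and the percentages are assumptions for discussion, not benchmarks:

```python
# Illustrative split of the four-bucket budget structure above.
total_budget = 5_000_000  # annual agentic AI budget, dollars (assumed)
buckets = {
    "capex (models, pipelines, eval tooling, integration)": 0.40,
    "opex (inference, fine-tuning, monitoring, oversight)": 0.35,
    "innovation reserve (rapid experiments)": 0.15,
    "risk buffer (compliance, security, incident response)": 0.10,
}
for bucket, share in buckets.items():
    print(f"{bucket}: ${total_budget * share:,.0f}")

assert abs(sum(buckets.values()) - 1.0) < 1e-9  # shares must cover the full budget
```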
Measure what matters
- Cycle time: Lead time from request to verified output.
- Quality: Accuracy, completeness, and user satisfaction by use case.
- Safety and control: Rate of blocked actions, escalation counts, audit coverage.
- Unit economics: Cost per successful task and cost per avoided error (computed in the sketch after this list).
- Learning velocity: Time from feedback to model or policy update.
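A minimal sketch of the unit-economics calculations, assuming your monitoring already records per-task cost and outcome fields like those below:

```python
# Hypothetical task records from an agent's monitoring pipeline.
tasks = [
    {"cost": 0.40, "success": True,  "error_avoided": False},
    {"cost": 0.35, "success": True,  "error_avoided": True},
    {"cost": 0.50, "success": False, "error_avoided": False},
]

total_cost = sum(t["cost"] for t in tasks)
successes = sum(t["success"] for t in tasks)
errors_avoided = sum(t["error_avoided"] for t in tasks)

# Guard against division by zero when a batch has no successes yet.
cost_per_successful_task = total_cost / successes if successes else float("inf")
cost_per_avoided_error = total_cost / errors_avoided if errors_avoided else float("inf")

print(f"cost per successful task: ${cost_per_successful_task:.2f}")
print(f"cost per avoided error:   ${cost_per_avoided_error:.2f}")
```

Note that total cost sits in the numerator of both metrics: failed tasks still burn inference spend, which is why cost per successful task, not cost per call, is the number to watch.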
90-day leadership plan
- Inventory all agentic use cases; tag each as tool, coworker, or hybrid.
- Publish a one-page provider strategy with exit options and trigger conditions.
- Adopt a standard "AI coworker" SOP: scoping, prompts/policies, supervision, escalation, and review.
- Stand up a weekly ModelOps review: drift checks, eval scores, incidents, and roadmap decisions (a minimal drift-check sketch follows this list).
- Lock a stage-gated funding model and report it to the board.
- Upskill managers on supervising AI agents and reading eval dashboards.
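For the weekly ModelOps review, a stripped-down drift check. Production pipelines use richer statistics (population stability index, KS tests); this sketch only compares mean eval scores against a baseline, and the scores and 0.05 tolerance are assumed:

```python
from statistics import mean

def drift_alert(baseline_scores: list, current_scores: list, tolerance: float = 0.05) -> bool:
    """Flag when the mean eval score drops more than `tolerance` below baseline."""
    return mean(baseline_scores) - mean(current_scores) > tolerance

baseline = [0.91, 0.93, 0.90, 0.92]   # last month's eval scores (assumed)
this_week = [0.84, 0.86, 0.85, 0.83]  # this week's eval scores (assumed)

if drift_alert(baseline, this_week):
    print("Drift detected: add to this week's review agenda and consider rollback.")
```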
Why the old playbook fails
Conventional replacement schedules assume value fades slowly and predictably. Agentic AI moves faster. The most valuable uses often appear after deployment, once people learn how to work with the system and the underlying data improves.
If you delay upgrades, your returns decay. If you under-invest in oversight, risk climbs. The winners will treat agentic AI as both a balance sheet asset and a learning teammate.
Further reading: See the research at MIT Sloan Management Review and BCG's technology perspectives at BCG X.
If you're building management skills to supervise AI agents and set the right metrics, explore executive-focused options at Complete AI Training.