AI inference moves from experiment to daily operations
AI inference has become a core operational workload for most enterprises, according to F5's 2026 State of Application Strategy report. Organisations now run an average of seven AI models in production, with 78% operating their own inference infrastructure.
The shift marks AI's move from trial projects to routine use in business systems. For 77% of respondents, inference has overtaken model building and training as the main AI activity.
Distributed systems, not single platforms
Only 8% of organisations rely solely on public AI services. The rest use a mix of models and environments, creating operational demands around routing, fallback systems and policy controls.
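The routing-and-fallback demand can be made concrete with a minimal sketch. This is purely illustrative and not from the report: `ModelEndpoint` and `route_with_fallback` are hypothetical names standing in for whatever gateway an organisation actually runs.

```python
# Illustrative sketch only: try a primary model endpoint, then fall back
# to alternatives in priority order. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelEndpoint:
    name: str
    call: Callable[[str], str]  # takes a prompt, returns a completion

def route_with_fallback(prompt: str, endpoints: list[ModelEndpoint]) -> str:
    """Try each endpoint in order; fall back when one fails."""
    errors = []
    for ep in endpoints:
        try:
            return ep.call(prompt)
        except Exception as exc:  # a real gateway would narrow this
            errors.append((ep.name, str(exc)))
    raise RuntimeError(f"All endpoints failed: {errors}")
```

A production gateway would add timeouts, per-endpoint health checks and policy hooks, but the shape of the problem is the same: ordered candidates plus an explicit failure path.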
That complexity deepens as infrastructure spreads. Ninety-three percent of organisations work across multiple clouds, while 86% run applications across on-premises, public cloud and colocation environments. More than half orchestrate multiple AI models, turning inference into a distributed systems problem rather than a single-platform task.
Managing this mix requires new control points. Twenty-nine percent of organisations identified prompt layers as their main delivery mechanism for AI workloads, while 23% prioritised token layers for both delivery and security.
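What a prompt-layer or token-layer control point does can be sketched in a few lines. This is an assumption-laden illustration, not the report's definition: the blocklist, the budget and both function names are invented for the example.

```python
# Illustrative sketch: policy checks applied before a request reaches a
# model. The terms, budget and function names are hypothetical.
BLOCKED_TERMS = {"internal-api-key", "drop table"}
MAX_TOKENS = 512

def enforce_prompt_policy(prompt: str) -> None:
    """Prompt-layer control: reject requests that violate content policy."""
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            raise PermissionError(f"Prompt blocked by policy: {term!r}")

def enforce_token_budget(prompt: str) -> None:
    """Token-layer control: cap spend per request.
    Crude whitespace splitting stands in for a real tokenizer."""
    if len(prompt.split()) > MAX_TOKENS:
        raise ValueError("Prompt exceeds token budget")
```

The point is that both layers sit in the delivery path, so the same middleware that routes traffic can also enforce security and cost policy.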
Security and governance become the constraint
Eighty-eight percent of respondents had faced AI-related security challenges. Ninety-eight percent are preparing for agentic AI systems that require identities, permissions and controls similar to those used for human users.
Seventy-seven percent expect identity and access problems as AI agents expand. Nearly two-thirds already allow AI to adjust policies and configurations on its own, giving automated systems a more direct role in IT operations.
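Treating an agent like a human user means giving it an identity and an explicit permission set, and checking that set before it touches configuration. The sketch below is a minimal assumption, not any vendor's API: `AgentIdentity`, the `config:write` permission string and `apply_config_change` are all hypothetical.

```python
# Illustrative sketch: an AI agent gets an identity and scoped
# permissions, like a human user. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    agent_id: str
    permissions: set[str] = field(default_factory=set)

def apply_config_change(agent: AgentIdentity, setting: str,
                        value: str, config: dict) -> None:
    """Only agents granted 'config:write' may adjust configuration."""
    if "config:write" not in agent.permissions:
        raise PermissionError(f"{agent.agent_id} lacks config:write")
    config[setting] = value  # audited in a real system
```

The design choice mirrors least-privilege access for people: an agent that only needs to read telemetry never receives the write scope.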
Kunal Anand, Chief Product Officer at F5, said the findings show a clear operational shift. "AI has moved from experimentation to operations. The question now is not whether companies will use AI, but whether they can run it reliably, securely, and at scale," he said.
Operations teams now own the problem
The central question for businesses is no longer whether to adopt AI tools. It's how to manage cost, reliability, identity and policy enforcement when AI systems are part of the production estate.
As organisations add more models and spread workloads across clouds and on-premises systems, the operational burden widens. Distributed inference, hybrid infrastructure and automated decision-making create a more complex environment for security teams and technology leaders to manage.
For operations professionals, this means AI for Operations is no longer a future concern. It's already embedded in the infrastructure that supports everyday applications and services.