Oracle APEX AI Agents Move Beyond Chat to Executable Workflows
Oracle APEX now lets developers build applications where users can ask questions in natural language and have the system reason over context, retrieve business data, and execute actions. The platform introduces AI Agents as a structured way to connect large language models directly to application logic and databases.
This represents a shift from basic chatbots. Instead of limiting AI to text generation or answering static questions, AI Agents in APEX enable users to accomplish tasks within real application workflows.
The difference between AI assistants and AI agents
A traditional AI assistant responds to prompts. It can summarize content, rewrite text, classify input, or answer questions based on available information.
An AI Agent works toward a goal. The system understands user intent, determines what information is needed, retrieves context from the application, executes actions, observes results, and continues until delivering a useful outcome. This pattern is called agentic AI.
In enterprise applications, the distinction matters. Users rarely want only an answer. They want progress on a task.
What makes a system agentic
Three core capabilities define an agentic system:
- Reasoning over user intent. The model interprets requests and determines what users are actually trying to accomplish, even with ambiguous or high-level input.
- Access to external capabilities. The model can invoke application-defined functions or tools to retrieve data or perform actions.
- Iterative execution. The system works in a loop: ask for context, call a tool, inspect the result, and continue until the task is complete.
The model becomes an orchestrator sitting on top of your application logic rather than an isolated text engine.
Function calling: The foundation
Large language models do not natively know your application data, customer records, approval rules, or business processes. Function calling gives the model a safe, structured mechanism for interacting with them.
APEX introduces this through Tools. Developers define tools under an AI Agent, and those tools can execute on the server side or client side. They can run on demand when the model decides they are needed, or augment the system prompt upfront by injecting fresh context with each user message.
The key design principle: the model does not directly query your tables or run arbitrary code. Instead, it works from a curated list of capabilities you expose. This gives developers precise control over what the model is allowed to do.
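To make the "curated list of capabilities" concrete, here is a minimal sketch of a tool declaration in the general style of a function-calling schema. The names, the schema shape, and the stubbed implementation are all hypothetical illustrations, not APEX APIs: the point is that the model only sees a name, a description, and parameters, while the application owns the code behind them.

```javascript
// Hypothetical tool declaration: what the model is told it may call.
const getOpenOrdersTool = {
  name: "get_open_orders",
  description: "Return open orders for a given customer.",
  parameters: {
    type: "object",
    properties: {
      customerId: { type: "string", description: "Customer identifier" },
    },
    required: ["customerId"],
  },
};

// The application, not the model, owns the implementation behind the name.
// In APEX this would be a scoped, read-only query; stubbed here for clarity.
const toolImplementations = {
  get_open_orders: ({ customerId }) => [
    { orderId: "SO-1001", customerId, status: "OPEN" },
  ],
};

// Only names on the curated list can ever execute.
function invokeTool(name, args) {
  const impl = toolImplementations[name];
  if (!impl) throw new Error(`Tool not exposed: ${name}`);
  return impl(args);
}
```

Because `invokeTool` refuses anything outside the registry, the model cannot reach arbitrary code or tables no matter what it generates.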
How the agent loop works in APEX
The interaction becomes a loop rather than a single request-response exchange:
- The user sends a message.
- APEX includes the conversation context, system prompt, and available tools.
- The model decides whether it can answer directly or needs to call one or more tools.
- APEX executes those tools, and the results go back into the conversation.
- The model continues, either making further tool calls or producing the final response.
That loop is what makes the feature agentic rather than merely conversational.
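The loop described above can be condensed into a short sketch. The model here is a scripted stand-in that first requests a tool and then answers from the result; everything in this snippet is illustrative, not APEX internals.

```javascript
// Minimal agent loop: call the model, run any requested tool, feed the
// result back, and repeat until the model produces a final answer.
function runAgentLoop(model, tools, userMessage, maxTurns = 5) {
  const messages = [{ role: "user", content: userMessage }];
  for (let turn = 0; turn < maxTurns; turn++) {
    const reply = model(messages); // model sees the full conversation
    if (reply.toolCall) {
      const result = tools[reply.toolCall.name](reply.toolCall.args);
      // Tool results re-enter the conversation for the next turn.
      messages.push({
        role: "tool",
        name: reply.toolCall.name,
        content: JSON.stringify(result),
      });
      continue;
    }
    return reply.content; // a direct answer ends the loop
  }
  throw new Error("Agent loop did not converge");
}

// Scripted model: asks for data first, then answers from the tool result.
const scriptedModel = (messages) =>
  messages.some((m) => m.role === "tool")
    ? { content: "You have 1 open order." }
    : { toolCall: { name: "count_open_orders", args: {} } };

const tools = { count_open_orders: () => ({ count: 1 }) };
```

A single user message can therefore trigger several model turns before the user sees anything, which is exactly the difference between conversational and agentic behavior.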
Building agents in APEX
Oracle APEX brings AI Agents into the platform with a declarative approach developers already know. You create an AI Agent as a Shared Component, attach it to an AI Assistant chat experience, define your tools, configure parameters - and the agent is ready.
Business logic stays in PL/SQL. Data stays in Oracle Database. Tools run where they belong: server-side code in the database, client-side code in the browser. A developer can go from idea to working agent without leaving the APEX builder.
APEX handles three built-in tool types:
- Retrieve Data. Returns results from a SQL query to the model. This is the most common type - it gives the agent access to application data through scoped, read-only queries.
- Execute Server-side Code. Runs a PL/SQL block in the database. Use this when the agent needs to take action - create a record, update a status, send a notification - not just read data.
- Execute Client-side Code. Runs JavaScript in the user's browser. Use this for capabilities only the client has - reading browser state, showing confirmation dialogs, or triggering UI actions.
Developers can also create custom and reusable Generative AI Tool plug-ins under Shared Components. These appear in the tool type list alongside built-in options, making it easy to standardize and share tool implementations across agents and applications.
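One way to picture the three built-in types plus plug-ins is as a dispatch table keyed by tool type. The handler names and stubs below are assumptions for illustration; in APEX the server-side branches would run SQL and PL/SQL in the database.

```javascript
// Stubs standing in for the real execution environments.
const runScopedQuery = (sql) => ({ kind: "rows", sql });      // read-only SQL
const runPlsqlBlock = (plsql) => ({ kind: "action", plsql }); // DB-side action
const runInBrowser = (js) => ({ kind: "client", js });        // browser-side work

// Hypothetical dispatch over the built-in tool types.
const handlers = {
  RETRIEVE_DATA: (tool) => runScopedQuery(tool.sql),
  EXECUTE_SERVER_CODE: (tool) => runPlsqlBlock(tool.plsql),
  EXECUTE_CLIENT_CODE: (tool) => runInBrowser(tool.js),
};

function executeTool(tool) {
  const handler = handlers[tool.type];
  if (!handler) throw new Error(`Unknown tool type: ${tool.type}`);
  return handler(tool);
}
```

A plug-in, in this picture, is simply another entry added to the table, which is why plug-ins appear alongside the built-in options in the tool type list.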
Two execution modes for tools
Augment System Prompt tools execute for each new message, and their results are injected into the conversation history as system messages. They supply context such as the current user's name, the current date and time, or retrieval-augmented generation (RAG) results derived from the user prompt or chat history.
On Demand tools execute only when the AI service invokes them during response generation. They can take parameters and either return data to the AI service or simply perform a task. Retrieval happens more naturally in this mode because the AI service requests data only when it needs it.
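The practical difference is where the tool output enters the conversation. A sketch of the augment mode, with hypothetical names, shows each augment tool running before the user message and its output landing in the history as a system message (on-demand tools, by contrast, only run inside the loop when the model asks):

```javascript
// Augment mode: every augment tool runs up front on each turn, and its
// output is injected as a system message ahead of the user message.
function buildTurnMessages(history, userMessage, augmentTools) {
  const messages = [...history];
  for (const tool of augmentTools) {
    messages.push({ role: "system", content: tool.run() });
  }
  messages.push({ role: "user", content: userMessage });
  return messages;
}

// Hypothetical augment tool that injects the current user's name.
const currentUserTool = { run: () => "Current user: AVA" };
const messages = buildTurnMessages([], "What's on my plate today?", [
  currentUserTool,
]);
```

The cost trade-off follows directly: augment tools pay their price on every message, while on-demand tools cost nothing until the model actually needs them.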
A human checkpoint inside the loop
Client-side tools enable a critical pattern: requiring explicit user consent before sensitive actions. The model can reason, retrieve data, and prepare actions autonomously. But when an action crosses a boundary - notifying a colleague, sending an email, updating a shared record - the application can require user approval.
For example, client-side code can show a confirmation dialog before a sensitive action. APEX waits for the dialog response before returning the result to the model. The agent loop pauses until the user responds. The resolved value is a simple string ("confirmed" or "denied") that the model can reason over to decide whether to proceed.
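A minimal sketch of that consent checkpoint is a client-side function that resolves to "confirmed" or "denied" only after the user responds; the `showDialog` callback stands in for a real confirmation UI and is an assumption, not an APEX API.

```javascript
// The agent loop awaits this promise; nothing proceeds until the user
// answers the dialog. The resolved value is a plain string the model
// can reason over.
function confirmAction(description, showDialog) {
  return new Promise((resolve) => {
    showDialog(description, (approved) =>
      resolve(approved ? "confirmed" : "denied")
    );
  });
}

// Example: a stubbed dialog that approves immediately, for testing.
const autoApprove = (_description, respond) => respond(true);
```

Because the promise only settles on user input, the checkpoint is enforced by the application rather than by asking the model to behave.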
Why this matters for APEX developers
APEX applications already centralize business data in Oracle Database, rules in PL/SQL, UI behavior in declarative components and JavaScript, and security in application-level authorization schemes and conditions.
AI Agents give developers a structured way to expose those existing assets to an AI model without rebuilding anything. You don't need to redesign your data model or move your business logic. You incrementally enable AI-native workflows on top of what already exists.
In real business workflows, users are rarely looking for a paragraph of generated text. They want help making progress. They want the application to tell them what matters, what is blocked, and what to do next. With AI Agents in APEX, that becomes a native design pattern rather than a custom integration.
For product development professionals, understanding how to build AI-native applications and implement agentic workflows is increasingly critical. The AI Learning Path for Product Managers covers how to structure these capabilities into your product strategy. And for those building with Generative AI and LLM technologies, the underlying concepts of reasoning, function calling, and iterative execution apply across platforms.