Koog 0.4.0 Update: Enhancing Kotlin AI Agent Development
JetBrains has released Koog 0.4.0, an important update to its Kotlin framework for building AI agents. This version introduces native structured output aimed at improving reliability in production environments. It also extends platform support to Apple’s iOS and adds GPT-5 compatibility along with OpenTelemetry support.
Announced on August 28, Koog 0.4.0 focuses on making AI agents more observable, easier to deploy, and more predictable. Developers can now build and monitor agents across multiple platforms with improved tools and controls. The source code is available on GitHub.
Native Structured Output for Reliable Data Handling
One of the key challenges with large language models (LLMs) is ensuring the output consistently matches the required data format. Koog 0.4.0 addresses this by adding native structured output support for compatible LLMs. When a model supports structured output natively, the framework uses that capability directly, so responses arrive already conforming to the requested schema.
If the model doesn’t support structured output natively, Koog employs a fallback mechanism: a tuned prompt-and-retry process combined with a fixing parser powered by a separate model. This approach retries and corrects outputs until the data payload meets the exact format expected, reducing the risk of failures in production.
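The fallback flow can be pictured with the minimal Kotlin sketch below. It is a conceptual illustration rather than Koog's actual API: the LlmClient interface, the requestStructuredWithFallback helper, and the WeatherReport schema are all hypothetical, and only kotlinx.serialization is used for parsing.

```kotlin
import kotlinx.serialization.Serializable
import kotlinx.serialization.decodeFromString
import kotlinx.serialization.json.Json

// Conceptual sketch of the fallback path, not Koog's actual API: ask the main model
// for JSON, try to parse it, and hand malformed output to a separate "fixing" model
// before retrying. LlmClient and requestStructuredWithFallback are hypothetical.
@Serializable
data class WeatherReport(val city: String, val temperatureCelsius: Double)

interface LlmClient {
    suspend fun complete(prompt: String): String
}

suspend fun requestStructuredWithFallback(
    mainModel: LlmClient,
    fixingModel: LlmClient,
    prompt: String,
    maxRetries: Int = 3,
): WeatherReport {
    val json = Json { ignoreUnknownKeys = true }
    var raw = mainModel.complete(
        "$prompt\nRespond only with JSON: {\"city\": string, \"temperatureCelsius\": number}"
    )
    repeat(maxRetries) {
        try {
            return json.decodeFromString<WeatherReport>(raw)
        } catch (e: Exception) {
            // Let the fixing model repair the payload, then parse again on the next pass.
            raw = fixingModel.complete("Fix this text so it is valid JSON for the schema: $raw")
        }
    }
    error("Structured output could not be recovered after $maxRetries attempts")
}
```

The key idea is that the fixing model only sees malformed payloads, so the extra cost is paid only when the primary model drifts from the expected schema.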
iOS Support via Kotlin Multiplatform
With this release, Koog extends its reach to Apple’s iOS platform, emphasizing Kotlin Multiplatform capabilities. Developers can now write their AI agent once and deploy it to iOS, Android, and JVM back ends. This unification keeps strategy graphs, observability hooks, and tests consistent across platforms.
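In practice, the shared agent logic lives in a Kotlin Multiplatform module. The Gradle sketch below shows one plausible setup; the Koog artifact coordinates and version numbers are assumptions rather than values taken from the release notes.

```kotlin
// build.gradle.kts for a shared agent module (illustrative sketch; the dependency
// coordinates and plugin version are assumptions, not copied from the release notes).
plugins {
    kotlin("multiplatform") version "2.0.20"
}

kotlin {
    jvm()
    iosArm64()
    iosSimulatorArm64()
    // androidTarget() can be added as well, with the Android Gradle plugin applied.

    sourceSets {
        commonMain.dependencies {
            // Shared strategy graphs, observability hooks, and agent logic live here.
            implementation("ai.koog:koog-agents:0.4.0")
        }
    }
}
```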
Note that iOS support requires upgrading to Koog 0.4.1 for building and deployment on Apple devices.
GPT-5 Compatibility and Custom LLM Parameters
Koog 0.4.0 introduces compatibility with the GPT-5 model, allowing developers to leverage the latest advances in large language models. It also adds custom LLM parameters such as reasoningEffort, which control how much reasoning the model applies to a request, making it easier to balance quality, cost, and latency on complex tasks.
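As a rough illustration, the snippet below shows how a reasoningEffort-style knob might be threaded through request parameters. The parameter name comes from the release notes, but the surrounding types and values are hypothetical, not Koog's documented API.

```kotlin
// Illustrative only: a hypothetical parameter holder showing how a
// reasoningEffort-style knob trades answer quality against cost and latency.
enum class ReasoningEffort { LOW, MEDIUM, HIGH }

data class LlmParams(
    val model: String,
    val temperature: Double = 0.7,
    val reasoningEffort: ReasoningEffort = ReasoningEffort.MEDIUM,
)

fun paramsForTask(complexTask: Boolean): LlmParams =
    if (complexTask) {
        // Spend more reasoning effort on hard, multi-step tasks.
        LlmParams(model = "gpt-5", reasoningEffort = ReasoningEffort.HIGH)
    } else {
        // Cheaper and faster for routine requests.
        LlmParams(model = "gpt-5", reasoningEffort = ReasoningEffort.LOW)
    }
```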
OpenTelemetry Integration for Better Observability
The update adds support for OpenTelemetry, enhancing monitoring capabilities. This integration works with tools such as the W&B Weave AI development toolkit and the Langfuse open-source LLM engineering platform. Developers can install plugins on agents and connect them to their preferred back end.
With OpenTelemetry, teams gain detailed insight into nested agent events, token usage, and cost breakdowns per request, making it easier to analyze performance and troubleshoot issues.
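A configuration along the lines of the sketch below is a reasonable mental model. The install(OpenTelemetry) block, the AIAgent builder, and simpleOpenAIExecutor are assumptions about Koog's plugin-style API rather than verified signatures; only the OTLP span exporter comes from the standard OpenTelemetry Java SDK.

```kotlin
// Hedged sketch: the AIAgent builder, simpleOpenAIExecutor, and install(OpenTelemetry)
// block are assumptions about Koog's plugin-style API and may not match the real
// signatures. Koog imports are omitted because the exact package names are not
// confirmed here; the endpoint below is a placeholder.
import io.opentelemetry.exporter.otlp.trace.OtlpGrpcSpanExporter

val agent = AIAgent(
    executor = simpleOpenAIExecutor(System.getenv("OPENAI_API_KEY")),
    llmModel = OpenAIModels.Chat.GPT4o,
) {
    install(OpenTelemetry) {
        addSpanExporter(
            OtlpGrpcSpanExporter.builder()
                // Point this at Langfuse, W&B Weave, or any other OTLP-compatible back end.
                .setEndpoint("https://otel-collector.example.com")
                .build()
        )
    }
}
```

Once spans are exported, the chosen back end can group them by agent run to surface nested events, token usage, and cost per request.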
RetryingLLMClient for Improved Resilience
To handle common issues like timeouts, network glitches, and unstable API behavior, Koog 0.4.0 introduces the RetryingLLMClient. It includes three presets—Conservative, Production, and Aggressive—that provide different retry strategies suited to various environments.
Developers also get fine-grained control over retry behavior, and the release adds support for DeepSeek models, further improving the robustness of LLM interactions.
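A typical setup might look like the sketch below. The preset names follow the release notes, while the exact class and constructor signatures are assumptions rather than confirmed Koog API.

```kotlin
// Hedged sketch: preset names follow the release notes; the client class and
// constructor shapes are assumptions rather than verified Koog signatures.
val baseClient = OpenAILLMClient(System.getenv("OPENAI_API_KEY"))

// Wrap the provider client with the Production retry preset; Conservative and
// Aggressive trade off retry counts and backoff differently. The same wrapper
// can sit in front of other provider clients, such as a DeepSeek client.
val resilientClient = RetryingLLMClient(baseClient, RetryConfig.PRODUCTION)
```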
Conclusion
Koog 0.4.0 delivers practical improvements for Kotlin developers working on AI agents. Native structured output boosts reliability, iOS support expands deployment options, and GPT-5 compatibility offers access to newer models. OpenTelemetry integration and retry strategies strengthen observability and fault tolerance.
For those looking to deepen their AI development skills using Kotlin or explore AI agent creation further, resources are available at Complete AI Training.