Visma Scales AI with Developer-First Adoption, Federated Model, and Domain-Specific Agents
Visma turns AI hype into value with central enablement and local ownership, starting with developers, moving from experiments to production, and building agents on domain expertise.

From Hype to Value: How Visma Builds AI-Native Products, Teams, and Growth
AI is moving fast in business software. Visma is turning that pace into practical value by combining central enablement with local ownership across its companies in Europe and Latin America.
The goal is simple: bake AI into people, process, and product, then scale what works. As AI matures, the company is moving from experiments to production with clear guardrails.
The operating model: central enablement, local ownership
Visma supports a group of over a hundred software companies. Each market has different rules and customer needs. A top-down playbook would slow them down; full decentralization would fragment the stack.
The answer: a central AI team negotiates vendor terms, secures access to APIs and tools, and sets security and compliance guidance. Each company decides how to apply AI in its own context. As Jacob Nyman, AI Director at Visma, puts it: "It's really not wise to be too rigid or pretend to know exactly how to do things from a central perspective."
The four shifts to an AI-native organization
- AI-native workforce: Employees use AI daily to amplify skills and output.
- AI-native product development: AI supports the lifecycle, from prototyping to coding assistance, speeding delivery and improving quality.
- AI-native products: Customer-facing features include intelligent capabilities, not bolt-ons.
- AI-native growth functions: Support, sales, and marketing use AI to improve response times and outcomes.
Each layer strengthens the others. Internal skill builds confidence to ship AI features. Real customer results push employees to go further.
Developer-first adoption compounds speed
The biggest internal win started with developers. Early usage of coding assistants sat around 5-7%. Today, nearly every technical employee uses AI-assisted coding daily.
Teams use a mix of tools such as GitHub Copilot, Cursor, and Windsurf; that variety boosted adoption. The impact goes beyond code generation: AI-assisted development compresses the path from design to delivery.
Customer-facing teams follow a similar pattern. Support operations use AI to improve response quality and handle volume. Across the business, Google's Workspace stack (Gemini, AI Studio) supports productivity and specialized tasks.
Three phases: enablement, acceleration, optimization
- Enablement: Secure access, partnerships, and safe defaults. Make it easy to start.
- Acceleration: Capture what works locally and distribute patterns across the group.
- Optimization: Standardize on the frameworks and tech that prove themselves, without premature lock-in.
This keeps momentum high while avoiding rigid bets in a moving field.
The next frontier: from assistants to agents
Chatbots answered questions. Assistants helped individuals. Now, teams are building agents that execute workflows across systems.
Nyman puts it plainly: "From chatbots to copilots, to assistants, to agents." This shift is not a rebrand. It demands new engineering: reliable orchestration across tools, context control, and guardrails for autonomy.
Emerging standards like the Model Context Protocol (MCP) are gaining attention because they enable richer, safer context and tool use. Autonomy introduces risk, so human oversight is kept for critical actions. The most effective deployments blend automation with approvals.
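To make the idea concrete, here is a minimal sketch of a tool server built with the MCP Python SDK's FastMCP helper. The server name, the vat_rate tool, and its lookup table are hypothetical illustrations, not Visma's implementation; the point is that an agent connecting over MCP discovers the tool and calls it with structured, typed arguments rather than scraping context ad hoc.

```python
# minimal_mcp_server.py - illustrative MCP tool server (hypothetical domain tool)
from mcp.server.fastmcp import FastMCP

# Name the server; an MCP-capable agent discovers its tools at connect time.
mcp = FastMCP("tax-tools")

# Hypothetical lookup table standing in for real domain logic.
VAT_RATES = {("NO", "standard"): 0.25, ("FR", "standard"): 0.20}

@mcp.tool()
def vat_rate(country: str, category: str = "standard") -> float:
    """Return the VAT rate for a country code and goods category."""
    try:
        return VAT_RATES[(country.upper(), category)]
    except KeyError:
        # MCP surfaces the error to the agent instead of letting it guess.
        raise ValueError(f"No VAT rate known for {country}/{category}")

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```

Because the protocol standardizes discovery and invocation, the same server can back several agents without per-agent integration work, which is what makes richer and safer tool use practical at group scale.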
Agents that win are built on domain expertise
Generic agents handle surface tasks. The durable gains come from agents that encode deep, local knowledge.
Examples inside Visma show this: legal experts in Norway co-developed tax and accounting agents with AI teams. In France, region-specific regulations were embedded directly into applications. The closer agents are tied to real expertise, the better they perform, and the more trust they earn.
Why agents will stick: nations keep investing in AI, the human brain proves that efficient general intelligence is possible, and the agent model maps cleanly to how businesses already work. "Agents strike this perfect abstraction level where they are limited enough to be designed but capable enough to do amazing stuff," says Nyman.
What product leaders can do now
- Adopt developer-first: Standardize access to coding assistants and track impact on cycle time, PR quality, and defects.
- Instrument usage: Measure adoption, latency, cost per task, and user satisfaction for AI features and internal tools.
- Codify safety: Define data boundaries, review gates, and human-in-the-loop thresholds by risk level (see the sketch after this list).
- Start narrow with agents: Pick one workflow that spans multiple systems, integrate with tool APIs, and require approvals for irreversible actions.
- Build with domain experts: Pair product, engineering, and subject-matter experts to encode rules, edge cases, and local regulations.
- Share playbooks: Turn local wins into reusable templates, libraries, and reference architectures.
- Resist premature standardization: Converge on tech that proves stable and valuable; keep optionality elsewhere.
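As a starting point for "codify safety" and "start narrow with agents", the sketch below shows one way to gate agent-proposed actions by risk tier. The tier names, the example action, and the ask_human callback are all hypothetical; in practice the callback would hook into an approval UI or ticketing system.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Risk(Enum):
    LOW = 1     # read-only lookups: auto-approve
    MEDIUM = 2  # reversible writes: auto-approve, but keep an audit trail
    HIGH = 3    # irreversible actions (payments, deletions): require a human

@dataclass
class Action:
    name: str
    risk: Risk
    run: Callable[[], str]

def execute(action: Action, ask_human: Callable[[str], bool]) -> str:
    """Run an agent-proposed action, escalating to a human above the risk threshold."""
    if action.risk is Risk.HIGH and not ask_human(f"Approve '{action.name}'?"):
        return f"blocked: {action.name} (human declined)"
    result = action.run()
    if action.risk is Risk.MEDIUM:
        print(f"audit-log: {action.name} -> {result}")  # reversible, still recorded
    return result

# Usage: a console prompt stands in for a real approval workflow.
if __name__ == "__main__":
    refund = Action("issue_refund", Risk.HIGH, lambda: "refund issued")
    print(execute(refund, ask_human=lambda q: input(q + " [y/N] ").lower() == "y"))
```

Keeping the risk classification explicit in code makes the human-in-the-loop threshold reviewable and testable, rather than an informal convention that drifts as the agent's scope grows.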
The bottom line
Visma shows how to make AI useful across a large product portfolio: centralize enablement, keep decisions local, start with developers, and ship domain-strong features. Agents are moving from slideware to production, but engineering discipline and oversight are non-negotiable.
If your team needs structured upskilling for AI in coding and product workflows, explore our AI Certification for Coding.