Tricentis Launches Agentic Quality Engineering Platform to Speed Releases With Human Oversight

Tricentis debuts an agentic quality platform that speeds testing and keeps people in charge. Coordinated AI agents shrink timelines and risk: think months compressed to about a week.

Categorized in: AI News, IT and Development
Published on: Mar 14, 2026

Tricentis Launches Agentic Quality Engineering Platform for AI-Driven Development

Tricentis introduced a new agentic quality engineering platform built to help enterprises ship software faster without giving up oversight. The platform runs on the Tricentis AI Workspace, a unified environment that coordinates multiple AI agents across testing, performance validation and quality intelligence.

The pitch is simple: AI is speeding up code creation, but release risk hasn't gone away. Kevin Thompson, CEO of Tricentis, said teams still struggle to verify reliability and security before production. This launch targets that gap.

How the Tricentis AI Workspace Works

The Workspace orchestrates specialized agents that automate high-friction testing work while keeping humans in control of release gates. A minimal sketch of that pattern follows the list below.

  • Agentic quality intelligence: Consolidates risk signals and assesses release readiness.
  • Agentic test automation: Scales automated testing for complex enterprise apps.
  • Agentic performance testing: Accelerates performance validation and bottleneck analysis.
  • Agentic test creation: Generates reusable test cases from natural language inputs.
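
To make the orchestration model concrete, here is a minimal Python sketch of specialized agents reporting evidence into a human-approved release gate. Every name in it (AgentReport, run_agents, release_gate) is a hypothetical illustration, not a Tricentis API.

```python
# Hypothetical sketch: specialized quality agents report evidence into a
# release gate where a human makes the final call. No Tricentis APIs here.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentReport:
    agent: str     # which agent produced this report
    passed: bool   # whether its checks passed
    summary: str   # human-readable evidence for the reviewer

def run_agents(agents: dict[str, Callable[[], AgentReport]]) -> list[AgentReport]:
    """Run each specialized agent and collect its report."""
    return [run() for run in agents.values()]

def release_gate(reports: list[AgentReport],
                 approve: Callable[[list[AgentReport]], bool]) -> bool:
    """Agents gather evidence; a human approver owns the release decision."""
    if not all(r.passed for r in reports):
        return False          # hard stop if any agent's checks failed
    return approve(reports)   # human oversight on the final gate

# Example wiring with stubbed agents standing in for real integrations:
reports = run_agents({
    "test_automation": lambda: AgentReport("test_automation", True, "1,240 tests passed"),
    "performance":     lambda: AgentReport("performance", True, "p95 latency within budget"),
    "quality_intel":   lambda: AgentReport("quality_intel", True, "no open critical risks"),
})
print("release approved:", release_gate(reports, approve=lambda rs: True))
```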

Proof Point

David Cowell, vice president of AI and machine learning at Tricentis, said internal use of agentic testing compressed a cloud migration timeline from months to about a week. That's the kind of cycle-time win leaders expect when automation actually removes toil.

Why This Matters for Engineering Teams

Software estates keep getting more distributed, regulated and fast-moving. Manual test creation and brittle test suites can't keep pace. Ben Baldi, senior vice president of global public sector at Tricentis, has underscored this in work on government and defense systems: smaller teams, tighter timelines, zero room for failures in mission-critical environments.

Baldi's stance is clear: testing needs to be continuous and integrated across the SDLC. AI helps by scanning for vulnerabilities, monitoring changes and validating behavior before anything touches production.

Automation, Security and Uptime

Baldi has also highlighted how automation reduces outages and cyber risk by catching defects earlier and validating updates before deployment. That lines up with secure development guidance like the NIST Secure Software Development Framework (SSDF), which emphasizes early, continuous verification.

Humans Still Make the Call

Tools expand reach; skilled teams set the standard. Baldi points to North Carolina's Department of Health and Human Services, where automation supported thousands of tests for a public benefits portal. The result: shorter regression cycles and more time for staff to improve service delivery.

The takeaway: pair experienced engineers with AI-enabled testing to move faster while holding the line on reliability and security.

What Good Looks Like in Practice

  • Shift-left + shift-right: Generate and maintain tests at PR time, then validate behavior under production-like load before release.
  • Risk-based gating: Use agentic quality intelligence to gate deployments on failure trends, coverage, performance budgets and security findings (a minimal gate sketch follows this list).
  • Stable test assets: Natural-language test creation produces reusable cases tied to user journeys, not just UI selectors.
  • Outcome metrics: Track change failure rate, lead time for changes and MTTR in line with DORA metrics.
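
To illustrate risk-based gating, the sketch below checks a release candidate against a few example thresholds and returns an explainable verdict. The signal names and cutoffs are assumptions for illustration, not platform defaults.

```python
# Minimal sketch of a risk-based release gate. Signal names and thresholds
# are illustrative assumptions, not Tricentis defaults.
from dataclasses import dataclass

@dataclass
class ReleaseSignals:
    test_failure_rate: float      # fraction of tests failing in the last run
    line_coverage: float          # fraction of lines exercised by tests
    p95_latency_ms: float         # 95th-percentile latency under load
    open_critical_findings: int   # unresolved critical security findings

def gate(s: ReleaseSignals) -> tuple[bool, list[str]]:
    """Return (ok, reasons) so the verdict stays explainable to reviewers."""
    reasons = []
    if s.test_failure_rate > 0.01:
        reasons.append(f"failure rate {s.test_failure_rate:.1%} exceeds 1%")
    if s.line_coverage < 0.80:
        reasons.append(f"coverage {s.line_coverage:.0%} below the 80% floor")
    if s.p95_latency_ms > 300:
        reasons.append(f"p95 latency {s.p95_latency_ms:.0f} ms over the 300 ms budget")
    if s.open_critical_findings > 0:
        reasons.append(f"{s.open_critical_findings} critical security finding(s) open")
    return (not reasons, reasons)

ok, why = gate(ReleaseSignals(0.004, 0.86, 240.0, 0))
print("promote" if ok else "block", why)
```

Returning the reasons alongside the verdict keeps the gate auditable, which matters when a human signs off on the release.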

Integration Notes

  • CI/CD: Wire agents into your pipeline to run smoke, regression and performance checks on every build that matters.
  • Test data and environments: Automate provisioning and teardown so agents can run consistently and in parallel.
  • Security scanners: Feed SAST/DAST/SBOM results into quality intelligence for unified risk scoring and clearer release decisions (see the scoring sketch after this list).
  • Observability: Loop production signals (errors, latency, user flows) back into test creation and performance scenarios.
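
As a rough illustration of unified risk scoring, the sketch below folds findings from SAST, DAST and SBOM scans into one weighted score. The weights, severity labels and blocking threshold are assumptions, not a published scheme.

```python
# Illustrative sketch: folding SAST/DAST/SBOM findings into one weighted
# risk score. Weights and the blocking threshold are assumptions.
SEVERITY_WEIGHT = {"critical": 10, "high": 5, "medium": 2, "low": 1}

def unified_risk_score(findings: list[dict]) -> int:
    """Sum severity weights across findings from every scanner source."""
    return sum(SEVERITY_WEIGHT.get(f["severity"], 0) for f in findings)

findings = [
    {"source": "sast", "severity": "high",     "id": "SQLI-12"},
    {"source": "dast", "severity": "medium",   "id": "XSS-7"},
    {"source": "sbom", "severity": "critical", "id": "CVE-2026-0001"},
]
score = unified_risk_score(findings)
print("risk score:", score, "-> block release" if score >= 10 else "-> proceed")
```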

Getting Started: A Short Plan

  • Pick one critical service or user journey with frequent releases and visible impact.
  • Define guardrails: acceptable failure rate, performance budgets, security thresholds (a sample config sketch follows this plan).
  • Connect repos, pipelines and test environments; seed with top user journeys.
  • Stand up agentic test creation for high-value flows; stabilize flaky tests first.
  • Enable quality intelligence gates in staging, then production with progressive rollout.
  • Review weekly: trend DORA metrics, defect escape rate and noisy test counts; prune or improve.
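
One lightweight way to pin down the guardrails step is to encode the thresholds as a small, reviewable config. The field names and values below are illustrative assumptions to be tuned per team, not recommended defaults.

```python
# Hypothetical guardrail config for the pilot service. Values are examples
# to agree on with the team, not recommended defaults.
from dataclasses import dataclass, asdict
import json

@dataclass(frozen=True)
class Guardrails:
    max_change_failure_rate: float = 0.05   # at most 5% of deploys may fail
    p95_latency_budget_ms: int = 300        # performance budget under load
    max_critical_findings: int = 0          # zero tolerance for critical issues
    min_journey_coverage: float = 0.90      # top user journeys under test

# Serialize so the guardrails can live in the repo and change via code review.
print(json.dumps(asdict(Guardrails()), indent=2))
```

Checking a file like this into the pilot service's repo makes every threshold change visible in code review.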

Level Up Your Team

Bottom line: AI can take on the heavy lifting in testing, but engineers still set scope, guardrails and release criteria. With an agentic platform and clear governance, you ship faster and sleep better.

