Leidos and OpenAI Put AI to Work Across Federal Missions

Leidos and OpenAI are teaming up to bring secure AI into day-to-day federal work, from health to defense. Expect faster cycles, audit trails, and pilots moving into deployments.


Leidos and OpenAI Partner to Bring AI Into Federal Operations: What Ops and Product Leaders Need to Know

Publish date: Jan 22, 2026

Leidos (NYSE: LDOS) and OpenAI are teaming up to put generative and agentic AI into day-to-day federal workflows. The focus: digital modernization, health services, national security and infrastructure, and defense, the core priorities in Leidos' NorthStar 2030 strategy.

According to Leidos CTO Ted Tanner, OpenAI's most capable models will run in a secure configuration intended to protect Leidos and customer data, with the aim of lifting productivity and speeding product development and delivery. Joseph Larson, OpenAI's VP for government, said adoption starts with trust, security, and mission fit, and that this partnership is about moving from pilots to deployments that improve efficiency, resilience, and public service.

Leidos is also rolling AI deeper into its internal stack. Thousands of employees are using ChatGPT and the API platform to automate knowledge work, which Leidos expects will translate to faster delivery and better outcomes for customers.

What's Being Integrated

  • Generative and agentic AI embedded into core workflows for federal missions across modernization, health, national security, infrastructure, and defense.
  • Secure model configurations to help protect enterprise and government data.
  • Custom agent workflows combined with Leidos' own AI tools for tasks like global threat assessments, supply chain monitoring, and deepfake detection (a rough sketch follows this list).
  • Internal AI adoption at scale to accelerate product design and delivery timelines.
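
To make "custom agent workflows" less abstract, here is a minimal sketch of a tool-calling step for supply chain alert triage. It assumes the standard OpenAI Python SDK (openai>=1.0); the model name, tool schema, and the fetch_shipment_status helper are hypothetical placeholders for illustration, not details from the announcement.

```python
# Hypothetical tool-calling agent step for supply chain alert triage.
# Assumes the OpenAI Python SDK (openai>=1.0); model and tool are placeholders.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def fetch_shipment_status(shipment_id: str) -> dict:
    """Placeholder tool: in practice this would query an internal logistics system."""
    return {"shipment_id": shipment_id, "status": "delayed", "days_late": 3}

TOOL_IMPLS = {"fetch_shipment_status": fetch_shipment_status}

TOOLS = [{
    "type": "function",
    "function": {
        "name": "fetch_shipment_status",
        "description": "Look up the current status of a shipment by ID.",
        "parameters": {
            "type": "object",
            "properties": {"shipment_id": {"type": "string"}},
            "required": ["shipment_id"],
        },
    },
}]

messages = [
    {"role": "system", "content": "Triage supply chain alerts. Use tools; escalate anything ambiguous."},
    {"role": "user", "content": "Alert: shipment SC-1042 missed its delivery window. Assess severity."},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=TOOLS)
msg = response.choices[0].message

# If the model requested a tool, run it, feed the result back, and get a final answer.
if msg.tool_calls:
    messages.append(msg)
    for call in msg.tool_calls:
        fn = TOOL_IMPLS[call.function.name]
        result = fn(**json.loads(call.function.arguments))
        messages.append({"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)})
    final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=TOOLS)
    print(final.choices[0].message.content)
else:
    print(msg.content)
```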

Why Ops and Product Leaders Should Care

  • Shorter cycle times on research-heavy tasks: threat intel summaries, document triage, and anomaly detection.
  • Agentic workflows that trigger tools, fetch data, apply policies, and escalate decisions without manual handoffs (a minimal sketch follows this list).
  • Clear audit trails and access controls to meet federal security expectations.
  • Faster iteration loops from prototype to deployment, with measurable gains in throughput and quality.
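
The audit-trail and escalation points are easiest to see in code. The sketch below is a framework-agnostic policy gate around tool calls: role-based allow lists, an append-only audit log, and a confidence threshold for human escalation. Every name and threshold here (ALLOWED_TOOLS, audit_log.jsonl, 0.8) is an assumption for illustration, not a Leidos or OpenAI detail.

```python
# Illustrative policy gate for agent tool calls: role check, audit log, escalation.
# All names and thresholds are assumptions for this sketch, not product details.
import json
import time

ALLOWED_TOOLS = {
    "analyst": {"search_docs", "fetch_shipment_status"},
    "admin": {"search_docs", "fetch_shipment_status", "update_record"},
}
ESCALATION_THRESHOLD = 0.8  # below this model confidence, route to a human reviewer

def audit(event: dict, path: str = "audit_log.jsonl") -> None:
    """Append-only audit trail: who did what, with which inputs, and when."""
    event["ts"] = time.time()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

def dispatch(role: str, tool: str, args: dict, confidence: float) -> dict:
    """Run a tool call only if the role is allowed and confidence is high enough."""
    if tool not in ALLOWED_TOOLS.get(role, set()):
        audit({"decision": "denied", "role": role, "tool": tool, "args": args})
        raise PermissionError(f"{role} may not call {tool}")
    if confidence < ESCALATION_THRESHOLD:
        audit({"decision": "escalated", "role": role, "tool": tool,
               "args": args, "confidence": confidence})
        return {"status": "pending_human_review"}
    audit({"decision": "executed", "role": role, "tool": tool,
          "args": args, "confidence": confidence})
    return {"status": "executed"}  # in practice, invoke the real tool here

# Example: a low-confidence call gets escalated instead of executed.
print(dispatch("analyst", "fetch_shipment_status", {"shipment_id": "SC-1042"}, 0.62))
```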

Practical Implementation Checklist

  • Use-case scoring: rank candidates by impact, feasibility, and risk. Start with 1-2 high-signal processes (see the scoring sketch after this checklist).
  • Data mapping: define what models can access, log, and retain; implement role-based access and strict redaction paths.
  • Human-in-the-loop: set decision thresholds, review gates, and clear escalation paths.
  • Policy alignment: align with your AI risk framework and document model/agent behavior, inputs, outputs, and limits.
  • Metrics: track cycle time, cost per task, accuracy/quality rates, and exception volume.
  • Controls: audit logging, prompt/output capture (where allowed), and rollback procedures.
  • Change management: train end users, update SOPs, and define ownership for model/agent updates.
  • Security testing: red-team prompts, tool access boundaries, and data exfil paths before going wide.
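
For the use-case scoring step, a simple weighted score is usually enough to force an honest ranking. The weights and candidate scores below are made up for illustration; calibrate them with your own stakeholders.

```python
# Toy use-case scoring: rank candidates by impact, feasibility, and risk.
# Weights and candidate scores are illustrative assumptions, not recommendations.
WEIGHTS = {"impact": 0.5, "feasibility": 0.3, "risk": 0.2}  # risk scored as 10 = lowest risk

candidates = [
    {"name": "supply chain alert triage", "impact": 8, "feasibility": 7, "risk": 8},
    {"name": "threat intel summaries",    "impact": 9, "feasibility": 6, "risk": 6},
    {"name": "deepfake detection assist", "impact": 7, "feasibility": 4, "risk": 5},
]

def score(c: dict) -> float:
    return sum(WEIGHTS[k] * c[k] for k in WEIGHTS)

for c in sorted(candidates, key=score, reverse=True):
    print(f"{score(c):.1f}  {c['name']}")
```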

90-Day Pilot Plan (Federal-Friendly)

  • Weeks 0-2: Pick a narrow workflow (e.g., supply chain alert triage). Define success metrics and risk controls.
  • Weeks 3-4: Stand up secure model access, data connectors, and policy enforcement. Build prompts, guardrails, and test datasets.
  • Weeks 5-8: Prototype agent flows with tool access and audit logging. Run shadow mode against historical cases (see the shadow-mode sketch after this plan).
  • Weeks 9-10: User testing with frontline staff. Compare outputs to human baselines; refine prompts and escalation rules.
  • Weeks 11-12: Limited production rollout. Monitor KPIs, error budgets, and incident response. Decide on scale-up or iterate.
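
The weeks 5-8 shadow-mode step reduces to an agreement check against decisions humans already made. The sketch below assumes you have a labeled history of cases; the field names and sample data are invented for illustration.

```python
# Shadow-mode check: compare agent recommendations to historical human decisions.
# Field names and sample data are assumptions for illustration only.
historical_cases = [
    {"case_id": "A1", "human_decision": "escalate", "agent_decision": "escalate"},
    {"case_id": "A2", "human_decision": "close",    "agent_decision": "close"},
    {"case_id": "A3", "human_decision": "escalate", "agent_decision": "close"},
]

agree = sum(c["human_decision"] == c["agent_decision"] for c in historical_cases)
agreement_rate = agree / len(historical_cases)
disagreements = [c["case_id"] for c in historical_cases
                 if c["human_decision"] != c["agent_decision"]]

print(f"agreement: {agreement_rate:.0%}")              # e.g. 67% on this toy sample
print(f"cases to review with SMEs: {disagreements}")   # feed these into prompt refinement
```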

Risk and Compliance Anchors

Use established guidance to structure governance and controls. The NIST AI Risk Management Framework and the OMB guidance on agency AI use offer clear expectations for testing, documentation, and oversight.

Where to Skill Up

If you're building AI-enabled workflows for operations or product teams, a structured path helps. See the AI courses by job to upskill your team on agent workflows, prompt strategy, and deployment practices.

Bottom Line

This partnership signals a push to make AI part of the daily workflow in federal missions, not experiments on the sidelines. If you lead operations or product, start with a narrow use case, enforce strong controls, and measure what matters: cycle time, quality, and reliability. Then scale what proves itself.

