Inside the government's AI playbook: Agentforce gets to work
Updated: Oct 21, 2025 - 11:34 EDT
Public agencies are under pressure to deliver faster, more accurate and more personal services - without cutting corners on trust or compliance. A clear AI action plan is the lever. It sets standards, reduces risk, and directs investment into AI that actually improves outcomes for citizens and employees.
That's the context for Salesforce's Agentforce: a platform built to automate routine work, boost service delivery and deploy AI agents across citizen inquiries, code enforcement and benefits processing. In a conversation at Dreamforce, Paul Tatum, executive vice president for global public sector solutions at Salesforce, outlined how a structured plan moves AI from pilots to production - and why procurement is already forcing the issue.
Why an AI action plan matters now
Government is careful by design. That caution is a strength if you turn it into a system: clear guardrails, measurable goals and accountable delivery. According to Tatum, recent federal direction has pushed agencies to move from curiosity to commitment.
Procurement reflects the shift. RFPs and RFIs are increasingly explicit: new systems must include AI capabilities. That creates a consistent path for adoption and ensures vendors meet standards for security, privacy and auditability.
Where Agentforce fits
Agentforce puts AI agents to work on the front lines and behind the scenes. Think intake triage, case summaries, status updates, eligibility checks and proactive notifications - the high-volume, error-prone tasks that clog queues and frustrate citizens.
Crucially, it sits within enterprise-grade controls: role-based access, data residency, logging and policy enforcement. That lets agencies move faster without compromising the requirements they live by.
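Role-based access is the core of that control model: an agent should only see the data its role explicitly grants. A minimal sketch of that default-deny idea in Python (the role and field names are illustrative assumptions, not the Agentforce API):

```python
# Hypothetical role-to-field permission map an agency platform might
# enforce before an AI agent reads a record. Names are illustrative.
ROLE_PERMISSIONS = {
    "intake_agent": {"case_status", "appointment_slots"},
    "benefits_agent": {"case_status", "eligibility_record"},
}

def can_read(role: str, field: str) -> bool:
    """Grant access only when the role explicitly lists the field;
    unknown roles and unlisted fields are denied by default."""
    return field in ROLE_PERMISSIONS.get(role, set())

print(can_read("intake_agent", "eligibility_record"))  # prints False
```

The design choice that matters is the default: anything not explicitly granted is denied, which is what makes the guardrail auditable.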
From pilots to production
Agencies are moving from "let's test it" to "let's build the new system." Tatum points to a practical example: today, if you hit "help" on large federal sites, you can face hundreds of links. Tomorrow, a single agent on that page can answer questions, route requests and escalate cases - with a human in the loop where it counts.

That isn't hype. It's a natural evolution from chat and search to task-completing assistants wired into policy, data and workflow. One interface, fewer dead ends, faster resolutions.
What you can do this quarter
- Pick the right starting point: High-volume, low-risk workflows (status checks, FAQs, appointment scheduling, document prep).
- Write the rules down: Define what data an agent can see, what actions it can take and what requires human approval.
- Update your procurement language: Require audit logs, red-teaming, bias testing, model provenance and clear opt-out paths.
- Stand up a data steward function: Map datasets, set retention rules and control PII access by role.
- Instrument outcomes: Track time-to-answer, case backlog, first-contact resolution, citizen satisfaction and staff workload.
- Upskill your team: Train product owners, analysts and frontline staff on prompt design, evaluation and exception handling.
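"Write the rules down" can start as something as simple as a table of actions and approval requirements. A sketch of that idea in Python (the action names and structure are assumptions for illustration, not a real Agentforce configuration):

```python
# Illustrative guardrail table: which actions an agent may take
# autonomously, and which must wait for human approval.
from dataclasses import dataclass

@dataclass
class ActionRule:
    action: str
    autonomous: bool  # False means a human must approve first

RULES = {
    "send_status_update": ActionRule("send_status_update", autonomous=True),
    "deny_benefit_claim": ActionRule("deny_benefit_claim", autonomous=False),
}

def requires_approval(action: str) -> bool:
    """Actions not listed in RULES default to requiring human approval."""
    rule = RULES.get(action)
    return rule is None or not rule.autonomous
```

Note the fail-safe default: an action the rules don't mention routes to a human rather than executing silently.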
Trust, security and oversight
Trust isn't a press release - it's a checklist. Agencies should anchor deployments to recognized guidance such as the NIST AI Risk Management Framework. Bake in model evaluation, content filtering, incident response, human review and continuous monitoring from day one.
Citizen data stays protected through strict access controls and encryption. Every agent action should be traceable, explainable and reversible. That's how you move fast without creating new liabilities.
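"Traceable, explainable, reversible" translates directly into what an audit record should capture. A minimal sketch, with field names assumed for illustration:

```python
# Append-only audit trail: each agent action is logged with enough
# context to review it (rationale) and to undo it later (reversed flag).
import time

audit_log = []

def record_action(agent_id: str, action: str, target: str, rationale: str) -> dict:
    entry = {
        "timestamp": time.time(),
        "agent_id": agent_id,
        "action": action,
        "target": target,
        "rationale": rationale,  # explainable: why the agent acted
        "reversed": False,       # reversible: set True if later undone
    }
    audit_log.append(entry)
    return entry

entry = record_action("agent-7", "update_case_status", "case-123",
                      "Document verification completed")
```

In practice this log would be immutable and centrally stored, but even this shape shows the principle: no agent action without a reviewable record.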
A simple picture of the future
The end state looks straightforward: a citizen asks a question once and gets a correct, final answer - not a maze of pages. An employee opens a case and sees the summary, the recommended next step and the policy reference in one view. Agents handle the routine; people handle the exceptions.
That's the promise behind the current surge in AI requirements across government solicitations. It's less about flashy demos and more about clearing backlogs, shortening queues and giving civil servants time to do the work that requires judgment.
Resources and next steps
Explore service areas where an agent can replace click-paths with clear outcomes. For a sense of scale, compare today's help experiences on major sites such as CMS.gov. Then draft a one-page plan: use case, policy guardrails, metrics, rollout milestones and training.
Note: This perspective includes insights shared during a Dreamforce discussion with theCUBE. theCUBE was a paid media partner for Dreamforce; sponsors did not have editorial control over coverage.