Meta Wins GSA Approval to Deliver AI to U.S. Agencies
Meta gains GSA clearance to provide Llama AI to U.S. agencies, adding vendor choice and easing procurement. On-prem/VPC options and data control support quick pilots.

Meta Cleared to Provide AI Services to U.S. Government: What It Means for Your Agency
Meta has received approval from the U.S. General Services Administration (GSA) to supply AI models and services to federal entities. This places Meta alongside vendors like OpenAI and xAI on the government-approved list, expanding choices for agencies planning AI deployments.
Published Sept. 22, 2025
What the GSA Approval Covers
The approval allows federal teams to use Meta's Llama models and related tools under government-wide guidance for AI adoption. This supports the U.S. "AI Action Plan" and offers agencies a path to evaluate, pilot, and scale AI with fewer procurement hurdles.
Meta also notes prior work with U.S. national security partners and a project that sent Llama to the ISS National Laboratory. The company plans major AI infrastructure investment in 2025, which could improve access to compute and model updates over time.
Why Agencies Might Care
- More vendor choice: Adds competitive pressure on pricing, support, and features.
- Open-source model option: Llama reduces lock-in and can be hosted on-prem or in a VPC.
- Data control: "Llama models offer federal agencies the ability to retain full control over data processing and storage."
- Procurement friction reduced: GSA verified federal requirements and streamlined access since models are publicly available.
- Scale potential: Meta's significant AI spend and long-term "Superintelligence" efforts could translate into frequent capability improvements.
Practical Use Cases to Pilot
- Document classification, summarization, and RAG for casework and policy analysis.
- Assisted drafting for memos, FOIA responses, and public notices with audit trails.
- Contact center assistants for common questions, with human-in-the-loop escalation.
- Code assistance for internal tools, scripts, and legacy system refactoring.
- Translation and accessibility support for public-facing services.
- Red-teaming and evaluation environments to test model behavior against agency policies.
Procurement and Compliance Notes
Even with GSA approval, your program still needs to clear internal governance and security gates. Plan for ATO, privacy reviews, records management, Section 508, data residency, and model evaluation standards.
If you use managed cloud services around Llama, confirm vendor FedRAMP status and supply chain controls. For sensitive workloads, clarify hosting boundaries, logging, key management, and data retention before pilots begin.
Data Control and Security
Meta emphasizes that agencies can keep data processing and storage under their control, which suits on-prem and VPC deployments. Open-source availability also enables code review, reproducibility, and custom guardrails aligned to your policy stack.
Set clear tiers for unclassified, CUI, and higher classifications. Restrict training or fine-tuning on sensitive datasets unless policy and architecture explicitly allow it.
Constraints and Risks
Government adoption can move slower than private sector timelines due to oversight and statutory requirements. Some industry leaders argue bureaucracy is hard to change, but steady pilots with clear metrics can still deliver progress.
Public trust matters. Transparent reporting, rigorous evaluation, and documented guardrails will be essential as agencies integrate vendor AI into everyday services.
90-Day Action Plan
- Select two high-value, low-risk use cases (e.g., summarization, internal search) and define success metrics.
- Choose a hosting path (on-prem, VPC, or vendor-managed) and confirm security controls, logging, and data boundaries.
- Form a working group (program lead, security, privacy, legal, procurement, union/HR if applicable).
- Run a Privacy Impact Assessment and update your data inventory; define retention and redaction rules.
- Build an evaluation harness: policy tests, red-team prompts, bias/fairness checks, and task accuracy benchmarks.
- Draft user guidance and human-in-the-loop procedures; train staff before production pilots.
- Leverage GSA guidance for acquisition paths and document lessons learned for the next phase.
Resources
- GSA: Artificial Intelligence
- AI.gov (U.S. Federal AI resources)
- Complete AI Training: Courses by Job