TRAIGA Is Live: What Texas' New AI Law Means For Legal Teams
The Texas Responsible Artificial Intelligence Governance Act (TRAIGA) took effect on January 1, 2026. Signed into law on June 22, 2025, it's a broad statute that reaches both private actors and state entities. The headline for counsel: expansive definitions, clear enforcement authority, and targeted guardrails against AI that manipulates human behavior.
Below is a concise breakdown with practical steps to put your clients and org on steady footing.
Scope Starts With A Broad Definition Of AI
TRAIGA defines an "artificial intelligence system" as any machine-based system that infers from inputs to generate outputs (content, decisions, predictions, recommendations) that can influence physical or virtual environments (Sec. 551.001). That definition is wide by design. Ordinary software that infers from inputs and influences outcomes may be pulled in, even if you wouldn't label it AI in everyday speech.
Action for counsel: run an inventory. Don't let "it's just automation" become a blind spot. If the system infers and outputs something that affects a user or process, assume it may be in scope.
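To make that inventory concrete, it helps to screen each system against the statutory triggers. Below is a minimal sketch in Python, assuming a simple internal registry; the record fields and the `likely_in_scope()` heuristic are illustrative assumptions, not statutory criteria.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Illustrative screening record for a TRAIGA-scope inventory."""
    name: str                  # internal system or feature name
    owner: str                 # accountable team or individual
    infers_from_inputs: bool   # Sec. 551.001 trigger: inference from inputs
    output_types: list[str] = field(default_factory=list)  # "content", "decisions", ...
    affects_users_or_process: bool = False
    serves_texas_residents: bool = False

    def likely_in_scope(self) -> bool:
        # Conservative screen: inference plus influential output = assume in scope.
        return self.infers_from_inputs and self.affects_users_or_process

# Even "plain automation" like a recommender screens in under this heuristic.
rec = AISystemRecord(
    name="product-recommender",
    owner="growth-eng",
    infers_from_inputs=True,
    output_types=["recommendations"],
    affects_users_or_process=True,
    serves_texas_residents=True,
)
print(rec.likely_in_scope())  # True -> add to the TRAIGA review queue
```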
Who's Covered: Do Business In Texas? You're In
Applicability (Sec. 551.002) reaches any person or entity that promotes, advertises, or conducts business in Texas; produces a product or service used by Texas residents; or develops or deploys AI in Texas. Out-of-state vendors are not insulated if Texans use the product.
If your AI stack runs elsewhere but serves Texas users, you should treat TRAIGA as applicable. Add Texas-specific contractual obligations to vendor agreements and ensure your deployment posture reflects that reach.
Legislative Intent You Can Use
The law's stated purposes (Sec. 551.003) include advancing responsible AI, protecting people from known and foreseeable risks, providing transparency on risks, and giving reasonable notice of AI use by state agencies. Expect these themes to influence enforcement positions and how courts read close calls.
Enforcement And Penalties
The Texas Attorney General has exclusive authority to enforce TRAIGA; there is no private right of action. The statute ties civil penalties to curability: $10,000-$12,000 per curable violation and $80,000-$200,000 per uncurable violation, with a 60-day notice-and-cure window before an enforcement action proceeds. Keep in mind that penalties can compound across users, events, or features, depending on how a case is framed.
There are also provisions on biometric data use, with an emphasis on government entities, signaling heightened scrutiny where sensitive identifiers (e.g., fingerprints, retinal data, voiceprints) intersect with AI-enabled workflows.
Safe Harbors And Affirmative Defenses (Use Them Carefully)
TRAIGA includes safe harbors and affirmative defenses, including for testing. That's helpful, but courts will look for good-faith, documented testing protocols and containment. "We were testing" won't save production behavior or sloppy sandboxing that bleeds into real users.
Counsel should formalize test environments, data handling, red-teaming plans, approval gates, and kill switches. Document intent, scope, controls, and outcomes so the record aligns with a bona fide testing defense.
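One lightweight way to build that record: an append-only, dated test-evidence log. Here's a minimal sketch, assuming JSON Lines storage; the schema and file name are hypothetical, not anything TRAIGA prescribes.

```python
import json
from datetime import datetime, timezone

def log_test_run(path: str, *, scope: str, controls: list[str], outcome: str) -> None:
    """Append a dated, structured entry to an append-only test-evidence log."""
    entry = {
        "at": datetime.now(timezone.utc).isoformat(),
        "scope": scope,        # what was tested and in which environment
        "controls": controls,  # containment measures in place during the test
        "outcome": outcome,    # results, including failures found and follow-ups
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_test_run(
    "ai_test_evidence.jsonl",
    scope="red-team: jailbreak prompts against the staging chatbot",
    controls=["synthetic accounts only", "production traffic excluded"],
    outcome="2 bypasses found; filters patched, retest scheduled",
)
```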
Behavior Manipulation: The Mental Health Hook
Section 552.052 prohibits developing or deploying AI that intentionally aims to incite or encourage a person to commit physical self-harm (including suicide), harm others, or engage in criminal activity. This is brief, but it hits the highest-risk outcomes that have already surfaced in public controversies and suits.
The word "intentionally" matters. Policies, training data choices, prompt templates, safety layers, and escalation logic will all be evidence of intent. If your model or bot produces self-harm or crime-enabling outputs, expect questions about guardrails, red-teaming, and whether you ignored foreseeable risks.
Constitutional Rights As A Backstop
TRAIGA also targets AI uses that infringe, restrict, or impair rights guaranteed under the U.S. Constitution. That creates exposure for deployments that chill speech, enable viewpoint discrimination in public services, or support unreasonable searches when used by state actors. Expect interplay with First, Fourth, and Fourteenth Amendment doctrines.
Compliance Playbook (Start This Week)
- Inventory: Catalog all systems that infer from inputs and generate outputs affecting users or operations. Flag anything used by Texas residents.
- Use-Case Triage: Identify high-risk functions (health, safety, employment, finance, access to benefits, law enforcement, minors).
- Risk Assessments: Document foreseeable risks and mitigations for each AI use. Align to a recognized framework like the NIST AI RMF.
- Guardrails: Implement refusal policies, self-harm and violence filters, crime-prevention logic, rate limits, and domain boundaries. Log interventions (see the filter sketch after this list).
- Human Oversight: Define escalation paths for flagged interactions. Train staff on crisis scenarios and reportable events.
- Testing Controls: Separate test vs. prod. Red-team for prompt attacks, jailbreaks, and safety bypasses. Keep dated evidence.
- Content Controls: Monitor for sycophantic behavior that amplifies delusions or dangerous plans. Calibrate responses to avoid persuasion toward harm.
- Vendor Contracts: Add Texas-specific compliance clauses, audit rights, safety SLAs, incident notice timelines, and indemnities.
- Notices And Transparency: Provide clear disclosures where required. For state agencies, prepare notices of AI use consistent with TRAIGA's purpose.
- Biometrics: For public-sector clients, validate statutory limits before deploying AI that uses sensitive identifiers. Minimize retention and access.
- Incident Response: Define triggers, hotline/escalation, and AG engagement protocol. Practice with tabletop drills.
- Geography: If needed, gate features or geofence to adapt to Texas requirements without breaking other markets (a gate sketch follows below).
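To illustrate the Guardrails item, here's a minimal sketch of an output filter that refuses and logs interventions. The static keyword list is a stand-in; a production system would pair a trained safety classifier with human escalation.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("safety")

# Hypothetical topic list; a real deployment would use a classifier, not keywords.
BLOCKED_TOPICS = ("self-harm method", "suicide method", "build a weapon")

REFUSAL = (
    "I can't help with that. If you're in crisis, please contact a local "
    "emergency service or a crisis line."
)

def guarded_reply(user_input: str, model_reply: str) -> str:
    """Screen the exchange; log every intervention with a UTC timestamp."""
    text = f"{user_input} {model_reply}".lower()
    for topic in BLOCKED_TOPICS:
        if topic in text:
            log.info("intervention topic=%r at=%s", topic,
                     datetime.now(timezone.utc).isoformat())
            return REFUSAL
    return model_reply
```

The point is the logging as much as the refusal: dated intervention records are exactly the evidence of guardrails that Section 552.052 questions will probe.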
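And for the Geography item, a feature gate keyed on a resolved user region is one way to adapt Texas behavior without touching other markets. A minimal sketch; the region codes, the restricted-feature set, and how you resolve region (billing address, IP geolocation) are all assumptions to adapt.

```python
# Hypothetical set of features held back in Texas pending guardrail validation.
TEXAS_RESTRICTED_FEATURES = {"open-ended-companion-chat"}

def feature_enabled(feature: str, region: str) -> bool:
    """Gate a feature by region; region resolution is left to your stack."""
    if region == "US-TX" and feature in TEXAS_RESTRICTED_FEATURES:
        return False  # hold back until Texas-specific guardrails are validated
    return True

assert feature_enabled("open-ended-companion-chat", "US-CA")
assert not feature_enabled("open-ended-companion-chat", "US-TX")
```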
Gray Areas To Watch
How broadly courts read "intentionally aims" will be pivotal. Expect discovery battles over logs, prompts, fine-tuning choices, and safety test results as parties try to establish or negate intent. The definition of AI is expansive, so some "smart" automation will likely be challenged as a covered system.
There's also the federal-state tension. Preemption arguments may surface in narrow contexts, but do not bank on them. Plan for a multi-state patchwork for the near term.
Q1 2026 Action Plan
- Next 30 days: Finish inventory, flag Texas exposure, halt obviously risky features until guardrails are validated.
- Next 60 days: Complete risk assessments for high-impact use cases; update vendor contracts and internal policies.
- Next 90 days: Ship safety improvements, finalize incident response, and brief the board or agency head on TRAIGA posture and gaps.
Bottom Line
TRAIGA is broad, enforceable, and aimed at preventing AI from pushing people toward self-harm, violence, or crime. If your system influences users, assume scrutiny. Build proof that you anticipated foreseeable risks and took reasonable steps to prevent them.