Texas Enacts TRAIGA: What the New AI Governance Law Means for Businesses and Government
Texas’ TRAIGA law, effective January 1, 2026, mandates AI disclosure, bans certain government AI uses, and enforces penalties for violations. It applies to AI developers, deployers, and government entities.

TRAIGA: Key Provisions of Texas’ New Artificial Intelligence Governance Act
On May 31, 2025, the Texas Legislature passed House Bill 149, known as the Texas Responsible Artificial Intelligence Governance Act (TRAIGA). The law sets disclosure requirements for AI developers and deployers working with government entities, outlines prohibited AI uses, and establishes civil penalties for violations. Signed into law on June 22, 2025, TRAIGA takes effect on January 1, 2026, making Texas one of several states, alongside California, Colorado, and Utah, to regulate artificial intelligence.
Who Does TRAIGA Apply To?
Key Definitions
TRAIGA applies to two main groups: covered persons and entities (including developers and deployers of AI systems) and government entities.
Covered Persons and Entities
A covered person is anyone who meets one or more of the following criteria:
- Promotes, advertises, or conducts business in Texas;
- Produces a product or service used by Texas residents;
- Develops or deploys an AI system in Texas.
Developers and Deployers
A developer is a person who creates an AI system that is offered or provided in Texas. A deployer is a person who puts an AI system into use in Texas.
Government Entities
Governmental entities include any Texas state or local administrative units exercising governmental functions under Texas law. Notably, hospital districts and institutions of higher education are excluded.
Consumer
“Consumer” refers to an individual Texas resident acting in an individual or household capacity. Employment or commercial uses are excluded from TRAIGA’s scope.
Artificial Intelligence System
TRAIGA’s definition of an AI system is broad: any machine-based system that infers from inputs to generate outputs—such as content, decisions, predictions, or recommendations—that can influence physical or virtual environments.
Enforcement of TRAIGA
The Texas Attorney General (AG) holds exclusive authority to enforce TRAIGA, except for limited enforcement powers granted to certain licensing state agencies. Importantly, the law does not create a private right of action.
Notice and Opportunity to Cure
Before initiating enforcement action, the AG must issue a written notice of violation. The alleged violator then has 60 days to fix the issue, provide documentation proving the cure, and update internal policies to prevent recurrence.
Civil Penalties
- Curable violations: $10,000–$12,000 per violation;
- Uncurable violations: $80,000–$200,000 per violation;
- Ongoing violations: $2,000–$40,000 per day.
The AG may also seek injunctive relief, attorneys’ fees, and investigative costs.
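Because the penalty tiers are simple dollar ranges, the potential exposure for a given mix of violations can be sketched in a few lines. This is a hypothetical illustration only; the function and its inputs are not part of the statute, and actual penalties are set by the Attorney General and the courts.

```python
# Hypothetical sketch of TRAIGA civil-penalty exposure.
# The (low, high) ranges below come from the statute's tiers;
# everything else (function name, parameters) is illustrative.

CURABLE = (10_000, 12_000)         # per curable violation (uncured after 60 days)
UNCURABLE = (80_000, 200_000)      # per uncurable violation
ONGOING_PER_DAY = (2_000, 40_000)  # per day an ongoing violation continues

def exposure_range(curable=0, uncurable=0, ongoing_days=0):
    """Return the (minimum, maximum) total civil-penalty exposure."""
    lo = (curable * CURABLE[0]
          + uncurable * UNCURABLE[0]
          + ongoing_days * ONGOING_PER_DAY[0])
    hi = (curable * CURABLE[1]
          + uncurable * UNCURABLE[1]
          + ongoing_days * ONGOING_PER_DAY[1])
    return lo, hi

# Example: one uncurable violation that continued for 30 days
print(exposure_range(uncurable=1, ongoing_days=30))  # -> (140000, 1400000)
```

Note that these figures exclude injunctive relief, attorneys’ fees, and investigative costs, which the AG may also seek.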
Safe Harbors
TRAIGA provides safe harbors and affirmative defenses. Liability is avoided if:
- A third party misuses the AI in a prohibited way;
- The person discovers the violation through testing or good-faith audits;
- The person substantially complies with recognized AI risk management frameworks such as NIST’s AI Risk Management Framework.
State Agency Enforcement Actions
If the AG recommends further action against licensed or certified individuals or entities, relevant state agencies may impose sanctions such as license suspension, probation, revocation, or fines up to $100,000.
Operational Impact of TRAIGA
Disclosure to Consumers
Government agencies must clearly disclose when consumers are interacting with AI. Disclosures must be clear, conspicuous, use plain language, and avoid deceptive design practices (dark patterns), even if the AI interaction seems obvious to a reasonable user.
Prohibited Uses of AI by Government Entities
TRAIGA forbids government use of AI to:
- Assign social scores;
- Uniquely identify individuals using biometric data without consent;
- Incite self-harm, crime, or violence;
- Infringe on rights guaranteed under the U.S. Constitution;
- Unlawfully discriminate against protected classes under state or federal law.
The law clarifies that disparate impact alone is insufficient to prove discriminatory intent. Protected classes include groups defined by race, color, national origin, sex, age, religion, or disability.
Additionally, TRAIGA establishes a sandbox program for controlled AI testing and creates the Texas Artificial Intelligence Council to advise on AI ethics and legal issues.
Key Compliance Considerations
- Applicability assessment: Identify all AI systems developed or deployed within Texas, including third-party tools like chatbots, to determine if TRAIGA applies.
- Use case analysis: Evaluate whether AI systems interact with consumers, impact constitutional rights, affect protected classes, or could be seen as encouraging harmful behavior.
- Notice requirement: Government agencies should develop clear, plain-language disclosures for AI interactions, avoiding dark patterns.
- Risk framework alignment: Align AI programs with recognized frameworks such as NIST’s AI Risk Management Framework to benefit from safe harbor protections.
- Sandbox program: Consider participation to test innovative AI products in a controlled environment with limited regulatory exposure.
Potential Federal Impact: AI Moratorium
On May 22, 2025, the U.S. House of Representatives passed a proposal to impose a 10-year moratorium on state-level AI regulations. The proposal remains under legislative consideration and, if enacted, could preempt TRAIGA and other state AI laws. Legal professionals should monitor these developments closely.