How State Attorneys General Are Filling the AI Regulatory Gap

State attorneys general are using existing laws to address AI risks such as bias, fraud, and privacy breaches. Businesses must stay vigilant as enforcement expands even in states without AI-specific legislation.

Published on: May 20, 2025

State Attorneys General Step Into AI Regulatory Gap

As generative artificial intelligence (AI) technologies spread across sectors, state attorneys general (AGs) are raising alarms about potential misuse and legal breaches. Although only a handful of states (California, Colorado, and Utah) have passed AI-specific legislation, many state AGs are actively applying existing privacy, consumer protection, and anti-discrimination laws to AI-related challenges.

Their focus is on how AI systems handle personal data, the risk of fraud through deepfakes, whether AI products live up to companies' representations, and the potential for bias in automated decision-making. Early in 2024, a bipartisan group of AGs warned the Federal Communications Commission (FCC) about AI-enabled voice impersonations in telemarketing scams.

While states with AI laws will increase enforcement, businesses must remain alert to potential regulatory actions in other states applying traditional laws to AI. Several states, including California, Massachusetts, Oregon, New Jersey, and Texas, have issued guidance or taken enforcement steps on AI without specific AI statutes.

California

California AG Rob Bonta has issued legal advisories emphasizing that companies can face liability under the state's Unfair Competition Law and Civil Rights Act if AI tools mislead consumers or produce discriminatory outcomes. He specifically warns about AI use in healthcare and hiring, cautioning against replacing professionals with opaque AI systems that can cause harm.

Massachusetts

Massachusetts AG Andrea Joy Campbell was the first to issue formal guidance highlighting that AI systems could violate existing laws. Misrepresenting AI reliability or making false claims about its functions risks breaching the Massachusetts Consumer Protection Act. The guidance also points to fraud risks involving deepfakes, voice cloning, and chatbots used to deceive consumers.

Privacy concerns arise under the state's data protection standards, requiring AI developers to safeguard personal information. Additionally, Massachusetts warns that AI decisions based on protected characteristics such as race or gender may violate anti-discrimination laws.

Oregon

Former Oregon AG Ellen Rosenblum’s guidance addresses AI’s unpredictability and its potential to threaten privacy, fairness, and accountability. Oregon enforces AI oversight through laws like the Unlawful Trade Practices Act, which prohibits false claims about AI products, and the Consumer Privacy Act, which protects personal data and mandates consumer consent for AI training data.

Consumers also have the right to opt out of AI profiling in critical decisions such as housing, education, or lending. The Consumer Information Protection Act requires AI developers to implement reasonable cybersecurity safeguards. Oregon’s Equality Act forbids discrimination based on protected classes, including discrimination arising from AI use.

New Jersey

New Jersey AG Matthew Platkin launched an initiative focused on AI-driven discrimination and harassment risks. His guidance explains how the state’s Law Against Discrimination (LAD) applies to "algorithmic discrimination" in AI. Covered entities, including employers, housing providers, and credit institutions, must ensure AI tools do not discriminate based on race, gender, or other protected categories.

Use of AI in employment decisions, such as hiring or termination, can violate the LAD even without discriminatory intent, especially if the AI system's design or training leads to bias.

Texas

Texas AG Ken Paxton has taken enforcement action under traditional consumer protection laws. In September 2024, he settled with Pieces Technology, a healthcare AI company, for allegedly misleading consumers about the accuracy of its AI product. The settlement requires disclosure of how accuracy metrics are calculated and prohibits false or misleading statements about AI capabilities.

The settlement does not impose a monetary penalty but obligates Pieces to demonstrate ongoing compliance indefinitely.

Implications for Legal Professionals and Businesses

State AGs are actively scrutinizing AI under existing consumer protection, privacy, and anti-discrimination laws. Companies deploying AI across states should implement clear privacy and cybersecurity practices aligned with applicable laws.

Understanding the AI system's underlying foundation model and conducting thorough risk assessments before deployment are critical. False or misleading advertising of AI capabilities can lead to enforcement actions, as can discriminatory outcomes, even if unintentional.

Legal teams must ensure AI inputs and outputs are regularly reviewed to prevent bias, especially when decisions involve protected classes such as age, gender, or race. Given the patchwork of state regulations, companies should seek specialized legal counsel and involve all relevant stakeholders—including executives, IT, and compliance teams—in AI governance.
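As an illustration of what such a review might involve, the sketch below applies a simple adverse-impact check, in the spirit of the four-fifths rule used in U.S. employment analysis, to hypothetical selection outcomes produced by an AI screening tool. The group labels, sample data, and 0.8 threshold are assumptions for demonstration only; this is not legal guidance and no statute mandates this particular test.

```python
# Illustrative sketch (not legal advice): flag groups whose selection rate
# falls below 80% of the highest group's rate, echoing the four-fifths rule.
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, selected: bool) tuples."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_flags(decisions, threshold=0.8):
    """Return {group: True} where the group's rate is below
    `threshold` times the best-performing group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical outcomes from an AI hiring screen (assumed data)
decisions = (
    [("group_a", True)] * 60 + [("group_a", False)] * 40
    + [("group_b", True)] * 35 + [("group_b", False)] * 65
)
print(adverse_impact_flags(decisions))
# {'group_a': False, 'group_b': True} -> group_b falls below 80% of group_a's rate
```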

For those interested in AI compliance and legal frameworks, exploring targeted training resources can be beneficial. Consider reviewing courses on AI legal risks and governance available at Complete AI Training.

