Connecticut AG: Existing Laws Cover AI, Business Group Urges Caution on New Rules

CT AG confirms AI decisions must follow current civil rights, consumer protection, privacy, and antitrust laws. The nonbinding memo helps teams focus compliance efforts now.

Categorized in: AI News, Legal
Published on: Feb 28, 2026

AG Confirms Existing State Laws Cover AI Use

Connecticut Attorney General William Tong issued a Feb. 25 memorandum clarifying a simple point with big consequences: AI does not sit outside existing law. If algorithms are involved in decisions, the same statutes still apply. The 11-page memo offers guidance for consumers, businesses, and government, and it is useful now, as AI moves deeper into everyday operations.

The attorney general's office stresses the memo is not binding or precedential. It's an overview of how current legal frameworks reach AI-driven conduct today. That clarity helps legal teams prioritize what to audit, document, and remediate without waiting for new regulations.

Key legal frameworks that already apply

  • Civil rights and anti-discrimination: Bias in hiring, lending, housing, education, or public accommodations remains unlawful, whether a human or a model makes the call. Disparate treatment and disparate impact theories are still in play.
  • Consumer protection: Unfair or deceptive practices extend to AI outputs, claims about AI capabilities, and dark patterns baked into automated flows. State UDAP laws and federal standards continue to govern disclosures, substantiation, and fairness.
  • Privacy and data security: Existing privacy statutes, security duties, and breach notification laws apply to data used to train, fine-tune, or run models. That includes data minimization, purpose limits, and honoring consumer rights where applicable, such as under the Connecticut Data Privacy Act.
  • Antitrust: Collusion doesn't become legal just because pricing or recommendations are coordinated via algorithms. Information exchanges, MFN clauses, and procurement practices require the same scrutiny when AI is involved.

Why this matters for in-house and outside counsel

Your risk surface didn't disappear; it expanded. Models inherit the obligations of the workflows they touch. That means your existing compliance programs are still the backbone: update them for AI inputs, outputs, and feedback loops.

CBIA's Chris Davis warns that the growing complexity of proposed additional AI regulations presents a challenge for many small employers. That reality puts a premium on pragmatic controls over performative policy. Start with the laws you already know and close the gaps created by automation.

Practical steps to close risk now

  • Map use cases: Inventory every AI-assisted decision point (e.g., hiring screening, credit risk, pricing, claims handling, content moderation). Note data sources, human oversight, and affected rights.
  • Bias testing: Run pre-deployment and periodic disparate impact testing for models affecting people's opportunities or access to services. Document methods, sample sizes, and remediations.
  • Consumer disclosures: Substantiate all AI-related claims. Avoid vague "smart" promises and explain material limitations. Ensure opt-outs and consent match actual data flows.
  • Privacy controls: Enforce data minimization, retention limits, and access controls for training and inference datasets. Honor consumer rights requests and keep vendor DPAs/model-use clauses current.
  • Security-by-design: Threat-model prompt injection, model theft, data leakage, and output poisoning. Log model inputs/outputs for forensics and implement RBAC for model access.
  • Antitrust guardrails: Prohibit use of shared algorithms or third-party optimization tools to exchange sensitive competitive information. Train teams on information boundaries and audit pricing/recommendation logic.
  • Human-in-the-loop: For high-stakes outcomes, require meaningful human review with authority to override. Measure error rates and set escalation thresholds.
  • Documentation: Keep model cards or equivalent: purpose, training data provenance, evaluation metrics, known failure modes, and controls. Regulators will ask.
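The bias-testing step above can be illustrated with a minimal sketch of one common screening heuristic, the EEOC "four-fifths" rule, applied to hypothetical selection data. The group names and decision lists are invented for illustration; a real audit would pair this screen with statistical significance testing and legal review.

```python
# Minimal disparate-impact screen using the "four-fifths" rule:
# flag any group whose selection rate falls below 80% of the
# highest group's rate. Illustrative only, not a full audit.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 decisions."""
    return {g: sum(d) / len(d) for g, d in outcomes.items() if d}

def four_fifths_flags(outcomes, threshold=0.8):
    """Return {group: True} where rate / best_rate < threshold."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top < threshold for g, r in rates.items()}

# Hypothetical hiring-screen results per demographic group
data = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2/8 selected
}
print(four_fifths_flags(data))  # group_b is flagged: 0.25/0.75 < 0.8
```

Documenting the method, sample sizes, and any remediation taken, as the list above recommends, matters as much as the number itself.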

What the memo clarifies

  • AI does not excuse discriminatory outcomes or deceptive practices.
  • Accountability follows the decision, not the interface. If your system influenced it, you own it.
  • Existing privacy and security duties attach to model development and deployment phases.
  • Competition laws apply to algorithmic coordination the same way they do to people.

According to the memorandum, longstanding protections against discrimination, unfair or deceptive practices, and anticompetitive conduct apply equally to decisions made or influenced by algorithms. The office notes the memo provides an overview of applicable principles as AI continues to develop, but it does not create new obligations.

"As confirmed by the attorney general, Connecticut's existing civil rights, consumer protection, privacy, and antitrust laws provide meaningful protections for both consumers and businesses," added CBIA vice president of public policy Chris Davis. "The growing complexity of proposed additional AI regulations layered on top of these existing laws present a challenge for many small employers that lack legal staff and financial resilience needed to manage proposed overlapping and complex regulations."

Resources

For more information, contact CBIA's Chris Davis (860.244.1931).

