Connecticut Requires Employers to Disclose AI Use in Hiring and Personnel Decisions
Connecticut's legislature has passed legislation regulating how employers use artificial intelligence in employment decisions. Governor Ned Lamont is expected to sign the Connecticut Artificial Intelligence Responsibility and Transparency Act (SB 5) into law, joining California, Illinois, and other states in imposing new compliance obligations on companies that deploy automated hiring and personnel tools.
The law applies to any automated system that generates scores, ranks, recommendations, or classifications affecting hiring, promotion, discipline, or termination decisions. Resume-screening tools, video-interview analytics, skills assessments, performance-management software, and workforce-reduction algorithms all fall under the law's scope.
What Employers Must Disclose
Before deploying an automated employment decision tool, employers must tell job applicants and employees in plain language that the tool may be used, what data it processes, what outputs it generates, and how those outputs influence the decision.
When an automated system contributes to an adverse employment decision, such as a rejection, demotion, or termination, employers must also disclose why. The explanation must specify how much the automated output influenced the decision, what types of data the system processed, and where that data came from.
Anti-Bias Testing Is Now a Practical Requirement
SB 5 amends Connecticut's Fair Employment Practices Act to hold employers liable for discriminatory outcomes produced by automated systems. Courts and regulators will evaluate whether employers conducted anti-bias testing and what they did with the results.
Employers should assess whether their automated tools produce statistically significant disparities based on protected characteristics such as race, gender, or age. If disparities appear, employers must determine whether the tool is job-related and consistent with business necessity, whether less discriminatory alternatives exist, and what mitigation steps to take.
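The statute does not prescribe a testing method, but a common first screen for the kind of disparity described above is the EEOC's "four-fifths rule," under which a protected group's selection rate below 80% of the highest group's rate is often treated as evidence of adverse impact. The sketch below is illustrative only; the group names and counts are hypothetical, and a real audit would add significance testing.

```python
# Illustrative four-fifths rule screen. Not mandated by SB 5; one common
# heuristic from the EEOC Uniform Guidelines. All inputs are hypothetical.

def adverse_impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """groups maps group name -> (selected, applicants).

    Returns each group's selection rate divided by the highest
    group's selection rate (the "impact ratio").
    """
    rates = {g: selected / applicants for g, (selected, applicants) in groups.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical hiring outcomes: (offers extended, applicants screened).
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}

# Flag any group whose impact ratio falls below the 0.8 threshold.
flagged = {g: r for g, r in adverse_impact_ratios(outcomes).items() if r < 0.8}
print(flagged)  # group_b's 30% rate is 62.5% of group_a's 48% rate
```

A ratio below 0.8 is a trigger for further analysis, not a legal conclusion: at that point the employer would move to the job-relatedness and less-discriminatory-alternative questions described above.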
The law makes clear that using an automated system does not shield employers from discrimination liability. Responsibility remains with the employer, regardless of whether a vendor created or operates the tool.
Human Review Must Remain in Place
Employers should ensure that automated outputs do not become final decisions without human review. Decision-makers must understand how the tool works and what it cannot do, and they must retain the authority to question or override its recommendations. Companies should document these override decisions.
Timeline and Enforcement
Employment-related provisions take effect on October 1, 2026, with staggered implementation dates for different requirements. The Connecticut attorney general will have exclusive authority to enforce the law under the state's Unfair Trade Practices Act.
Employers should begin now by inventorying automated tools and identifying which ones are covered. By Q3 2026, companies should draft plain-language disclosures and establish human-review workflows. Before year-end, employers should start bias testing for high-impact tools and train HR staff and managers.
Ongoing compliance requires regular audits, updated notices when tools change, documented mitigation efforts, and reassessment whenever business practices or job criteria shift.
For legal teams, staying informed about Connecticut regulatory guidance and any enforcement announcements will be critical as the law takes effect. Legal professionals should also consider how these requirements affect vendor agreements and procurement standards.