Lansing-Area Lawmakers, Unions Push for Guardrails on Workplace AI and Employee Monitoring

Lansing-area lawmakers and unions push guardrails on workplace AI and monitoring: notice, fairness, and limits on always-on tracking. Expect human review, bias tests, and penalties.

Published on: Feb 24, 2026

Michigan legislators from the Lansing area, working with labor unions, are floating new rules to keep AI and employee monitoring tools in check. The focus: transparency, fairness, and limits on always-on surveillance. If you run HR, manage a public agency, or work on policy, this is your heads-up.

What's on the table

  • Clear notice and transparency: Tell workers when AI or monitoring tools are in use, what data is collected, and how it's used.
  • Human review of automated decisions: People can appeal AI-driven actions (hiring, scheduling, discipline, termination) and get a meaningful explanation.
  • Limits on surveillance: Curb always-on tracking (cameras, keystrokes, location), especially off-hours or in sensitive areas.
  • Data minimization and retention: Collect only what's job-relevant, keep it for defined periods, protect it like it matters.
  • Bias testing and audits: Regular checks for discriminatory impact; document fixes and outcomes.
  • Respect for collective bargaining: No end-runs around unions; negotiate deployment and policy changes.
  • Impact assessments: Pre-deployment reviews of risks, expected benefits, and safeguards; ongoing monitoring.
  • Enforcement with teeth: Complaints, fines, and potential private rights of action for violations.

Why now

AI is making hiring faster, scheduling tighter, and oversight constant, but it can miss context and amplify bias. Lawmakers are responding to cases of questionable monitoring and automated decisions that workers couldn't challenge. They're also aligning with national guidance such as the NIST AI Risk Management Framework and EEOC guidance on AI in employment.

What this means for HR leaders

  • Inventory your stack: List every tool that scores, monitors, or decides. Include vendor features you've never turned on.
  • Turn off "always-on" by default: Use the least intrusive settings. Block off-hours tracking unless safety-critical.
  • Standardize notice language: Plain-English explanations of what's collected, why, and for how long.
  • Require vendor assurances: Bias testing, documentation, data sources, model purpose, and update cadence.
  • Set retention rules now: Define what you delete and when. Log access. Encrypt sensitive feeds.
  • Build a human-in-the-loop path: Every adverse automated decision gets human review on request.
  • Create an appeals channel: Fast turnaround, logged outcomes, feedback loop to fix the system.
  • Engage labor early: Share impact assessments, negotiate changes, and agree on safeguards.
  • Train managers: What AI does well, where it fails, and when to override.
  • See AI for Human Resources for practical how-tos on audits, notices, and vendor due diligence.
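
The bias-testing step above can start with a simple screening metric. Here's a minimal sketch of the "four-fifths rule" check commonly used in adverse-impact audits; the group names and numbers are purely illustrative, and a real audit would go well beyond this single ratio:

```python
# Hypothetical sketch: a four-fifths (80%) rule check for adverse impact.
# Groups and outcome counts below are made up for illustration.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total); returns each group's selection rate."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Return each group's impact ratio versus the highest-rate group.
    Ratios below 0.8 are conventionally flagged for closer review."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

if __name__ == "__main__":
    # Example: hiring outcomes per (hypothetical) group
    data = {"group_a": (40, 100), "group_b": (24, 100)}
    ratios = four_fifths_check(data)
    flagged = [g for g, r in ratios.items() if r < 0.8]
    print(ratios)   # group_b's ratio is 0.6
    print(flagged)  # ['group_b'] -> warrants review
```

A flagged ratio isn't proof of discrimination; it's a trigger for the deeper review, documentation, and fixes the audit process should already define.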

For government and policymakers

  • Define key terms: "Automated decision system," "monitoring," "adverse action," "high-risk use." Clarity prevents loopholes.
  • Use risk tiers: Stricter rules for hiring, firing, pay, safety. Lighter touch for low-risk tools.
  • Require impact assessments: Pre-deployment and annual updates; publish summaries to boost trust.
  • Align with federal guidance: Map requirements to NIST, EEOC, and state privacy laws to reduce compliance friction.
  • Enable worker rights: Notice, access to data used, explanation, and a path to human review.
  • Resource enforcement: Clear complaints process, timelines, and penalties.
  • See the AI Learning Path for Policy Makers for impact assessments, audits, and oversight structures.

For workers and unions

  • Ask for disclosure: What tools are in use, what data they collect, and how decisions are made.
  • Bargain the rollout: Set rules on monitoring limits, audit rights, and appeals.
  • Document issues: Keep records of errors, false flags, and missed context. You'll need examples.
  • Use your rights: Request human review of adverse decisions and an explanation you can understand.

What to watch in Lansing

  • Committee path: Which committees take it up and whether hearings are scheduled.
  • Scope and exemptions: Small-business thresholds, safety-critical carve-outs, and vendor obligations.
  • Worker rights strength: Private right of action, penalties, and timelines for responses.
  • Preemption questions: How state rules align with federal guidance and local ordinances.

Quick implementation checklist

  • Map every AI/monitoring use and its purpose.
  • Write a single-page worker notice for each use.
  • Set default "least intrusive" settings and off-hours blocks.
  • Adopt a bias testing schedule and pick metrics.
  • Stand up a human-review and appeal process.
  • Define retention/deletion rules; log access.
  • Update contracts with vendors (audits, transparency, security).
  • Train HR, managers, and union reps on roles and triggers.
  • Publish an annual summary of findings and fixes.
  • Assign an owner for continuous monitoring and compliance.

Bottom line

This proposal pushes for common-sense rules: tell people what's happening, limit intrusive tracking, check for bias, and keep a human in the loop. Whether you support or question the details, waiting isn't a plan. Start building these practices now and you'll be ready for whatever comes out of Lansing.

