Washington bill would put unions at the table before agencies deploy AI

WA lawmakers will revisit HB 1622 to require bargaining before AI affects pay or evaluations. Backers say it protects workers; critics warn of delays and added leverage.

Published on: Nov 24, 2025

WA Legislature to revisit union bargaining over government AI use

Washington lawmakers will again debate whether public employers must negotiate with unions before rolling out artificial intelligence. House Bill 1622 would require bargaining when AI affects wages or performance evaluations, moving talks from after implementation to before.

The bill passed the House last session, largely along party lines, but stalled in the Senate. Supporters say the change adds guardrails for workers. Opponents, including business groups and some local officials, argue it tilts too much power to labor and could slow useful innovation.

What HB 1622 would do

  • Require state and local government employers to bargain over AI when it impacts wages, hours, working conditions, or performance evaluations.
  • Shift bargaining to the front end of AI adoption, not after tools are already deployed.

"Public sector bargaining covers wages, hours and working conditions… without legislation, that bargaining happens after implementation," said Washington State Labor Council President April Sims. "With legislation like House Bill 1622, it would happen before."

The legal backdrop (why state vs. local rules differ)

A 2002 state law blocks bargaining over technology for classified employees at state agencies and higher education. That law was written when the biggest tech choices were desktops, fax machines, and phones, hardly systems that score performance or shape hiring.

For cities, counties, and many other public employers, a separate statute already requires bargaining over technology if it touches wages, hours, or working conditions. HB 1622 would align state rules more closely with that standard.

What's already required today

  • Governor's 2024 directive: The state established a framework for using generative AI ethically and equitably.
  • Office of Financial Management (OFM) September memo: Agencies must give unions six months' notice if generative AI will cause consequential changes to wages, hours, or working conditions. Unions can demand to bargain. Human review is required for employment-related decisions that involve AI.

As Sims put it, "Including workers at the beginning is not a courtesy. It is a practical necessity… It ensures human oversight where it is needed, and it builds trust among staff."

Why this matters for agencies

AI is already in pilots across government, including document drafting, benefits intake, records triage, and more. Maryland, for example, is partnering with Anthropic to help residents apply for food aid, Medicaid, and other benefits.

Workforce concerns are real. In a recent Pew Research Center survey, more than half of workers said they are worried about AI's future impact on the workplace, and about one in six said AI is already doing some of their work.

Federal crosscurrents to watch

  • Reportedly, the administration is weighing an executive order that could push the U.S. Department of Justice to sue states over certain AI regulations. It's unclear whether a bill like HB 1622, which governs labor relations rather than the technology itself, would be affected.
  • Congress recently debated a moratorium on state-level AI regulations; that provision was removed after pushback led by U.S. Sen. Maria Cantwell.
  • Another Washington proposal, SB 5708, aims to protect children from AI-driven social media apps. It passed the Senate but stalled in the House and could return in 2026.

What leaders should do now

  • Inventory current and planned AI use. Note anything that could affect pay, hours, duties, evaluations, or discipline.
  • Start early conversations with labor. Even where bargaining isn't yet mandated, pre-briefing avoids delay and builds trust.
  • Plan for OFM's six-month notice (state agencies). Build this lead time into project schedules and vendor timelines.
  • Set human-in-the-loop checkpoints. For hiring, performance, and discipline, require documented human review before decisions are final.
  • Run an AI risk assessment. Bias, data privacy, public records, model drift, and explainability should be checked before launch.
  • Tighten data and vendor controls. Put confidentiality, audit access, model update disclosure, and incident reporting into contracts.
  • Pilot, measure, iterate. Start small with clear success metrics, and publish what you learn to staff and unions.
  • Train managers and front-line staff. Focus on safe use, limits, and escalation paths for errors or bias.

Key takeaways for government teams

  • HB 1622 would move bargaining to the start of AI projects that touch work conditions or evaluations.
  • Local governments already bargain over tech that affects wages, hours, or conditions; state agencies face a different standard set in 2002.
  • Regardless of legislation, OFM rules already require notice and human review for certain AI uses.
  • Early union engagement reduces deployment risk and shortens timelines later.

