Unions warn HSE over AI rollout without consultation, insist on human-led care and patient safeguards

Unions say Ireland's AI-for-care plan must be worker-led, not imposed. They want codesign, human oversight, and staffing protected so AI supports care rather than replacing it.

Categorized in: AI News Healthcare
Published on: Mar 14, 2026

Healthcare unions call for worker-led AI rollout in Ireland's public health service

Healthcare unions have raised concern following the launch of the Artificial Intelligence for Care Strategy, saying any integration of AI must be shaped with the people who deliver care. They're disappointed that a policy advancing AI was released without meaningful engagement with worker representatives.

SIPTU's Kevin Figgis put it plainly: the future of healthcare must be human-led. Ireland's public health service should be a great place to work, forward-looking on technology, and safely staffed at all times. AI can support care, but not at the expense of direct, in-person patient care.

INMO's Edward Mathews said unions are concerned the HSE moved ahead without proper consultation. AI may bring benefits, but it also brings significant risks. It should enhance personal care, not replace it, and it must go hand-in-hand with investment to grow the workforce and ensure safe staffing across services.

Fórsa's Ashley Connolly confirmed unions have sought an urgent meeting with the HSE Chief Technology Officer. Any AI initiatives, she said, need ironclad protections so patient care remains safeguarded through effective human oversight.

Why consultation is non-negotiable

Unions are calling for staff involvement at every step: development, testing, and implementation. That means codesign, clear accountability, and explicit safety measures to protect patients. It also means time and resources to train staff, plus backfill so services aren't stretched while new systems bed in. Launching policy without real engagement is a poor start to a complex rollout.

What safe AI integration looks like

  • Codesign with frontline staff across nursing, midwifery, medical, AHP, admin, and IT, plus patient representation.
  • Clear clinical ownership and human oversight for any AI-influenced decision; no silent automation of clinical judgment.
  • Independent clinical validation before pilots; phased go-lives with defined success metrics and stop criteria.
  • Safe staffing protected by policy; AI cannot be used to justify headcount cuts or unsafe skill-mix changes.
  • Data governance: data protection impact assessments (DPIAs), security controls, data minimisation, and clear consent pathways where applicable.
  • Bias and performance monitoring across demographics; regular audits and transparent model updates.
  • Usability and workload checks so AI reduces admin friction instead of shifting extra tasks onto clinicians.
  • Training with protected time and backfill; role-specific competencies and ongoing support.
  • Incident reporting that captures near-misses, with rapid review and feedback to teams.
  • Clear vendor accountability, audit trails, and the ability to override or switch off systems when safety is at stake.

What this means for frontline staff

Expect pilots in areas like triage support, scheduling, documentation summarisation, imaging pre-reads, and demand forecasting. Ask how tools were validated, what risks were identified, and how you can escalate concerns.

  • Track false alarms, misses, workload shifts, and any impact on patient flow or safety.
  • Request protected time to train, practice, and adapt workflows.
  • Confirm who is accountable for AI-enabled decisions and how incidents are reviewed.
  • Ensure safe staffing is maintained during pilots and beyond.

For managers and clinical leaders: immediate next steps

  • Publish a consultation plan and the membership of your AI steering group.
  • Ring-fence budget for training, backfill, evaluation, and data quality work.
  • Set procurement rules: no deployment without clinical evaluation, DPIA, bias assessment, and audit trails.
  • Define accountability: sign-off owners, monitoring cadence, and incident response timelines.
  • Communicate with patients about where AI is used and how their data is protected.

Helpful resources

For evidence-based guidance on safe, ethical AI in health, see the WHO report "Ethics and governance of artificial intelligence for health". It outlines safeguards, risk management, and oversight that align with the unions' calls for accountability and patient safety.


Bottom line

AI can help, but care is human. If AI is to support Ireland's public health service, it must be built with the people who keep it running, with safety, transparency, and staffing protected at every step.
