NYC nurses say hospitals quietly rolled out AI, risking patient safety and nursing jobs

NYC nurses say hospitals are rolling out AI without bedside input, adding patient safety risks and extra work. They want nurses in the room, clear rules, training, and fail-safes.

Published on: Dec 01, 2025

NYC nurses warn AI rollouts are sidelining bedside expertise and patient safety

Nurses across New York City say hospitals are deploying artificial intelligence tools without meaningful input from the bedside, creating new safety risks and extra work. At a recent "State of Nursing" Committee on Hospitals meeting, union leaders described systems introduced with little notice, minimal training, and unclear protocols.

"What do we do if the machines stop working?" asked Nancy Hagans, president of the New York State Nurses Association (NYSNA). Her point was simple: real-time care still lives or dies at the bedside, and that requires nurses fully in the loop.

What nurses are reporting

  • Limited involvement in decision-making on AI tools that directly affect workflows and patient monitoring.
  • Equipment appearing in critical units without prior training or defined escalation paths; nurses in one ICU reportedly found devices affixed to patients' heads without warning or clear protocols.
  • New AI assistants that require nurses to double-check outputs, adding cognitive load and duplicating work rather than reducing it.

Denash Forbes, an NYSNA director at large and a nurse at Mount Sinai West, pointed to "Sofiya," an AI assistant in the cardiac catheterization lab. According to Forbes, nurses must verify the system's work to prevent errors, which can take time away from patient oversight elsewhere.

What hospital leadership says

Mount Sinai's chief digital transformation officer, Robbie Freeman, has said the goal is to use AI as a supportive tool to enhance clinical decision-making, not to replace clinicians. Nurses welcome that intent in principle. The gap, they argue, is in how these systems are selected, validated, implemented, and monitored on the ground.

Why this matters for safety and workload

  • Automation bias: Staff may over-trust AI suggestions and miss patient deterioration or atypical presentations.
  • Hidden failure modes: Model errors or downtime can go unnoticed without robust monitoring and fallback plans.
  • Bias and data drift: Without regular auditing, outputs can skew by population or care setting and degrade over time.
  • Workload creep: "AI assistance" often creates parallel checks, documentation, or back-and-forth that adds minutes to every task.

Practical steps for clinical leaders

  • Establish a multidisciplinary AI governance committee (nursing, physicians, pharmacy, IT, legal, quality, patient safety, risk).
  • Co-design with bedside nurses from day one: requirements, validation criteria, workflow mapping, and change management.
  • Run small, opt-in pilots with clear success measures, limits of use, and human-in-the-loop checkpoints.
  • Document explicit failure protocols: how to detect issues, when to stop using the tool, and how to revert to manual care.
  • Require transparent model documentation (intended use, training data sources, known limitations, performance by subgroup).
  • Stand up continuous monitoring: false positives/negatives, near-misses, downtime, and staff feedback channels (a minimal metrics sketch follows this list).
  • Train all users before go-live; certify competency; re-train after updates or scope changes.
  • Confirm HIPAA and data security reviews; define data retention, access controls, and vendor responsibilities.
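
To ground the monitoring step above, here is a minimal sketch, assuming a hypothetical weekly audit log in which each record notes whether the tool raised an alert and whether a clinician confirmed a true event. The field names, the 10% false-alarm threshold, and the review flag are illustrative assumptions, not any vendor's schema or a regulatory standard.

    # Illustrative sketch only: rolling up one week of hypothetical audit
    # records for an AI alerting tool into the monitoring metrics above.
    from dataclasses import dataclass

    @dataclass
    class AuditRecord:
        alert_fired: bool       # did the tool raise an alert?
        event_confirmed: bool   # did a clinician confirm a true event?

    def weekly_metrics(records: list[AuditRecord], fp_threshold: float = 0.10) -> dict:
        tp = sum(r.alert_fired and r.event_confirmed for r in records)      # true alerts
        fp = sum(r.alert_fired and not r.event_confirmed for r in records)  # false alarms
        fn = sum(not r.alert_fired and r.event_confirmed for r in records)  # missed events
        tn = len(records) - tp - fp - fn
        fp_rate = fp / (fp + tn) if (fp + tn) else 0.0
        fn_rate = fn / (fn + tp) if (fn + tp) else 0.0
        return {
            "false_alarm_rate": round(fp_rate, 3),
            "missed_event_rate": round(fn_rate, 3),
            # hypothetical trigger for a governance-committee review
            "needs_governance_review": fp_rate > fp_threshold,
        }

    # Example: four audit records from a hypothetical week
    week = [AuditRecord(True, True), AuditRecord(True, False),
            AuditRecord(False, False), AuditRecord(False, True)]
    print(weekly_metrics(week))
    # {'false_alarm_rate': 0.5, 'missed_event_rate': 0.5, 'needs_governance_review': True}

Tracking false alarms and missed events side by side matters: tuning a tool to quiet alarms can silently raise the missed-event rate, which is exactly the kind of failure a governance committee should catch.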

Practical steps for frontline nurses

  • Ask for the intended use, validation summary, and limits of the AI tool before relying on it.
  • Keep human assessments primary; treat AI outputs like consults, not orders.
  • Double-check high-risk recommendations (med dosing, device settings, triage flags) and document clinical rationale.
  • Report near-misses and tool errors through safety channels; escalate patterns to unit leadership.
  • Request just-in-time training and quick-reference guides; ensure coverage for nights/weekends and float staff.
  • Maintain manual skills and fallback workflows; practice how to proceed if the system fails.

Red flags to watch

  • No bedside nurses on the selection and rollout team.
  • No clear policy on when to accept or override AI output.
  • Lack of training or competency checks prior to use.
  • No audit trail, no performance dashboard, or no near-miss reporting tied to the tool.
  • Vendor claims without peer-reviewed evidence or real-world validation data for your patient population.

Balancing innovation with accountability

AI can support pattern recognition, documentation, and risk stratification, but it is not a replacement for clinical judgment. Without guardrails, it shifts risk to the bedside and invites silent failure. With governance, transparency, and nurse-led workflows, it can be a net benefit instead of a burden.

For policy and safety frameworks, see guidance from the World Health Organization on ethics and governance of AI for health and the U.S. FDA on AI/ML in medical devices.

If your team is building AI literacy

Creating shared language across nursing, providers, and IT shortens the learning curve and reduces rollout friction. A curated overview of role-based AI courses can help teams align on core concepts and safe-use practices: AI courses by job.

Bottom line: include bedside nurses early, prove safety in your population, monitor continuously, and keep a clean exit path. Patient safety depends on it.

