AI in Nursing Needs a Product Strategy: Inside the Consortium Trying to Make That Real
The Florida State University College of Nursing launched the Nursing and AI Innovation Consortium (NAIIC) to bring health systems, educators and industry leaders together around a single plan for AI in nursing. The goal: keep nurses in the loop, set national standards and prove real-world value, not just promise. Partners include Microsoft, Quadrivia AI and Hippocratic AI - with participation expanding to organizations like Cedars-Sinai, Yale School of Nursing and the American Nurses Association.
Nearly 90% of healthcare organizations use AI in some form, yet there's no national framework guiding how it touches patient care. As Jing Wang, Ph.D., dean of FSU College of Nursing and NAIIC co-director, put it, adoption today is fragmented, inequitable and often misaligned with clinical realities. NAIIC is trying to change that with evidence-based standards that meet nursing where it is.
Why This Matters for Product Teams
Nurses are the largest segment of the healthcare workforce. If your product reduces time with patients, adds administrative work or hides decision logic, it will be rejected - by unions, by clinicians and eventually by the market. Your roadmap needs nurse-in-the-loop design, rigorous validation and clear governance from day one.
What NAIIC Is Building
NAIIC is developing comprehensive standards that span the full AI life cycle. Wang described four pillars:
- Research methodologies that reflect actual nursing workflows and patient contexts
- Educational curricula that upskill clinicians on AI literacy and oversight
- Product development guidelines that prioritize safety, equity and explainability
- Clinical implementation playbooks with repeatable processes and metrics
The intent is simple: AI should complement nursing practice, not compete with it. That means transparent systems nurses can verify against their clinical judgment.
The Tension: Protests, Strikes and "Untested Technologies"
Recent demonstrations show where trust is breaking. Nurses at 22 Kaiser Permanente facilities protested layoffs and AI use in September, calling some tools "untested technologies." A separate five-day strike in October focused on wages, but AI remains a flashpoint.
Union leaders have been clear. "You simply cannot replace nurses with technology like artificial intelligence," said Michelle Gutierrez Vo, RN, of the California Nurses Association. Pat Kane, RN, of the New York State Nurses Association, noted their contract at Northwell South Shore includes the right to review AI before implementation - a model worth studying.
For product teams, the signal is loud: involvement, oversight and transparency are not "nice to have." They're table stakes.
Design Principle: Nurse-in-the-Loop
Wang advocates for systems that keep nurses involved across design, deployment and use. Being "in the loop" means nurses validate data, shape clinical relevance and interpret outputs in the context of a patient's story, family and environment. No model can do that alone.
Done right, this approach improves documentation, surfaces risk earlier and frees nurses to spend more time with patients. Done poorly, it becomes a black box that erodes trust.
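To make the principle concrete, here is a minimal Python sketch of a nurse-in-the-loop review gate: every draft recommendation sits in a review queue until a nurse accepts, overrides or escalates it, and the decision is logged. The names and example values (DraftRecommendation, NurseReviewQueue, the sample fields) are illustrative assumptions for this sketch, not NAIIC specifications or any vendor's API.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class ReviewDecision(Enum):
    ACCEPT = "accept"      # nurse agrees; the recommendation proceeds
    OVERRIDE = "override"  # nurse rejects; the reason is captured for the product team
    ESCALATE = "escalate"  # nurse routes to a physician or rapid-response team


@dataclass
class DraftRecommendation:
    patient_id: str
    rationale: str     # plain-language reasoning the nurse can verify
    confidence: float  # model confidence, 0.0-1.0
    high_risk: bool    # e.g., deterioration alert or medication change


@dataclass
class NurseReviewQueue:
    """In-memory stand-in for an EHR worklist: every draft waits for a nurse."""
    pending: List[DraftRecommendation] = field(default_factory=list)
    decisions: List[dict] = field(default_factory=list)

    def submit(self, rec: DraftRecommendation) -> None:
        # Nothing auto-executes; high-risk or not, the draft goes to review first.
        self.pending.append(rec)

    def decide(self, rec: DraftRecommendation, decision: ReviewDecision, reason: str = "") -> None:
        # Record the outcome so override rates and reasons stay auditable.
        self.pending.remove(rec)
        self.decisions.append({
            "patient_id": rec.patient_id,
            "decision": decision.value,
            "confidence": rec.confidence,
            "high_risk": rec.high_risk,
            "reason": reason,
        })


if __name__ == "__main__":
    queue = NurseReviewQueue()
    draft = DraftRecommendation("pt-001", "Early-warning score rising over 4 hours", 0.82, True)
    queue.submit(draft)
    queue.decide(draft, ReviewDecision.ESCALATE, reason="Matches my bedside assessment")
    print(queue.decisions)
```

The point of the sketch is the shape, not the code: the model proposes, the nurse disposes, and every decision leaves a trail the product team can learn from.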
Product Checklist: From Idea to Rollout
- Problem framing: Tie each feature to a measurable clinical problem nurses actually face (e.g., documentation time, patient deterioration alerts, staffing allocation).
- User co-creation: Run design sprints with bedside nurses, educators and informatics leaders. Capture edge cases early.
- Data governance: Define sources, consent, audit trails and de-identification. Document known biases and mitigation steps (a sample manifest follows this list).
- Model behavior: Require explainability nurses can act on. Provide rationale, confidence and clear next-step guidance.
- Clinical validation: Prospective studies, nurse-reviewed outputs and safety gates before escalation to patient-impacting workflows.
- Change management: Training plans, super-user networks and a rollback path if metrics slip.
- Policy alignment: Map to emerging NAIIC standards and local union agreements, including review rights and oversight committees.
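As an illustration of the data-governance item above, a product team might keep a short, versioned manifest next to each model release documenting sources, consent basis, de-identification and known biases. The structure below is a hypothetical sketch; the field names and values are assumptions, not a published standard.

```python
# Hypothetical data-governance manifest kept alongside a model release.
# Field names and values are assumptions made for this sketch.
DATA_GOVERNANCE = {
    "model_version": "deterioration-alert-0.3.1",
    "sources": [
        {"name": "ehr_vitals", "consent_basis": "treatment", "refresh": "hourly"},
        {"name": "nursing_notes", "consent_basis": "treatment", "refresh": "per shift"},
    ],
    "deidentification": "direct identifiers removed before any training or evaluation",
    "known_biases": [
        "under-representation of rural transfer patients in training data",
    ],
    "mitigations": [
        "subgroup performance review before each release (see equity metrics below)",
    ],
    "audit_trail": "every inference logged with inputs used, output shown, and reviewer decision",
}

# A release gate can refuse to ship if the manifest is incomplete.
REQUIRED_KEYS = {"model_version", "sources", "deidentification",
                 "known_biases", "mitigations", "audit_trail"}
assert REQUIRED_KEYS.issubset(DATA_GOVERNANCE), "governance manifest is missing required sections"
```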
Governance That Sticks
- Human-in-the-loop requirements for all high-risk recommendations
- Bias monitoring across demographic, condition and unit-level segments
- Model drift checks tied to alert quality and outcome metrics (sketched after this list)
- Incident response playbooks with transparent communication to clinicians
- Post-market surveillance for performance, safety and equity
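The drift check above can be as simple as comparing current alert precision and recall against a validation baseline and flagging any slip beyond an agreed tolerance. The sketch below is illustrative; the 5% tolerance and the example counts are assumptions, not NAIIC thresholds.

```python
# Illustrative drift check: compare this week's alert precision/recall against a
# fixed baseline and flag when quality slips past an agreed tolerance.
from dataclasses import dataclass


@dataclass
class AlertStats:
    true_positives: int
    false_positives: int
    false_negatives: int

    @property
    def precision(self) -> float:
        denom = self.true_positives + self.false_positives
        return self.true_positives / denom if denom else 0.0

    @property
    def recall(self) -> float:
        denom = self.true_positives + self.false_negatives
        return self.true_positives / denom if denom else 0.0


def drift_flags(baseline: AlertStats, current: AlertStats, tolerance: float = 0.05) -> list:
    """Return human-readable flags when alert quality drops by more than `tolerance`."""
    flags = []
    if baseline.precision - current.precision > tolerance:
        flags.append("Precision dropped: nurses are seeing more false alarms")
    if baseline.recall - current.recall > tolerance:
        flags.append("Recall dropped: deteriorating patients may be missed")
    return flags


# Example: validation baseline vs. this week's nurse-adjudicated alerts
baseline = AlertStats(true_positives=90, false_positives=20, false_negatives=10)
this_week = AlertStats(true_positives=70, false_positives=45, false_negatives=18)
for flag in drift_flags(baseline, this_week):
    print(flag)  # feeds the incident-response playbook, not a silent dashboard
```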
Proof That Matters: Metrics to Track
- Clinical safety: false positives/negatives, near-misses, escalation accuracy
- Nursing workload: documentation time, task switching, after-shift hours
- Care quality: length of stay, readmissions, adverse event rates
- Equity: performance by demographic and condition subgroups (see the roll-up sketch after this list)
- Adoption: clinician satisfaction, override rates, abandonment reasons
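For the equity metric above, the same safety and adoption numbers can be rolled up by subgroup so a gap shows up as a number rather than an anecdote. The records and field names below are invented for the sketch.

```python
# Illustrative subgroup roll-up of nurse-adjudicated alert reviews.
from collections import defaultdict

alert_reviews = [
    {"subgroup": "group_a", "false_positive": True,  "overridden": True},
    {"subgroup": "group_a", "false_positive": False, "overridden": False},
    {"subgroup": "group_b", "false_positive": True,  "overridden": True},
    {"subgroup": "group_b", "false_positive": True,  "overridden": True},
    {"subgroup": "group_b", "false_positive": False, "overridden": False},
]

totals = defaultdict(lambda: {"n": 0, "fp": 0, "overrides": 0})
for r in alert_reviews:
    bucket = totals[r["subgroup"]]
    bucket["n"] += 1
    bucket["fp"] += r["false_positive"]
    bucket["overrides"] += r["overridden"]

for subgroup, t in sorted(totals.items()):
    print(f"{subgroup}: false-positive rate {t['fp'] / t['n']:.0%}, "
          f"override rate {t['overrides'] / t['n']:.0%}")
```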
Procurement and Partnerships
- Ask vendors for transparent training data summaries, validation studies and bias analyses.
- Demand clinician-facing explanations and an audit API for IT and quality teams (a sample field checklist follows this list).
- Include union review windows and clear opt-out criteria in contracts.
- Pilot in one unit with tight feedback loops before scaling across the system.
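One way to make the audit-API ask concrete during due diligence is to agree on a minimum set of fields every audit record must expose, then check a vendor's sample export against it. The field list below is a hypothetical starting point, not any vendor's actual schema.

```python
# Hypothetical minimum field set a quality team might require from a vendor's
# audit export, expressed as a simple validator.
REQUIRED_AUDIT_FIELDS = {
    "timestamp",           # when the recommendation was generated
    "model_version",       # which model produced it
    "patient_context_id",  # de-identified link back to the encounter
    "recommendation",      # what the system suggested
    "rationale",           # clinician-facing explanation shown at the bedside
    "confidence",          # model confidence as surfaced to the nurse
    "reviewer_role",       # who reviewed it (e.g., RN, charge nurse)
    "decision",            # accepted / overridden / escalated
    "override_reason",     # free text when overridden
}


def missing_fields(sample_record: dict) -> set:
    """Return required fields the vendor's sample audit record does not provide."""
    return REQUIRED_AUDIT_FIELDS - set(sample_record)


# Example: run against a sample export during procurement review
sample = {"timestamp": "2025-03-01T08:15:00Z", "model_version": "0.3.1",
          "recommendation": "Increase monitoring frequency", "decision": "accepted"}
print(missing_fields(sample))
```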
What Success Looks Like
Nurses retain control. AI reduces low-value work. Recommendations are explainable and verifiable. Outcomes improve across patient groups. And every release closes the loop with the people who use it the most: bedside nurses.
Where to Learn More
- American Nurses Association - policy, standards and clinician perspectives
- National Nurses United - union positions and updates
Bottom Line for Product Development
NAIIC is pushing for a shared standard so AI in nursing is evidence-based, equitable and aligned with practice. Build to that standard now. Engage nurses early, make the system explain itself and measure impact like lives depend on it - because they do.