Kaiser Permanente's AI push meets worker demands for guardrails
AI is moving deeper into clinical workflows at Kaiser Permanente. Therapists and other mental health professionals are pushing back, asking for clear protections so patient safety and jobs don't get sidelined in the process.
In Northern California, nearly half of behavioral health workers report discomfort with AI tools entering their practice. The tension is simple: reduce busywork without reducing the human care that keeps patients safe.
What's driving the standoff
Kaiser says AI cuts time on notes and paperwork so clinicians can focus on patients. "AI does not replace human assessment and care," spokesperson Candice Lee said, pointing to potential benefits in diagnostics, patient relationships, and clinician time.
Workers see a slippery slope. "They're sort of painting a map that would reduce their need for human workers and human clinicians," said Ilana Marcucci-Morris, a licensed clinical social worker and union bargaining team member. She supports useful tech but warns AI mistakes can carry "grave" consequences for patients.
Where AI is already in use
- AI note-taking and transcription: Some clinicians use Abridge to summarize visits. Therapists flag privacy concerns given how sensitive these discussions can be. Kaiser says patients must consent, clinicians review the outputs, and recordings and transcripts are encrypted and deleted within 14 days.
- Predictive monitoring: AI is used to signal when hospitalized patients may deteriorate. These models can help teams intervene earlier, but require oversight and validation.
- Mental health apps: Kaiser offers apps, including at least one with an AI chatbot. Workers fear over-reliance, especially for high-risk patients.
Legal and policy pressure is rising
California labor groups are urging state leaders to pass protections addressing surveillance and job loss. A new bill backed by the California Psychological Association would require clear written consent before recording or transcribing therapy, and prohibit unlicensed entities (including AI tools) from offering therapy in the state. Sen. Steve Padilla, who introduced the bill, says rules need to keep pace with tech growth.
Providers face lawsuits too. A San Diego case alleges Sharp HealthCare used Abridge to record visits without consent; Sharp says it protects privacy and does not use AI tools during therapy sessions. Separate suits from parents against Character.AI and OpenAI allege chatbot interactions contributed to harm among young people.
What this means for clinical practice
If you lead a clinical team, here's a practical checklist to use AI responsibly without risking care quality or trust:
- Explicit consent: Get clear, written consent before any recording or transcription, especially in therapy. Offer a no-penalty opt-out.
- Human-in-the-loop: Require clinicians to review, edit, and sign off on every AI-generated note. No auto-posting to the chart.
- Scope boundaries: Ban AI chatbots from crisis assessment, risk stratification, or clinical decision-making. Use AI to assist, not decide.
- Escalation rules: If an AI transcript or chat flags self-harm, abuse, or safety risks, trigger a documented human review within minutes.
- Privacy controls: Disable recordings by default in high-sensitivity encounters. Limit retention; verify encryption, access logs, and delete timelines.
- Patient transparency: Post simple, plain-language notices explaining which AI tools are used, why, how long data is kept, and who sees it.
- Bias and accuracy checks: Audit outputs monthly. Track false positives/negatives and disparities across patient groups.
- Procurement questions: Demand model versioning, training data sources, security attestations, and indemnification in contracts.
- Training and upskilling: Teach staff to spot AI failures, correct summaries, and recognize safety signals. Cross-train to reduce job displacement risk.
- Documentation: Note when AI assisted a record, the tool used, the version, and who reviewed it. A minimal sketch of this record-keeping follows this list.
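To make the review, escalation, and documentation items concrete, here is a minimal Python sketch. Everything in it is hypothetical (the AIAssistedNote record, the SAFETY_FLAGS terms, the vendor name); it is not Kaiser's or any vendor's implementation, just one way a team might turn consent, sign-off, and escalation into hard gates rather than reminders.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical safety terms that trigger a documented human review.
SAFETY_FLAGS = {"self-harm", "suicide", "abuse"}


@dataclass
class AIAssistedNote:
    """Metadata for one AI-assisted clinical note (illustrative only)."""
    patient_consented: bool            # explicit written consent on file
    tool_name: str                     # which scribe product produced the draft
    tool_version: str                  # model/version recorded for the audit trail
    draft_text: str                    # AI-generated draft, not yet in the chart
    reviewed_by: Optional[str] = None  # clinician who signed off
    reviewed_at: Optional[datetime] = None
    escalated: bool = False

    def flag_safety_terms(self) -> None:
        # Escalation rule: any safety term triggers a documented human review.
        # Plain keyword matching over-flags (it catches negations too), which is
        # the point: a human, not the model, decides whether there is real risk.
        lowered = self.draft_text.lower()
        self.escalated = any(term in lowered for term in SAFETY_FLAGS)

    def sign_off(self, clinician: str) -> None:
        # Human-in-the-loop: nothing is eligible for the chart without consent
        # and a named reviewer.
        if not self.patient_consented:
            raise PermissionError("No written consent on file; do not record or post.")
        self.reviewed_by = clinician
        self.reviewed_at = datetime.now(timezone.utc)

    def ready_for_chart(self) -> bool:
        return self.patient_consented and self.reviewed_by is not None


# Example: draft a note, flag safety terms, and require sign-off before charting.
note = AIAssistedNote(
    patient_consented=True,
    tool_name="example-scribe",      # hypothetical vendor name
    tool_version="2025.1",
    draft_text="Patient reports improved sleep; no self-harm ideation noted.",
)
note.flag_safety_terms()             # escalated is True because "self-harm" appears
note.sign_off("LCSW Jane Doe")
assert note.ready_for_chart()
```

The design choice worth copying is that the gates are structural: a draft cannot be ready for the chart unless consent exists and a named clinician signed it, which is the 100% review rule in the operating model below.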
A simple operating model you can adopt
- Assist, don't replace: AI supports admin tasks; clinicians own care and decisions.
- Consent by default: No recording or transcription without written consent.
- 100% review: A human signs every AI output before it enters the chart.
- Clear accountability: Name the person responsible for each use case.
- No autonomous care: AI never initiates diagnosis, treatment, or risk calls.
- Minimal data: Store the least data for the shortest time; verify deletion (a retention-check sketch follows this list).
- Patient visibility: Make AI use visible and understandable to patients.
- Continuous audit: Monitor performance and equity; pause if harms appear.
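Retention is the item teams most often state and least often verify, so a small audit script earns its keep. The sketch below is illustrative only: it assumes a hypothetical inventory export with 'id' and 'created_at' fields and uses a 14-day window (the figure Kaiser cites for Abridge recordings); it flags overdue items but does not delete anything or talk to any real system.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention window; align with whatever your vendor contract says.
RETENTION_DAYS = 14


def overdue_recordings(stored_items, now=None):
    """Return items that should already have been deleted.

    `stored_items` is assumed to be an inventory export: dicts with an 'id'
    and a timezone-aware 'created_at'. Adapt to what your storage exposes.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [item for item in stored_items if item["created_at"] < cutoff]


# Example audit run over a made-up inventory.
inventory = [
    {"id": "rec-001", "created_at": datetime(2025, 1, 2, tzinfo=timezone.utc)},
    {"id": "rec-002", "created_at": datetime.now(timezone.utc) - timedelta(days=3)},
]
for item in overdue_recordings(inventory):
    # In a real audit you would confirm deletion with the vendor and log the finding.
    print(f"{item['id']} exceeds the {RETENTION_DAYS}-day retention window")
```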
Worker concerns go beyond scribing
A recent analysis by major research groups found that medical administrative assistants are among the workers most exposed to AI because their tasks overlap heavily with automation. The same research warns that these workers may have fewer pathways to transition without support, and it lists other high-exposure roles such as office clerks, insurance sales agents, and translators. This is why clinicians are asking for upskilling, redeployment plans, and guarantees that AI augments care instead of replacing it.
Therapy and chatbots: where the line is
People are using chatbots for advice on tough conversations and everyday stress. Some find value in a conversational format, but AI tools aren't licensed clinicians. Mental health leaders warn that nuance can be lost, and the wrong responses can escalate risk. Dr. John Torous is working with the National Alliance on Mental Illness to create benchmarks that show how different tools respond to mental health prompts, a step toward more clarity for clinicians and patients.
Bottom line from front-line therapists like Marcucci-Morris: "AI is not the savior." It can help with tedious tasks. It should not be mistaken for therapy, clinical judgment, or crisis care.
What to watch next
- Contract talks in Northern California over AI use and job protections.
- Legislation requiring consent for therapy recordings and restricting unlicensed "AI therapy."
- Benchmarks and safety standards for mental health AI tools.
- More clarity on consent, encryption, retention limits, and audit trails from vendors.
Resources
Upskilling for healthcare roles
If your team wants practical AI literacy and role-specific skills, explore job-based learning paths.