AI in Employer Health Plans: Claims Decisions, ERISA Duties, and Contract Guardrails
AI now shapes health plan decisions; counsel must manage claims adjudication, fiduciary duty, and vendor terms. Plans need human review, transparency, audits, and documented oversight to meet ERISA obligations.

AI is now embedded in health plan administration. For legal teams advising plan sponsors and administrators, three issues demand immediate focus: claims adjudication, fiduciary oversight, and vendor contracts.
AI background
Since late 2022, mainstream AI systems have pushed automated decision-making into everyday operations. Health plans and their vendors now use models to sort, summarize, and recommend outcomes at scale.
The upside is efficiency. The risk is opaque systems driving clinical or financial outcomes that trigger ERISA exposure, state-law violations, or trust breakdowns with participants.
Claims adjudication: autonomous decision-making
Insurers and TPAs increasingly use AI to evaluate eligibility, medical necessity, and preauthorization. Tools can compare requests against plan terms, clinical guidelines, and patient history in seconds. That speed cuts costs, but it also concentrates error and bias if data or rules are flawed.
Training data is a core concern. Three federal transparency datasets encode years of pricing and utilization practices. If used to train AI, they can carry forward embedded biases and systemic artifacts.
- Hospital price transparency: machine-readable files of standard charges and negotiated rates.
- Transparency in Coverage: plan disclosures of in-network rates and historical out-of-network allowed amounts.
- Consolidated Appropriations Act, 2021: additional plan and issuer disclosures.
Preauthorization is a special pressure point. Use AI to triage and expedite clear approvals, but ensure human review for borderline cases and any denials. Keep audit trails: inputs reviewed, rules applied, and who made the final call.
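The triage-and-audit pattern above can be sketched in code. This is a minimal illustration, not a production system: the confidence threshold, model name, and record fields are all hypothetical, and any real implementation would need clinical and legal review of the routing rules.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative cutoff only -- not a clinical or regulatory standard.
APPROVE_THRESHOLD = 0.95

@dataclass
class AuditRecord:
    """One audit-trail entry: inputs reviewed, rules applied, and who decided."""
    claim_id: str
    inputs_reviewed: list
    rules_applied: list
    model_score: float
    disposition: str   # "auto-approved" or "routed-to-human"
    final_decider: str  # model version, or the pending human reviewer
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def triage(claim_id, inputs, rules, model_score, reviewer_queue, audit_log):
    """Auto-approve only high-confidence approvals; route everything else,
    including every potential denial, to a human reviewer."""
    if model_score >= APPROVE_THRESHOLD:
        disposition, decider = "auto-approved", "model-v1 (illustrative)"
    else:
        disposition, decider = "routed-to-human", "pending: licensed reviewer"
        reviewer_queue.append(claim_id)
    audit_log.append(AuditRecord(claim_id, inputs, rules, model_score, disposition, decider))
    return disposition
```

The key design choice is that the model can only say yes on clear cases; it can never finalize a denial, which keeps the audit trail and the adverse determination in human hands.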
CMS: Health Plan Price Transparency
Fiduciary oversight
Two realities matter under ERISA: modern models are largely opaque, and their accuracy has limits. That makes oversight and prudence harder, not optional.
ERISA requires fiduciaries to act solely in the interest of participants and beneficiaries, with care, skill, prudence, and diligence. Delegating critical claim outcomes to black-box systems, without testing, guardrails, or human control, risks breaching that duty.
Practical stance: position AI as decision support. Require explainability at the level a human reviewer can understand. Validate outputs against test sets. Track error rates and bias mitigation. Document everything.
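The validation step can be as simple as two metrics computed on a labeled test set: an overall error rate against known-correct outcomes, and denial rates broken out by group to surface disparate impact. A minimal sketch, assuming the plan or vendor can supply labeled test data and group identifiers:

```python
def error_rate(predictions, labels):
    """Fraction of model recommendations that disagree with the labeled outcome."""
    assert len(predictions) == len(labels)
    wrong = sum(p != y for p, y in zip(predictions, labels))
    return wrong / len(labels)

def denial_rate_by_group(records):
    """records: iterable of (group, decision) pairs, decision in {'deny', 'approve'}.
    Returns each group's denial rate so outliers can be flagged for review."""
    totals, denials = {}, {}
    for group, decision in records:
        totals[group] = totals.get(group, 0) + 1
        if decision == "deny":
            denials[group] = denials.get(group, 0) + 1
    return {g: denials.get(g, 0) / totals[g] for g in totals}
```

Tracked over time and documented in committee minutes, numbers like these are the kind of evidence a prudence analysis looks for.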
DOL: Meeting Your Fiduciary Responsibilities
Vendor contracts
Most plans encounter AI through ASO arrangements with national carriers and TPAs. That does not dilute fiduciary accountability. Delegation requires diligence.
Insist on AI-specific terms. Push for transparency, auditability, and recourse when automated processes fail. Expect resistance, and plan for it, by tying provisions to ERISA duties and emerging state-law requirements.
- Human-in-the-loop: any denial based on medical necessity must be reviewed and finalized by a licensed clinician.
- Testing and audits: vendor disclosure of model purpose, training data sources, testing protocols, error rates, bias controls, and change management.
- Performance guarantees: SLAs and remedies that explicitly cover AI-related errors and delays.
- Indemnification: allocate AI-driven claim and compliance risk to the party controlling the system.
- Data rights and logs: access to decision logs and explanations sufficient for ERISA appeals and audits.
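For the data-rights term, it helps to specify in the contract what a usable decision log actually contains. The fields below are a hypothetical minimum for reconstructing an adverse determination on appeal; every value shown is illustrative, and the actual schema should be negotiated with the vendor.

```python
import json

# Hypothetical decision-log entry a plan might require from a vendor so an
# ERISA appeal can reconstruct how an adverse determination was reached.
entry = {
    "claim_id": "EXAMPLE-001",                      # illustrative identifier
    "plan_terms_cited": ["Sec. 4.2 medical necessity"],
    "clinical_guidelines": ["example guideline v3"],
    "model": {"name": "vendor-model", "version": "2025.01"},  # assumed fields
    "inputs_considered": ["diagnosis codes", "prior authorizations"],
    "recommendation": "deny",
    "explanation": "required documentation of criteria X not found",
    "human_reviewer": {"name": "Dr. Example", "license": "MD-0000"},
    "final_determination": "deny",
    "timestamp": "2025-01-15T12:00:00Z",
}
print(json.dumps(entry, indent=2))
```

A machine-readable format like this also makes the quarterly sample audits recommended below far easier to run.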
Legal landscape
State law
California's Physicians Make Decisions Act (effective January 1, 2025) bars insurers from relying solely on AI to deny claims based on medical necessity. Similar bills are surfacing in other states. Multistate plans should align ASO contracts and internal procedures with the most restrictive applicable standard to reduce conflict risk.
Federal agency signals
The DOL has warned (including in a now-withdrawn FMLA bulletin) that, without human oversight, automated decision tools can scale compliance failures. The Treasury Department has flagged concerns with AI bias, explainability, and privacy in financial services, issues that translate directly to health benefits. Expect more emphasis on human review, documentation, and accountability.
AI Disclosure Act (proposed)
A disclosure regime for AI-generated content has been proposed at the federal level. Even without a mandate, consider voluntary disclosures when AI is used in participant communications or claim workflows to reduce surprise and support trust.
Action items for plan sponsors and fiduciaries
- Inventory AI use: identify where AI touches claims adjudication, preauthorization, appeals, and participant communications, both internally and at all vendors.
- Update fiduciary governance: add AI to committee charters, agendas, and risk registers; assign ownership; set reporting cadence.
- Revise contracts: embed disclosure, audit rights, service levels, human review requirements, and indemnities for AI-related failures.
- Validate outputs: obtain vendor testing results, error rates, and bias mitigation evidence; review them with counsel and consultants; run sample denial audits each quarter.
- Track state laws: monitor developments (e.g., California) and update compliance playbooks and ASO terms accordingly.
- Educate fiduciaries: brief committees on AI limits, explainability, and ERISA obligations tied to automated systems.
- Maintain records: log inquiries, negotiations, testing reviews, and decisions to demonstrate prudence.
Bottom line
Use AI to speed the routine and surface the complex. Keep humans in control of clinical and final adverse determinations. Contract for transparency and remedies, measure performance, and document oversight. That is how you capture efficiency without sacrificing ERISA compliance or participant trust.
If your legal or benefits team needs practical upskilling on AI workflows and terminology, explore curated options at Complete AI Training - Courses by Job.