AI in Absence and Disability Management: Insights from Bryon Bass on Compliance, Risks, and Practical Solutions
Bryon Bass of DMEC discusses AI’s role in workplace disability and leave management, highlighting the need for human oversight and clear policies. Employers must balance AI benefits with legal compliance and privacy.

Podcast We Get AI for Work™: An Exclusive Interview with Bryon Bass, CEO at the Disability Management Employer Coalition (DMEC)
INTRO
Technology is reshaping workplace law, especially around leaves and accommodations, faster than ever before. On this episode of We Get AI for Work™, Bryon Bass, CEO of the Disability Management Employer Coalition (DMEC), shares insights on the risks and benefits employers face when adopting AI in decision-making. Co-host Joe Lazzarotti, principal in Tampa and co-leader of the Privacy, Data and Cybersecurity Group, explores how organizations can use AI while staying compliant with overlapping federal and state requirements, and what that means for business.
AI and Compliance: A Delicate Balance
Joe Lazzarotti: Bryon, you work closely with many organizations on disability and absence management. What are your thoughts on the recent shifts in AI guidance from agencies like the EEOC?
Bryon Bass: We're seeing a lot of guidance released, then pulled back, especially from the EEOC and Department of Labor (DOL). Despite this, enforcement remains anchored to existing laws focused on preventing discrimination. The key concern is avoiding disparate impact when using AI tools in HR decisions, particularly around leave, disability, and accommodations.
The DOL advises against using AI for final eligibility decisions under laws like the Family and Medical Leave Act (FMLA). Employers must ensure AI tools don't inadvertently infringe on employee rights. Also, state laws add complexity—many restrict how AI can use employee data and often require employee consent.
State Moratorium on AI Regulation: What’s Next?
Lazzarotti: What’s your take on the proposed ten-year moratorium in the “One Big Beautiful Bill Act” (H.R. 1)?
Bass: While the bill proposes a moratorium on state AI laws, its enforceability is questionable due to federalism issues. States like California and New York may push back, possibly leading to legal challenges on whether such a moratorium can stand.
DMEC’s AI Think Tank: Survey Insights
DMEC launched an AI think tank last year, bringing together employer members, brokers, consultants, and technology professionals working in disability and absence management. The group surveyed 130 professionals, mostly employers, to gauge AI understanding and readiness.
Only 60% reported having even a basic understanding of AI, and many conflate simple automation with true AI, such as large language models or generative AI. Just 30% had formal AI policies governing employee benefits decisions, highlighting a gap between AI adoption and AI governance.
The top perceived benefit was efficiency, with 85% believing AI can streamline processes. Challenges include system integration, unclear compliance rules, and lack of transparency in AI decision-making. Respondents expressed demand for case studies, ethical guidelines, and practical tools, which DMEC plans to address in upcoming sessions and white papers.
Using AI for Sensitive Data: Transcription and Privacy Concerns
Lazzarotti: Many HR teams are experimenting with AI transcription tools for meetings. Did your survey or think tank discuss potential issues there?
Bass: While transcription wasn't a focus of the survey, the topic comes up often in our subcommittees. Organizations need clear policies on when and how transcription services are used. Some tools, such as Microsoft's Copilot, can summarize meetings without capturing full personal details, reducing privacy risks.
Additionally, restricting AI access to sensitive folders or data libraries is crucial. These steps help lower the chance of exposing confidential or personal employee information.
Performance Monitoring and Disability Accommodations
Lazzarotti: With remote work on the rise, performance management platforms are popular. How do these tools impact employees with disabilities?
Bass: This is a concern. Monitoring tools often don’t account for disabilities that affect typing speed, vision, or cognition. They may flag an employee as underperforming without context. Ideally, accommodations should be individualized, but AI monitoring rarely reflects this nuance.
Employers should be cautious and consider how algorithms are designed and tested, ensuring they don’t unfairly penalize employees with disabilities.
AI Governance: Three Key Recommendations
Lazzarotti: Finally, from a governance standpoint, what are three critical considerations for employers adopting AI in absence and disability management?
Bass: First, human oversight is essential. AI predictions can carry biases, especially when built on historical data reflecting socioeconomic or racial disparities. For example, predictive models in insurance have in some cases unfairly raised rates for older or Black customers.
Second, transparency around AI algorithms is crucial. Employers must understand how decisions are made and validate that AI tools comply with nondiscrimination laws.
Third, formal policies and continuous education are needed. Employers should establish clear guidelines for AI use, monitor compliance, and keep teams informed as AI technologies evolve.
DMEC is committed to providing employers with tools and resources to address these challenges responsibly. For those interested in learning more about AI's role in workplace management, courses on AI ethics and compliance can be a valuable starting point.