We Get AI for Work™: Insights from Bryon Bass, CEO of the Disability Management Employer Coalition (DMEC)
Technology is reshaping workplace law faster than many anticipated, especially around leaves and accommodations. In a recent episode of We Get AI for Work™, Bryon Bass, CEO of DMEC, shared practical perspectives on how employers can adopt AI responsibly while staying compliant with complex federal and state regulations.
Balancing AI Adoption with Compliance
Bryon highlights that despite shifts in government guidance, especially from agencies like the Equal Employment Opportunity Commission (EEOC) and the Department of Labor (DOL), the core legal principles remain unchanged: employers must avoid discrimination and protect employee rights under laws such as the Family and Medical Leave Act (FMLA).
The DOL cautions employers against using AI tools to make final eligibility determinations involving serious health conditions. Employers must ensure these tools do not inadvertently reduce employee protections or create disparate impacts.
Given the patchwork of state laws expanding leave rights and regulating AI use in employment decisions, employers face added complexity. Some states require employee consent before using AI on their data, which further complicates implementation.
Understanding Employer Readiness and AI Knowledge
DMEC's AI think tank surveyed 130 professionals in absence and disability management. Results showed only 60% had a basic understanding of AI, reflecting a widespread knowledge gap. Many confuse simple automation with true AI technologies like large language models and generative AI.
Only about 30% of respondents had formal AI policies governing employee benefits decisions, suggesting many organizations are using AI without clear guidelines. Even so, 85% cited efficiency as AI's top potential benefit, despite concerns about systems integration, compliance ambiguity, and lack of transparency.
DMEC is working on resources such as white papers, ethical guidelines, and practical tools to help employers build effective AI governance and vendor assessment strategies.
Handling Sensitive Data and AI Transcription Tools
Many organizations are turning to AI transcription services to streamline note-taking and meeting summarization. Bryon notes that emerging AI tools, like Microsoft’s Copilot, offer features to summarize meetings without capturing sensitive or identifiable information in full transcripts.
Employers should establish clear internal policies on when and how transcription AI is used, balancing efficiency with privacy and data protection concerns. Limiting AI access to sensitive data folders is another practical safeguard to reduce risks.
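One way to operationalize the folder-level safeguard described above is to check every file path against an explicit allowlist before any document reaches an AI tool. The folder names and function below are hypothetical, shown only as a minimal sketch of the pattern:

```python
from pathlib import Path

# Hypothetical allowlist: folders an AI transcription/summarization tool
# may read from. HR and medical-leave folders are deliberately excluded.
ALLOWED_FOLDERS = [
    Path("shared/meeting-notes"),
    Path("shared/public-docs"),
]

def is_ai_accessible(file_path: str) -> bool:
    """Return True only if the file sits inside an approved folder."""
    path = Path(file_path).resolve()
    for folder in ALLOWED_FOLDERS:
        try:
            # relative_to() raises ValueError if path is outside folder
            path.relative_to(folder.resolve())
            return True
        except ValueError:
            continue
    return False
```

In practice, a check like this would sit in the integration layer between the document store and the AI service, so that a file in, say, a medical-leave folder is never submitted for summarization in the first place.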
Monitoring Remote Workers and Disability Considerations
Performance monitoring platforms for remote employees raise additional issues. These tools often fail to account for disabilities, such as visual impairments or cognitive conditions, that can affect typing speed or task completion.
Employers must be cautious when interpreting performance data from AI monitoring tools. Individualized assessments remain crucial, as automated metrics may not accurately reflect an employee’s true performance or circumstances.
Three Governance Essentials for AI in Absence and Disability Management
- Human Oversight: Always incorporate human review in AI-driven decisions. Predictive models can unintentionally embed bias, so oversight helps catch and correct unfair outcomes.
- Transparency: Understand and clarify how AI algorithms work. Ask vendors detailed questions about their data sources, model training, and testing procedures to ensure fairness and compliance.
- Policy Development: Formalize organizational policies on AI use, including employee data protection, consent requirements, and ethical guidelines aligned with federal and state laws.
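The human-oversight principle above can be sketched as a simple gate that decides whether an AI recommendation may proceed or must be routed to a human reviewer. The data structure, outcome labels, and threshold here are illustrative assumptions, not any vendor's actual API:

```python
from dataclasses import dataclass

@dataclass
class LeaveRecommendation:
    """Hypothetical output from an AI leave-eligibility model."""
    employee_id: str
    suggested_outcome: str  # e.g. "approve" or "deny"
    confidence: float       # model's self-reported score, 0.0-1.0

def requires_human_review(rec: LeaveRecommendation,
                          threshold: float = 0.99) -> bool:
    """Never auto-finalize adverse outcomes; route low-confidence
    approvals to a reviewer as well."""
    if rec.suggested_outcome != "approve":
        return True  # denials always get individualized human review
    return rec.confidence < threshold
```

The design choice worth noting is the asymmetry: favorable outcomes may be streamlined, but any adverse or uncertain recommendation always reaches a person, which is the pattern the DOL and EEOC guidance discussed earlier points toward.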
Bryon warns against over-reliance on AI prediction without scrutiny, citing examples where insurance companies’ algorithms led to unfair treatment of older adults and minority groups due to biased data.
Looking Ahead
AI tools will continue to evolve and expand in HR and disability management. Employers should start by building foundational knowledge, establishing clear policies, and applying human judgment alongside AI outputs. Resources like DMEC’s upcoming AI-focused sessions and white papers can provide valuable guidance.
For managers interested in strengthening their AI knowledge and governance skills, exploring targeted training can be a solid next step. Visit Complete AI Training’s course offerings to find programs tailored for management professionals.