AI & HR Legal Update: Highlights from North Carolina's Community College HR Conference
Womble Bond Dickinson attorney Gerard Clodomir presented "AI & HR Legal Update" at the first annual conference of the North Carolina Association of Community College Human Resources Professionals, held October 9-10 at Carteret Community College in Morehead City, N.C. The session covered how HR teams are using AI, current and emerging rules, state trends, case studies, and practical compliance steps.
How HR Teams Are Using AI
HR departments are testing AI for resume screening, candidate messaging, interview scheduling, and employee support chat. Some teams use tools for skills matching, performance insights, and policy Q&A. The opportunity is clear, but the risks are specific and manageable with the right controls.
Regulatory Focus Areas HR Should Track
Federal agencies are active. The Equal Employment Opportunity Commission has issued guidance on avoiding discriminatory impact when using algorithms in hiring and promotion. See the EEOC's technical assistance on AI and adverse impact for practical direction.
State and local activity is growing. New York City's Automated Employment Decision Tools (AEDT) law requires bias audits and candidate notices for certain automated hiring tools. Other states are moving in similar directions, so multi-state employers should align to the strictest applicable standard.
Compliance Strategies: Before, During, and Ongoing Use
- Before implementation
  - Define use cases, risks, and success metrics; avoid high-risk automation without human review.
  - Perform vendor due diligence: model purpose, training data, validation, bias testing, data retention, security, and support.
  - Map data flows; document lawful bases, notice needs, and confidentiality requirements.
  - Create clear policies for AI use, human oversight, accessibility, and accommodations.
  - Stand up cross-functional governance (HR, Legal, IT, DEI, Security).
- During deployment
  - Provide required notices to applicants/employees; obtain consent where applicable.
  - Validate tools for job-relatedness and business necessity; maintain documentation.
  - Keep a human in the loop for material decisions; allow appeals or human review.
  - Test for disparate impact on a recurring schedule and remediate promptly if found (a minimal check is sketched after this list).
  - Protect sensitive data; limit access; log usage and decisions.
- Ongoing oversight
  - Re-audit models after updates or drift; monitor accuracy and fairness over time.
  - Track legal changes across jurisdictions; update policies and notices accordingly.
  - Train HR and hiring managers on appropriate use, bias risks, and documentation.
  - Review vendor SLAs and incident response; sunset tools that fail standards.
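To make the bias-testing step concrete, here is a minimal Python sketch of the four-fifths (80%) rule, the screening heuristic referenced in the EEOC's technical assistance on adverse impact. The group labels, counts, and function names are hypothetical, and a ratio below 0.80 is a flag for human review, not a legal conclusion.

```python
# Minimal sketch of a recurring disparate impact check using the
# four-fifths (80%) rule. All group labels and counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants in a group who were selected."""
    return selected / applicants if applicants else 0.0

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest group's rate.

    outcomes maps a group label to (selected, total applicants).
    A ratio below 0.80 flags potential adverse impact for follow-up.
    """
    rates = {g: selection_rate(s, n) for g, (s, n) in outcomes.items()}
    top = max(rates.values())
    return {g: (r / top if top else 0.0) for g, r in rates.items()}

# Hypothetical screening results for one hiring cycle.
cycle = {"group_a": (48, 100), "group_b": (30, 100)}

for group, ratio in impact_ratios(cycle).items():
    flag = "REVIEW" if ratio < 0.80 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```

Running a check like this on each hiring cycle, and keeping the outputs, also builds the documentation trail regulators and auditors expect.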
Case Studies: Patterns to Avoid
- Screening tools that exclude qualified candidates based on proxy variables (e.g., employment gaps or ZIP codes) without validation.
- Chatbots that give inconsistent or discriminatory responses without guardrails or auditing.
- Automated assessments used as sole decision-makers, without accommodations or human review.
The fix is consistent: validate, document, notify, monitor, and keep humans accountable.
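As one illustration of the "document and keep humans accountable" piece, here is a minimal sketch of an append-only decision log that records whether a human reviewed each outcome. The file name, fields, and sample event are hypothetical placeholders.

```python
# Minimal sketch of an append-only log of AI-assisted decisions.
# File name, fields, and the sample event are hypothetical.
import json
from datetime import datetime, timezone

LOG_PATH = "ai_decision_log.jsonl"  # hypothetical audit file

def log_decision(tool: str, candidate_id: str, outcome: str,
                 reviewed_by: str | None) -> None:
    """Append one screening decision, recording any human reviewer."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "candidate_id": candidate_id,
        "outcome": outcome,
        "human_reviewer": reviewed_by,  # None means no human checkpoint yet
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

# Hypothetical usage: a screening decision with a named reviewer.
log_decision("ResumeRank", "cand-0042", "advance", reviewed_by="hr.manager")
```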
About Gerard Clodomir
Gerard Clodomir is an experienced litigator focused on state and federal employment laws. He has defended cases in state and federal courts and before agencies including the Equal Employment Opportunity Commission and the Department of Labor. He practices in Womble Bond Dickinson's Greensboro, N.C. office.
Next Steps for HR Leaders
- Run an inventory of any tools that screen, rank, score, or recommend candidates or employees (a simple record format is sketched after this list).
- Fill gaps: notices, validation documentation, bias testing cadence, and human review checkpoints.
- Educate your team and set a clear approval process before expanding AI use cases.
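As a starting point for that inventory, here is a minimal sketch of a per-tool record with a gap check covering notices, validation documentation, bias-audit recency, and human review checkpoints. The field names, audit interval, and example entry are hypothetical.

```python
# Minimal sketch of an AI tool inventory record with a compliance
# gap check. Field names and the example entry are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRecord:
    name: str
    vendor: str
    use_case: str                     # e.g., "resume screening"
    candidate_notice: bool = False    # notices provided to applicants?
    validation_docs: bool = False     # job-relatedness documentation on file?
    human_review: bool = False        # human checkpoint for material decisions?
    last_bias_audit: date | None = None

    def gaps(self, audit_interval_days: int = 365) -> list[str]:
        """Return the compliance gaps this record still has open."""
        missing = []
        if not self.candidate_notice:
            missing.append("candidate notice")
        if not self.validation_docs:
            missing.append("validation documentation")
        if not self.human_review:
            missing.append("human review checkpoint")
        if (self.last_bias_audit is None
                or (date.today() - self.last_bias_audit).days > audit_interval_days):
            missing.append("current bias audit")
        return missing

# Hypothetical entry for illustration.
screener = AIToolRecord("ResumeRank", "ExampleVendor", "resume screening",
                        candidate_notice=True, last_bias_audit=date(2024, 1, 15))
print(f"{screener.name}: open gaps -> {screener.gaps() or 'none'}")
```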
If your HR team needs practical upskilling on AI tools and workflows, explore curated learning paths by role at Complete AI Training.