AI in Hiring and Firing: Why Employers Remain Legally Accountable for HR Decisions

AI in hiring boosts efficiency but raises legal risks around bias and fairness. Employers must ensure transparency, maintain human oversight, and document decisions carefully.

Published on: Jun 17, 2025

Hiring or Firing with AI? Legal Risks Employers Must Know

AI tools are increasingly integrated into recruitment processes worldwide, including Singapore. Platforms like LinkedIn’s Hiring Assistant are automating repetitive tasks such as resume screening and interview scheduling to boost efficiency. While this automation addresses time-consuming tasks—especially when recruiters spend hours manually reviewing CVs—it raises critical questions about fairness and transparency in hiring decisions.

Employers adopting AI in HR face scrutiny over how these tools make decisions, and whether they can justify those decisions if challenged legally. This concern is heightened in Singapore, where the upcoming Workplace Fairness Act 2025 will tighten obligations on employers to prevent bias in employment.

The Risk of Algorithmic Discrimination

Christopher Tan, Partner at K&L Gates, highlights that the primary legal risk lies in algorithmic discrimination. AI systems often function as “black boxes,” offering little insight into how conclusions are reached. Without clear disclosure on training data, processes, and bias safeguards from AI developers, defending hiring decisions based on these tools becomes difficult.

Unlike human recruiters, AI cannot be cross-examined or required to explain its choices. If challenged on fairness, employers must be able to explain their decision-making process clearly. Since AI tools rarely provide this transparency, responsibility ultimately falls on the employer to ensure decisions are unbiased.

When Is AI Use Legally Defensible?

AI can be appropriate for filtering objective data such as educational qualifications or licensing requirements. For example, excluding candidates who lack recognized legal qualifications is a defensible use of AI filters (a minimal sketch follows below). Similarly, roles with genuine occupational requirements, such as lawfully gender-specific positions, may justify AI-based filtering.

However, using AI to assess subjective factors like cultural fit or promotion suitability is risky. Current AI technology cannot reliably justify such nuanced decisions. In these cases, significant human judgment must remain central.
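
To make the objective-filtering case concrete, here is a minimal sketch in Python of a transparent, rule-based screen. The field names, the recognised-qualification set, and the licence check are hypothetical examples rather than any particular vendor’s logic; the point is that every exclusion is tied to a stated, objective criterion that can be explained if challenged.

  # Minimal sketch of a transparent, rule-based screen for objective criteria.
  # Field names and accepted values are hypothetical examples.

  RECOGNISED_QUALIFICATIONS = {"LLB", "JD"}  # illustrative set for a legal role

  def objective_screen(candidate):
      """Return (passed, reason) so every exclusion can be explained later."""
      if candidate.get("qualification") not in RECOGNISED_QUALIFICATIONS:
          return False, "no recognised legal qualification"
      if not candidate.get("licence_valid", False):
          return False, "practising licence requirement not met"
      return True, "meets all objective criteria"

  candidates = [
      {"id": "A-001", "qualification": "LLB", "licence_valid": True},
      {"id": "A-002", "qualification": "BSc", "licence_valid": True},
  ]

  for c in candidates:
      passed, reason = objective_screen(c)
      print(c["id"], "PASS" if passed else "EXCLUDE", "-", reason)

Because the rules and reasons are explicit, the same output can be produced again later if a screening decision is ever questioned.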

Current AI Adoption and Growing Legal Scrutiny

AI use in HR remains limited in Singapore, with few reported legal issues so far. But this is likely to change as adoption grows and the Workplace Fairness Act takes effect around 2026 or 2027. Under this legislation, employers are fully responsible for any discriminatory hiring or firing practices—even when using third-party AI tools. There is no legal exemption shifting blame to AI developers.

Reputational Risks Beyond Legal Exposure

Even without lawsuits, biased AI tools can damage an organization’s reputation. Public trust erodes quickly if a company is seen relying on unfair AI in hiring or termination decisions. This can undermine employer branding and hamper talent acquisition efforts.

HR teams must thoroughly vet AI tools and seek transparency on how systems operate. Using a biased AI tool may reflect poorly on a company’s oversight and due diligence.

Questions HR Should Ask About AI Tools

  • What data was the AI trained on? Was it large and diverse enough?
  • What safeguards prevent bias?
  • Are there internal test results showing performance across demographics? (A simple check is sketched below.)
  • Is personal data (like CVs) being used to train the system further?

Employers should be cautious, especially when using free AI platforms that may incorporate user data into their models, raising privacy concerns.
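
The demographics question above can also be checked directly on your own screening outcomes. Below is a minimal sketch that computes selection rates per demographic group and flags any group whose rate falls under 80% of the highest group’s rate. The 0.8 threshold follows the widely cited “four-fifths” heuristic from US employment practice and is used here only as an illustrative benchmark, not a Singapore legal standard; the group labels and data are made up.

  # Minimal sketch of an adverse-impact check on screening outcomes.
  # The 0.8 threshold is the illustrative "four-fifths" heuristic.

  from collections import defaultdict

  def selection_rates(outcomes):
      """outcomes: list of (group_label, was_selected) pairs."""
      totals, selected = defaultdict(int), defaultdict(int)
      for group, ok in outcomes:
          totals[group] += 1
          selected[group] += int(ok)
      return {g: selected[g] / totals[g] for g in totals}

  def adverse_impact_ratios(rates):
      """Each group's selection rate divided by the highest group's rate."""
      top = max(rates.values())
      return {g: r / top for g, r in rates.items()}

  outcomes = [("group_a", True), ("group_a", True), ("group_a", False),
              ("group_b", True), ("group_b", False), ("group_b", False)]

  ratios = adverse_impact_ratios(selection_rates(outcomes))
  for group, ratio in ratios.items():
      flag = "review" if ratio < 0.8 else "ok"
      print(group, round(ratio, 2), flag)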

Documenting the Decision Process and Maintaining Human Oversight

Documentation must go beyond final hiring outcomes. Employers should record how AI tools were used, what criteria were applied, and where human judgment influenced decisions. AI should support, not replace, human decision-making.
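
As a minimal sketch of what such a record could look like, the following appends one audit entry per decision to a simple JSON-lines log. The file name and field names are illustrative; the essential elements are the AI tool’s output, the criteria applied, and the human reviewer’s rationale.

  # Minimal sketch of a decision-audit record, assuming a simple
  # append-only JSON-lines log; field names are illustrative.

  import json, datetime

  def log_decision(path, candidate_id, ai_output, criteria, reviewer, rationale):
      record = {
          "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
          "candidate_id": candidate_id,
          "ai_tool_output": ai_output,   # what the tool recommended
          "criteria_applied": criteria,  # the objective criteria used
          "human_reviewer": reviewer,    # who exercised judgment
          "human_rationale": rationale,  # why the final call was made
      }
      with open(path, "a") as f:
          f.write(json.dumps(record) + "\n")

  log_decision("hiring_log.jsonl", "A-002",
               ai_output="excluded: no recognised qualification",
               criteria=["recognised legal qualification", "valid licence"],
               reviewer="hr.lead@example.com",
               rationale="confirmed exclusion after manual CV check")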

AI tools are suitable for filtering objective data but should never be the ultimate decision-maker. Proper training in AI tool usage, including prompt engineering, is crucial to ensure effective and fair application.

AI in Redundancy and Performance Evaluations

Some employers are using AI to flag underperforming employees or shortlist candidates for retrenchment. For example, professional services firms may use utilisation metrics to identify employees below a threshold.

AI can flag employees for review, but the final decision must consider context, such as contributions not captured by raw data. Supervisors’ input remains vital to avoid unfair outcomes.

Similarly, roles measured by sales or productivity metrics can use AI for initial filtering, but human judgment is needed to interpret the full picture.
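
A minimal sketch of this flag-for-review pattern, assuming a hypothetical utilisation metric and a made-up 60% threshold: the script produces a referral queue for supervisors, not a retrenchment list.

  # Minimal sketch: flag employees below a utilisation threshold for
  # *human* review rather than automatic action. Threshold and field
  # names are hypothetical.

  UTILISATION_THRESHOLD = 0.6

  employees = [
      {"id": "E-101", "utilisation": 0.82},
      {"id": "E-102", "utilisation": 0.45},  # below threshold
  ]

  flagged = [e for e in employees if e["utilisation"] < UTILISATION_THRESHOLD]

  for e in flagged:
      # Output is a review-queue item, not a decision: a supervisor
      # must weigh context the metric misses (mentoring, leave, etc.).
      print(f"{e['id']}: utilisation {e['utilisation']:.0%} -> refer to supervisor")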

Practical Steps for Responsible AI Use in HR

  • Understand the AI tools you adopt thoroughly.
  • Train HR personnel on proper AI usage and prompt engineering techniques.
  • Use AI to filter and flag candidates based on objective criteria.
  • Maintain human oversight for all subjective or final employment decisions.
  • Document the entire decision-making process clearly.

AI should serve HR teams by handling repetitive tasks, freeing time for nuanced human judgment. Proper controls, transparency, and responsible use can reduce legal risks and protect your organization’s reputation.

For HR professionals looking to enhance their AI skills responsibly, exploring targeted training on AI tools and prompt engineering can be valuable. Courses are available at Complete AI Training to help HR teams implement AI effectively and ethically.