Asia's companies lag on AI worker protections, transparency report shows
Only 7 per cent of more than 1,200 companies surveyed across Asia disclose any safeguards to protect workers from artificial intelligence risks, according to a new benchmark study by the AI Company Data Initiative (AICDI), backed by the Thomson Reuters Foundation and UNESCO.
The finding reveals a significant transparency gap between regions. European and North American companies report higher rates of worker protections, including dedicated complaint mechanisms for AI-related harms such as surveillance, algorithmic bias, and automated decision-making.
The absence of formal complaint channels limits companies' ability to detect problems early, undermines employee trust in AI systems, and increases the likelihood that issues go unreported, AICDI said in its analysis of nearly 3,000 global companies across industrial, technology, chemicals, metals, and mining sectors.
Company size drives the transparency divide
The study identified a structural reason for Asia's weak showing: smaller companies are far less likely to publicly disclose AI governance practices. In AICDI's sample, half of small-cap firms surveyed are based in Asia.
Large-cap companies significantly outpace smaller firms in reporting formal AI oversight bodies and dedicated governance resources. This suggests AI adoption patterns reflect both company size and regional industry structure, with technology leadership concentrated in larger North American firms and a more distributed profile among smaller Asian companies.
Worker training remains the exception, not the rule
Even in regions with stronger transparency, most companies are unprepared to help workers adapt to AI. Fewer than one in three firms worldwide offer any AI-related training to staff.
Where training exists, it typically reaches only leadership roles. This leaves frontline and non-technical workers, those most exposed to AI-driven changes, without basic knowledge of how AI tools function or how their roles may shift.
AICDI said this unstructured approach heightens risks by leaving workers without practical guidance on using AI systems safely.
Governance gaps persist globally
Fewer than half of companies report any formal AI strategy or guidelines, and most disclose little public information about the ethical impacts of their AI systems.
Investor demand for clarity is growing. A 2025 PwC study found that 42 per cent of investors want more transparency on companies' AI investments, while another 42 per cent seek clearer information on AI returns and cost savings.
AICDI noted that the same systems improving speed, cost, and personalisation can create scaled risks "often silently and unevenly when governance doesn't keep pace."
For HR professionals managing AI adoption, the report underscores the need for documented safeguards, worker training programs, and clear accountability structures.