Taiwan passes AI Basic Act to balance innovation with privacy, transparency, and accountability

Taiwan passed its Artificial Intelligence Basic Act, naming the NSTC as lead agency and forming a Cabinet panel. Expect risk labels, privacy and bias checks, and stronger governance.

Published on: Dec 24, 2025

Taiwan Enacts Artificial Intelligence Basic Act: Key Points for Government and Legal Teams

Taiwan's Legislature has passed the third and final reading of the Artificial Intelligence Basic Act, establishing a national framework for the development and use of AI. "There are no textual revisions. By resolution, the draft Artificial Intelligence Basic Act is hereby enacted," said Deputy Legislative Speaker Johnny Chiang (江啟臣).

The law makes the National Science and Technology Council (NSTC) the central competent authority, with county and city governments responsible for local implementation. It also directs the Cabinet to form a committee, convened by the premier and including experts, industry representatives, agency heads, and local leaders, to set a national AI development framework.

Institutional Setup

  • Central authority: National Science and Technology Council (NSTC).
  • Local authority: County and city governments handle local oversight and support.
  • National coordination: A Cabinet-level committee will draft the country's AI development framework and align policy across ministries and stakeholders.

For background on the NSTC's role, see the agency's site: NSTC.

Policy Objectives and Guiding Principles

The government must promote AI research and application while balancing social welfare, digital equity, innovation, and national competitiveness. Seven principles anchor the law and provide a compliance baseline for public bodies and vendors working with them.

  • Sustainability and well-being
  • Human autonomy
  • Privacy protection and data governance
  • Cybersecurity and safety
  • Transparency and explainability
  • Fairness and non-discrimination
  • Accountability

Risk Controls and Prohibited Impacts

Authorities must prevent AI applications from infringing on people's lives, bodily integrity, freedom, or property. Systems that undermine social order, national security, or the environment are likewise out of bounds.

The act calls out specific risks: biased or discriminatory outcomes, false advertising, misinformation, and fabricated content. High-risk AI products or systems must carry clear notices or warnings so users understand potential impacts.

What to Prepare Now

Lawmakers from both major parties signaled that the next phase centers on enforcement measures, cross-ministerial coordination, and broad participation from industry and society. To be ready, agencies and contractors should move early on basic governance and documentation.

  • Inventory AI use cases across programs and classify potential risk; plan labeling and warnings for high-risk systems (a minimal register sketch follows this list).
  • Tighten privacy, data governance, and cybersecurity controls consistent with the seven principles.
  • Define accountability: owners, approvers, incident response, and escalation paths for AI-enabled services.
  • Set up checks for bias and discriminatory outcomes; monitor and correct with repeatable procedures (see the bias-check sketch below).
  • Adopt content provenance and verification steps to limit misinformation and fabricated content in public services (see the provenance sketch below).
  • Track guidance from the Cabinet committee and NSTC; participate in consultations to align procurements and pilots.
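
For the inventory step, even a lightweight register can make risk classification and labeling auditable. The sketch below is a minimal, illustrative Python example under assumed conventions; the RiskTier levels, field names, and "user notice" field are assumptions for illustration, not categories or terms defined by the act.

```python
# Minimal sketch of an AI use-case register with risk tiers and a labeling check.
# Tiers, field names, and the notice check are illustrative assumptions,
# not categories defined by the Artificial Intelligence Basic Act.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIUseCase:
    name: str                      # e.g. "benefits eligibility triage"
    owner: str                     # accountable unit or official
    risk_tier: RiskTier
    user_notice: str = ""          # text shown to affected users, if any
    bias_reviewed: bool = False    # has a bias/discrimination check been run?


def labeling_gaps(register: list[AIUseCase]) -> list[str]:
    """Return names of high-risk use cases that still lack a user-facing notice."""
    return [uc.name for uc in register
            if uc.risk_tier is RiskTier.HIGH and not uc.user_notice.strip()]


if __name__ == "__main__":
    register = [
        AIUseCase("chatbot FAQ assistant", "Service Desk", RiskTier.LOW),
        AIUseCase("benefits eligibility triage", "Social Affairs", RiskTier.HIGH),
    ]
    for name in labeling_gaps(register):
        print(f"Missing high-risk notice: {name}")
```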
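For bias monitoring, a repeatable procedure can start with comparing favorable-outcome rates across groups. The sketch below is illustrative only: the 0.8 cutoff is the common "four-fifths" heuristic, not a threshold set by the act, and real reviews would use properly governed data and more than one metric.

```python
# Minimal sketch of a repeatable bias check: compare favorable-outcome rates
# across groups and flag large gaps. The 0.8 ratio threshold is a common
# heuristic (the "four-fifths rule"), not a requirement of the act.
from collections import defaultdict


def selection_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    """records: (group, favorable_outcome) pairs; returns favorable rate per group."""
    totals: dict[str, int] = defaultdict(int)
    favorable: dict[str, int] = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        if outcome:
            favorable[group] += 1
    return {g: favorable[g] / totals[g] for g in totals}


def flag_disparities(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Flag groups whose rate falls below threshold times the best group's rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r / best < threshold]


if __name__ == "__main__":
    decisions = [("A", True), ("A", True), ("A", False),
                 ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(decisions)
    print(rates)                    # roughly {'A': 0.667, 'B': 0.333}
    print(flag_disparities(rates))  # ['B']
```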
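For provenance, recording a digest at publication time gives a basic way to verify later that public content has not been altered. The sketch below is a minimal illustration using hashing only; the record fields are assumptions, and production systems would more likely rely on digital signatures or manifest standards.

```python
# Minimal sketch of content provenance for published material: record a digest
# at publication time and verify it later. The record fields are illustrative;
# real deployments may use signing or manifest-based standards instead.
import hashlib
import json
from datetime import datetime, timezone


def provenance_record(content: bytes, source: str) -> dict:
    """Create a simple provenance record for a piece of published content."""
    return {
        "source": source,
        "published_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content).hexdigest(),
    }


def verify(content: bytes, record: dict) -> bool:
    """Check that content still matches the digest stored at publication time."""
    return hashlib.sha256(content).hexdigest() == record["sha256"]


if __name__ == "__main__":
    notice = "Public service announcement text".encode("utf-8")
    record = provenance_record(notice, source="agency press office")
    print(json.dumps(record, indent=2))
    print(verify(notice, record))            # True
    print(verify(b"tampered version", record))  # False
```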

Resources

Upskilling for Public Sector Teams

If you're building internal capability for AI policy, procurement, or oversight, see curated learning paths by job function: Complete AI Training - Courses by Job.

