From Foundry to Framework: Taiwan's Draft AI Basic Act Balances Growth with Safeguards

Taiwan's Draft AI Basic Act sets a government-first, risk-based playbook. Expect clearer liability, data rights, and transparency as chip-led AI grows.

Published on: Dec 28, 2025

Taiwan's Draft AI Basic Act: strategy, risks, and what to do next

Taiwan moved on August 28, 2025 to formalize AI governance with the Draft Artificial Intelligence Basic Act. It sets a government-first framework led by the Ministry of Digital Affairs, while leaving sector-specific rules to be built out after passage.

The timing is deliberate. Taiwan's chip industry, now projected to exceed NT$6.4 trillion in 2025, puts the island at the center of AI infrastructure. The Act aims to align that hardware advantage with policy, risk controls, and international cooperation without freezing private innovation.

At a glance: who, what, when, where, why

  • Who: Executive Yuan (drafter), Legislative Yuan (review), Ministry of Digital Affairs (lead), plus NSTC, FSC, FTC, sectoral regulators.
  • What: A "basic act" setting 15 government objectives: resource allocation, regulatory updates, AI education, risk classification, liability rules for high-risk AI, data governance, and international cooperation.
  • When: Submitted August 28, 2025, building on AI Action Plan 1.0 (2018-2021), AI Action Plan 2.0 (2023-2026), and multiple guidelines issued 2019-2024.
  • Where: Taiwan-wide; impacts semiconductors, financial services, government operations, startups, and research.
  • Why: Balance growth and risk after the surge of generative AI; close gaps in liability, data use, fairness, and consumer protection; align with global standards while preserving flexibility.

What the Draft AI Basic Act actually does

It does not regulate the private sector directly. Instead, it sets the playbook for government and signals how sectoral rules will be built.

  • Governance and scope: Ministry of Digital Affairs named as competent authority; sectoral regulators get latitude to adapt rules by industry.
  • 15 policy objectives, including: funding, tax and finance incentives, regulatory adjustments, sandboxes, public-private partnerships, international R&D, AI education, safety and bias prevention, risk classification and accountability, open data and reuse, and workforce transition support.
  • Risk-first approach: A national risk classification framework aligned with international norms (think the EU's risk tiers) that regulators can adopt by sector. High-risk AI gets clear liability conditions, remedies, compensation, or insurance.
  • R&D carve-out: Pure R&D is exempt from accountability mechanisms unless it is tested in the real world or used to deliver services.
  • Government use of AI: Agencies must run risk assessments, set internal controls, and disclose AI usage appropriately.

How this fits with Taiwan's existing guidance

Taiwan has used non-binding guidance to move fast without freezing innovation. Expect those guardrails to become the base layer for sectoral rules after the Act passes.

  • AI R&D Guidelines (2019): Human-centered values, safety, privacy, transparency, explainability, accountability. Voluntary, but influential.
  • Public Sector Gen AI Guidelines (2023): No classified or personal data into GenAI, disclosure where appropriate, AI output cannot replace independent judgment.
  • Financial AI Guidelines (2024): Governance, fairness, privacy, security, transparency, sustainability. Risk assessments across the AI lifecycle for banks, insurers, and securities firms.
  • Draft AI Evaluation Guidelines (2024): Four risk levels (unacceptable, high, limited, low); TEVV (testing, evaluation, validation, and verification) covering safety, explainability, reliability, fairness, accuracy, and security. Third-party testing centers recommended for high-risk products.
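The four-tier scheme above lends itself to a simple portfolio mapping exercise. Here is a minimal Python sketch; the system names and tier assignments are hypothetical, and only the tier names and the third-party-testing recommendation come from the draft guidelines:

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    # The four levels named in the Draft AI Evaluation Guidelines (2024).
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    LOW = "low"


@dataclass
class AISystem:
    name: str
    tier: RiskTier


def needs_third_party_testing(system: AISystem) -> bool:
    # The draft guidelines recommend third-party testing centers
    # for high-risk products.
    return system.tier is RiskTier.HIGH


# Hypothetical inventory; a real mapping would follow sectoral rules.
inventory = [
    AISystem("credit-scoring-model", RiskTier.HIGH),
    AISystem("chat-faq-bot", RiskTier.LIMITED),
]

high_risk = [s.name for s in inventory if needs_third_party_testing(s)]
```

Even a toy inventory like this makes the coming compliance question concrete: which systems plausibly land in the high tier, and what evidence exists for each.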

These align broadly with global trends toward risk-based governance; the EU AI Act's risk tiers, summarized by the European Parliament, offer useful context for comparison.

Fairness, transparency, and accountability

Fairness is non-negotiable across Taiwan's documents: avoid bias and discrimination; use diverse, high-quality data; enable external feedback; keep a human meaningfully involved where it matters. Financial and evaluation guidelines call for explainability and disclosures so people understand if AI is in the loop.

Accountability shows up in two places: internal governance (controls, oversight, audits) and external responsibility (clear liability for high-risk AI, remedy, compensation, insurance). Expect sectoral regulators to define thresholds, proof standards, and audit expectations.

Data policy and IP: where risks show up

Privacy and data governance are core principles. The Act pushes data minimization, privacy by design/default, and greater use of open or non-sensitive data. A separate draft Act for Data Innovation (June 2025) aims to create legal mechanisms for data access, sharing, and reuse across sectors.

On IP, recent rulings by the Taiwan Intellectual Property Office stress two points: training on third-party works needs authorization, and AI-only outputs without human creative input are not protected by copyright. Teams should build licensing strategies, log data provenance, and document human contribution in outputs.

Liability: gaps today, structure tomorrow

Current liability relies on the Civil Code and Consumer Protection Act. Proving causation and fault with AI systems is hard, especially without clear industry standards. This is acute in autonomous vehicles and medical applications.

The Draft AI Basic Act directs government to clarify liability allocation for high-risk AI. Expect sectoral rules to define who bears responsibility and how insurance and remedy schemes apply.

Competition and consumer protection

The Fair Trade Commission is watching AI-enabled conduct: personalized pricing, data-fueled discrimination, concerted actions, and misleading ads. Enforcement is already happening: see the 2024 fine over a self-learning keyword-ad system that used a competitor's name.

Separately, the 2024 Fraud Crime Harm Prevention Statute requires disclosure when ads use deepfakes or AI-generated personal images. Expect more transparency requirements for content authenticity and ad integrity.

Government and judiciary use of AI

Courts are piloting practical tools: custody outcome prediction (research-facing), sentencing information systems for consistency, and draft-generation assistants for DUI and fraud cases. The Judicial Yuan has emphasized that final decisions stay with judges, with guidance and external consultation to keep standards high.

Semiconductors and the bigger bet

Taiwan's foundries supply the compute backbone for global AI. Policy now aims to convert hardware dominance into durable advantages: trusted AI, test and evaluation capacity, and international partnerships. Expect incentives, sandboxes, and cross-border research to expand around chips-plus-AI integration.

What leaders should do now

Executives and Strategy

  • Map your AI portfolio to the coming risk tiers; flag anything plausibly "high-risk."
  • Stand up AI governance: decision rights, model inventory, audit trails, incident response, human oversight gates.
  • Secure training data rights; document data lineage and licenses. Track human contribution in outputs.
  • Budget for third-party testing or evaluation for sensitive uses; prep for disclosures and logs.

Government and Public Sector

  • Implement agency-level AI risk assessments and internal control rules now; document AI usage and limitations.
  • Adopt privacy by design/default; use open/non-sensitive datasets where possible.
  • Participate in sandboxes and international testbeds; share evaluation results to build trust.

IT and Development

  • Integrate TEVV-like testing gates: safety, explainability, reliability, fairness, accuracy, and security.
  • Instrument systems for traceability: dataset versions, labels, feature lineage, model versions, prompts, and outputs.
  • Build human-in-the-loop controls for sensitive tasks and record interventions.
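The testing-gate and traceability bullets above can be sketched together. A minimal Python illustration follows; the dimension thresholds are made up for the example, and real thresholds would come from sector-specific rules:

```python
import datetime
import hashlib
import json

# Hypothetical TEVV-style release gate: each evaluation dimension maps to a
# score in [0, 1] and a minimum threshold a candidate model must clear.
# The dimensions mirror the draft guidelines; the numbers are illustrative.
THRESHOLDS = {
    "safety": 0.90,
    "explainability": 0.70,
    "reliability": 0.95,
    "fairness": 0.85,
    "accuracy": 0.90,
    "security": 0.90,
}


def gate(scores: dict) -> tuple[bool, list]:
    """Return (passed, failing_dimensions) for a candidate model."""
    failures = [d for d, t in THRESHOLDS.items() if scores.get(d, 0.0) < t]
    return (not failures, failures)


def trace_record(model_version: str, dataset_version: str,
                 scores: dict, passed: bool) -> dict:
    """Build an auditable record tying model, data, and results together."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "dataset_version": dataset_version,
        "scores": scores,
        "passed": passed,
    }
    # Content digest makes later tampering with the record detectable.
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record
```

The point of the sketch is the shape, not the numbers: a gate that blocks deployment when any dimension falls short, and a record that links model version, dataset version, and scores for later audit.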

Legal and Compliance

  • Draft model risk policies; define approval thresholds and model change controls.
  • Review IP exposure in training, fine-tuning, and generation; update supplier and data contracts.
  • Prepare sector-specific compliance playbooks (finance, healthcare, mobility, ads) anticipating local rules.
  • Assess liability coverage and consider insurance for high-risk AI deployments.

What to watch next

  • Final text and passage of the AI Basic Act; any expansion of agency powers.
  • Sectoral rules defining "high-risk," documentation packs, testing requirements, and disclosure formats.
  • Data Innovation legislation and open data mechanisms that affect AI training pipelines.
  • FTC actions on personalized pricing, data-driven discrimination, and AI-enabled ad practices.
  • Judicial guidance on AI use in courts and any early case law on AI liability or copyright.

Education and workforce

The Act pushes AI education across schools, industry, and government. For teams building role-specific capability, curated training by job function can speed adoption while reducing risk.


Timeline

  • 2017 - "Grand Strategy for a Small Country" project launched to invest in the AI ecosystem
  • 2018 - Taiwan AI Action Plan 1.0 (2018-2021) launched
  • September 2019 - AI Technology R&D Guidelines issued by NSTC
  • 2022 - Taiwan Bar Association allows lawyer ads on approved platforms
  • Late 2022 - Generative AI surge strengthens Taiwan's hardware role
  • February 2023 - Judicial Yuan launches AI sentencing information system
  • June 2023 - Executive Yuan approves AI Action Plan 2.0 (2023-2026)
  • August 2023 - Public Sector Gen AI Guidelines issued
  • December 2023 - FTC publishes White Paper on Digital Economy competition policy
  • March 2024 - Draft AI Evaluation Guidelines released (risk levels)
  • April 2024 - FTC fines Agoda for AI-powered keyword ad confusion
  • June 2024 - FSC issues Financial AI Guidelines
  • July 2024 - Fraud Crime Harm Prevention Statute mandates deepfake disclosure in ads
  • August 2025 - Executive Yuan submits Draft AI Basic Act; MODA designated as competent authority
  • November 2025 - Comprehensive analysis published ahead of Legislative Yuan review

Bottom line

Taiwan is moving with a pragmatic model: government-led guardrails, risk-based evaluation, and sectoral rules to come. For leaders, the smart move is to build governance, testing, and data rights into your AI stack now, so you're ready when the details land.

