Marcos pushes fast AI adoption in government: what agencies should do now
In Busan after the APEC Leaders' Meeting in Gyeongju, President Ferdinand Marcos Jr. said the government will integrate artificial intelligence "as much as we can, as soon as we can." He warned, "You're missing a chance if you wait. AI is going to come; it's like a wave. No matter what you do, you'll get wet."
He added that adoption must be secure and responsible: "If you do not learn how to use AI in the best way, and in a secure way, and in a benevolent way, you will really be left behind." The administration will seek further studies and expert consultation. The Private Sector Advisory Council (Education and Jobs) has also met with him to discuss AI literacy to prepare the workforce and grow local capacity to build AI systems.
What this means for government teams
The message is clear: move now, but move safely. Below is a practical plan agencies can start within 90 days while broader policies are finalized.
- Appoint an AI lead and cross-functional task force. Include operations, IT, data, legal, procurement, and risk.
- Map high-volume use cases. Rank by impact, feasibility, data readiness, and risk. Start with 3 quick wins.
- Launch pilots with clear guardrails. Examples: document summarization for case files, citizen inquiry triage, procurement spend analysis. Keep humans in the loop.
- Publish a first-cut AI use policy. Cover acceptable use, data classification, human oversight, records retention, audit logging, and incident reporting.
- Adopt proven risk frameworks. Use the NIST AI Risk Management Framework and the OECD AI Principles to anchor evaluations.
- Protect privacy and security from day one. Redact PII in prompts, enforce least-privilege access, require vendor security attestations, and enable content filtering.
- Update procurement. Use outcome-based specifications, sandbox clauses that let pilots scale, portability and exit terms, and explicit requirements for bias testing, safety, and uptime.
- Skill the workforce. Run AI literacy sprints for all staff; add role-based tracks for analysts, policy writers, and frontline service teams. Consider curated options like Complete AI Training: Courses by Job.
- Set governance gates. Require risk assessments for new use cases, bias testing on datasets, and periodic independent reviews.
- Measure and share results. Track cycle time, error rates, citizen satisfaction, and cost per case. Publish a simple dashboard to build trust.
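The use-case mapping step above can be sketched as a simple weighted score. This is a hypothetical illustration, not an official methodology: the criteria come from the list (impact, feasibility, data readiness, risk), but the weights, the 0-10 scale, and the candidate scores are placeholder assumptions an agency would calibrate itself.

```python
# Illustrative sketch: rank candidate AI use cases on the four criteria
# named above. Weights and scores are placeholders, not a standard.

def score_use_case(impact, feasibility, data_readiness, risk,
                   weights=(0.35, 0.25, 0.25, 0.15)):
    """Return a 0-10 priority score; higher risk lowers the total."""
    w_impact, w_feas, w_data, w_risk = weights
    return (impact * w_impact
            + feasibility * w_feas
            + data_readiness * w_data
            + (10 - risk) * w_risk)  # invert risk so lower risk scores higher

# Example candidates drawn from the pilot list above; scores are invented.
candidates = {
    "document summarization": score_use_case(8, 9, 7, 3),
    "citizen inquiry triage": score_use_case(9, 7, 6, 4),
    "procurement spend analysis": score_use_case(7, 6, 8, 2),
}

# Highest score first; the top three become the quick wins.
ranked = sorted(candidates, key=candidates.get, reverse=True)
```

Even a rough model like this forces the task force to state its assumptions and makes the ranking auditable, which matters more than the exact weights.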
Guardrails before scale
AI should help civil servants work faster and make fewer mistakes, not add risk. Put these basics in place before you expand pilots.
- Human oversight. Keep a reviewer for decisions that affect benefits, enforcement, or eligibility.
- Records and transparency. Log prompts, outputs, and approvals. Provide public notices for AI-assisted services.
- Data quality. Clean source data and document lineage. Poor inputs will produce unreliable outputs.
- Model choice. Evaluate cloud, on-prem, and open-source options. Avoid lock-in through open standards and data portability.
- Fairness and accessibility. Test for bias across demographics and meet accessibility requirements for citizens and staff.
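Two of the guardrails above, redacting PII before prompts reach a model and logging prompts, outputs, and approvals, can be combined in a small pipeline. The sketch below is a minimal illustration under assumed requirements: the regex patterns, field names, and `audit_record` helper are hypothetical, and real deployments need locale-specific PII rules and tamper-evident log storage.

```python
# Hypothetical guardrail sketch: strip obvious PII from text before it
# reaches a model, and emit one JSON log line per interaction that
# records the prompt, the output, and the human approver.
import re
import json
import datetime

# Illustrative patterns only; production systems need broader,
# locale-aware PII detection (IDs, addresses, names, etc.).
PII_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{4}\b"), "[PHONE]"),
]

def redact(text):
    """Replace each matched PII pattern with its placeholder."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

def audit_record(prompt, output, approver):
    """Build one JSON log line covering prompt, output, and approval."""
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": redact(prompt),
        "output": redact(output),
        "approved_by": approver,  # keeps the human reviewer in the record
    })
```

Appending these lines to write-once storage gives auditors the trail the transparency guardrail calls for, without storing raw citizen PII in the logs.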
Where training fits
The PSAC discussion on AI literacy is timely. Every agency needs a baseline program for all employees and deeper upskilling for roles that draft policy, analyze data, and serve citizens.
Start with short, scenario-based modules tied to daily tasks, not theory. If you need a quick starting point, see Complete AI Training: Latest AI Courses.
The bottom line
Move fast on pilots, move carefully on scale, and invest in people. The directive is to adopt AI "as much as we can, as soon as we can," but with security, accountability, and public trust at the center.
If agencies start now, with clear use cases, firm guardrails, and hands-on training, the country can benefit sooner while staying safe and compliant.