NZ government tests Paerata AI for New Year Honours citations, with human checks

DPMC will trial Paerata to draft New Year's Honours citations, with staff review and honouree proofing. Runs in secure Azure; no data retention or learning; privacy risk low.

Published on: Jan 08, 2026

DPMC to trial generative AI for New Year's Honours citations - controlled, low-risk, human-checked

The Department of the Prime Minister and Cabinet plans to use a government-built generative AI tool to draft New Year's Honours citations. The Honours Unit will test the approach and decide whether it is worth continuing only after it has been used at scale.

Drafts will be produced by the tool and then checked by a staff member before being sent to the honouree for proofing. Unsuccessful nominations remain unpublished, as usual.

The tool and the guardrails

The AI tool, called Paerata, runs inside Microsoft's Azure cloud within government controls. It was developed by the Treasury and the Central Agencies Shared Services group for internal use and made available to agencies in May 2024.

An exemption approved by the Cabinet Secretary allows the Honours Unit to process personal information with Paerata. This exemption is required because government policy normally restricts using personal information with AI systems.

The department says Paerata does not learn from the data, does not retain it, and does not share it outside the department. Source material and outputs are stored in a secure honours database and saved to the Cabinet Office's iManage instance for checking and workflow.

What data is in scope

The citations combine the nomination and letters of support. That can include details often found on a CV (roles, service, affiliations), political and religious activity where relevant to service, community work, contact information for nominees, nominators, and supporters, and some health information.

The application notes citations are an "ideal use case" for Gen-AI because it can synthesise information into concise summaries while keeping personal information confidential and preserving public trust.

Process and timing

Only the Paerata user will input the nomination materials and receive the draft. Staff then review and refine the draft before sending it to the honouree for proofing prior to publication.

AI was not used to write citations for the 2026 New Year's Honours because most were written before internal approval was granted. After substantive use, the Honours Unit will assess whether the approach is worthwhile.

Privacy assessment

The privacy impact of using Paerata for drafting citations is rated low, with the risk of harm assessed as negligible. Security controls include data isolation, no model training on inputs, and restricted access to the honours database.

What other agencies can copy right now

  • Start with a narrow, low-risk use case that produces short, structured outputs.
  • Keep the tool in a secured environment with strict access controls and logging.
  • Use a clear exemption or policy approval where personal information is processed.
  • Apply human-in-the-loop review and give the subject a final proof step when appropriate.
  • Document data handling: no learning from inputs, no external sharing, defined retention.
  • Store drafts and sources in your existing records system to maintain auditability.
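The human-in-the-loop pattern in the checklist above can be sketched in a few lines of code. This is a purely illustrative sketch, not the actual Paerata implementation: every function, field, and role name here is hypothetical, and a real deployment would call a secured model endpoint and write its audit trail to the agency's records system.

```python
from dataclasses import dataclass, field

@dataclass
class CitationDraft:
    """A draft citation carrying an audit trail of review steps."""
    nominee: str
    text: str
    history: list = field(default_factory=list)

def generate_draft(nomination_summary: str) -> str:
    # Placeholder for the AI call; a real system would invoke a model
    # hosted in a secured environment (e.g. an Azure tenancy) that does
    # not retain or learn from the inputs.
    return f"Draft citation based on: {nomination_summary}"

def staff_review(draft: CitationDraft, reviewer: str, revised_text: str) -> CitationDraft:
    # A named staff member signs off and may revise the draft; the
    # sign-off is recorded so changes stay auditable.
    draft.history.append(("staff_review", reviewer))
    draft.text = revised_text
    return draft

def honouree_proof(draft: CitationDraft, approved: bool) -> bool:
    # The subject gets a final proofing step before publication.
    draft.history.append(("honouree_proof", approved))
    return approved

# Hypothetical walk-through of the three stages:
draft = CitationDraft("Jane Doe", generate_draft("community service summary"))
draft = staff_review(draft, "honours.officer", "For services to the community.")
ready = honouree_proof(draft, approved=True)
```

The point of the sketch is that the AI output is never the final artefact: each stage appends to `history`, so the draft, its reviewer, and the honouree's approval are all recorded before anything is published.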

Questions to settle before scaling

  • Accuracy and context: Does the model omit key service details or overstate claims?
  • Bias and tone: Are political, religious, and health references handled fairly and only where relevant to service?
  • Redaction and minimisation: Are you limiting inputs to only what's needed for the citation?
  • Accountability: Who signs off each draft, and how are changes tracked in iManage (or your equivalent)?
  • FOI readiness: Can you produce a clear record of prompts, drafts, and approvals if requested?

Why this matters

Citations are repetitive, time-bound, and sensitive. That's a strong test bed for AI in government: tight scope, clear standards, measurable output quality, and firm privacy controls.

This year's list included seven new Knights and Dames, among them Helen Danesh-Meyer, Coral Shaw, Rod Drury, and Scott Dixon, plus about 170 other New Zealanders. As the trial runs, expect more guidance to flow to other teams looking at similar drafting tasks.
