Home Affairs appoints ex-Microsoft Copilot specialist Rishi Nicolai as director of AI adoption

Home Affairs hires ex-Microsoft exec Rishi Nicolai to lead internal AI adoption. Pilots include a code bot, Phi models for analysis, and a Q&A citing legislation.

Categorized in: AI News, IT and Development
Published on: Sep 25, 2025

Home Affairs has appointed former Microsoft executive Rishi Nicolai as director of AI adoption, tasked with accelerating generative AI across the department. The role focuses on internal use within Home Affairs, supporting productivity goals across core functions.

Nicolai spent 13 years at Microsoft, most recently working as a Copilot behavioural specialist. He signalled a focus on responsible, high-impact public sector use of AI.

Where Home Affairs is already experimenting

The department has been piloting AI since July, with early-stage proofs of concept built by small teams on short timelines.

  • A "simple" chatbot built with the open-source tool Ollama in two weeks to assist with updates to the legacy Java codebase.
  • Use of Microsoft's small language models (SLMs), starting with Phi-2, for sentiment analysis on APS census data and to automate Australian Border Force (ABF) culture survey processing; later upgraded to Phi-4 and trained on relevant data for visa-related queries.
  • A Q&A bot capable of citing relevant legislation alongside answers to support accuracy and auditability.
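The legislation-citing Q&A bot in the last bullet follows a common retrieval pattern: fetch the relevant clauses first, then constrain the model to answer only from them and cite what it used. A minimal Python sketch of the prompt-assembly step, using hypothetical clause data (the department's actual retrieval stack is not public):

```python
from dataclasses import dataclass


@dataclass
class Clause:
    act: str       # e.g. "Migration Act 1958" (illustrative)
    section: str   # e.g. "s 501"
    text: str


def build_grounded_prompt(question: str, clauses: list[Clause]) -> str:
    """Assemble a prompt that instructs the model to answer only from
    the retrieved clauses and to cite each one it relies on as [n]."""
    sources = "\n".join(
        f"[{i + 1}] {c.act}, {c.section}: {c.text}"
        for i, c in enumerate(clauses)
    )
    return (
        "Answer the question using ONLY the sources below. "
        "Cite sources inline as [n]. "
        "If the sources are insufficient, say so.\n\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}\nAnswer:"
    )


# Hypothetical usage: one retrieved clause, one question.
clauses = [
    Clause("Migration Act 1958", "s 501",
           "The Minister may refuse to grant a visa to a person ..."),
]
prompt = build_grounded_prompt(
    "When can a visa be refused on character grounds?", clauses
)
```

Numbering the sources in the prompt is what makes the model's `[n]` citations auditable: a reviewer can map each citation straight back to the clause it came from.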

What's next on the stack

Home Affairs plans to expand AI use cases on its AWS platform and fine-tune models on stronger compute. Expect a mix of managed services and self-hosted models as teams balance cost, latency, data residency, and control.

Why this matters for IT and development teams

  • SLM-first pattern: Small models like Phi-2/Phi-4 can be enough for internal workloads, with faster inference, lower cost, and simpler governance.
  • Prototyping velocity: Two-week builds are feasible when scope is narrow and the data footprint is clear.
  • Grounded responses: Retrieval with citations to legislation improves trust, supports audits, and reduces hallucinations.
  • Legacy support: Assistive bots can accelerate code comprehension and refactoring for Java and other aging stacks; keep human approval in the loop.
  • Data safeguards: Treat fine-tuning and prompts as sensitive; apply red-teaming, PII handling, and content filters from day one.
  • Platform choices: On AWS, decide between managed endpoints and containers for self-hosting; align GPU/CPU mix with latency and cost targets.
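The grounding point above can also be enforced mechanically: before an answer reaches a user, verify that every inline citation actually points at a retrieved source. A small illustrative checker, assuming the `[n]` citation convention from the grounded-prompt pattern:

```python
import re


def validate_citations(answer: str, num_sources: int) -> tuple[bool, list[int]]:
    """Return (ok, bad_refs). ok is True only when the answer cites at
    least one source and every [n] falls within the retrieved source
    list; bad_refs lists any citation pointing outside it."""
    cited = [int(n) for n in re.findall(r"\[(\d+)\]", answer)]
    bad = [n for n in cited if not 1 <= n <= num_sources]
    return (len(bad) == 0 and len(cited) > 0, bad)
```

Failing answers can be blocked or routed to human review; an uncited answer is treated as ungrounded by default, which fits the human-in-the-loop stance above.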

Implementation checklist

  • Pick 2-3 low-risk pilots: internal policy Q&A with citations, survey analysis, or a code migration assistant.
  • Stand up a secure sandbox: Ollama on a locked-down VM or Kubernetes; wire in a document store for retrieval.
  • Model selection: Evaluate SLMs (Phi family) alongside larger baselines; measure quality against real tasks, not benchmarks alone.
  • Define metrics early: citation accuracy, response latency, cost per request, code suggestion acceptance rate.
  • Governance: human-in-the-loop, logging, role-based access, eval suites, and incident response for bad outputs.
  • Plan for scale on AWS: choose instance types, autoscaling, and MLOps workflows for promotion from pilot to production.
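Once per-request events are logged, the metrics named in the checklist reduce to simple aggregation. A sketch, assuming a hypothetical log schema (field names are illustrative, not from the source):

```python
from statistics import mean


def pilot_metrics(events: list[dict]) -> dict:
    """Aggregate checklist metrics from per-request log events.

    Each event is assumed to carry:
      latency_ms (float), cost_usd (float),
      citations_correct (bool), suggestion_accepted (bool).
    """
    return {
        "citation_accuracy": mean(1.0 if e["citations_correct"] else 0.0 for e in events),
        "mean_latency_ms": mean(e["latency_ms"] for e in events),
        "cost_per_request": mean(e["cost_usd"] for e in events),
        "acceptance_rate": mean(1.0 if e["suggestion_accepted"] else 0.0 for e in events),
    }
```

Defining the schema before the pilot starts is the point of "define metrics early": it forces logging, evals, and dashboards to agree on what a good response looks like.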
