ChatGPT Upload Puts 3,000 Northern Rivers Flood Aid Applicants' Data at Risk


Categorized in: AI News, Government
Published on: Oct 06, 2025

NSW flood program applicants face potential data exposure after contractor shared spreadsheet with ChatGPT

Up to 3,000 applicants to the Northern Rivers Resilient Homes Program may have had personal data exposed after a former contractor uploaded a spreadsheet to ChatGPT between March 12 and 15, 2025. The NSW Reconstruction Authority (RA) says there is no evidence the information has been made public, but exposure cannot be ruled out while investigations continue.

The data may include names, addresses, contact details, and health information. The file reportedly contained 10 columns and more than 12,000 rows.

Cyber Security NSW and forensic specialists are investigating to determine the scope of the exposure and the risk to applicants. The RA has notified the NSW Privacy Commissioner, strengthened internal processes, and issued staff guidance on the use of non-sanctioned AI platforms.

Who is affected and what's being done

The Northern Rivers Resilient Homes Program was set up after the devastating 2022 floods to help communities recover and make homes more resilient to future events. The RA acknowledges that notification took time because of the depth of the review, and it has initiated an independent review into how the breach was managed.

  • ID Support NSW will contact affected people within the next week.
  • Cyber Security NSW is monitoring the internet and dark web for any sign of exposure.
  • Forensic analysis results are due in the coming days to clarify what data was involved and whether any of it was accessed by third parties.

Immediate actions for NSW public sector leaders

  • Reinforce a hard ban on uploading any personal or sensitive data to public AI tools. Require the use of sanctioned, logged AI environments only.
  • Publish a plain-English AI use policy and quick-reference "dos and don'ts" for staff and contractors.
  • Implement data loss prevention (DLP) rules that automatically block uploads of spreadsheets or documents containing PII/health data to external sites.
  • Stand up an approved AI sandbox with content filters, audit logging, and pre-approved templates for analysis tasks.
  • Scan files for PII before sharing (server-side and endpoint). Mask or anonymise by default; a minimal scanning sketch follows this list.
  • Tighten contractor onboarding/offboarding: least-privilege access, mandatory training, and immediate credential revocation on exit.
  • Require two-person review for any AI use involving citizen data, even inside sanctioned tools.
  • Run an incident response tabletop focused on AI misuse; pre-draft notification language and coordinate with privacy and legal.
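
To make the DLP and PII-scanning items concrete, below is a minimal sketch of a pre-upload check over a CSV export. Everything in it is an assumption for illustration: the regex patterns, the sensitive-header keywords, and the idea of a standalone script. A production DLP rule set would be broader, tuned to local data formats, and enforced at the gateway or endpoint agent rather than run by hand.

```python
import csv
import re
import sys

# Illustrative patterns only (an assumption, not an official rule set);
# real DLP policies cover many more identifier types and formats.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "au_phone": re.compile(r"(?:\+?61|0)[2-478](?:[ -]?\d){8}"),
}

# Column headers that suggest personal or health information.
SENSITIVE_HEADERS = {"name", "address", "phone", "email", "dob", "health"}

def scan_csv(path: str) -> list[str]:
    """Return a list of findings; an empty list means no PII indicators."""
    findings = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        headers = next(reader, [])
        for header in headers:
            if header.strip().lower() in SENSITIVE_HEADERS:
                findings.append(f"sensitive column header: {header!r}")
        for row_num, row in enumerate(reader, start=2):
            for cell in row:
                for label, pattern in PII_PATTERNS.items():
                    if pattern.search(cell):
                        findings.append(f"row {row_num}: possible {label}")
    return findings

if __name__ == "__main__":
    results = scan_csv(sys.argv[1])
    if results:
        print("BLOCK upload - PII indicators found:")
        print("\n".join(results[:20]))  # show the first findings only
        sys.exit(1)
    print("No PII indicators found; route the file to a sanctioned tool.")
```

The design choice worth noting is fail-closed behaviour: the script exits non-zero on any finding, so an automated pipeline treats "uncertain" as "blocked" rather than letting a 12,000-row spreadsheet slip through.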

Guidance for staff handling citizen data

  • Do not paste PII or health information into public AI tools. If you must prototype, use synthetic or fully anonymised data (see the sketch after this list).
  • Use approved government systems for analysis. If unsure whether a tool is sanctioned, ask your security or privacy team first.
  • Label sensitive files clearly and store them in controlled locations. Log any AI-related use cases and the datasets involved.
  • Report suspected incidents immediately so monitoring and containment can begin quickly.
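
As an illustration of the first point above, prototype data can be generated rather than copied from production systems. The sketch below fabricates applicant-style rows; the column layout, ID scheme, and place names are hypothetical stand-ins, not the actual program spreadsheet.

```python
import csv
import random
import string

# Hypothetical values for illustration; every field is generated, so no
# row can be traced back to a real applicant.
STREETS = ["River Rd", "Main St", "Wilson Ave", "Bridge St"]
TOWNS = ["Lismore", "Ballina", "Mullumbimby", "Casino"]

def synthetic_row(i: int) -> list[str]:
    """Build one fake applicant record with an opaque ID."""
    initials = "".join(random.choices(string.ascii_uppercase, k=2))
    return [
        f"APP-{i:05d}",                                 # opaque applicant ID
        f"Applicant {initials}",                        # placeholder name
        f"{random.randint(1, 200)} {random.choice(STREETS)}, "
        f"{random.choice(TOWNS)} NSW",
        f"04{random.randint(10_000_000, 99_999_999)}",  # fake mobile number
        random.choice(["pending", "approved", "withdrawn"]),
    ]

with open("synthetic_applicants.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "name", "address", "phone", "status"])
    for i in range(100):
        writer.writerow(synthetic_row(i))
```

A file like this is safe to paste into any tool while prototyping a prompt or an analysis; the real dataset only ever touches the sanctioned environment.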

Support and official resources

  • ID Support NSW can assist people concerned about identity or credential misuse.
  • For guidance on security practices and current advisories, see Cyber Security NSW.

If your team needs structured, practical upskilling on safe use of generative AI in government, see our ChatGPT training resources.