Contractor's ChatGPT Upload Exposes NSW Flood Victims' Data, Dark Web Watch Begins

A contractor uploaded flood-recovery data to an AI tool, exposing personal and health info for up to 3,000 NSW residents. Insurers: tighten payment checks and govern AI use.

Published on: Oct 08, 2025

NSW flood recovery breach: AI upload exposes resident data - what insurers need to do now

A contractor working on the New South Wales Resilient Homes Program uploaded a spreadsheet of sensitive details to an AI tool, exposing personal and health data tied to flood recovery efforts in the Northern Rivers.

The NSW Reconstruction Authority (RA) said up to 3,000 people may be affected. The file reportedly contained more than 12,000 rows, including names, addresses, contact details, and some health information.

What happened

The breach occurred between March 12 and 15, 2025, when a former contractor uploaded a Microsoft Excel file to ChatGPT. The data relates to the $920 million Resilient Homes Program, which covers buybacks, rebuilding, and resilience upgrades after the 2022 floods.

The RA notified the NSW Privacy Commissioner, issued new guidance on the use of non-sanctioned AI platforms, and put safeguards in place to prevent repeat incidents. So far, the RA reports no evidence of third-party access and assesses the risk of misuse as low, while monitoring for exposure, including on the dark web.

Why this matters for insurers

  • Social engineering risk: Names, addresses, and health context enable convincing phishing and payment fraud targeting claimants and vendors.
  • Claims integrity: Increased potential for account takeover, false change-of-bank requests, and redirected settlements.
  • Regulatory exposure: Privacy and data handling duties extend to contractors and AI tools used in workflows.
  • Third-party risk: Vendor AI usage creates blind spots if not governed, logged, and contractually restricted.

Immediate actions for carriers, brokers, and claims teams

  • Harden payment controls: Enforce call-backs to verified numbers for any payout or bank detail changes; block email-only changes.
  • Tighten FNOL and contact centre scripts: Add verification steps for callers claiming to be flood-affected in Northern Rivers; use shared passphrases or multi-factor checks.
  • Activate fraud analytics: Flag mismatched contact details, sudden banking updates, and requests to expedite large payments (a rules sketch follows this list).
  • Coordinate outreach: Proactively warn impacted policyholders about phishing and impersonation attempts; provide safe contact channels.
  • Review incident playbooks: Ensure alignment with Australian privacy expectations and breach notification thresholds.
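
For the fraud-analytics item above, a workable starting point is a small set of deterministic rules rather than a model. The Python sketch below is a minimal illustration using assumed record fields (PaymentChangeRequest, ClaimFile), arbitrary thresholds, and made-up flag names; it is not a reference implementation of any carrier's claims platform.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical claim-change records; all field names and thresholds are
# illustrative assumptions, not an existing claims-system schema.
@dataclass
class PaymentChangeRequest:
    claim_id: str
    requested_at: datetime
    new_bsb_account: str
    contact_email: str
    contact_phone: str
    payout_amount: float
    expedite_requested: bool

@dataclass
class ClaimFile:
    claim_id: str
    email_on_file: str
    phone_on_file: str
    next_payout_at: datetime

def flag_change_request(req: PaymentChangeRequest, claim: ClaimFile) -> list[str]:
    """Return fraud-signal flags for manual review before any payment change."""
    flags: list[str] = []
    # Contact details in the change request don't match what is on file.
    if req.contact_email.lower() != claim.email_on_file.lower():
        flags.append("email_mismatch")
    if req.contact_phone != claim.phone_on_file:
        flags.append("phone_mismatch")
    # Bank detail change landing shortly before a scheduled payout.
    if timedelta(0) <= claim.next_payout_at - req.requested_at <= timedelta(days=7):
        flags.append("change_close_to_payout")
    # Large payment combined with a request to expedite.
    if req.expedite_requested and req.payout_amount >= 50_000:
        flags.append("expedited_large_payment")
    return flags
```

Any non-empty flag list would route the request to manual verification (for example, a call-back to the number on file) rather than straight to payment.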

Guidance for commercial clients, councils, and recovery partners

  • Set clear AI rules: Ban uploading personal, health, payment, or policy data into public AI tools. Use only approved, enterprise-grade solutions with logging and data controls.
  • Data minimisation: Redact or tokenise sensitive fields before any AI query; prefer synthetic data for testing (see the redaction sketch after this list).
  • Access and DLP: Enforce least privilege, block paste/upload of sensitive data to unsanctioned sites, and log AI usage.
  • Contract controls: Require vendors to disclose AI use, prohibit public AI for sensitive work, and agree to breach notification and audit rights.
  • Training: Run targeted refreshers on phishing, payment fraud, and AI data handling.
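
For the data-minimisation point, the sketch below shows one way to tokenise obvious identifiers before a prompt leaves the organisation and to restore them afterwards. The regex patterns, token format, and function names (tokenise, detokenise) are assumptions for illustration; a production setup would use a vetted PII/DLP library and cover far more field types (names, addresses, Medicare numbers).

```python
import re
import uuid

# Minimal redaction sketch. The patterns below only catch emails and
# local-format Australian phone numbers; names and addresses are NOT covered.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b0[2-478](?:[ -]?\d){8}\b"),  # AU numbers, 0X local format
}

def tokenise(text: str) -> tuple[str, dict[str, str]]:
    """Replace matched identifiers with opaque tokens; return the redacted
    text and the reverse map, which must stay inside the organisation."""
    mapping: dict[str, str] = {}

    def substitute(kind: str, match: re.Match) -> str:
        token = f"<{kind}_{uuid.uuid4().hex[:8]}>"
        mapping[token] = match.group(0)
        return token

    for kind, pattern in PATTERNS.items():
        text = pattern.sub(lambda m, k=kind: substitute(k, m), text)
    return text, mapping

def detokenise(text: str, mapping: dict[str, str]) -> str:
    """Restore original values in the AI response before internal use."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

# Example: redact before sending a prompt to any external AI service.
redacted, mapping = tokenise("Claimant Jane Citizen, jane@example.com, 0412 345 678")
print(redacted)                        # email and phone are tokenised; the name is not
print(detokenise(redacted, mapping))   # original text restored locally
```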

What to tell affected customers

  • Be sceptical of unexpected calls, texts, or emails about payments, bank details, or identity checks; contact the insurer or RA via official numbers.
  • Never share OTPs or portal passwords; enable multi-factor authentication on accounts.
  • Validate bank changes and invoices via a known phone number before paying.
  • Report suspicious activity immediately to your insurer and relevant authorities.

Expert signals insurers should track

Cyber criminology experts warn exposed data can fuel payment fraud, credential theft, and identity crime. Security leaders also point to a growing pattern: employee use of generative AI outside approved controls is driving internal data exposure.

The 2025 Verizon Data Breach Investigations Report notes increased malicious use of AI for phishing, influence operations, and malware, and highlights internal vulnerabilities from unsanctioned AI use. The takeaway for carriers: AI risk is both an external threat vector and an internal control problem.

Underwriting and risk advisory prompts

  • Do you permit public AI tools? If yes, which data classes are blocked, and how is usage logged?
  • Do vendor contracts prohibit public AI for sensitive tasks and mandate breach notification?
  • Is DLP configured to detect uploads to AI sites and block sensitive fields?
  • What is the process for redaction/tokenisation before AI use?
  • How often are staff trained on AI and payment fraud risks? How is effectiveness measured?

AI governance that actually gets adopted

  • Default-deny public AI for sensitive data; offer an approved alternative with enterprise controls.
  • Provide templates and redaction tools so teams can work without risking exposure.
  • Log prompts and responses for sanctioned AI use; review regularly (a minimal logging wrapper follows this list).
  • Run short, role-specific training tied to real incidents and payment controls.
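
On the logging point, a thin wrapper around the approved AI client is often enough to start. The sketch below assumes a hypothetical call_model() stand-in and an illustrative log schema; it is not tied to any specific vendor API, and access to the log itself must be restricted because it contains prompt and response text.

```python
import json
import logging
from datetime import datetime, timezone

# Audit trail for sanctioned AI use; field names are illustrative assumptions.
logging.basicConfig(filename="ai_usage.log", level=logging.INFO, format="%(message)s")

def call_model(prompt: str) -> str:
    """Stub: replace with the organisation's sanctioned AI client call."""
    return "stub response"

def logged_ai_call(user_id: str, purpose: str, prompt: str) -> str:
    """Send a prompt through the approved endpoint and record the exchange."""
    response = call_model(prompt)
    logging.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "purpose": purpose,
        "prompt": prompt,
        "response": response,
    }))
    return response

# Usage: every sanctioned call goes through the wrapper, never the raw client.
logged_ai_call("u.smith", "claims triage summary", "Summarise the attached redacted claim notes.")
```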

References and further reading

If your team needs practical upskilling on safe, compliant AI use in underwriting, claims, and operations, see our curated programs by job role: Complete AI Training - Courses by Job.