NSW flood victims’ data exposed in AI-related breach

Authorities monitor dark web for signs of exposed information

By Roxanne Libatique

A significant data breach has affected thousands of residents involved in New South Wales’s flood recovery efforts, after a government contractor uploaded sensitive information to an artificial intelligence platform.

The NSW Reconstruction Authority (RA) confirmed that the personal and health data of up to 3,000 individuals from the Northern Rivers region may have been compromised as a result of this incident.

The breach is linked to the Resilient Homes Program, a $920 million initiative funded by both the NSW and Australian governments to assist homeowners impacted by the 2022 floods.

The program, which covers buybacks, rebuilding, and resilience upgrades, has been a key part of recovery in the Central West and Northern Rivers.

According to the RA, a former contractor uploaded a Microsoft Excel file containing more than 12,000 rows of data to the generative AI platform ChatGPT. The spreadsheet included names, addresses, contact details, and some health information.

Authority response and communication challenges

Upon discovering the breach, which occurred between March 12 and 15, the RA says it acted to limit further exposure.

The authority has since notified the NSW Privacy Commissioner and implemented new internal protocols regarding the use of AI platforms.

“We have reviewed and strengthened internal systems and processes and issued clear guidance to staff on the use of non-sanctioned AI platforms. Safeguards are now in place to prevent future incidents,” it said.

Ongoing risk monitoring and expert commentary

Although the RA has assessed the risk of misuse as low, it is working with Cyber Security NSW to monitor for any signs of the data appearing online, including on the dark web.

“So far, there is no evidence that any of the uploaded data has been accessed by a third party,” the authority said, adding that a full assessment of the breach is ongoing.

Mandy Turner, an adjunct lecturer in cyber criminology at The University of Queensland, highlighted potential risks if the data were to fall into the wrong hands.

“A threat actor could use the information available to trick people into making payments, provide account log-in credentials, or hand over more information that could be used for identity theft or other crimes,” she said, as reported by Information Age.

The RA has advised those potentially affected to be alert for suspicious communications requesting personal details and is contacting impacted individuals directly.

AI data handling practices under scrutiny

The incident has renewed scrutiny of how sensitive data is managed when using generative AI tools.

Jon Robertson, founder of cybersecurity firm Tarian Cyber, said a key concern is that data was uploaded to an AI platform without consent and without an understanding of how it would be handled.

“The data was uploaded to an AI tool, without consent, without an understanding of how it will be stored and used after the main task at hand is completed,” he said, as reported by Information Age.

Turner also noted that information submitted to AI platforms may be used for training purposes, raising questions about data control and privacy.

She added that no online platform is immune from data exposure risks, referencing previous incidents where AI platforms experienced leaks or had user data indexed by search engines.

Industry context: AI threats and internal vulnerabilities

This breach comes as the insurance sector and other industries face growing challenges from both external and internal threats related to artificial intelligence.

The 2025 Data Breach Investigations Report from Verizon Business found that malicious use of AI has increased, with attackers leveraging these tools for phishing, influence campaigns, and malware development.

The report also noted that employees’ use of generative AI platforms, often outside approved security measures, has contributed to data exposure.

The report concluded: “AI is now a factor in both external attacks and internal vulnerabilities, requiring organisations to address risks on multiple fronts.”
