Inside the compliance risks of AI integration

Companies face rising expectations around how data is accessed and used

Risk Management News

By Kenneth Araullo

Artificial intelligence is becoming a mainstay in corporate compliance functions, streamlining tasks from automated contract reviews to continuous fraud monitoring. But while AI can bring efficiencies, its implementation also introduces regulatory and operational risks that organisations must address.

John Kim, principal at Control Risks, said regulators are increasingly expecting companies to hold AI-enabled systems to the same compliance standards as any other business function.

“To the extent that a business uses AI to achieve its business objectives, it expects that business to achieve its compliance requirements,” Kim said, citing recent guidance from the US Department of Justice (DOJ).

The DOJ’s message, Kim said, is clear: AI is both a compliance tool and a potential liability. “The path forward involves balancing innovation with accountability, transparency and a commitment to ethical design,” he said.

AI-related risk exposure, according to Kim, falls into three primary categories: bias and discrimination, misuse, and data privacy vulnerabilities. He emphasised that each of these areas requires proactive oversight if compliance teams are to deploy AI responsibly and effectively.

“AI tools rely on defined datasets for training,” Kim said. He explained that flaws in the training data – whether from historical inequities, gaps in data, or poor assumptions – can cause the system to replicate or even magnify existing biases.

“An AI-powered internal risk monitoring tool might flag an employee with a flexible work arrangement to accommodate a family health issue as having suspicious logins,” he said. “Unless handled properly, this could expose the business to a discrimination claim.”

To prevent such outcomes, Kim recommended routine testing and auditing of AI outputs. “Compliance leaders must ensure that design and training processes account for fairness and ethics, and that they align with the company’s values,” he said.

AI risks

Kim also highlighted the threat of misuse, especially by individuals exploiting AI systems for fraudulent activity.

“Advanced algorithms can help bad actors evade sanctions, launder money or decipher a company’s internal controls,” he said.

Internal risks are equally pressing. “Insiders could use AI to enable or facilitate schemes like insider trading, embezzlement or billing-related fraud,” Kim said. He stressed that regulators expect compliance programs to demonstrate robust oversight: “AI systems monitoring is a central priority for compliance teams.”

Sensitive data is another concern. AI tools used in compliance often require access to financial, personal, or proprietary information, which creates potential exposure under global data protection laws.

“AI systems thrive on data,” Kim said. “And AI systems most useful to compliance professionals will likely contain personal, financial, proprietary or other sensitive business and third-party information.”

That reality places added scrutiny on how data is handled. “AI-enabled compliance programs must account for the treatment of sensitive data – both at rest in their systems of records and when used by the AI tools and the compliance team,” Kim said.

Approaching compliance processes

On integrating AI into compliance processes, Kim advised a targeted, practical approach. “Decision-makers should resist deploying an AI solution for AI’s sake or to keep up with business leaders’ wish to follow the trend,” he said. “Instead, they should insist on a thoughtful, bottom-up implementation plan that aligns with specific compliance objectives.”

With AI regulation still evolving, Kim warned that companies need to monitor international developments.

“Multinational companies must track changes across the global enforcement ecosystem and update their compliance programs accordingly,” he said.

He added that while regulatory focus may shift in detail, certain expectations will remain constant. “Regulators’ current emphasis on privacy, transparency, and auditability will not likely change,” Kim said. “Consequently, forward-thinking organisations can build or buy AI tools that can support future regulatory shifts that require greater, or different, disclosures or protections.”

Looking ahead, Kim believes that AI will play a larger role in compliance programs. “AI will move more to the forefront of compliance programs over the next five years,” he said. “It will offer deeper insights and foster faster response times.”

But despite growing pressure to innovate quickly, Kim advised a deliberate and strategic approach.

“The hype that often surrounds new tech can cloud judgement,” he said. “Rather than racing to not be left behind, professionals must manage these adoption steps sensibly.”
