As artificial intelligence (AI) moves deeper into the financial system, firms will need stronger governance, operational safeguards and workforce skills to manage a new generation of fast-moving and potentially systemic risks, according to the Global Risk Institute (GRI).
The findings come from the second phase of GRI's Financial Industry Forum on AI (FIFAI II), which brought together senior executives from leading financial institutions, academics and public-sector bodies to examine AI-related risks, mitigants and opportunities.
FIFAI II is a partnership between GRI and key Canadian authorities, including the Office of the Superintendent of Financial Institutions (OSFI), the Bank of Canada, the Department of Finance Canada, the Financial Consumer Agency of Canada (FCAC) and FINTRAC. Drawing on four GRI‑led workshops, the forum focused on escalating cyber threats, third‑party risk, financial well‑being and consumer protection, financial crime and financial stability, and culminated in a final report introducing an “AGILE” framework for navigating AI risk.
“Artificial intelligence is transforming financial services faster than existing governance frameworks are evolving,” said Sonia Baxendale, president and CEO of GRI. “What makes this moment different is that managing AI risk is no longer confined to individual institutions. It requires collaboration across the financial ecosystem.”
AI adoption is no longer confined to pilot projects. Participants noted that models are now embedded in credit decisioning, pricing, trading, fraud detection and customer interaction, meaning risk management and governance need to evolve in step with deployment rather than playing catch‑up.
Against that backdrop, three themes emerged as priorities for financial institutions: elevating AI governance to the boardroom, reinforcing operational resilience and building AI literacy across the workforce.
The forum stressed that AI is now a strategic governance issue as much as a technology one. As more advanced systems, including emerging forms of autonomous or “agentic” AI, are rolled out, boards and executive teams will need clearer sight of where AI is used, how it is monitored and who is accountable when things go wrong.
Key elements include raising board‑level awareness of AI‑related risks, clarifying decision‑making responsibility for AI‑driven outcomes, and embedding oversight mechanisms flexible enough to keep pace with rapid changes in models, use cases and regulatory expectations.
This intersects with directors’ and officers’ (D&O) exposure. Market analysis from Allianz has already highlighted AI governance as a live D&O issue, with weak oversight affecting valuation, disclosure and investor confidence as regulators move from guidance to enforcement on AI‑related harms.
The discussions also underscored how AI adoption is amplifying existing operational risks. As firms lean more heavily on AI tools, cloud infrastructure and external data and model providers, they are becoming more dependent on technology supply chains beyond their direct control.
Participants pointed to the need to reinforce basic controls: strong cyber hygiene, rigorous third‑party risk management and clearer oversight of technology and data dependencies. Concentration risk in cloud and AI service providers, and the potential for a single outage or compromise to affect multiple institutions at once, were flagged as particular concerns.
This aligns with broader risk trends. Allianz’s 2026 Risk Barometer showed cyber incidents remain the top global business threat for the fifth straight year, while artificial intelligence has jumped to second place, its highest‑ever ranking. The report noted that AI is “supercharging threats, increasing the attack surface and adding to existing vulnerabilities,” underscoring the need for robust cyber, tech E&O and business interruption coverage as institutions digitize.
A third priority is talent. As AI tools spread across front‑, middle‑ and back‑office functions, firms will need to invest in skills and training so that employees at all levels – from engineers and risk teams to executives and board members – understand both what AI can do and where it can fail.
Building sector‑wide AI literacy is seen as critical not only for responsible deployment, but also for detecting new threats such as AI‑enabled fraud, deepfakes and more sophisticated cyber attacks. For insurers, that includes ensuring underwriting and claims teams are equipped to interrogate AI‑generated outputs, challenge models where warranted and explain AI‑linked decisions to clients and regulators.
The FIFAI II findings sit alongside a tightening regulatory and liability environment around AI.
In Europe, for example, most obligations under the EU Artificial Intelligence Act apply from August 2026, including a requirement that deepfakes be disclosed as AI‑generated or manipulated content. Cyber specialists expect such regimes to influence how insurers assess AI exposures, draft policy language and price coverage for risks such as synthetic media, automated decision‑making and AI‑driven fraud.
Allianz has also noted the emergence of AI‑related securities claims in North America, which accounted for about 8% of filings in 2025, and warned that divergent national rules are likely to become a source of litigation where AI governance and disclosure fall short.
Taken together, the FIFAI II discussions reinforce a shift in how the industry views AI risk – not just as a firm‑level technology issue, but as a potential source of sector‑wide stress if models, data or key vendors fail in correlated ways.