A new report from the Bank for International Settlements (BIS) lays bare the opportunities and risks posed by artificial intelligence (AI) in the financial sector. While AI promises transformative efficiency and insights, the report warns that poor governance, opaque decision-making, and overreliance on third-party providers could leave financial institutions vulnerable.
The report – authored by Juan Carlos Crisanto, Cris Benson Leuterio, Jermy Prenio, and Jeffery Yong – calls for a “risk-based approach” to integrating AI, with emphasis on data security, fairness, and human oversight to avoid compounding systemic risks.
The BIS report identifies several critical challenges that financial institutions face when deploying AI technologies. These challenges, the authors note, stem from the dual pressures of managing innovation and maintaining robust risk management frameworks. At the heart of these issues is the rapid expansion of AI use cases, particularly in credit underwriting, fraud detection, and customer-facing services such as chatbots.
However, the authors caution that these advances are not without risk.
“Heightened model risk can be caused by a lack of explainability of AI models,” the report states, highlighting the challenges financial institutions face in assessing the appropriateness of AI-driven decisions. AI models, particularly generative AI systems, often operate as black boxes, making it difficult to verify their outputs or understand how specific decisions were reached.
Transparency and explainability are key concerns, especially in high-stakes use cases such as credit and insurance underwriting. The report underscores that decision-makers need a clear understanding of how AI systems operate to ensure their outputs align with regulatory expectations and institutional risk appetites.
“Explainability, interpretability, and auditability involve internal disclosure or transparency particularly to the board and senior management so they can better understand the risks and implications of AI use,” the authors note.
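To illustrate what such internal transparency can look like in practice, the sketch below applies permutation feature importance, a common model-agnostic explainability technique, to a hypothetical credit-scoring model. The model, features, and data are illustrative assumptions rather than anything described in the report; the point is that even an opaque model's behavior can be probed by measuring how much each input drives its decisions.

```python
# A minimal sketch of post-hoc explainability for an opaque credit model.
# The dataset and model are synthetic stand-ins (an assumption made for
# illustration), not anything described in the BIS report.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
# Hypothetical applicant features: income, debt ratio, years employed, age
X = np.column_stack([
    rng.normal(60_000, 15_000, n),   # income
    rng.uniform(0.0, 0.8, n),        # debt-to-income ratio
    rng.integers(0, 30, n),          # years employed
    rng.integers(21, 70, n),         # age
])
# Synthetic default flag, driven mostly by debt ratio and income
y = (X[:, 1] * 5 - X[:, 0] / 40_000 + rng.normal(0, 1, n) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)  # the "black box"

# Permutation importance: shuffle one feature at a time and measure how
# much the model's accuracy degrades -- a model-agnostic explanation.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in zip(["income", "debt_ratio", "years_employed", "age"],
                       result.importances_mean):
    print(f"{name:>15}: {score:.3f}")
```

In this toy setting the debt-to-income ratio dominates, which is precisely the kind of evidence a board or senior management could use to check whether a model's actual drivers match the institution's stated risk appetite.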
The report also calls attention to data-related risks, which are exacerbated by the growing reliance on third-party providers for AI models and cloud-based services. These external relationships, while offering scalability and cost-efficiency, introduce vulnerabilities such as data breaches and vendor lock-in.
“The concentration of cloud and AI service providers to a few large global technology firms strengthens the argument for putting in place direct oversight frameworks for these service providers,” the report observes. Yet, in many jurisdictions, regulatory approaches continue to rely on financial institutions to manage these risks internally.
From a governance perspective, the BIS report highlights the need for clear accountability frameworks, particularly as AI becomes more deeply integrated into core financial activities. It recommends that financial institutions establish robust oversight mechanisms, including “human-in-the-loop” or “human-on-the-loop” systems, to ensure human intervention remains central in decision-making processes. This is particularly important in mitigating risks associated with AI-driven outputs that could lead to harmful customer outcomes.
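As a rough sketch of what a "human-in-the-loop" control might look like, the example below acts automatically only on high-confidence model outputs and escalates everything else to a human reviewer. The confidence threshold, data model, and review queue are assumptions made for illustration; the report prescribes no specific implementation.

```python
# A minimal sketch of a "human-in-the-loop" gate for AI-driven decisions.
# The threshold and review workflow are illustrative assumptions; a real
# institution would tune these to its own risk appetite.
from dataclasses import dataclass, field

REVIEW_THRESHOLD = 0.85  # confidence below this escalates to a human


@dataclass
class Decision:
    applicant_id: str
    approve: bool
    confidence: float


@dataclass
class HumanInTheLoopGate:
    review_queue: list = field(default_factory=list)

    def route(self, decision: Decision) -> str:
        """Act automatically only on high-confidence outputs;
        everything else waits for human sign-off."""
        if decision.confidence >= REVIEW_THRESHOLD:
            return "auto-approved" if decision.approve else "auto-declined"
        self.review_queue.append(decision)
        return "escalated to human reviewer"


gate = HumanInTheLoopGate()
print(gate.route(Decision("A-001", approve=True, confidence=0.97)))
print(gate.route(Decision("A-002", approve=False, confidence=0.62)))
print(f"Pending human review: {len(gate.review_queue)}")
```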
The BIS authors also emphasize the critical role of expertise and skills in managing AI risks. A lack of technical proficiency at senior levels could result in insufficient oversight and ineffective risk mitigation. As AI technologies evolve, institutions must ensure that their teams are equipped to understand and manage the complexities of these systems.
On the regulatory front, the report notes significant disparities in how jurisdictions are approaching AI oversight. While some, like the European Union, have adopted rules-based frameworks such as the AI Act, others favor principles-based approaches that focus on high-level guidelines. Common themes across these regulatory approaches include reliability, fairness, and accountability, but newer guidance is beginning to address issues like sustainability and intellectual property as well.
International collaboration is flagged as a pressing need. The absence of a globally accepted definition of AI complicates regulatory consistency and hampers the ability to address cross-border risks effectively. “The lack of a globally accepted definition of AI prevents a better understanding of AI use cases in the global financial sector and the identification of specific areas where risks may be heightened,” the report warns.
The challenges extend to the deployment of generative AI, which, while promising transformative benefits, poses unique risks. The authors note that financial institutions remain cautious about using generative AI in customer-facing roles due to concerns over data privacy, model accuracy, and consumer trust. They cite the potential for “hallucination,” where AI systems generate inaccurate or inappropriate outputs, as a particularly troubling issue for high-risk applications.
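A common defensive pattern, sketched below, is to validate generative output against a source of record before it reaches a customer. The product catalogue and the simple string check are illustrative assumptions, not a method endorsed by the BIS; production guardrails are considerably more sophisticated.

```python
# A minimal sketch of a hallucination guardrail: a generative model's
# draft answer is released only if every product it names exists in a
# source-of-record catalogue. The catalogue and drafts are illustrative
# assumptions, not drawn from the report.
import re

APPROVED_PRODUCTS = {"FlexSaver Account", "SecureHome Mortgage"}


def guardrail(draft: str) -> str:
    """Release the draft only if all quoted product names are real;
    otherwise fall back to a safe response for human follow-up."""
    cited = set(re.findall(r'"([^"]+)"', draft))
    hallucinated = cited - APPROVED_PRODUCTS
    if hallucinated:
        return ("I'd like to double-check that for you -- "
                "a specialist will follow up shortly.")
    return draft


# A faithful draft passes through; a hallucinated product is caught.
print(guardrail('Our "FlexSaver Account" pays interest monthly.'))
print(guardrail('Try our "QuantumYield Bond" for 12% returns.'))
```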
The BIS report does not shy away from addressing the broader systemic risks associated with AI adoption. Increased interconnectedness, fueled by reliance on a small number of technology providers, could amplify vulnerabilities across the financial ecosystem. Similarly, herding behavior, where multiple institutions use similar AI models and datasets, could lead to procyclical risks and market distortions.
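The herding mechanism can be made concrete with a toy simulation: if institutions' models score the same market data with only slightly different logic, their de-risking signals fire on the same days. Every parameter below is a stylized assumption chosen for illustration, not an estimate from the report.

```python
# A stylized illustration of herding risk: when institutions' models are
# near-identical, they de-risk on the same days. All parameters are
# illustrative assumptions, not taken from the report.
import numpy as np


def fraction_selling_each_day(model_diversity: float, seed: int = 0):
    """Daily fraction of institutions whose model signals 'de-risk'.

    Each model scores the market as a shared data factor plus
    model-specific noise, and signals de-risk below a -2 threshold.
    Small `model_diversity` means herded, near-identical models.
    """
    rng = np.random.default_rng(seed)
    n_days, n_institutions = 1000, 50
    shared_factor = rng.normal(0, 1, (n_days, 1))   # same market data
    model_noise = rng.normal(0, model_diversity, (n_days, n_institutions))
    signals = shared_factor + model_noise
    return (signals < -2).mean(axis=1)


for diversity, label in [(0.1, "herded models"), (1.0, "diverse models")]:
    frac = fraction_selling_each_day(diversity)
    crowded = (frac > 0.9).mean()
    print(f"{label:>14}: days where >90% of firms de-risk together: {crowded:.1%}")
```

With near-identical models, days on which almost the entire market de-risks at once become far more common, which is the procyclical, correlated dynamic the authors warn about.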