
BNP Paribas co-signs the white paper: Risk control of Artificial Intelligence systems

Published on 19.12.2022

BNP Paribas contributed to the drafting of the white paper "Risk control of Artificial Intelligence systems", an initiative led by the Hub France IA association with the support of La Banque Postale and Société Générale.

A partnership to identify risks in AI systems

As part of a partnership with the Hub France IA association, the BNP Paribas Group Data Office and RISK AIR teams took part in the "Banking and Auditability" working group, whose objective was to propose a systematic, operational method for identifying the risks specific to Artificial Intelligence (AI) systems and to recommend remedial measures.

This collaboration involved actors from the three lines of defense of banks' internal control systems and is an excellent example of peers working together on common principles. The benefits of this work on AI concern not only the financial sector but, more broadly, any organisation implementing such systems.

What role does AI play at BNP Paribas?

The development and deployment of Artificial Intelligence is one of the levers of the Bank's 2025 strategic plan. More than 500 use cases are currently in production, and the goal is to double both the number of use cases and the associated value creation by 2025.

The Group's AI & Analytics teams are all contributing to this effort. Use cases cover areas such as operational efficiency, risk management and customer value creation. In addition, BNP Paribas has launched an initiative to industrialise and scale up the use of AI through a collaborative task force, ai@scale, which accelerates work on data science platforms, risk management, value measurement, the operating model for putting models into production, and the pooling of certain assets across the Group.

Focus on AI risk management

Across many industries, banking and non-banking alike, the use of Artificial Intelligence has multiplied over at least the last ten years, giving rise to new risks. These include the use of AI for illegal or malicious activities (deepfakes, phishing, manipulation, etc.), as well as the risk of reproducing biases and the opacity of how these systems operate.

The working group's approach with the Hub France IA consisted of first describing the development and deployment process of AI solutions as a whole, then identifying the risks introduced or exacerbated by AI, and finally analysing in more depth a selection of risks chosen for their importance and their specific link to AI. An inventory of the risks at each stage of the development and deployment of an AI system was thus established, then discussed to reach a shared view of those risks.

The result of this work is available in the white paper (in French):

"Risk control of Artificial Intelligence systems"

Anticipating future European regulations

The start of the work coincided with the European Commission's publication of its proposed AI Act on 21 April 2021. This was an opportunity for the members of the working group to contribute to a common response to the proposed regulation and, more generally, to exchange views on the risk management context specific to Artificial Intelligence systems.

What lessons can be learned from this collaboration?

"The subject of risks and auditability of artificial intelligence models is at the heart of our concerns, given the recent rise of these techniques within financial institutions and given our strong culture in model risk management. I therefore found it very useful to collaborate openly with our peers in other large banking groups. 

These exchanges helped refine our mutual understanding of the relative severity of the various risks associated with artificial intelligence. What surprised me, but also reassured me, was that we were very much aligned: beyond modeling risks, the main perceived sources of risk are data management and a lack of understanding of the strengths and weaknesses of AI as a tool.

Explainability is one of the themes we have explored in particular at BNP Paribas. It is essential to ensure the proper use of models, their adoption and their auditability, and, more broadly, to build confidence in the solution. While academic research on explainable AI has been prolific in recent years, there are still few studies on putting such methods into operational practice. We have refined our understanding of the explainability needs of the people who develop and use these tools on a day-to-day basis."

Lea Deleris, BNP Paribas - RISK AIR

Read the paper: What Does It Mean to Explain? A User-Centered Study on AI Explainability

"The upcoming European regulation on Artificial Intelligence (AI Act) has led us to reflect on the elements of AI that raise questions today. In particular, there is now a greater awareness that a successful adoption of AI is not only quantitative, in terms of the number of users or the performance of the AI model for example, but also qualitative, i.e. that users or regulators must have confidence in both the technology and the way it is implemented in order to respect fundamental principles of fairness and transparency for example. This is why we talk about 'responsible AI'".

Jérôme Lebecq, BNP Paribas - Group Data Office, Data Science Office
