Explainable AI in the financial sector

The way in which AI is used in finance is becoming more and more complex, and this complexity often extends to how a decision is reached. However, the parties that have a stake in financial services, such as customers and regulators, either require or have a right to an explanation, for instance of how personal data is used in lending. This raises the question of what kind of explanation of AI processes each stakeholder requires in a given situation.

Explainable AI (XAI) is the research field that strives to open the black box of algorithms. In our approach, that process starts with getting a clear picture of the kind of explanation that is required in each specific situation for the various types of stakeholders impacted by AI. Subsequent questions explore which forms of AI (models) lend themselves well to explanation, and which XAI solution is best suited to generate that explanation. In our research, we have defined XAI as: a set of methods and techniques to provide stakeholders with an appropriate explanation of the functioning and/or the results of an AI solution, in such a way that the explanation is understandable and addresses the concerns of those stakeholders.
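
To make this concrete, the sketch below illustrates one common family of XAI techniques: a local "what-if" explanation for a single decision. The model, features, and data are hypothetical assumptions for illustration, not the project's actual method; the example only shows how a per-decision explanation can be generated.

```python
# Minimal sketch of a local "what-if" explanation for one hypothetical
# credit decision. Model, features, and data are illustrative assumptions,
# not the project's actual method.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(seed=0)
X = pd.DataFrame({
    "income": rng.normal(40_000, 12_000, 500),
    "debt_ratio": rng.uniform(0.0, 1.0, 500),
    "arrears_count": rng.integers(0, 5, 500),
})
# Synthetic label: 1 = reject the credit application.
y = ((X["debt_ratio"] + 0.15 * X["arrears_count"]) > 0.9).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

applicant = X.iloc[[0]]
base = model.predict_proba(applicant)[0, 1]  # rejection probability

# Replace each feature with the dataset mean and observe how the rejection
# probability shifts; the deltas form a simple per-decision explanation.
for col in X.columns:
    what_if = applicant.copy()
    what_if[col] = X[col].mean()
    delta = base - model.predict_proba(what_if)[0, 1]
    print(f"{col}: contribution to rejection probability {delta:+.3f}")
```

In practice, such raw attributions are only the starting point: the same numbers could be rendered as a plain-language letter for an applicant or as technical documentation for a regulator, depending on the stakeholder's concerns.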

Objective

The aim of the project is to conduct practice-oriented research into explainability in collaboration with organizations in the financial sector, and thereby to identify the preconditions for successfully applying explainability. This process consists of first clearly identifying the relevant stakeholders, then mapping the concerns they have, the information they require to address those concerns, and how that explanation can best be conveyed. The organizations we currently work with include financial service providers and regulators in the sector.

Results

  • A framework for explainable AI with types of stakeholders and types of explanations, specifically geared toward the financial sector. This framework is outlined in the whitepaper 'XAI in the financial sector: a conceptual framework for explainable AI'.
  • We collaborated on exploratory research into explainability with DNB, the AFM, the Dutch Banking Association, and three Dutch banks, in which the framework developed by the HU was applied. The results of this research can be found here. Furthermore, an academic paper was submitted to and accepted by the 33rd Benelux Conference on Artificial Intelligence.
  • Together with consortium partners Floryn, Researchable, and de Volksbank, we conducted a one-year research project into the aspects that need consideration when implementing explainable AI. As a result, a checklist has been published, along with a whitepaper in which the checklist is explained. Furthermore, an academic paper has been submitted to the HHAI2023 conference. More information on this project can be found on this page.
  • A subsidy application for a two-year RAAK-SME project has been granted. This project, called FIN-X, aims to develop tools that give internal users of AI applications more and better insight into how those applications work and the outcomes they produce. More information about this project can be found via the project link.


In collaboration with the Copenhagen Business School and the Association of Insurers, Hogeschool Utrecht conducted research in 2023 on the role of explainable AI in fraud detection for insurance claims. The results of the research have been documented in a whitepaper.
The key conclusion of the research is that implementing AI in fraud detection is a business transformation that raises numerous ethical and organizational considerations. The explainability of the AI system is deemed crucial, both from an ethical standpoint (as part of the transparency principle) and from a practical perspective (as a means to gain trust and acceptance from internal stakeholders and to enable effective collaboration between humans and machines). The practical implementation of explainable AI remains a topic of discussion and research within the industry.


Duration

01 June 2020 - 31 March 2025

Approach

At Hogeschool Utrecht, we strive for practice-oriented research and therefore focus our XAI research at the level of use cases. For each use case, we aim to identify which stakeholders need which explanation. This approach allows us to link the literature to practice and to report novel insights. An example of a use case under investigation is lending to consumers (consumer credit). Ultimately, we are working toward a validated framework with accompanying principles and guidelines for XAI geared to the entire financial sector.
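
As a hedged illustration (the actual framework and its categories are defined in the whitepapers referenced above), the sketch below shows one way a use case's stakeholder-to-explanation mapping could be recorded; all stakeholders, concerns, and explanation types listed are hypothetical examples for the consumer-credit case.

```python
# Hypothetical record of a use case's stakeholder/explanation mapping.
# The names and categories are illustrative; the actual framework is
# defined in the HU whitepaper, not here.
from dataclasses import dataclass

@dataclass
class ExplanationNeed:
    stakeholder: str       # who receives the explanation
    concern: str           # why they need it
    explanation_type: str  # e.g. global model logic vs. per-decision rationale
    medium: str            # how it is best conveyed

consumer_credit = [
    ExplanationNeed("applicant", "understand a rejection",
                    "local, counterfactual", "plain-language letter"),
    ExplanationNeed("credit officer", "validate individual decisions",
                    "local, feature attribution", "dashboard"),
    ExplanationNeed("regulator", "verify compliance with lending rules",
                    "global, model documentation", "technical report"),
]

for need in consumer_credit:
    print(f"{need.stakeholder}: {need.explanation_type} via {need.medium}")
```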

Financial service providers or other parties in the financial ecosystem that are interested in working with us are invited to contact us.
