
How explainable Machine Learning can improve the acceptance of Artificial Intelligence in Finance

The use cases for applying machine learning to improve existing processes in finance are numerous. During my career, I have accompanied many companies through the development and implementation of machine learning models in the field of risk and fraud management.

Machine learning has added powerful new tools to our analytical toolbox, and their popularity has only grown since.

However, as machine learning has become more widespread, sceptical voices have also grown louder. Decisions made by an algorithm are by nature opaque at first sight. To the outsider, data is fed into a model, which then performs its “magic” inside a black box and ultimately delivers a result that may seem anything from logical, illogical, helpful, or unjust to downright cryptic.

In order to spread machine learning methods even further, it is crucial to increase the ability of all stakeholders to understand and interpret the delivered results. This will boost the acceptance of artificial intelligence within organisations that use it to streamline their financial decisions, but also among regulatory bodies and the public affected by the decisions these algorithms make. The good news is: acceptance can be increased significantly by using various approaches to make machine learning models more explainable.

Global interpretability: How the model makes decisions

First, global interpretability helps to understand how the model works as a whole. It means providing an answer to questions such as “Which features were used to build the model, and how important is each feature for the result?” For example, when using machine learning in fraud prevention and detection, features such as a device’s metadata, user account data and personal data such as a user’s age could be taken into consideration.
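
To make this tangible, here is a minimal sketch of how such a global feature ranking could be computed with scikit-learn’s permutation importance. The feature names, the synthetic data and the thresholds are illustrative assumptions, not taken from a real fraud dataset.

```python
# Minimal sketch: global feature importance for a (hypothetical) fraud model.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000
X = pd.DataFrame({
    "user_age": rng.integers(18, 80, n),
    "account_age_days": rng.integers(0, 3_000, n),
    "device_is_new": rng.integers(0, 2, n),
    "orders_last_30d": rng.poisson(2, n),
})
# Synthetic label: fraud is assumed more likely for new devices and young accounts.
y = ((X["device_is_new"] == 1) & (X["account_age_days"] < 100)
     | (rng.random(n) < 0.02)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Permutation importance: how much does the score drop when one feature
# is shuffled? This gives a model-agnostic, global ranking of features.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:20s} {score:.4f}")
```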

Each of these features may carry a different weight when building a model. At the same time, the interaction of features within a model may paint a completely different picture. For example, while user age may be of minor importance as a single variable, the combination of user age and device data could be a valuable feature for a machine learning model built to combat fraud. Automated feature engineering may help to create such new features.
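
As a hedged illustration of such automated interaction features, scikit-learn’s PolynomialFeatures can generate pairwise combinations of existing columns; the column names and values below are purely hypothetical.

```python
# Minimal sketch: automatically generating interaction features.
import pandas as pd
from sklearn.preprocessing import PolynomialFeatures

X = pd.DataFrame({
    "user_age": [22, 45, 31],
    "device_is_new": [1, 0, 1],
    "orders_last_30d": [5, 1, 0],
})

# interaction_only=True creates pairwise products such as
# user_age * device_is_new, without squaring individual features.
poly = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
X_interactions = pd.DataFrame(
    poly.fit_transform(X),
    columns=poly.get_feature_names_out(X.columns),
)
print(X_interactions.columns.tolist())
# Expected: the three raw features plus their pairwise interaction terms,
# e.g. 'user_age device_is_new'. Whether an interaction is actually useful
# is then decided by the model (for instance via the importance ranking above).
```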

Local interpretability: A single decision explained

Secondly, interpretability is of major importance at the local level. If a given user is surprised to be offered goods and services only against prepayment (because he has been classified as a potential fraudster), checking each of the model’s features against that customer’s data and behaviour helps to explain how and why this result came about.
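
One simple way to produce such a per-customer explanation is to decompose the prediction of a linear model into additive per-feature contributions. The sketch below assumes a logistic regression and illustrative feature names; for non-linear models, libraries such as SHAP or LIME provide analogous local attributions.

```python
# Minimal sketch: explaining a single decision of a logistic regression
# by splitting its log-odds into per-feature contributions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2_000
X = pd.DataFrame({
    "user_age": rng.integers(18, 80, n),
    "account_age_days": rng.integers(0, 3_000, n),
    "device_is_new": rng.integers(0, 2, n),
})
# Synthetic label, assumed for illustration only.
y = ((X["device_is_new"] == 1) & (X["account_age_days"] < 100)
     | (rng.random(n) < 0.02)).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

# Explain one flagged customer: coefficient * standardized feature value
# gives each feature's additive contribution to the log-odds of "fraud".
customer = X.iloc[[0]]
z = scaler.transform(customer)[0]
contributions = model.coef_[0] * z
for name, value, contrib in zip(X.columns, customer.iloc[0], contributions):
    print(f"{name:20s} value={value:8.1f}  contribution to log-odds={contrib:+.3f}")
print(f"intercept (baseline log-odds): {model.intercept_[0]:+.3f}")
```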

Even though interpretability is a crucial factor for the success and acceptance of machine learning models, it is not always considered in the conception phase of a model. What is your experience with explainable machine learning? Do you use machine learning models in your organisation? How are you tackling this challenge? I would love to receive an email with your questions and any experiences or comments you may want to share with us.
