Ensuring customer trust and fair decisions in an automated world
Banks are increasingly using intelligent algorithms to automate decisions that affect customers’ lives, from granting loans to authorising transactions. But with algorithmic logic now outstripping the powers of the human brain, customers need assurance they can trust the decisions being made. In short, have the algorithms got it right?
Our data strategists and data scientists partnered with Rabobank to develop a new framework that will enable the bank to control the risks associated with developing and deploying algorithms. As part of our approach, we applied AI in a completely new way to check the decision-making logic driving the bank’s algorithms.
The result is a robust and practical framework to check that algorithms are working as they’re intended to, generating decisions that are fair and unbiased. Going forward, the framework can be developed in line with the rapid evolution of AI, allowing Rabobank to ensure its IT audits remain effective into the future.
- Brought a unique mix of expertise in AI development and IT audit to develop a ground-breaking solution
- Identified more than 50 risks spanning the algorithm lifecycle, allowing for robust risk mitigation at every stage
- Applied AI analysis in a completely new way to test the decision models incorporated in the bank’s algorithms
- Created a practical framework to inform audits and enable ‘assurance by design’ in the development of new algorithms
Innovating for smarter AI audit
Analysis carried out by complex algorithms is increasingly behind many business decisions. Algorithm-generated insights can highlight opportunities to serve customers better, enable organisations to make more accurate predictions, help prevent fraud and identify new business models.
Because the insights AI delivers can affect so many stakeholders – customers, suppliers, employees, investors – they need to be fair, as well as secure and compliant with data privacy laws.
But as algorithms become more complex, this is becoming harder to guarantee. The newest algorithms use artificial intelligence (AI) and machine learning (ML) to continually enhance their analytic capabilities, but the processes and logic they develop are too complex for the human brain to grasp. This means conventional IT assurance falls short when it comes to verifying that algorithms are working as intended. For customers who might end up on the wrong end of a biased or erroneous decision, this is bad news.
Partnering to find a solution
We’ve been working with Rabobank since 2012 to find innovative ways to apply AI to improve their sales operations and the experience their customers receive. As AI has evolved, we’ve helped them keep pace by creating a comprehensive algorithm assurance framework. This new framework will enable the bank to identify and mitigate the risks associated with algorithms and to give customers confidence in the bank’s decisions. We also created an innovative way to apply AI to test the decision models underpinning the bank’s AI/ML algorithms and built this into the framework.
We were uniquely equipped for the role. Our deep data science expertise, including experience in developing AI models, means we understand the advanced technology that sits behind algorithms and the systems that support them. We married this with our experience in IT audit to expand Rabobank’s existing IT risk control framework to meet the challenge of intelligent algorithms. Some 40 per cent of the framework is now specific to AI.
Uncovering development risks
We worked as a blended team with Rabobank, supplementing the bank’s expertise with our data science capabilities. In an initial scoping session, we considered the areas where algorithms might no longer be working as intended. For example, were algorithms accurately granting clearance as part of Know Your Customer checks? Were they assessing credit-worthiness without showing bias towards or against different groups? We also explored opportunities to apply AI and ML to understand and manage these risks.
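One common way to test whether decisions show bias towards or against different groups is to compare outcome rates across those groups. The sketch below illustrates this idea with a demographic-parity check; the data, group labels and threshold are illustrative assumptions, not Rabobank's actual metrics.

```python
# A minimal sketch of a demographic-parity bias check: compare approval
# rates across two customer groups. All data here is randomly generated
# for illustration only.
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical decisions (1 = approved) and group labels for 10,000 customers.
decisions = rng.integers(0, 2, size=10_000)
groups = rng.choice(["A", "B"], size=10_000)

rate_a = decisions[groups == "A"].mean()
rate_b = decisions[groups == "B"].mean()
parity_gap = abs(rate_a - rate_b)

print(f"approval rate A: {rate_a:.3f}, B: {rate_b:.3f}, gap: {parity_gap:.3f}")

# A small gap suggests (but does not prove) group-neutral decisions;
# in practice, audit thresholds would be set per use case.
```

In a real audit, the group labels would come from protected or sensitive attributes, and demographic parity would typically be combined with other fairness metrics.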
We then partnered with Rabobank to interview the data scientists who develop the bank’s algorithms. We wanted to understand the procedures they follow through the development process, which is often rapid and intense, and the risks the process might throw up. These interviews allowed us to pinpoint all the risks the new control framework needed to cover.
Next, we needed to make sense of our findings. We combined our expertise in AI and IT assurance to interpret the data from a risk and audit perspective. We identified more than 50 risks, spanning 11 different stages of AI development and production. These included, for example, the risk that data feeding into algorithms might contain mis-fielded values, that inadequate test metrics might be used at the pilot stage, that an algorithm might drive unethical behaviour, or that outcomes might not be validated once an algorithm is in operation.
Innovating with AI
We then turned our attention to the decision models driving the algorithms. The decision model is the template for the logic that an algorithm applies to analyse data inputs. We wanted to check that the decision models being used still matched the original models and hadn’t been distorted by AI/ML-guided evolution. In other words, were the algorithms still doing what they were supposed to?
In each case we used machine learning algorithms to extract the apparent decision model being used. By analysing this underlying model, we were able to compare it to the documentation on how the model should behave. This allowed us to demonstrate that the rules were being adequately implemented by the decision model.
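The idea of extracting an apparent decision model can be sketched as follows: probe the deployed model with data, then search for a simple, interpretable rule that best reproduces its decisions, so the recovered rule can be compared with the documentation. The feature names, the toy "black box" and the single-threshold rule search below are illustrative assumptions, not Rabobank's actual systems or methods.

```python
# A minimal sketch of decision-model extraction: probe a black-box model
# with data, then find the single-feature threshold rule that best
# reproduces its decisions (a crude, interpretable surrogate).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical customer features: income and existing debt (both in k EUR).
income = rng.uniform(10, 150, size=5000)
debt = rng.uniform(0, 60, size=5000)

def black_box(income, debt):
    """Stand-in for the deployed algorithm: in this toy case it secretly
    approves whenever income exceeds 90."""
    return income > 90

decisions = black_box(income, debt)

# Search over candidate thresholds on each feature for the rule with the
# highest fidelity (agreement with the black box's decisions).
best = None
for name, feature in [("income", income), ("debt", debt)]:
    for t in np.linspace(feature.min(), feature.max(), 200):
        fidelity = np.mean((feature > t) == decisions)
        if best is None or fidelity > best[2]:
            best = (name, t, fidelity)

name, threshold, fidelity = best
print(f"extracted rule: {name} > {threshold:.1f} (fidelity {fidelity:.3f})")
```

In practice, richer surrogates (such as decision trees) would be fitted to the model's outputs, and the extracted logic compared clause by clause against the documented decision model.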
We also generated fresh synthetic data: virtual customers that are not in the bank’s database but are very similar to the ones it already knows. This gave us additional, fresh data points to test the model with, demonstrating that it works not just for known cases but also for potential future cases.
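One simple way to produce such virtual customers is to fit a generative model to the known customers' features and sample new points from it. The sketch below uses a multivariate normal fit; the features, the decision rule and the approach itself are illustrative assumptions.

```python
# A minimal sketch of synthetic "virtual customer" generation: fit a
# multivariate normal to known customers and sample statistically
# similar, previously unseen cases to exercise the decision model.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical known customers: income and existing debt (both in k EUR).
known = rng.uniform(low=[10, 0], high=[150, 60], size=(1000, 2))

# Fit a simple generative model to the known data.
mean = known.mean(axis=0)
cov = np.cov(known, rowvar=False)

# Sample virtual customers: similar to the known ones, but not in the database.
synthetic = rng.multivariate_normal(mean, cov, size=500)

def decision_model(X):
    """Illustrative documented rule: approve when income - 2 * debt > 30."""
    return (X[:, 0] - 2 * X[:, 1] > 30).astype(int)

# Exercise the model on cases it has never seen and inspect the outcomes.
approval_rate = decision_model(synthetic).mean()
print(f"approval rate on synthetic customers: {approval_rate:.2%}")
```

Real customer data would need a richer generative model (and careful handling of categorical features), but the principle is the same: test behaviour beyond the cases already on file.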
Delivering a framework for the future
We combined our risk analysis and AI insights into an integrated framework that enables Rabobank to audit its intelligent algorithms effectively. The framework is a highly practical tool. It offers a comprehensive list of risks over the algorithmic lifecycle and describes the controls needed to mitigate each one. It includes questions to assess whether controls are being applied and identifies evidence that could indicate they are.
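The structure described above – each risk paired with a control, audit questions and supporting evidence – might be represented along these lines. The field names and the sample entry are illustrative assumptions, not the framework's actual contents.

```python
# A minimal sketch of one framework entry: a risk paired with its
# mitigating control, audit questions, and the evidence that would
# indicate the control is being applied. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class FrameworkEntry:
    stage: str                                      # lifecycle stage the risk belongs to
    risk: str                                       # what could go wrong
    control: str                                    # mitigation that should be in place
    audit_questions: list = field(default_factory=list)
    evidence: list = field(default_factory=list)    # indicators the control is applied

entry = FrameworkEntry(
    stage="Data preparation",
    risk="Input data may contain mis-fielded values",
    control="Automated input validation with logged rejections",
    audit_questions=["Are validation rules defined and versioned?"],
    evidence=["Validation rule set", "Rejection logs for the audit period"],
)

print(f"{entry.stage}: {entry.risk} -> {entry.control}")
```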
The ground-breaking framework is a living document that can be developed in line with the rapid evolution of AI, allowing Rabobank to ensure its IT audits remain relevant and effective into the future. The bank will also be able to use the framework to enable ‘assurance by design’, building in control as it develops more algorithms to automate more decisions. As a result, customers looking to secure mortgages, get business loans, make transactions and more, can trust the bank to get these decisions right.