PA IN THE MEDIA

Is your AI doing what it's supposed to do?

First published in Executive People

Increasingly, AI influences decision-making in organizations. It is therefore essential that organizations keep the underlying algorithms under control, so that they can rely on the advice AI provides. To this end, Rabobank and PA Consulting have developed a new assurance programme that not only provides insight into how AI arrives at particular conclusions, but also offers concrete recommendations for overcoming any challenges it uncovers.

Organizations are producing and consuming ever more data, from which they try to extract as many valuable insights as possible. Data analysis helps improve customer experience, make forecasts more accurate, manage risks better and test new business models. The sheer volume of data within modern organizations can no longer be analysed manually; for granular insights, organizations must rely on automation through algorithms. That reliance pays off: AI continuously learns to understand datasets better, making the insights it provides increasingly valuable.

Today, AI-driven, intelligent algorithms are often seen in production environments, where they have a direct impact on customers, employees and suppliers. This poses additional challenges, because the greater the impact, the more important it becomes to ensure that these algorithms function correctly.

As an organization, you must be able to rely on automated processes to actually deliver the desired results. And when customers and other users rely on algorithms, they must be confident that those algorithms are unbiased, secure, and compliant with laws and regulations.

Existing assurance programmes are not sufficient

To provide this level of trust, organizations have relied on IT assurance programmes for years. Unfortunately, these programmes are not suitable for modern, complex algorithms. First, the mechanisms in these algorithms are often hard to capture in simple rules, making them difficult for people to understand. Second, these algorithms are often developed under enormous time pressure. Because every organization wants to extract value from its data as quickly as possible, the focus in the development phase is often on speed rather than manageability. Sometimes it is not even clear who within the organization is responsible for how an algorithm functions.

There is a parallel with application security, where best practice is "security by design": making sure security is built in from the start of development. For algorithms, the equivalent should be "assurance by design". As in application development, however, this is not yet standard practice. As a result, organizations run the risk that auditors will have to analyse algorithms after the fact to determine whether they functioned correctly, and existing assurance programmes are not designed for this.

The picture becomes even more complex because algorithms also raise ethical questions. It is not just about security and privacy, but also about built-in biases in algorithms or manipulation of datasets. Assurance programmes do take these aspects into account, but in the end they stick to general recommendations that do not directly offer practical solutions to the problems they uncover.

Algorithm Assurance and AI Assurance Support

It is clear that assurance programmes need an update. Rabobank and PA Consulting have therefore jointly developed a new algorithm assurance framework that, unlike many existing frameworks, assesses the entire lifecycle of AI models. This allows auditors to identify errors more quickly and to indicate which concrete improvements need to be made in processes, organization or technology.

An important pillar of this algorithm assurance approach is a new risk management model. The AI Risk Control Framework fully maps out the risks of using an AI model. It covers the entire lifecycle, from design and development through to shutdown and disconnection of the datasets used. Not only is the robustness of the algorithm itself assessed, but also the organization and governance surrounding it, including the production environment and the method of data management. The framework also focuses on sensitive areas such as data privacy, security and ethics.

This assurance programme also incorporates new professional insights as they emerge, including insights into the risks that arise throughout the lifecycle of an AI model. For each step in the lifecycle, the potential risks have been mapped out, together with the control measures that minimise them. The programme also covers the evidence required for audits to demonstrate that the correct control measures are in place. This makes it suitable for almost any AI model.
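To picture what such a per-step mapping of risks, controls and audit evidence might look like, here is a minimal sketch in Python. The stage names, risks and controls below are purely illustrative assumptions, not the actual taxonomy of the AI Risk Control Framework:

```python
# Hypothetical sketch of a lifecycle risk/control mapping.
# All stage, risk and control names are illustrative only.
AI_LIFECYCLE_CONTROLS = {
    "design": {
        "risks": ["unclear ownership", "biased problem framing"],
        "controls": ["assign model owner", "document design choices"],
        "evidence": ["design review sign-off"],
    },
    "development": {
        "risks": ["data leakage", "unrepresentative training data"],
        "controls": ["train/test separation", "data profiling"],
        "evidence": ["validation report"],
    },
    "production": {
        "risks": ["model drift", "unauthorised data access"],
        "controls": ["drift monitoring", "access logging"],
        "evidence": ["monitoring dashboards", "audit logs"],
    },
    "decommissioning": {
        "risks": ["orphaned datasets"],
        "controls": ["dataset disconnection checklist"],
        "evidence": ["deletion confirmation"],
    },
}

def audit_gaps(implemented_controls):
    """Return, per lifecycle stage, the controls from the sketch
    above that are not yet in place."""
    return {
        stage: [c for c in spec["controls"] if c not in implemented_controls]
        for stage, spec in AI_LIFECYCLE_CONTROLS.items()
    }
```

An auditor-style check could then call `audit_gaps({"drift monitoring", "assign model owner"})` to list, per stage, which controls still lack evidence.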

In addition, a new solution has been developed to test the output of AI models: the AI Assurance Support Model. Even if the underlying algorithm is not accessible, this model can test the results that AI models produce. With many SaaS solutions, for example, the algorithm is a 'black box' that you as an end user cannot reach. Using its own AI, the Support Model checks the incoming and outgoing data flows, allowing the logic of the AI model to be tested.

Three important conclusions

Rabobank and PA draw three main conclusions from these new Algorithm Assurance solutions, compared to traditional audit approaches in IT:

1. Auditors must combine assurance, IT and data science. Data scientists can ask the right questions about AI models, for example about the choices made during the design process. The input of a data scientist is also necessary for developing AI Assurance Support models.

2. The addition of data science also changes the conventional audit process. The best way to develop a statistically significant AI Assurance Support Model is to have a data scientist constantly refine the model. The audit process must integrate this iterative approach to achieve optimal results.

3. Algorithm Assurance actually shows whether AI models are doing what they are supposed to do. This is the main difference from conventional auditing approaches. This enables Rabobank and PA’s approach to create trust among management, employees, customers and supervisors.

• Authors:
Hans Roelfsema, Data Transformation Lead, PA Consulting
Michiel Krol, Head of Audit Data Excellence, Rabobank
Theo-Jan Renkema, Chief IT & Digital Audit - Rabobank & Professor of Data Analytics & Audit - Tilburg University

