Insight

Four steps to unlock GenAI value in Nordic financial services


By Stefan Knapp, Martin Jessen

The potential of generative AI is no longer in question. Even if it were, the genie is out of the bottle. Our research shows that 95 percent of organisations across the Nordics are using some form of AI, and 55 percent use generative AI multiple times weekly. As this shift gains pace, banks are seeking new ways to harness generative AI’s potential to revolutionise the way they operate, function, and serve customers and society.

In the Nordic financial sector, machine learning and AI have long been on the radar, particularly in areas like fraud detection, credit scoring, and process automation. Now, the emergence of generative AI has propelled these organisations fully into the era of the intelligent financial services organisation. And the debate has shifted to two issues that are inextricably linked: how to gain value from AI; and the control parameters needed to build trust and ensure effective governance.

To better understand the opportunities and complexities ahead, we partnered with Copenhagen Fintech – interviewing Nordic financial services leaders to explore how they can set the right guardrails for generative AI success.

From our conversations, we found that while even experienced hands can struggle to keep pace with the new wave of governance and ethical complexities brought about by generative AI, there are four broad steps that leaders can take to ensure their usage is responsible, risk-managed, and more likely to deliver a return.

Report
Unlock responsible GenAI value in financial services
Our Responsible AI report explores four practical steps Nordic financial institutions can take to capture value from generative AI while maintaining trust, control, and effective governance.

Determine where to play – and the rules of the game

The first step is to determine whether any generative AI use cases, or types of generative AI, should be out of bounds. One example might be using generative AI to create targeted marketing for individuals – at least until the capability matures.

Defining these out-of-bound areas helps to focus use-case development. As limiting as descoping use cases may seem, nothing is worse than spending time, resources, and energy on innovation only to see the work, and the team’s motivation, discarded at a later stage.

At the same time, it’s vital to determine the principles of action in the areas where your organisation will play. To make it real, senior leadership will need to be seen to lead by example, embedding core principles such as fairness, transparency, and accountability into culture, processes, and decision-making.

In our experience, organisations often struggle with generative AI use cases not through lack of desire or talent – but because they lack clarity around what level of risk is acceptable and the supporting governance mechanisms. Contrary to the myth that governance works against innovation, we often find the opposite. Effective and efficient governance enables innovation, and the lack of risk-based controls and governance acts as an inhibitor.

Defining a risk appetite can be an uncomfortable topic, but financial organisations are built on evaluating and taking on acceptable risks. Ida Bach, Head of GenAI at AL Bank, told us: “It is essential that risk appetite and risks are defined and discussed at strategic level/C-level, so we do not have to start from scratch with each use-case. By clearly outlining the boundaries for acceptable risk, we enable informed decision-making and avoid reinventing the risk assessment for every single solution.”

Being able to articulate, as an organisation, whether you want to be a first mover, close follower, or late adopter is key. Then, articulate which types of mistakes could be tolerated, and which could not, while maturing in the AI space. This lets you go into development with open eyes and clear guidelines.

Address cross-cutting risks

The mass adoption of AI has introduced several risks: some entirely new, and some new twists on existing ones. Whether new or evolving, the crucial point is that AI-related risks are cross-cutting, encompassing model, compliance, security, and operational risk. A multidisciplinary team is therefore needed to monitor and address them, acting as a hub with ‘spoke’ input from other specialist areas. It’s the same approach that many leading organisations have taken with ESG-related risk.

Camilla Neuenschwander, Chief Special Adviser with the Danish Financial Supervisory Authority, told us: “We do not go out with a message to companies telling them not to use artificial intelligence because it is dangerous. We simply say that you need to be aware of the risks involved when you choose to use it.”

One such cross-cutting risk that impacts trust – both internally and externally – arises where generative AI lacks explainability. Here, the Model Risk function should step in and investigate the model using MLOps (machine learning operations) practices and metrics that can unmask potential biases. The benefit of multidisciplinary support is that Compliance colleagues are geared to ask questions such as, “Can the transcription model understand all regional accents?”.
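To make the idea of bias-unmasking metrics concrete, here is a minimal sketch of one such check: a demographic parity gap, comparing positive-outcome rates across customer groups. The group names, numbers, and threshold for concern are purely hypothetical, and real model-risk tooling would apply many more metrics than this one.

```python
from collections import defaultdict

def demographic_parity_gap(predictions):
    """Gap between the highest and lowest positive-outcome rates
    across groups. predictions: iterable of (group, approved) pairs."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, approved in predictions:
        totals[group] += 1
        if approved:
            positives[group] += 1
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model outputs for two customer groups
outcomes = ([("group_a", True)] * 80 + [("group_a", False)] * 20
            + [("group_b", True)] * 55 + [("group_b", False)] * 45)
gap, rates = demographic_parity_gap(outcomes)
# A large gap between groups would prompt further investigation
```

A check like this answers a narrow question; a Compliance colleague’s question about regional accents, for instance, would need a different test set and metric entirely.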

Approaches such as these are frequently referred to as having a ‘human in the loop’. We prefer the concept of a ‘human on top’ – where a person can intervene and override where necessary, and – crucially – show the organisation that humans are in control. This control is vital if you plan to deploy autonomous AI agents as part of an ‘agent factory’.

Michael Munck, CEO at 2021.AI, told us: “If you truly want an agent factory, then governance and risk management need to run in a super streamlined way beneath the development layer, and you must have human oversight at the top, constantly monitoring what’s happening.”

Secure the path to production with early-stage interventions

Across almost all the experts we interviewed, there was consensus that it is crucial to conduct early, cross-functional screening on generative AI use cases before anything is built.

A simple, early-stage screening questionnaire can capture the key elements: the type of AI used; potential consequences; what data it plans to use; and the current level of oversight. Ultimately, this data will also help build a foundational AI inventory – ensuring your organisation has an overview of the tools in development – helping to determine the tools and processes that have achieved value.
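To illustrate how such a screening questionnaire could double as the seed of an AI inventory, here is a minimal sketch in Python. The field names and oversight categories are illustrative assumptions only; each organisation would define its own.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UseCaseScreening:
    """One completed early-stage screening questionnaire."""
    name: str
    ai_type: str                  # e.g. "LLM chatbot" (hypothetical category)
    potential_consequences: str
    data_sources: List[str]
    oversight_level: str          # e.g. "human on top" or "fully automated"

@dataclass
class AIInventory:
    """Running inventory of tools in development, built from screenings."""
    entries: List[UseCaseScreening] = field(default_factory=list)

    def register(self, screening: UseCaseScreening) -> None:
        self.entries.append(screening)

    def needs_review(self) -> List[str]:
        # Flag use cases with no human oversight for early intervention
        return [e.name for e in self.entries
                if e.oversight_level == "fully automated"]

inventory = AIInventory()
inventory.register(UseCaseScreening(
    name="transcription-summariser",
    ai_type="LLM",
    potential_consequences="customer-facing errors",
    data_sources=["call recordings"],
    oversight_level="fully automated",
))
```

The point of the sketch is the shape of the data, not the tooling: once every use case answers the same short questionnaire, the inventory and its review queue fall out of it for free.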

The business, now understanding the potential risks, then has a clear choice: either to stop the development of something that would later struggle to scale across the organisation; or to build the tool with a clear mandate to proceed, and with clear sight of the main risks to be mitigated.

This approach also ensures that the relevant business owners and technology people fully understand any concerns and risks relating to the solution, while also ensuring that everyone is up to date on the latest developments.

Find out more

Generative AI continues to develop at pace. As Thomas Eatmon, Head of Responsible AI at Danske Bank, told us: “We are already preparing ourselves for the next wave, because we have only just seen the beginning of what AI can do.” In response, the aim is to deploy responsibly, not comprehensively. By setting the right principles and guardrails now, you can maintain control and better prepare to create and capture value.

About the authors

Stefan Knapp, PA financial services expert
Martin Jessen, PA Responsible AI expert

