PA OPINION

Man vs. Machine: From RoboCop to the future of AI in European financial services

We are already in the middle of a global race to leverage AI to redefine business and value creation, and the EU is aiming to drive and enable European businesses. Nordic financial sector executives need to understand the implications of the AI agenda being defined. Only with such an understanding can they leverage the opportunities, foresee threats, control the associated risks and ensure compliance with future requirements related to AI.

RoboCop: You are under arrest.

Dick Jones: Looks like I'm in pretty serious trouble. You better take me in.

RoboCop: I will. [RoboCop glitches and twists]

Dick Jones: What's the matter, officer?

RoboCop: [RoboCop falls to the ground on one knee]

Dick Jones: I'll tell you what's the matter. It's a little insurance policy called Directive Four - my little contribution to your psychological profile. It says that any attempt to arrest a senior officer […] results in shutdown. What did you think, that you were an ordinary officer? No, you're our product! And we can't have our products turn against us, can we?

Taken from the first RoboCop movie, released in 1987, this quote is an example of the classic AI control problem: how to build a superintelligent agent that aids its creators without inadvertently building one that harms them.

This is the same control problem that led the late Stephen Hawking and numerous AI experts to sign the open letter on artificial intelligence, calling for research on the societal impact of AI. “The development of full artificial intelligence (AI) could spell the end of the human race. It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded,” Hawking warned.

So, what does this have to do with financial services? The chatbots of Nordea, Danske Bank, DNB or any other bank will hardly take over and create hell on earth for humans. At least, not yet.

However, AI develops rapidly and is used in an ever-increasing number of ways. Geopolitical dynamics and differences in culture create different perspectives on how data, the lifeblood of AI, can and should be collected and used, and the boundaries within which AI should operate. We need a framework that stimulates competition, innovation and value creation while we remain in control and true to our values. This requires serious debate in financial services.

The EU has shaped most aspects of European financial services regulation. Now it sees the need to be a driving force in enabling and shaping the future of European AI. So, what exactly does this mean, which processes are ahead of us and what can we expect the implications to be for Nordic financial services?

Following the 2017 review of the EU Digital Single Market Strategy, the EU has pushed a string of initiatives to strengthen its AI competitiveness by 2025. In 2018, a large number of EU and EEA countries agreed to join forces and launch the EU Strategy for AI, outlining a European approach to boost investment and set ethical guidelines for AI use. To engage with stakeholders and leverage external expertise, the European AI Alliance and High Level Expert Group (HLEG) on AI have been established.

The AI Alliance is a forum engaged in a broad and open discussion of all aspects of artificial intelligence development and its impacts. The HLEG consists of selected AI experts from academia, civil society and industry. It supports the AI strategy implementation by developing recommendations on future policy and on the ethical, legal and societal issues associated with AI, including socio-economic challenges.

In 2019, the HLEG published a revised definition of AI, as well as a set of new ethics guidelines for trustworthy AI that included a detailed AI ethics assessment for firms to use. The assessment has now been piloted in real-life AI development and, based on the feedback, a revised version is expected in 2020.

In parallel, revisions of existing EU legal frameworks, for example regarding safety and liability, and any needs to introduce new legislation are being analysed. A key finding so far is that the AI requirements for “transparency, traceability and human oversight” are not well enough covered, and we can therefore expect these areas to be reinforced.


In February 2020, the EU reached a new milestone when it published the EU White Paper on Artificial Intelligence and The European Data Strategy.

The whitepaper argues that “to address the opportunities and challenges of AI, the EU must act as one and define its own way, based on European values”, with trustworthy AI as the path to industrial strength.

It warns fragmentation would “undermine the objectives of trust, legal certainty and market uptake,” but recognises it has already started, citing Denmark’s newly launched Data Ethics Seal as one example. A “regulatory and investment-oriented approach” is suggested, “with the twin objectives of promoting the uptake of AI and addressing the risks.”

Introducing new legislation specifically on AI “may be needed […] to make the EU legal framework fit for the current and anticipated technological and commercial developments.” However, overly heavy legal restrictions may hamper innovation, so “a risk-based approach” is proposed to “ensure that regulatory intervention is proportionate.” To identify high-risk AI applications, the whitepaper proposes two criteria:

  1. The AI application is employed in a sector where significant risks can be expected to occur.
  2. The AI application is used in such a manner that significant risks are likely to arise, for instance producing legal or similarly significant effects for the rights of an individual or a company.

The financial services sector is likely to be seen both as a risky sector and as having AI applications which cause significant risks. In addition, some AI applications, such as remote biometric identification for Know Your Customer processes, will be considered high risk regardless of sector.

Financial institutions will therefore likely face regulatory requirements around:

  • the data used to train AI systems
  • the keeping of records in relation to the programming of the algorithm, the data used to train the systems and, in some cases, the data itself
  • clear information as to the AI system’s capabilities and limitations, and when citizens are interacting with an AI system and not a human being
  • the robustness and accuracy of AI systems in all their tasks
  • human oversight.

In the time ahead, these requirements will be clarified, interpreted and detailed for financial services. While the whitepaper on AI advances the thinking on European AI regulation, the European Data Strategy aims to power the European data economy with the data and capabilities needed for competitiveness and innovation. Fundamental to the strategy is a single market for data where:

  • data can flow within the EU and across sectors for the benefit of all, bringing together and connecting a lot of the data that is already being collected
  • European rules for privacy and data protection, as well as competition law, are fully respected
  • the rules for access to, and use of, data are fair, practical and clear.

The ambition for 2025 is that €4-6 billion should be invested in common, interoperable European data spaces, next generation standards, tools and infrastructures to store and process data, and a European federation of cloud infrastructure and services. The aim is to grow the value of the EU data economy from €301 billion in 2018 to €829 billion in 2025, an increase from 2.4% to 5.8% of EU GDP.

AI is a general-purpose technology and much of the above applies to the financial services sector, even if it’s not sector specific. What is sector specific is the financial data space. It will enable better sharing of data and build further on PSD2 and the EU’s efforts to enable open banking.

In other words, financial institutions can expect more data to become available for innovation, but also changes in how the data should be reported. Furthermore, they can expect the push for improved and wider-reaching open banking to continue. The EU Commission explicitly states that it “will continue to ensure full implementation of PSD2 and explore additional steps and initiatives building on this approach”. More details on concrete initiatives in this area will be known in the upcoming Digital Finance Strategy, to be released in Q3 2020.

