Over the last five years there has been a massive increase in the data gathered, stored and reported as financial services firms strive for fully digital operations and respond to regulatory demands. With that growth comes a whole new set of risks.
Firms now have more data to protect at exactly the moment cyber attackers have become more sophisticated. All of that has made them more vulnerable to attacks and technology failures.
The potential fallout from these is also increasing. We have all seen the impact of recent technology failures on consumers and the financial system. In the case of UK bank TSB, heads rolled, and many would say justifiably so. For attacks that involve personal data, the stakes are now even higher as a result of the huge fines that can be imposed under GDPR [the General Data Protection Regulation]. This is a global issue too: in the US, state lawmakers are seeking to introduce similar regulation – for example the California Consumer Privacy Act – to protect consumers from the risk of their data being poorly handled by firms.
A further source of risk is the range of legacy technology used by financial services firms, which requires specific skill sets to maintain. This poses a particular challenge as the specialist systems engineers who maintain it reach retirement age, and a skills shortage looms unless the sector can accelerate the many legacy replacement programmes already underway.
The shift to new technologies does not mean you can forget about resilience
Design for failure
The shift to the cloud is a key theme for financial services firms and can be a way of addressing weaknesses in existing systems. But while the cloud is inherently more resilient, if a firm transfers a mess to the cloud it can still fail. A “lift and shift” approach, in its worst form, simply moves existing legacy problems to a different location.
What is needed is a clear focus on adopting best practice to improve resilience. This means designing for failure. With commodity cloud and automated “just in time” provisioning, this need not be expensive. A DevOps model should also be used, covering continuous integration, automated testing, environment consistency and repeatable deployment. It should be underpinned by automated infrastructure, including what are known as immutable servers: machines that are never modified in place but replaced from a known-good image, enabling faster recovery after a problem.
Resilience should then be built in by using multiple availability zones and data centre regions.
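The idea of spreading a service across availability zones can be illustrated with a minimal sketch. This is a toy simulation, not a real cloud API: the zone names and the `Router` class are hypothetical, and the point is simply that a request path which only considers healthy zones absorbs the loss of an entire zone.

```python
import random

class Zone:
    """A simplified availability zone hosting identical service instances."""
    def __init__(self, name: str):
        self.name = name
        self.healthy = True

    def handle(self, request: str) -> str:
        if not self.healthy:
            raise ConnectionError(f"zone {self.name} is down")
        return f"{request} served by {self.name}"

class Router:
    """Routes each request to any healthy zone, so a single-zone failure
    is absorbed rather than becoming a customer-facing outage."""
    def __init__(self, zones):
        self.zones = zones

    def handle(self, request: str) -> str:
        candidates = [z for z in self.zones if z.healthy]
        if not candidates:
            raise RuntimeError("no healthy zones: total outage")
        return random.choice(candidates).handle(request)

# Two zones in different data centre regions (names are illustrative).
router = Router([Zone("eu-west-1a"), Zone("eu-west-2a")])
router.zones[0].healthy = False      # simulate losing an entire zone
print(router.handle("get-balance"))  # still served by the surviving zone
```

In a real deployment the same principle applies one level up as well: if a whole region fails, traffic should be able to shift to another region.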
Businesses like Netflix that run on cloud services regularly conduct destructive testing on their infrastructure to simulate different failure scenarios and prove that their architecture is resilient. Having to design for failure, knowing that the design will be regularly and rigorously tested, means they are better equipped and prepared to cope with a real failure.
Financial services firms should take note; it’s no surprise that Netflix fared better than most during a major outage of Amazon’s cloud service, AWS.
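The destructive-testing approach can be sketched in a few lines. The following is a hand-rolled simulation in the spirit of Netflix's Chaos Monkey, not its actual tooling: it repeatedly kills one random instance in a toy cluster, checks that the service still responds, then restores the instance.

```python
import random

class Instance:
    """A single service instance that can be killed by the chaos loop."""
    def __init__(self, name: str):
        self.name = name
        self.alive = True

    def respond(self):
        return f"ok from {self.name}" if self.alive else None

def serve(cluster, request: str) -> str:
    """A resilient front door: try instances until one responds."""
    for inst in cluster:
        reply = inst.respond()
        if reply is not None:
            return reply
    raise RuntimeError("total outage")

def chaos_test(cluster, rounds: int = 100):
    """Kill one random instance per round, verify the service survives,
    then heal the victim before the next round."""
    for _ in range(rounds):
        victim = random.choice(cluster)
        victim.alive = False
        assert serve(cluster, "ping").startswith("ok")
        victim.alive = True

cluster = [Instance(f"node-{i}") for i in range(3)]
chaos_test(cluster)
print("survived 100 failure injections")
```

The value is less in the loop itself than in the discipline it enforces: teams that know failures will be injected in production-like conditions design for them from the start.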
Firms also need to respond to regulators who are now focussing on finding a balance between innovation and ensuring financial stability through operational resilience. With the number of high profile operational disruptions we’ve seen recently, it seems that regulatory concern is not misplaced.
The Bank of England wants to set standards on cyber recovery plans and for firms to demonstrate that they are meeting these standards through a proposed ‘cyber stress test’.
It has published a discussion paper, Building the UK financial sector’s operational resilience, that sets out its expectations, and firms will need to think about how they are going to respond.
In addition, its 2018 Financial Stability report says: “Firms have primary responsibility for their ability to resist and recover from cyber incidents”. This makes it clear that the supervisory authorities expect boards to take responsibility for the cyber resilience of their firms.
Getting the balance right between promoting system-wide resilience and the resilience of any one player in the system will be a key challenge.
Open banking revolution
Under open banking, nine banks are initially required to provide the functionality for data exchange.
This means that they must provide interfaces that allow third-party account aggregators or payment providers to access customer balance information and transaction history, and to initiate payments.
Open banking arguably reduces risk compared to traditional banking aggregation services. This is because new and existing aggregators and providers need to implement security controls as enforced by European regulators, for example the Financial Conduct Authority in the UK.
Clear guidance has been provided by the European Banking Authority (EBA) and this is being interpreted and applied by each regulator.
The guidance covers areas such as cyber security and access controls, but also broader operational resilience controls. Traditional banking aggregation services rely on screen-scraping and form-filling. Compared with those techniques, the open banking interfaces are significantly more secure – for example, the provider does not need to store the customer’s username and password, as it can use a crypto-token issued by the bank instead.
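The security difference can be made concrete with a minimal sketch. This is a toy model, not the real Open Banking API: the `Bank` and `Aggregator` classes and their methods are invented for illustration. The point is that the third party holds only a bank-issued token, which the bank can scope and revoke, while the customer's password never leaves the bank.

```python
import secrets

class Bank:
    """Toy model of a bank issuing access tokens to authorised third parties."""
    def __init__(self):
        self._tokens = {}

    def issue_token(self, customer_id: str) -> str:
        # The customer authenticates with the bank directly; the third
        # party never sees the password, only the resulting token.
        token = secrets.token_urlsafe(16)
        self._tokens[token] = customer_id
        return token

    def get_balance(self, auth_header: str) -> str:
        scheme, _, token = auth_header.partition(" ")
        if scheme != "Bearer" or token not in self._tokens:
            raise PermissionError("invalid token")
        return f"balance for {self._tokens[token]}"

class Aggregator:
    """Third-party provider: stores the token, not the credentials."""
    def __init__(self, bank: Bank, token: str):
        self.bank, self.token = bank, token

    def fetch_balance(self) -> str:
        return self.bank.get_balance(f"Bearer {self.token}")

bank = Bank()
aggregator = Aggregator(bank, bank.issue_token("alice"))
print(aggregator.fetch_balance())
```

Contrast this with screen-scraping, where the aggregator must store the customer's actual username and password and replay them through the bank's login form, with no way for the bank to distinguish the aggregator from the customer or to revoke access selectively.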
What all these developments underline is that technology and operational resilience need to be at the top of the agenda of financial services firms’ leaders. The reputational and regulatory risks are too high to be ignored.