Following each of the recent leading UK bank outages we were met with some variation of the stock response: “there is a complex technical issue”.
The implication that many draw is that these huge institutions are powerless to prevent these crashes from occurring; they are an act of God and beyond the control or foresight of mere mortals.
Considering the frequency of these glitches, these excuses no longer wash.
As expressed by the chair of the Treasury Select Committee, Andrew Tyrie: “The frequency of these failures across the financial services sector suggests a systemic weakness in IT infrastructure”.
Is there something fundamentally inadequate at the core of these IT systems? It appears bank executives do not see the benefit of a thorough review of their systems and the underlying code, opting instead to patch up the cracks as and when they appear.
The major UK banks must overcome this ignorance and complacency. They must get to grips with the systems which have caused them so much trouble and their customers so much inconvenience in recent years. They must finally recognise that investing in IT systems is not an extravagance, but an essential expenditure.
UK banks were among the first businesses to computerise, and many of their core applications are now decades old. Since those applications were written, expectations of banking services have changed, and the British public's demands have been shaped by the rapid progress of consumer IT.
Online banking, contactless payments and money transfer services have introduced new and complex channels which existing IT platforms were not built for but must now accommodate.
A variety of new applications, coded in different languages and built on top of an outdated foundation, adds even greater complexity, until reviewing systems for vulnerabilities becomes almost impossible. Few even know how to tackle the issue.
Traditional software quality assurance is based on functional and load testing, plus some level of manual (peer-level) code review.
These decades-old practices do not account for structural faults within the software architecture. They are relevant for testing individual units, but far less useful across layers of applications.
A structural fault hides deep in the code and could remain undetected for years until the addition of new functionality, such as online banking, triggers the fault and results in a system crash. With a third of glitches being the result of such structural flaws, this approach to testing is entirely insufficient for complex legacy systems.
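To illustrate how such a fault can lie dormant, here is a hypothetical sketch (not drawn from any real banking system): a legacy batch routine that assumes every payment reference fits a fixed-width field, an assumption every caller respected for years until a new online channel let customers type longer references.

```python
# Hypothetical example of a latent structural fault.
# A legacy routine written for overnight batch files assumes every
# payment reference fits a fixed 12-character field.

def format_batch_record(account: str, reference: str) -> str:
    # Original assumption: references come from the branch system
    # and are never longer than 12 characters.
    if len(reference) > 12:
        raise ValueError("reference exceeds fixed-width field")
    return f"{account:>8}{reference:<12}"

# For decades every caller respected the limit, so the fault stayed
# dormant. A new online-banking channel lets customers type free-text
# references, and the first over-long reference aborts the batch run.
legacy_ok = format_batch_record("12345678", "SALARY-MARCH")   # 12 chars: fine
try:
    format_batch_record("12345678", "RENT-2015-FLAT")         # 14 chars: fault triggers
except ValueError as exc:
    print(f"batch run aborted: {exc}")
```

Functional tests written against the original batch workflow would never exercise the over-long reference, which is why only a structural review of the assumptions baked into the code reveals this class of defect.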
Making the invisible, visible
At an executive level, software integrity is viewed as intangible and abstract and hence left for the “techies” to handle. It is this lack of visibility and measurement that banks must address head on. Today’s banks are software-based enterprises, totally reliant on IT, that have to transform their approach if they want to stay relevant to the UK customer.
At CAST, our extensive experience with highly complex IT systems has taught us that measuring the architectural integrity of software requires thorough, automated analysis of source code against code quality standards, such as those agreed by the Consortium for IT Software Quality (CISQ).
These reviews provide insight into which applications represent a business risk and are liable to cause problems at some point. They are the health checks that these old and inflexible IT systems require to keep ticking away.
Judging software quality against the CISQ standards lets organisations detect poorly written and potentially damaging code and identify and measure technical debt. The UK’s major banks must review their ageing IT systems against such benchmarks to minimise the risk of the all-too-frequent collapses which have alienated account holders and damaged the brand reputations they have built up over decades or even centuries.
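As a toy stand-in for this kind of automated structural analysis (real CISQ-style tools inspect whole multi-language systems; this sketch only counts branch points in a single Python source, using an assumed threshold of 3), one can compute a rough cyclomatic-style score per function and flag those that exceed a quality gate:

```python
# Toy sketch of an automated quality gate: score each function by
# 1 + its number of decision points, then flag the risky ones.
import ast

def branch_complexity(source: str) -> dict:
    """Return a rough cyclomatic-style score per function:
    1 plus the number of decision points (if/for/while/except/and/or)."""
    tree = ast.parse(source)
    scores = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            score = 1
            for child in ast.walk(node):
                if isinstance(child, (ast.If, ast.For, ast.While,
                                      ast.ExceptHandler, ast.BoolOp)):
                    score += 1
            scores[node.name] = score
    return scores

sample = """
def settle(payments):
    total = 0
    for p in payments:
        if p > 0 and p < 10_000:
            total += p
        else:
            total -= 1
    return total
"""
# Functions scoring above the threshold are flagged for review,
# the way a quality gate would flag risky legacy modules.
flagged = {name: s for name, s in branch_complexity(sample).items() if s > 3}
```

The value of this approach is not the metric itself but the visibility: a number that executives can track and compare release over release, rather than leaving software integrity as an abstraction for the "techies".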
Invest now, save later
It is understandable and logical that businesses look to cut unnecessary expenditure and maximise profits. But maintaining and understanding the integrity of business-critical IT infrastructure is essential, and neglecting it is far costlier in the long term.
RBS was fined a total of £56m in 2014 by regulators for a malfunction that was prompted by a software upgrade. Following the incident, the bank was also compelled to set aside £125m to compensate those affected. That bill far exceeds the cost of an initiative to detect such issues in advance.
With an expanding and increasingly competitive marketplace, the UK banking consumer has more options and more reasons to switch banks than ever before. Challenger banks such as Metro and Virgin Money – some offering more appealing rates of interest, and few weighed down by the same unwieldy and dated IT – are providing the British public with compelling alternatives to their larger counterparts.
Any business manager looks to maximise the return on their assets. In the case of the UK banks, the computer systems which gave them an edge for decades are now actually a liability.
UK banking management must engage with their IT teams and make software integrity a priority. Ultimately, the cost of failing to do so will extend far beyond negative headlines and an embarrassing social media backlash.
However, should any bankers feel that the outage problem is becoming too embarrassing, they can follow in the footsteps of the high-flying investment banker who decided to become a Kabbalah teacher.
Vishal Bhatnagar is senior VP and country manager at software vendor CAST UK