Frances Coppola has an excellent post on the legacy systems problem that besets the banking sector, pertinent in the context of the RBS systems outage on Monday, during which, for three hours, no RBS customer could access cash or process card transactions of any kind.
She describes how technology investment has traditionally been focused on building front-end applications, meaning that the core systems that run the basic banking processes have not been upgraded. So despite the huge increase in processing power and storage capacity in modern IT systems, banks like RBS still have massive traditional mainframe computing systems at their heart.
What prevents these old systems from being replaced outright, she says, is their sheer size, complexity and criticality. Over time, the huge cost and risk involved in replacing them has led these organisations to adopt a strategy of 'wrapping': the core legacy system is treated as a black box and left largely untouched, while a 'shell' of additional applications (customer interfaces, point-of-sale functionality, settlement processing, even real-time updates) is built around it. These newer applications rely heavily on the information contained within the core system, but over time complexity compounds: the 'shell' grows larger, issues of technological compatibility and connectivity have to be worked through, and customers come to rely on them more and more.
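The 'wrapping' pattern, and the divergence it can create between what the customer sees and what the core records, can be sketched in a few lines. This is an illustrative toy, not how any real bank's systems work: `LegacyCore`, `CustomerChannel`, the account names, and the balance logic are all hypothetical, standing in for a batch-oriented mainframe and a real-time front-end layer bolted on top of it.

```python
class LegacyCore:
    """Stand-in for the untouched mainframe core: the single source
    of truth, but one that may only be updated in overnight batches."""

    def __init__(self):
        self._balances = {"acct-1": 100.0}

    def post_transaction(self, account, amount):
        # In a real system this might happen hours after the customer acts.
        self._balances[account] = self._balances.get(account, 0.0) + amount

    def balance(self, account):
        return self._balances.get(account, 0.0)


class CustomerChannel:
    """One layer of the 'shell': a customer-facing app that reads the
    core as a black box and overlays its own pending transactions."""

    def __init__(self, core):
        self.core = core
        self.pending = []  # transactions the core has not yet seen

    def pay(self, account, amount):
        # Looks instant to the customer, but only the shell knows about it.
        self.pending.append((account, amount))

    def displayed_balance(self, account):
        # Customer view = core truth + pending items awaiting the batch run.
        return self.core.balance(account) + sum(
            amt for acct, amt in self.pending if acct == account
        )

    def reconcile(self):
        # The overnight batch: flush pending items into the core.
        for account, amount in self.pending:
            self.core.post_transaction(account, amount)
        self.pending.clear()
```

Until `reconcile()` runs, the shell and the core disagree: a customer who pays away 30.00 sees a balance of 70.00 while the core still says 100.00. Multiply that by many shell layers, each with its own cache and pending queue, and the reconciliation and failure risks Coppola describes start to become visible.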
This means there may well be divergence between the financial information presented to customers and that which appears to the bank (before reconciliation occurs); the potential for system errors and data corruption grows; and the risk of significant failure simply increases over time:
"The more fragmented your systems architecture, and the more it relies upon stable interconnections between different technologies, the riskier it becomes…the "pasta rule" still applies: the more your systems architecture looks like spaghetti, the higher risk it will be."
The CEO of RBS admitted yesterday that the company had been failing to invest properly in its systems for decades. A big reason for this, says Coppola, is that "infrastructure is boring and the cost of replacing it is a hit to short-term profits", so in effect "they've reduced their balance sheet risk, but not their operational risk." Banks focused instead on rapid expansion run the very real risk of building up a patchwork of incompatible technologies, and of failing to invest suitable time or resources in systems support and proper integration:
"It's rather like the risk of a major operation (which could result in death but might lead to full recovery) versus medical treatment to control symptoms – you get iller but you don't die, at least not for a while."
As long as the system keeps running, and as long as it is easier to patch it up or build a workaround than to replace it, the longer this goes on, the more complex the interdependencies become, and the more the risk and the associated costs build.
In the case of huge banking systems the risks can elevate to levels which have implications for the whole economy, but this kind of legacy systems issue is not confined to the banking sector. In my experience (more recently including that gained from working with Econsultancy on digital structures and resourcing, and agility and innovation – see below) legacy technologies are quantifiably the single most significant barrier to making companies fit for purpose for the modern digital world in which they find themselves.
And of course we're talking here not just about the technology, but about the policies, practices, skills and behaviours that surround it. The consequences of an overly short-term focus on profits may be far greater than is visible to those looking in from the outside.