Published on February 1st, 2013 | by Guest Writer
What Caused The RBS/NatWest Computer Failure?
The RBS and NatWest bank computer failure was the most significant system crash in UK banking history, and may well be a contender for one of the worst computer system calamities of recent times. It affected all of the automated systems that customers rely on: millions of people were left unable to withdraw their money from cash machines, cheques weren’t deposited, and bills went unpaid. The bank has since had to compensate people for losses incurred as a result, and that, combined with the cost of the fix, has meant a lot of money down the drain. But what exactly caused this mess in the first place?
The Initial Problem
The fault itself seems to have come from batch scheduling software used by the bank. This software, called CA-7, was used to schedule the batch jobs that automatically update details in people’s accounts. It’s a fairly standard piece of kit, and it saves a great deal of time that would otherwise be spent updating details manually. There isn’t anything intrinsically wrong with the software itself; rather, it seems that an update applied to the system caused a major glitch.
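The core idea behind a batch scheduler like this can be sketched in a few lines. The sketch below is purely illustrative: it is not CA-7 (a proprietary mainframe product whose job definitions aren’t public), and the job names are made up. It only shows the general principle of running dependent batch jobs in the right order, using Python’s standard-library `graphlib`.

```python
# Toy batch scheduler: runs hypothetical nightly bank jobs in dependency order.
# This is an illustration of the general technique, NOT anything RBS/NatWest ran.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each (made-up) job maps to the set of jobs that must finish before it starts.
jobs = {
    "read_transactions": set(),
    "update_balances": {"read_transactions"},
    "post_interest": {"update_balances"},
    "print_statements": {"update_balances", "post_interest"},
}

def run_batch(jobs):
    """Run each job once all of its dependencies have completed."""
    order = list(TopologicalSorter(jobs).static_order())
    for job in order:
        print(f"running {job}")
    return order

completed = run_batch(jobs)
```

The dependency chain is the point: if one job fails or the schedule itself is corrupted, everything downstream stalls, which is why a fault in the scheduler can ripple out into missed payments and undeposited cheques.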
This isn’t an unprecedented mishap, so one of the most confusing things about the incident is that it became such a disaster and took so long to fix. Experienced programmers and IT specialists should have been able to revert to an older version of the software and rectify the issue that way without too much time and effort. That raises the question of why it went from being an update error to a catastrophic system meltdown.
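The “revert to an older version” recovery that experienced staff would be expected to perform can be modelled very simply. This is a hedged sketch, not a description of the bank’s actual procedures; the class and the version strings are hypothetical, and real mainframe backouts are far more involved.

```python
# Minimal model of a software rollback: keep a history of installed versions
# so a faulty update can be backed out. All names here are hypothetical.
class SoftwareDeployment:
    def __init__(self, initial_version):
        self.current = initial_version
        self.history = []  # previously installed versions, oldest first

    def apply_update(self, new_version):
        """Install an update, remembering what was running before."""
        self.history.append(self.current)
        self.current = new_version

    def rollback(self):
        """Revert to the previous version, if one exists."""
        if not self.history:
            raise RuntimeError("no earlier version to revert to")
        self.current = self.history.pop()
        return self.current

deploy = SoftwareDeployment("scheduler-v1.0")
deploy.apply_update("scheduler-v1.1")  # the faulty update
deploy.rollback()                      # recovery: back to the known-good version
print(deploy.current)
```

The whole recovery hinges on the previous state being preserved and on staff knowing how to restore it, which is why the question of who was operating the system matters so much in what follows.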
The Cleanup Debacle
Something that people are pointing to as the precipitator of this error is the outsourcing of the upkeep of the CA-7 system to offshore teams. Thousands of people are relied on to keep this system working, but NatWest recently cut many UK jobs and moved the roles to India. This saved a lot of money on paper, but as RBS and NatWest count the cost of their failure, it must raise questions about the wisdom of moving such essential roles overseas.
It seems likely that workers in India didn’t respond correctly when the error occurred, and then could neither fix it nor provide the documentation needed to fix it from across the water. It’s too easy to point the finger at overseas workers when systems fail, but in this instance the problem wasn’t handled in the manner you’d expect from experienced staff, which does suggest that the change in the specialists filling these programming roles was the main problem.
So, next time you think about outsourcing your development or IT support overseas, be sure that you know exactly who will be taking control!
Thanks to James Harroe from Net Star in the UK for this post. For all your IT support needs, visit www.netstar.co.uk.