
Listening to a radio broadcast today about the risks associated with technology, we heard a senior manager apologising for the poor quality of service provided to his organisation's customers. This, he claimed, was due to "problems with the new computer system".

It's strange, isn't it, that more than half a century after computer systems (IT systems) were first introduced into commercial organisations, we are still blaming them for what are fundamentally human problems. Perhaps we blame "computer systems" in the hope that the listener, who by and large won't be an IT professional, will shake their head wisely and agree that technology is a baffling, and sometimes uncontrollable, thing.

But if we look at the problem from a risk-based reviewer's perspective, and at the many published reports on "problems with new systems", we keep finding references to the same underlying causes: "poorly articulated, documented and managed change support and change management regimes"; "errors not being identified, communicated and cleared through formal working practices"; and "systems becoming unstable, following incomplete testing, leading to fragile working environments and frustrated users and consumers".

There are well-known good practices such as ITIL, developed especially for the management of IT services from a user perspective. And there is ISO/IEC 20000, which builds on ITIL to provide an international standard for IT service management. A read of these would at least provide a clue to the direction to be taken and what "good practice" might look like. None of these good practices is impossible to attain; all that's required is attention to process and sequence, and a refusal to make arbitrary decisions that bypass critical control steps.
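To make that idea of "not bypassing critical control steps" concrete, here is a minimal sketch in Python of the kind of release gate such a change management regime implies: a change goes to production only once every control step (formal approval, completed testing, a documented rollback plan) has been cleared. The ChangeRequest structure, field names and checks are illustrative assumptions for this sketch, not terminology defined by ITIL or ISO/IEC 20000.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChangeRequest:
    """Illustrative change record; field names are assumptions, not ITIL/ISO terms."""
    change_id: str
    approved_by: Optional[str] = None    # formal sign-off from the change authority
    tests_passed: bool = False           # full test suite completed, not partially run
    rollback_plan: Optional[str] = None  # documented way back if the change fails

def control_gaps(change: ChangeRequest) -> list:
    """List every unmet control step, so gaps are identified and communicated,
    not silently skipped."""
    gaps = []
    if change.approved_by is None:
        gaps.append("no formal approval recorded")
    if not change.tests_passed:
        gaps.append("testing incomplete")
    if change.rollback_plan is None:
        gaps.append("no rollback plan documented")
    return gaps

def deploy(change: ChangeRequest) -> None:
    gaps = control_gaps(change)
    if gaps:
        # The critical point: the gate cannot be bypassed by an arbitrary decision.
        raise RuntimeError(f"{change.change_id} blocked: " + "; ".join(gaps))
    print(f"{change.change_id} released to production")

# An 'urgent' change that skipped testing is refused, not waved through.
try:
    deploy(ChangeRequest("CHG-1042", approved_by="service owner",
                         rollback_plan="restore previous release"))
except RuntimeError as err:
    print(err)  # -> CHG-1042 blocked: testing incomplete
```

The design point is simply that the gate reports every gap it finds, in writing, rather than allowing someone under schedule pressure to decide that one of the steps doesn't matter this time.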

So if we know what the underlying problems are, why don't we fix those first, by applying the solutions that are already available? Or is speed of implementation seen as more important than satisfied users, customers and consumers?