
Legacy dependencies

Back in September I was wondering whether our efforts to put governments online would result in a new legacy system problem. I’ve been thinking some more about that: how the problem arises, whether it’s true and what we might do about it.

First up, I define a legacy as a monolithic, backward facing system that is harnessed by fortress government to do its bidding. It likely has limited functionality that the customer can exploit, and it performs a vast range of functions with no ability to split it up and replace any one bit with a new bit without massively and negatively impacting the wider system (either by breaking it outright or simply by increasing the migration risk). Just because they’re monolithic and backward facing doesn’t mean people won’t deploy them forward facing and pretend that they’re part of a new architecture – the other bits of the definition come into play then.

What prompted this recently was the news that the Inland Revenue, in a bold decision judging by the press reports, have chosen not to renew their IT contract with EDS but to award it to a new player. It almost doesn’t matter who the new player is – the point is that it’s not the same as before. This was followed by some worrying warnings about potential delays in handling the job, although it strikes me there’s a “would say that, wouldn’t they” angle to anything like that. What was more interesting today was a story saying that the key 200 people who handle the IT might not move. I can’t find that story right now, but it’s somewhere.

My guess is that any corporation or organisation, public or private, has far fewer than 200 people who understand the IT they have – especially in a legacy world where the systems have been around for 20 years or more. It’s probably nearer 20 or even, in some cases, 2 people. This may be one of the reasons that changes take so long to push through the system – the only people who “get” the impact are busy sorting through a bunch of other tasks.

If we build more systems in our silos, whether that is country by country (for a multi-national), org by org (in a corporation) or agency by agency (in a government), then surely the consequence is that we concentrate the knowledge in fewer people than we otherwise might. If we blast open our systems, fragmenting functionality into more discrete units, then we can spread knowledge of how they work across more people. The key people then become those who know how data flows between the systems. That is potentially just as risky – after all, few can hold the big picture in their heads. There must, I think, be a tradeoff between fragmenting systems to reduce the dependency on a few people and fragmenting them so much that there is a new dependency on the big picture.

No one wants to be held hostage to a few people knowing a system – but there’s a real risk that building systems in silo organisations, rather than for cross-organisational need, results in less flexibility, more transition risk and less business benefit. We’ve all done that once. Would we want to do it again?
