Building New Legacy

How do we know the systems we are building today aren’t tomorrow’s legacy? Are we consciously working to ensure that the code we write isn’t spaghetti-like? That interfaces can be easily taken apart? That modules of capability can be unplugged and replaced by other, newer and richer ones?

I’ve seen some examples recently that show this isn’t always the case. One organisation, barely five years old, has already found that its architecture is wholly unsuitable for how its business looks today, let alone what it will need to look like as its industry goes through some big changes.

Sometimes this is the result of moving too quickly – the opportunity the business plan said must be exploited is there right now, and first-mover-advantage thinking says you have to be there now. Any problems can be fixed later, goes the thinking. Except they can’t, because once the strings and tin cans are in place, there are new opportunities to exploit. There’s just no time to fix the underlying flaws, so they’re built on, with sedimentary layer after layer of new, often equally flawed, technology.

Is the choice, then, to move more slowly? To not get there first? Sometimes that doesn’t help either – move too slowly and costs go up whilst revenues don’t begin soon enough to offset those losses. Taking too long means competitors exploit the opportunity you were after – sure they may be stacking up issues for themselves later, but maybe they have engineered their capability better, or maybe they’re going so fast they don’t know what issues they’re setting up.

There’s no easy answer. Just as there never is. The challenge is how you maintain a clear vision of capability that will support today’s known business need as well as tomorrow’s.

How you disaggregate capability and tie systems together is important too. The bigger the system and the more capability you wrap into it, the harder it will be to disentangle.
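As a rough sketch of what “unpluggable” looks like in practice – the payments example and every name in it are invented for illustration – the trick is for callers to depend on a small interface rather than on a concrete system, so the module behind it can be swapped without disentangling everything else:

```python
from typing import Protocol

class PaymentProvider(Protocol):
    """The narrow interface callers depend on; nothing here names a concrete system."""
    def pay(self, account: str, pence: int) -> str: ...

class LegacyBatchPayments:
    def pay(self, account: str, pence: int) -> str:
        return f"queued {pence}p to {account} for tonight's batch run"

class NewInstantPayments:
    def pay(self, account: str, pence: int) -> str:
        return f"sent {pence}p to {account} immediately"

def run_payroll(provider: PaymentProvider) -> None:
    # The caller can't tell which system sits behind the interface – which is the point.
    print(provider.pay("12345678", 125_000))

run_payroll(LegacyBatchPayments())   # today
run_payroll(NewInstantPayments())    # after the swap, with no change to the caller
```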

Alongside this, the fewer controls you put around the data that enters the system (including formats, error checking, recency tests etc), the harder it will be to administer the system – and to transfer the data to any new capability.
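By way of illustration only – the field names, formats and thresholds below are invented rather than drawn from any real system – the sort of controls meant here look something like this:

```python
from datetime import date, datetime

MAX_AGE_DAYS = 30  # recency test: reject records older than this (illustrative threshold)

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record can be accepted."""
    problems = []

    # Format check: the account reference must be exactly eight digits.
    ref = record.get("account_ref", "")
    if not (len(ref) == 8 and ref.isdigit()):
        problems.append(f"bad account_ref format: {ref!r}")

    # Error check: the amount must parse as a number and must not be zero.
    try:
        if float(record.get("amount", "")) == 0:
            problems.append("amount is zero")
    except ValueError:
        problems.append(f"amount is not a number: {record.get('amount')!r}")

    # Recency test: stale data is rejected rather than quietly accumulated.
    try:
        created = datetime.strptime(record.get("created", ""), "%Y-%m-%d").date()
        if (date.today() - created).days > MAX_AGE_DAYS:
            problems.append(f"record too old: {created}")
    except ValueError:
        problems.append(f"bad created date: {record.get('created')!r}")

    return problems
```

The point is less the particular checks than the fact that they sit at the boundary, so bad data never becomes somebody else’s migration problem later.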

Sometimes you have to look at what’s in front of you and realise that “you can’t get there from here”, slow down the burn, and figure out how you start again whilst keeping everything going in the old world.

Ethics, Old Software and Negative Interest Rates

From today’s FT Letters:

In a world where interest rates had, for centuries, been positive, it’s not hard to see why a programmer would put some validation into code to check for a positive number. Even now, when I read about “fat finger” errors where a trader mistakenly buys or sells a number of shares with several more zeros than expected, I wonder why there isn’t more validation (or some secondary control that routes unusual transactions to a second person for checking). BombMoscow might, of course, have needed several levels of such controls, whether as parameters or hard-coded.
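As a sketch of that secondary control – purely illustrative, with made-up thresholds and names, not drawn from any real trading system – routing an unusually large order to a second person might look like this:

```python
REVIEW_MULTIPLE = 100  # orders this many times the usual size need a second pair of eyes

def submit_order(quantity: int, typical_quantity: int, review_queue: list[int]) -> str:
    """Execute a routine order; park a suspiciously large one for human review."""
    if quantity > typical_quantity * REVIEW_MULTIPLE:
        review_queue.append(quantity)
        return "held for second-person review"
    return "executed"

queue: list[int] = []
print(submit_order(1_000, 1_000, queue))       # executed
print(submit_order(10_000_000, 1_000, queue))  # several more zeros than expected: held
```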

The Legacy Challenge

What’s a legacy system? Some would say it’s a system running on components that are no longer supported – old versions of software, databases that are impossible to upgrade, operating systems released before the Internet was a thing, or servers that still have Pentiums inside. Another way of looking at it: the number of people who know how the system works is fewer than the fingers on one hand, and you worry that if one more of them retires, you will no longer be able to make any changes to the system.

Those are all true. There are plenty of corporations and government entities running systems that have some or all of those. Much of the backbone of UK government’s technology is based on systems built in the 70s, 80s and 90s. Our tax is collected, our benefits paid and our customs transactions policed by such systems.

There is, though, more to a legacy system than that. It could even be said that a legacy system is one that went live yesterday – because unless you have a plan to invest in your shiny new IT asset, all it will do by itself is decay and rust.

As systems get bigger and more complicated, whole areas of code are looked at less and less frequently; familiarity with how that code works decreases. Personnel churn, whether among in-house developers or supplier staff, means that new people have less experience with the code than those who went before. Wholesale code reviews are rare. Code optimisation – revisiting already working code to re-factor it as a way of teaching new staff how the whole thing works – seems a forgotten discipline.

We talk now of technical debt – code that was fine when it was launched but is really holding back development of new capability now. It’s too complicated. It’s hard coded. It has dead ends. It doesn’t interface to new tools. This code can be a day old too.

IT systems are strategic assets, like bridges, tunnels, dams and roads. The internals need to be inspected, cleaned, operated and polished. Sure you can stand back and look at them and say how nice they are and how impressive the construction is. But the day you slow down on your maintenance is the day they become part of your legacy estate.

Put another way, it takes a project to get your system live. But if you think that’s the end of the story, you’ll find that it’s only the beginning. Too many projects move on to the next thing, leaving others to pick up what they’ve left behind … with no budget, no support and no chance.

You can pay now, or you can pay later, but you’re going to pay.

Computer Says No

The FT has a front page story today saying that Ulster Bank is absorbing the cost of negative interest rates (on money it has deposited at the ECB) because its systems can’t handle a minus sign. Doubtless whoever wrote the code, maybe in the 80s, never thought rates would fall below zero.
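I can only guess at the shape of the original code, but the assumption probably looks something like this hypothetical sketch – the arithmetic would cope with a minus sign perfectly well; it’s the validation, written in a positive-rate world, that refuses to:

```python
def daily_interest_pence(balance_pence: int, annual_rate_pct: float) -> int:
    if annual_rate_pct < 0:
        raise ValueError("negative rates not supported")  # the 1980s world view
    return round(balance_pence * annual_rate_pct / 100 / 365)

# Without that check, a -0.5% rate on a £1,000,000.00 deposit simply comes out
# as a daily charge of about £13.70:
print(round(100_000_000 * -0.5 / 100 / 365))  # -1370 (pence)
```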

We had a similar problem at a bank in the 90s when our COBOL-based general ledger couldn’t handle the number of zeros in the Turkish lira; we wrote to their central bank and PM to see if they wouldn’t mind lopping a couple off so that we could continue to process transactions. History does not record the answer, but I suspect there came none.
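The problem, as I remember it, was one of field width. A hypothetical illustration (the eleven-digit field is invented – I don’t recall the ledger’s actual layout):

```python
FIELD_DIGITS = 11  # illustrative width, in the spirit of a COBOL PIC 9(11) field

def post_amount(amount: int) -> str:
    digits = str(abs(amount))
    if len(digits) > FIELD_DIGITS:
        raise OverflowError(f"{amount} needs {len(digits)} digits; the field holds {FIELD_DIGITS}")
    return digits.rjust(FIELD_DIGITS, "0")

print(post_amount(1_250_000))           # fits comfortably
post_amount(3_500_000_000_000_000)      # 16 digits: raises OverflowError – the lira problem
```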

Legacy systems were in the news in government IT this week: it was stated that there is no central register of such systems, that they are blocking data sharing and that there is no plan to move off them. Alison Pritchard, GDS’s interim leader, says GDS will be looking for money in the next spending review to deal with the problem.

This is, of course, an admirable aim. The trouble is, departments have been trying to deal with these systems for two decades – borders, immigration, farm payments, student loans, benefits, PAYE, customs etc all sit on systems coded in the 70s, 80s and early 90s. Legacy aka stuff that works. Just not the way we need it to work now.

Every department can point at one, and sometimes several, attempts to get off these systems … and yet the success rate is poor. Otherwise why would they still be around?

The agile world does not lend itself well to legacy replacement. Few businesses would accept the idea that their fully functional system would be replaced in a year or two with a less functional MVP. What would make the grade? How would everything else be handled? Could you run both in sync?

In the early 2000s a few of us tried to convince departments to adopt an “Egg” model and build a new business inside the existing business – one that was purely internet-facing and that would have less capability than the existing systems but that would grow fast. Once someone (business or person) was inside the new system, we would support them there, whatever it took – but it would be a one-way ticket. We would gradually migrate everyone into that system, adding functionality and moving ever more complicated customers as the capability grew.
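To make the idea concrete – and this is only a sketch, with invented names and an in-memory register standing in for real customer records – the routing behind that one-way ticket might look like this:

```python
migrated: set[str] = set()  # customers who have taken the one-way ticket

def new_system_supports(request: dict) -> bool:
    # The new build starts with a deliberately narrow feature set.
    return request.get("type") in {"balance", "payment"}

def new_system(customer_id: str, request: dict) -> str:
    return f"new system handled {request['type']} for {customer_id}"

def old_system(customer_id: str, request: dict) -> str:
    return f"old system handled {request['type']} for {customer_id}"

def handle_request(customer_id: str, request: dict) -> str:
    if customer_id in migrated:
        return new_system(customer_id, request)   # no way back: support them whatever it takes
    if new_system_supports(request):
        migrated.add(customer_id)                 # migrate on first eligible contact
        return new_system(customer_id, request)
    return old_system(customer_id, request)       # stays in the old world, for now

print(handle_request("A123", {"type": "balance"}))   # first contact: migrates to the new system
print(handle_request("A123", {"type": "mortgage"}))  # already migrated: the new system must cope
print(handle_request("B456", {"type": "mortgage"}))  # not yet supported: stays on the old system
```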

It’s a challenging strategy. It would have been easier in the 2000s. Harder now. Much harder. But possible. With commitment. And a lot of planning.