Late last week a seemingly comprehensive takedown of Amazon, titled “Amazon’s extraordinary grip on British data”, appeared in the Telegraph, written by Harry de Quetteville.
Read quickly, it suggests that Amazon, by means fair and perhaps foul, has secured too great a share of UK Government’s cloud business, and that this poses an increasingly systemic risk to digital services and, inevitably, to consumer data.
Read more slowly, the article brings together some old allegations and some truths and joins them together so as to get to the point where I ask “ok, so what do you want to do about it?”, but it doesn’t suggest any particular action. That’s not to say that there’s no need for action, just that this isn’t the place to find the argument.
The main points of the Telegraph’s case are seemingly based on “figures leaked” to the newspaper (as far as I know, all of this data is public):
Let’s start by setting out the wider context of the cloud market:
There’s an almost “by the by” figure quoted that I can’t source, where Lloyd’s of London apparently said that “even a temporary shutdown at a major cloud provider like AWS could wreak almost $20bn in business losses.” The Lloyd’s report I downloaded says:
What’s clear from all of the figures is that the cloud market is expanding quickly, that Amazon has seized a large share of that market but is under pressure from growing rivals, and that there is an increasing concentration of workloads deployed to the cloud.
It’s also true that governments generally, but particularly UK government, are a long way from a wholesale move to the cloud, with few front-line, transactional services deployed. Most of those services are still stuck in traditional data centres, anchored by legacy systems that are slow to change and that will resist, for years to come, a move to a cloud environment. Instead, work will likely be sliced away from them, a little at a time, as new applications are built and the various transformation projects see at least some success.
When the move to cloud started, government was still clinging to the idea that its data somehow needed protection beyond that used by banks, supermarkets and retailers. There was a vast industry propping up the IL3 / Restricted classification (where perhaps 75-80% of government data sat, mostly emails asking “what’s for lunch?”). This classification made cloud practically impossible – IL3 data could not sit on the same servers or storage as lower (or higher) classified data, and it needed to be in the UK, secured in data centres that Tom Cruise and the rest of the Mission Impossible team couldn’t get into. Let’s not even get into IL4. And, yes, I recognise that the use of IL3 and IL4 in regard to data isn’t quite right, but it was by far the most common way of referring to that data.
Then, in 2014, after some years of work, government made a relatively sudden, and dramatic, switch. 95% of data was “Official” and could be handled with commercial products and security. A small part was “Official Sensitive” which required additional handling controls, but no change in the technical environment.
And so the public cloud market became a viable option for government systems – all of them, not just websites and transactional front ends but potentially anything that government did (that didn’t fall into the 5% of things that are secret and above).
Government was relatively slow to recognise this – after all, there was a vast army of people who had been brought up to think about data in terms of the “restricted” classification, and such a seismic change would take time. There are still some departments that insist on a UK presence, but there are many who say “official is official” and anywhere in the UK is fine.
It was this, more than anything, that blew the doors off the G-Cloud market. You can see the rise in Lot 1/IaaS cloud spend from April 2014 onwards. That was not just broad awareness of cloud as an option, but the recognition that the old rules no longer applied.
The UK’s small and medium companies had built infrastructures based around the IL3 model. It was more expensive, took longer, and forced them through the formal accreditation process. Few made it through; only those with strong engineering standards and good process discipline and, perhaps, relatively deep pockets. But once “official” came along, much of that work was surplus to requirements, driving cost and overhead into the model, and it wasn’t enough of a moat to keep the scale players out.
I’ve let contracts worth several hundred million pounds in total and worked with people who have done 5, 10 or 20x that amount. I’ve never met anyone in government who bought something because of a relationship with a former colleague or because of any bias for or against any supplier. Competition is fearsome. Big players can outspend small players. They can compete on price and features. Small players can still win. Small players can become big players. Skate where the puck is going, not where it was.
How does a government department choose a cloud provider?
Whilst the original aim of G-Cloud was to be able to type in a specification of what was wanted and have the system spit out some costs (along with iTunes-style reviews), the reality is that getting a quote is more complicated than that. The assumption, then, was perhaps that cloud services would be a true commodity, paying by the minute, hour or day for servers, storage and networks. That largely isn’t the case today.
There are three components to a typical evaluation:
1) How much will it cost?
2) What is the range of products that I can deploy and how easily can I make that happen? Is the supplier seen by independent bodies as a leader or a laggard?
3) Do I, or my existing partners, already have the skills needed to manage this environment?
Most customers will likely start with (3), move to (2) and then evaluate (1) for the suppliers that make it through.
Is there a bias here? With AWS having close to 50% market share of the entire cloud market, the market will be full of people with AWS skills, followed closely by those with Azure skills (given the predominance of Microsoft environments in government – Active Directory, email and so on). Departments will look at their existing staff, or that of their suppliers, or who they can recruit, and pick their strategy based on the available talent.
Departments will also look at Gartner, or Forrester, and see who is in the lead. They will talk to a range of supplier partners and see who is using what. They will consult their peers and see who is doing what.
But there’s no bias against, or for, any given supplier. We can see that when we read about companies who have been hauled over the coals by one department and the very next week they get a new contract from a different department. Don’t read conspiracy into anything government ever does; it’s far more likely to be cockup.
Is there a revolving door?
People come into government from the outside world and people leave government to go to the outside world. In the mid-2000s there was a large influx of very senior Accenture people joining government; did Accenture benefit? If anything, they probably lost out as the newcomers were overcautious rather than overzealous.
Government departments don’t choose a provider because a former colleague or Cabinet Office power broker is employed by the supplier. As anywhere, relationships persist for a period – not as long as you would think – and so some suppliers are better able to inform potential customers of the range of their offer, but this is not a simple relationship. Some people are well liked, some are well respected and some are neither. There are 17,000 people in government IT. They all play a role. Some will stay, some will go. Some make decisions, some don’t.
Also, a bid informed by a former colleague could be better written than one uninformed, though this advantage doesn’t last beyond a few weeks. I’ve worked on a lot of bids (both as buyer and seller) and I’m still amazed how many suppliers fail to answer the question, don’t address the scoring criteria, or waffle away beyond the word count. If you’ve been a buyer, you will likely be able to teach a supplier how to write a bid; but there are any number of people who can do that.
There is little in the way of inside information about what government is or isn’t doing or what its strategy will look like. Spend a couple of hours with an architect or bid manager in any Systems Integrator that has worked for several departments and you will know as much about government IT strategy as anyone on the inside.
Do costs escalate (and are suppliers lowballing)?
Once a contract is signed, and proved to be working, it would be unusual if more work was not put through that same contract.
What’s different about cloud is mostly a function of the shift from capex to opex. Servers largely sit there and rust. The cost is the cost. Maybe they’re 10% utilised for most of their lives, with occasional higher spikes. But the cost for them doesn’t change. Any fluctuations in power are wrapped into a giant overhead number that isn’t probed too closely.
Cloud environments consume cash all the time though. Spin up a server and forget to spin it down and it will cost you money. Fire up more capacity than you need, and it will cost you money. Set up a development environment for a project and, when the project start is delayed by governance questions, don’t spin it down, and it will cost you money. Plan for more capacity than you needed and don’t dynamically adjust it, and it will cost you money. Need some more security, that’s extra? Different products, that’s more as well. If you don’t know what you need when you set out, it will certainly cost more than you expected when you’re done.
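To make that concrete, here’s a minimal sketch (in Python, using boto3 against AWS, since that’s where most of the spend sits) of the kind of cost-hygiene check teams end up writing: flag instances that have been running for more than 30 days without an expiry tag. The tag name and the threshold are illustrative assumptions, not any standard.

```python
# Illustrative cost-hygiene check: flag EC2 instances running for more than
# 30 days without an "expiry" tag. Tag name and threshold are assumptions
# for this example, not a standard.
from datetime import datetime, timedelta, timezone

import boto3

MAX_AGE = timedelta(days=30)

ec2 = boto3.client("ec2")
now = datetime.now(timezone.utc)

for page in ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            age = now - instance["LaunchTime"]
            if age > MAX_AGE and "expiry" not in tags:
                print(f"{instance['InstanceId']}: running {age.days} days, no expiry tag")
```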
Many departments will have woken up to this new cost model when they received their first bill and it was 3x or 5x what they expected. Cost disciplines will then have been imposed, probably unsuccessfully at first. Over time, these will improve, but there are still going to be plenty of cases of sticker shock, both for new and existing cloud customers, I’m sure.
But if the service is working, more projects will be put through the same vehicle, sometimes with additional procurement checks, sometimes without. The Inland Revenue’s original contract with EDS was valued, in 1992, at some £200m/year. 10 years later it was £400m and not long after that, with the addition of HMCE (to form HMRC), and the transition to CapGemini, it was easily £1bn.
Did EDS lowball the cost? Probably. And it probably hurt them for a while until new business began to flow through the contract – in 1992, the IR did not have a position on Internet services, but as it began to add them in the late 90s, its costs would have gone up, without offsetting reductions elsewhere.
Do suppliers lowball the cost today? Far less so. The old adage “price it low and make it up on change control” is difficult to pull off now: with unit costs published, and many services and goods bought at a unit-cost rate, it would be difficult to pull the wool over the eyes of a buyer.
Is tax paid part of the evaluation?
For thirty years until the cloud came along, most big departments relied on their outsourced suppliers to handle technology – they bought servers, cabled them up, deployed products, patched them (sometimes) and fed and watered them. Many costs were capitalised and nearly everything was bought through a managed services deal because VAT could be reclaimed that way.
Existing contracts were used because it avoided new procurements and ensured that there was “one throat to choke”, i.e. one supplier on the hook for any problems. Most of these technology suppliers were (and are) based outside of the UK and their tax affairs are not considered in the evaluation of their offers.
HMRC, some will recall, did a deal with a property company registered in Bermuda, called Mapeley, that doesn’t pay tax in the UK.
Tax just isn’t part of the evaluation, for any kind of contract. Supplier finances are – that is, the ability of a company to scale to support a government customer, or to withstand the loss of a large customer.
Is 1/3rd of government information stored in AWS?
No. Next question.
IaaS expenditure is perhaps £10-12m/month (through end of 2018). Total government IT spend, as I’ve covered here before, is somewhere between £7bn and £14bn/year. In the early days of the Crown Hosting business case, hosting costs were reckoned to be up to 25% of that cost. Some 70% of the spend is “keep the lights on” for existing systems.
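A rough back-of-envelope calculation, using the low end of the figures above, shows why the claim doesn’t stack up:

```python
# Back-of-envelope, using the low end of the figures in this post.
iaas_per_year = 12_000_000 * 12      # ~£10-12m/month of IaaS spend -> ~£144m/year
total_it_spend = 7_000_000_000       # total government IT spend: £7bn-14bn/year
hosting_share = 0.25                 # Crown Hosting estimate: hosting up to 25% of that

hosting_spend = total_it_spend * hosting_share
print(f"cloud IaaS as a share of hosting spend: {iaas_per_year / hosting_spend:.0%}")
# -> roughly 8%, even on a generous reading; nowhere near a third
```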
Most government data is still stored on servers and storage owned by government or its integrators and sits in data centres, some owned by government, but most owned by those integrators. Web front ends, email, development and test environments are increasingly moving to the cloud, but the real data is still a long way from being cloud-ready.
Are 80% of contracts won by large providers?
Historically, no. UKCloud revenues over the life of G-Cloud are £86m, with AWS at around £63m (through end of 2018). AWS’ share is plainly growing fast though – because of skills in the marketplace, independent views of the range of products and supportability, and because of price.
Momentum suggests that existing contracts will get larger and it will be harder (and harder) for contracts to move between providers, because of the risk of disruption during transition, the lack of skill and the difficulty of making a benefits case for incurring the cost of transition when the savings probably won’t offset that cost.
So what should we do?
It’s easy to say “nothing.” Government doesn’t pick winners and has rarely been successful in trying to skew the market. The cloud market is still new, but growing fast, and it’s hard to say whether today’s winners will still be there tomorrow.
G-Cloud contracts last only two years and, in theory, there is an opportunity to recompete then – see what’s new in the market, explore new pricing options and transition to the new best in class (or Most Economically Advantageous Tender, as it’s known).
But transition is hard, as I wrote here in March 2014. And see this one, talking about mobile phones, from 2009 (with excerpts from a 2003 piece). If services aren’t designed to transition, then it’s unlikely to ever happen.
That suggests that we, as government customers, should:
1) Consciously design services to be portable, recognising that will likely increase costs up front (which will make the business case harder to get through), but that future payback could offset those costs; if the supplier knows you can’t transition, you’re in a worse position than if you have choices
2) Build tools and capabilities that support multiple cloud environments so that we can pick the right cloud for the problem we are trying to solve (see the sketch after this list). If you have all of your workloads with one supplier and in one region, you are at risk if there is a problem there, be it fat fingers or a lightning strike.
3) Train our existing teams and keep them up to date with new technologies and services. Encourage them to be curious about what else is out there. Of course they will be more valuable to others, including cloud companies, when you do this, but that’s a fact of life. You will lose people (to other departments and to suppliers) and also gain people (from other departments and from suppliers).
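On point (2), a minimal sketch of what “supporting multiple cloud environments” can mean in practice: hide each provider behind a small interface so workloads aren’t welded to a single API. The class names here (BlobStore, S3Store, AzureStore) are invented for illustration; the SDK calls are the standard boto3 and azure-storage-blob ones.

```python
# Invented interface for illustration: one small abstraction per capability,
# with a back end per provider, so a workload can move without a rewrite.
from abc import ABC, abstractmethod

import boto3
from azure.storage.blob import BlobServiceClient


class BlobStore(ABC):
    """Minimal storage interface a service codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...


class S3Store(BlobStore):
    def __init__(self, bucket: str):
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)


class AzureStore(BlobStore):
    def __init__(self, connection_string: str, container: str):
        service = BlobServiceClient.from_connection_string(connection_string)
        self._container = service.get_container_client(container)

    def put(self, key: str, data: bytes) -> None:
        self._container.upload_blob(name=key, data=data, overwrite=True)
```

The point isn’t the code, it’s the discipline: the day you need to move, the cost of transition is one new back end rather than a rewrite of every service.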
And, as government suppliers, we should:
1) Recognise that big players exist in big markets and that special treatment is rarely available. They may not pay tax in this jurisdiction, but that’s a matter for law, not procurement. They may hire people from government; you have already done the same and you will continue to look out for the opportunity. Don’t bleat, compete.
2) Go where the big players aren’t going. Offer more, for less, or at least for the same. Provide products that compound your customers’ investment – they’re no longer buying assets with capex, but they will want increased benefit for their spend, so offer new things.
3) Move up the stack. IaaS was always going to be a tough business to compete in. With big players able to sweat their assets 24/7, anyone not able to swap workloads between regions and attract customers from multiple sectors that can better overlap peak workloads is going to struggle. So don’t go there, go where the bigger opportunities are. Government departments aren’t often buying Dropbox, for instance, so what’s your equivalent?
Equally, we should not:
1) Expect government to intervene and give you preferential treatment because you are small and in the UK. Do expect such preferential treatment if you have a better product, at a better price, that gets closest to solving the specific problem that the customer has.
2) Expect government to break up a bigger business, or change its structure so that you can better compete. It might happen, sure, but your servers will have long since rusted away by the time that happens.
Years ago, I spent a happy three years living in Paris. I’d moved there via Germany, then Austria. I didn’t take much with me and the one thing I was happiest to leave behind was my TV. I didn’t own a TV for perhaps a decade.
Each European country I lived in had some quirky laws – that’s quirky when compared with the UK equivalents. For instance, shops in Vienna closed at lunchtime on Saturday and didn’t open on Sunday. The one exception was a store that mostly sold CDs and DVDs, right near the Hofburg (the old royal palace) that had apparently earned the right to stay open, when it sold milk and other essentials, direct to the royal family. It seemed that the law protected that right, even though there was no royal family and it didn’t sell milk.
I was perhaps not surprised to read recently that there are plenty of anachronistic laws covering French TV.
The French government is considering changing these laws, but not until the end of 2020. Plainly the restrictions don’t apply to YouTube, Netflix or Amazon Prime. Netflix, alone, has 5m users in France. TV is struggling already; and it’s even more hobbled by such laws.
There are, of course, plenty of other more important issues going on that demand the attention of any country’s executive, and so perhaps it’s not a surprise that, even in 2019, laws such as these exist.
But in the digital world where, for instance, in the UK, we legislated for digital signatures to be valid as far back as 2000, it’s interesting to look at the barriers that other countries have in place, for historical reasons, to making progress in the next decade.
How do we know the systems we are building today aren’t tomorrow’s legacy? Are we consciously working to ensure that the code we write isn’t spaghetti-like, that interfaces can be easily disassembled, and that modules of capability can be unplugged and replaced by other, newer and richer ones?
I’ve seen some examples recently that show this isn’t always the case. One organisation, barely five years old, has already found that its architecture is wholly unsuitable for what its business looks like today, let alone what it will need to look like as its industry goes through some big changes.
Sometimes this is the result of moving too quickly – the opportunity that the business plan said needed to be exploited is there right now, and first-mover-advantage theory says that you have to be there now. Any problems can be fixed later, goes the thinking. Except they can’t, because once the strings and tin cans are in place, there are new opportunities to exploit. There’s just no time to fix the underlying flaws, so they’re built on, with sedimentary layer after layer of new, often equally flawed, technology.
Is the choice, then, to move more slowly? To not get there first? Sometimes that doesn’t help either – move too slowly and costs go up whilst revenues don’t begin soon enough to offset them. Taking too long means competitors exploit the opportunity you were after – sure, they may be stacking up issues for themselves later, but maybe they have engineered their capability better, or maybe they’re going so fast they don’t know what issues they’re setting up.
There’s no easy answer. Just as there never is. The challenge is how you maintain a clear vision of capability that will support today’s known business need as well as tomorrow’s.
How you disaggregate capability and tie systems together is important too. The bigger the system and the more capability you wrap into it, the harder it will be to disentangle.
Alongside this, the fewer controls you put around the data that enters the system (including formats, error checking, recency tests etc), the harder it will be to administer the system – and to transfer the data to any new capability.
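As an illustration, those controls can start as simply as a gate function at the point data enters the system. The field names and the 90-day recency rule below are assumptions for the example, not from any particular system:

```python
# Illustrative entry gate: format, required-field and recency checks applied
# before a record is accepted. Field names and the 90-day rule are invented.
from datetime import date, timedelta

REQUIRED_FIELDS = {"reference", "amount", "submitted_on"}
MAX_STALENESS = timedelta(days=90)


def validate(record: dict) -> list:
    errors = [f"missing field: {field}" for field in REQUIRED_FIELDS - record.keys()]
    if "amount" in record and not isinstance(record["amount"], (int, float)):
        errors.append("amount must be numeric")
    if "submitted_on" in record:
        try:
            submitted = date.fromisoformat(record["submitted_on"])
            if date.today() - submitted > MAX_STALENESS:
                errors.append("record fails the recency test")
        except (TypeError, ValueError):
            errors.append("submitted_on must be an ISO date (YYYY-MM-DD)")
    return errors
```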
Sometimes you have to look at what’s in front of you and realise that “you can’t get there from here”, slow down the burn and figure out how you start again, whilst keeping everything going in the old world.
Government has a cash problem. It simply doesn’t have enough cash allocated to running costs for IT. Projects that were traditionally funded out of capital are, in the cloud world, funded out of operating budgets. This is going to hurt.
For many years, IT projects have been funded by capex (capital expenditure). Whatever came out of the project – servers, software licences, code, automation tools etc – sat on the balance sheet and was depreciated over an agreed period. For software, that period was usually thought to be too long, but given that many of our systems are still working 20, 30 or even 40 years after launch, long since depreciated to zero, we clearly under-estimated the longevity of code. Similarly, we probably over-estimated the life of laptops and mobile phones, where 5-7 years’ depreciation is common but replacement after 2 or maybe 3 years has become the norm.
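The arithmetic is simple enough – straight-line depreciation spreads the capitalised cost evenly over the agreed life (the figures below are illustrative):

```python
# Straight-line depreciation, with illustrative numbers.
cost = 100_000               # capitalised cost of the asset
useful_life_years = 5        # agreed depreciation period

annual_charge = cost / useful_life_years
print(f"annual depreciation charge: £{annual_charge:,.0f}")   # £20,000/year

# Retire the asset after 2 years instead, and the remaining book value
# is written off early:
written_off = cost - 2 * annual_charge
print(f"early write-off: £{written_off:,.0f}")                # £60,000
```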
With the move to cloud, the entire infrastructure base switches from capex to opex – that is, it’s funded out of day-to-day expenses and nothing is held on the balance sheet. Millions of pounds of servers (and all the switches, routers and other kit associated with them, as well as some software, where SaaS products are used) left the balance sheet.
Governments tend to be capital rich – there are few departments who complain about not having enough capex. Capex buys actual things – in IT terms, servers with flashing lights and spinning disks that can be looked at, making the spend tangible (hence the use of tangible and intangible assets for different kinds of IT assets).
This has created a challenge for some departments who want to spend their capital, but also want to move to the cloud. There was a similar challenge early in the cloud era when VAT was not recoverable, putting further pressure on strained opex budgets.
I’m seeing a change though, now, where even software development is run as an opex project – on the basis that the code is expected to turn over rapidly and be replaced through an iterative, agile approach. If a project goes wrong – at a micro or macro level – there’s no write-off (which can be important to some). At the same time, treating everything as opex means that, in some cases, there’s a building soon-to-be-legacy code base (because it’s a fallacy to think that this code is iterated and replaced regularly) that is going unmaintained, meaning that there’s ever more spaghetti code that isn’t being looked at or tweaked. Knowledge of that code base is held by a smaller and smaller set of people … and changes to it become more difficult as a result.
It’s a strange move – one that perhaps implies that there is less scrutiny over opex spend, or that the systems being built will not be in use for the long term and so don’t quite count as assets. But IT systems have a habit of surprising us and sticking around for far longer than expected – ask the developers, if you can find them, of the big systems that pay benefits, collect tax, monitor imports, check passports at the border etc what the expected life of their system was when they built it, and the answer will never (ever) be “oh, decades.”
That’s not to say that there isn’t a case for classifying some IT spend as opex. If you are a fast moving startup building products for a new market and striving to reach product/market fit, you might be crazy to think that it was worth having IT on the balance sheet. If you know that you are building a prototype and will throw it away in a few weeks or months, it would, again, be crazy to capitalise it. If you’re doing R&D work and you’re not sure what will come out of it, you might well classify it as opex initially and revisit later to see if assets were created and then re-classify it.
I suspect that the tensions between capex and opex in government still have more room to play out.
Easily the most well organised race in the London area (I’ve run most of them from 10k to full marathon), the Royal Parks Half keeps, ahem, knocking it out of the park. The start area today could have substituted for a Tough Mudder, but that didn’t dampen anyone’s enthusiasm.
I ran the inaugural race in 2008 and have only missed a couple since, collecting my 10th race t-shirt today. I don’t run it quite as fast as I used to but it’s still the most enjoyable race on my calendar.
I hope to return next year in better shape, for my 11th shirt. Kudos to the team who put this awesome event together each year. You rock!
Eliud Kipchoge’s 1:59:40 marathon, run in the beautiful city of Vienna (my home for a while long ago) is an astonishing run. Before the race Kipchoge said “it’s like the first man to go to the moon … this is about history … it’s about leaving a legacy.”
He went on to say “I want to show the world that when you trust in something and have faith in what you are doing, you will achieve it, whether you’re a runner, a teacher or a lawyer.”
Inspirational stuff. This is a man who rarely loses – he’s won 11 out of 12 marathons and has the two fastest official times on record (the current best in a race is 2:01:39, though he ran 2:00:26 in Monza with a similar setup to the Vienna attempt).
I’ve crossed a few marathon finishing lines around the world myself. Just finishing is a huge buzz. Being the first to finish in under 2 hours is just extraordinary.
Those who have run the London Marathon in a “typical” time (let’s call that a little under 4 hours, which is what roughly 20% of people achieve) will likely know that moment when they turn right having crossed Tower Bridge, at roughly 10 miles, and see “the elites” coming the other way, having long since left Docklands behind, as they hit 22 miles or so. At that point there are usually still half a dozen in the lead group and a few scattered behind, all running with metronomic precision. It’s simultaneously uplifting – up close and personal with incredible athletes – and soul destroying, as you realise they’re near the end and you still have 16 miles to go. Were it not for the amazing crowds there, some might be tempted to throw in the towel.
At the London 2012 Olympics and, at many other London marathons, I’ve sat in a stand at the finish line watching everyone come home. It’s a thrill to watch the great athletes, but it’s just as big a rush to see everyone else complete the race, some struggling as they come round the bend by Buckingham Palace, and most picking up speed as they see the finish line. It is, after all, a race.
Kipchoge is first. Others will doubtless follow. Perhaps soon, but perhaps not. Sub 2 in race conditions is the next extraordinary marathon achievement, but it feels like a stretch from here.
Me, I’m happy finishing a half marathon in less than 2 hours these days.
Dyson announced yesterday that its planned foray into Electric Vehicles was no longer commercially viable. The original investment was expected to be £2-2.5bn.
EVs are no more for Dyson, it seems, but it’s still going to press ahead with its investment in batteries, particularly solid state varieties. If I recall correctly, about half of the original investment was for the car itself and half for batteries – so this is still an investment of over £1bn in new capabilities (those batteries could still supply cars for other manufacturers of course, or could be solely for Dyson’s current and any future products).
Why decide this now?
It could be, and probably is, all of these. The ROI profile must look daunting, especially in an uncertain market where realistic large-scale take-up is a decade away (at the mass-market, global level – the point where volumes pick up is perhaps 3-5 years away).
Perhaps the most important point is that Dyson went all in, quickly. And then all out, quickly. He saw an opportunity and funded it, but then saw that it wasn’t going to pan out as planned, and so pulled the funding.
This strikes me as the mind of an astute businessman, used to carrying out experiments, working at full capacity:
– He saw an opportunity and allocated investment capital to it. We don’t know how much of the £2.5bn has been spent, or is wasted effort, or can’t be unwound of course.
– Early in the experiment it was clear that there were significant hurdles.
And so he reviewed his investment/experiment and decided it would take more time and money than he wanted to commit. Could he have figured that out by commissioning studies to assess feasibility and so on? Sure he could, and maybe he did and that’s why it’s been cancelled, but the way of the engineer is to start building it and see whether it will work, and it sounds to me that that’s how he approached it.
Just as I’ve written here before, projects fail. They fail all the time. The trick is to see the failure coming, be sure it’s really failing and can’t be rescued, and that it’s not failing because of an elementary error (that is, that you have learned the lessons that others learned before you), and then address that.
It’s a bold decision. And his investments live to make a return another day. Does this mean anything for the overall move to EVs though? Seems unlikely – it’s evidence that new entrants will struggle and reinforces why partnerships are increasingly the thing both within the existing motor industry and for those trying to break in.
Yesterday I wrote about the difficulty of replacing existing systems, the challenges of meshing waterfall and agile (with reference to a currently running project) and proposed some options that could help move work forward. There is, though, one big caveat.
Some legacy systems are purely “of the moment” – they process a transaction and then, apart from for reporting or audit reasons, forget about it and move on to the next transaction.
But some, perhaps the majority, need to keep hold of that transaction and carry out actions far into the future. For instance:
– A student loan survives over 30 years (unless paid back early). The system needs to know the policy conditions under which that loan was made (interest rate, repayment terms, amount paid back to date, balance outstanding etc)
– Payments made to a farmer under Environmental Stewardship rules can extend up to a decade – the system retains what work has been agreed, how much will be paid (and when) and what the inspection regime looks like
In the latter case, the system that handles these payments (originally for Defra, then for Natural England and now, I believe, for the RPA) is called Genesis. It had a troubled existence but as of 2008 was working very well. The rules for the schemes that the system supports are set every 7 years by the EU; they are complicated and whilst there is early sight of the kind of changes that will be made, the final rules, and the precise implementation of them, only become clear close to the launch date.
Some years ago, in the run up to the next 7 year review, GDS took on the task, working with the RPA, of replacing Genesis by bundling it with the other (far larger in aggregate, but simpler in rules and shorter in duration) payments made by the RPA. As a result, Defra took the costs of running Genesis out of its budget from the new launch date (again, set by the EU and planned years in advance). Those with a long memory will remember how the launch of the RPA schemes, in the mid-2000s, went horribly wrong with many delays and a large fine levied by the EU on the UK.
The trouble was, the plan was to provide for the new rules. Not the old ones. An agreement could be made with a farmer a week before the new rules were in place, and that agreement would survive for 10 years – and so the new system would have to inherit the old agreements and keep paying. Well, new agreements could have been stopped ahead of a transition to the new system, you might say. And, sure, that’s right – but an agreement made a year before would still have 9 years to go; one made 2 years before would have 8 years to go. On being told about this, GDS stripped out the Genesis functionality from the scope of the new system, and so Genesis continues to run, processing new agreements, also with 10 year lives … and one day it will have to be replaced, by which time it will be knocking on 20 years old.
Those with good memories will also know that the new system also had its troubles, with many of the vaunted improvements not working, payments delayed, and manual processes put in place to compensate. And, of course, Defra is carrying the running costs of the old system as well as the new one, and not getting quite the anticipated benefits.
IT is hard. Always has been. It’s just that the stakes are often higher now.
When replacing legacy systems where the transactions have a long life, sometimes there is a pure data migration (as there might be, say, for people arriving in the UK where what’s important is the data describing the route that they took, their personal details and any observations – all of which could be moved from an old system to a new system and read by that new system, even if it collected additional data or carried out different processing from the old system). But sometimes, as described above, there’s a need for the new system to inherit historic transactions – not just the data, but the rules and the process(es) by which those transactions are administered.
My sense is that this is one of the two main reasons why legacy systems survive (the other, by the by, is the tangled, even Gordian, knot of data exchanges, interfaces and connections to other systems).
There are still options, but none are easy:
– Can the new system be made flexible enough to handle old rules and new rules, without compromising the benefits that would accrue from having a completely new system and new processes? (see the sketch after this list)
– Can the transactions be migrated and adjusted to reflect the new rules, without breaching legal (or other) obligations?
– Can the old system be maintained, and held static, with the portfolio of transactions it contains run down, perhaps with an acceleration from making new agreements with individuals or businesses under the new rules? This might involve “buying” people out of the old contracts, a little like those who choose to swap their defined benefit pension for a defined contribution deal, in return for a lump sum.
– Can a new version of the old system be created, in a modern way, that will allow it to run much more cheaply, perhaps on modern infrastructure, but also with modern code? This could help shave costs from the original system and keep it alive long enough for a safe transition to happen.
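To sketch the first option under some invented assumptions (the cut-over date and per-hectare rates below are made up, and real scheme rules are far more complicated): key each agreement to the rules in force when it was signed, and dispatch on that, so one system can administer both books of business.

```python
# Invented example of "handle old rules and new rules": agreements keyed to
# the rules in force when they were signed. Dates and rates are made up.
from dataclasses import dataclass
from datetime import date


@dataclass
class Agreement:
    signed_on: date
    area_hectares: float


def payment_rate(signed_on: date) -> float:
    # Dispatch on the rules in force at signature, not at payment time.
    if signed_on < date(2014, 1, 1):   # illustrative cut-over date
        return 30.0                    # old-scheme rate per hectare (invented)
    return 25.0                        # new-scheme rate per hectare (invented)


def annual_payment(agreement: Agreement) -> float:
    return agreement.area_hectares * payment_rate(agreement.signed_on)


# An agreement signed a week before the cut-over is still paid under the
# old rules for its full ten-year life.
print(annual_payment(Agreement(date(2013, 12, 24), 120.0)))   # 3600.0
```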
Some of these will work; some won’t. The important thing is to go in with eyes open about what you are trying to replace, and to recognise that when you reach from the front end into the back end, things get much harder – forget that at your peril. Put another way, Discovery is not just about how it should be, but about how it was and how it is … so you can be sure you’re not missing anything.