Disaggregation Disillusionment

About 15 years ago I wrote a post titled “Websites of Mass Disillusionment”, or maybe it was “Websites of Mass Delusion.”  I can’t recall which and, unusually, I can’t find the original text – I was told, by a somewhat unhappy Minister of the Cabinet Office, to delete the post or lose my job.  At the time I rather liked my job and so I opted to delete the post.  The post explored how, despite there being thousands of government websites, on which hundreds of millions of pounds were being spent, the public at large didn’t care about them, weren’t visiting them and saw no need to engage with government (here’s at least a thread of the article, published in August 2009).  I don’t think the Minister disagreed with the content, but he definitely wasn’t keen on the title, coming so soon after the famous missing WMDs in Iraq.
I’m somewhat hesitantly hinting at that title again with this post, though I have less fear of a Minister telling me I will lose my job because of it (I’m not employed by any Ministers) and, anyway, I think this topic, disaggregation, is worth exploring.
It’s worth exploring because the news over recent months has been full of stories about departments extending contracts with existing suppliers, re-scoping and re-awarding contracts to those same suppliers, or moving the pieces around those suppliers – creating the illusion of change but, fundamentally, changing little.
It looks like jobs for the boys again; there’s very little sign of genuine effort at disaggregation; they’re just moving the pieces around
This feels like a poor accusation – putting to one side the tone of “jobs for the boys” in 2018, it hints at dishonesty or incompetence when, I think, it says more about the challenges departments are facing as they grapple with unwinding contracts that were often put in place 15-20 years ago and that have been “assured” rather than “managed” for all of that time.
But, let’s move on and first establish what we mean, in the context of Public Sector IT, by disaggregation.  We have to wind back a bit to get to that:
IT Outsourcing In The Public Sector (1990 onwards)
In the early 1990s, when departments began to outsource their IT, the playbook was roughly:
Count up everyone with the word “technology”, “information” or “systems” in their job title and draw up a scope of services that encompassed all of that work. 
Carry out an extensive procurement to find a third party provider prepared to do the same job for the lowest price.  The very nature of government departments meant that these contracts were huge – sometimes £100-200m/year (in the 90s) – and, because the procurement process was such hard work, the contracts were long, often 10 years or more.
With them went hardware and software, networks and other gadgets – or, at least, the management of those things.  Whereas the people moved off the payroll, the hardware often stayed on the asset register (and new hardware went on that same asset register, even when purchased through the third party).  This was mostly about capital spending – something with flashing lights went on the books after all.  
There were a lot of moving parts in these deals – the services to be provided, the measures by which performance and quality would be assessed, legal obligations, plans for future exits and so on.  I’ve seen some of the contracts and they easily ran to more than 10,000 pages.
Side Effects
There were four interesting side effects as a result of these outsource deals:
  1. Many departments could now recover VAT on “managed services” but not on hardware purchases.  Departments are good at exploiting such opportunities, and so the outsource vendor would buy the hardware on behalf of the department, sell it back to the department as part of a managed service, and the department would then reclaim the VAT, getting 20% back on the deal.   Those who were around in the early days of G-Cloud will remember the endless loops about whether VAT could be reclaimed – it was some years after G-Cloud started that this was successfully resolved.
  2. Departments now had a route to buying more IT services, or capability, without needing to go through a new procurement, provided the scope of the original procurement was wide enough.  That meant that existing contracts could be used to buy new services.  And, as everyone knows, IT doesn’t stay still, so there were a lot of new services, and nearly all of them went through the original contract.  Those contracts swelled in size, with annual spend often double or triple the original expectation within the first few years.  When e-government, now digital, came along and when departments merged, those numbers often exploded.
  3. Whilst all of the original staff transferred, via TUPE, on the package they had in government – salary plus index-linked pension etc – any new staff brought on, e.g. to replace those who had left (or retired) or for new projects, came on a deal that was standard for the private sector.  That usually meant that instead of pension contributions being 27-33%, they were more likely 5-7%.  Instantly, that created an easy save for government – it was 20% or more cheaper, even before we talk VAT, to use the existing provider (a rough sketch of this arithmetic follows this list).
  4. Whilst departments have long had an obligation to award business to smaller players, the ease of using the big players with whom they already had contracts made that difficult (in the sense that there was an easy step “write a contract change to award this work to X” versus “Write the spec, go to market, evaluate, negotiate, award, work with new supplier who doesn’t understand us”).  Small players were, unfairly, shut out.
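To make side effects 1 and 3 concrete, here is a back-of-envelope sketch.  The VAT rate and the pension ranges come from the text above; the salary and hardware figures are invented for illustration.

```python
# Illustrative arithmetic only. VAT rate and pension ranges are from the
# text above; the salary and hardware figures are invented.

VAT = 0.20

# Side effect 1: hardware bought through the managed service.
hardware = 100_000
direct_buy = hardware * (1 + VAT)   # VAT paid and not reclaimable
via_vendor = hardware               # VAT charged, then reclaimed
print(f"Hardware direct: £{direct_buy:,.0f}   via vendor: £{via_vendor:,.0f}")

# Side effect 3: cost of a replacement member of staff.
salary = 40_000
in_house = salary * 1.30            # midpoint of 27-33% pension contributions
outsourced = salary * 1.06          # midpoint of 5-7%
print(f"Staff in-house: £{in_house:,.0f}   outsourced: £{outsourced:,.0f}")
print(f"Saving: {1 - outsourced / in_house:.0%}")  # ~18% on midpoints, before the VAT effect
```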
The Major Flaw
There was also a significant flaw:
  • When a department wanted to know what something cost, it was very hard to figure out.  Email, for instance – a few servers for Outlook, some admin people to add and delete users etc; how hard can it be to cost?  It turned out to be a bit like Heisenberg’s Uncertainty Principle – the more precisely you pinned down where the money was, the less you knew about where it was going.  In other words, if you looked closely at one thing, the money moved around.  If something needed to be cheap to get through, the costs were loaded elsewhere.  If something needed to be expensive to justify continued investment (avoiding the sunk cost fallacy), costs were loaded on to it.  Then, of course, there was the ubiquity of “shared services” – as in “Well, Alan, if you want me to figure out how much email costs, we need to consider some of Bob’s time as he answers the phone for all kinds of problems, a share of the network costs for all that traffic, some of Heidi’s time because email is linked to the directory and without the work she does on the directory, it wouldn’t work” and so on.  Benchmarking was the supposed solution for that – but if you couldn’t break out the costs, how did you know whether it was value for money?  Did suppliers consciously hinder efforts to find the true cost?  I suspect it was a mix of the structure they’d built for themselves – they didn’t, themselves, know how it broke down – and a lack of disciplined chasing by departments … because the side effects and the flaws self-reinforced.
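To see how slippery that is, here is a minimal sketch of the allocation problem.  Every service, person and number is invented; the mechanism is the one described above – the apparent “cost of email” depends entirely on which rule you pick for sharing out the shared pot.

```python
# Minimal sketch of the shared-cost allocation problem. All services and
# numbers are invented for illustration.

direct_costs = {"email": 250_000, "desktop": 900_000, "hosting": 1_400_000}
shared_costs = {"network": 600_000, "helpdesk": 300_000, "directory": 200_000}

def cost_of_email(share):
    """Apparent cost of email given some share of the shared pot."""
    return direct_costs["email"] + sum(shared_costs.values()) * share

even_split = 1 / len(direct_costs)                             # split shared costs evenly
by_spend = direct_costs["email"] / sum(direct_costs.values())  # pro-rata to direct spend

print(f"Email, even split: £{cost_of_email(even_split):,.0f}")  # ~£617k
print(f"Email, by spend:   £{cost_of_email(by_spend):,.0f}")    # ~£358k
# Both answers are defensible; neither is benchmarkable.
```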
Reinforcement

Over the 20 years or so from the first outsourcing until Francis Maude part 2 started, in 2010, these side effects, and the major flaw, reinforced the outsourcing model.  It was easy to give work to the supplier you already worked with.  It was hard to figure out whether you were over-paying, so you didn’t try to figure that out very often.  The supplier was, on the face of it anyway, cheaper than you could do it yourself (because VAT, because cost of transition, because pensions etc).  These aren’t good arguments, but I think they were the arguments.


What Do We Mean By Disaggregation?
Disaggregation, then, was the idea of breaking out these monolithic contracts (some departments, to be fair, had a couple of suppliers, but usually as a result of a machinery of government change that merged departments, or broke some apart).
A department coming to the end of its contract period with its seeming partner of the last decade would, instead of looking for a new supplier to take on everything, break their IT services into several component parts: networks, desktop, print, hosting, application support, Helpdesk and so on.
There were essentially three ways of attempting this as in the picture below (this, and all of the pictures here, are from various slide decks worked on in 2013/4):
That is:
1) A simple horizontal split – perhaps user facing services, and non-user facing.   This was rarely chosen as it didn’t pass the GDS spend controls test and, in reality, didn’t achieve much of the true aim of disaggregation, though it did make for a simple model for a department to operate.
2) A “towers based” model with an integration entity or partner working with several towers, for instance hosting, desktop, network and applications support.  This was the model chosen by the early adopters of disaggregation.  Some opted to find a partner as their SIAM, some thought about bringing it in house, some did a little of both.  The pieces in a tower model are still pretty large, often far out of the reach of small providers, especially if the contract runs over 5 years or more.  Those departments that tried it this way haven’t, for the most part, had a good experience, and the model has fallen out of favour.
3) A fully disaggregated model with a dozen or more suppliers, each best of breed and focused on what they were best at.  Integration, in this case, was more about filling in all of the gaps and, realistically, could only be done in house.  Long ago, and I know it’s a broken record, when we built the Gateway, we were disaggregated – 40+ suppliers working on the development, a hosting provider, an infrastructure builder, an apps support provider, a network provider and so on.  Integration at this level isn’t easy.
In the “jobs for the boys” quote above, the claim is really that the department concerned had opted for something close to (2) rather than (3) – that is, deliberately making for large contracts (through aggregation) and preventing smaller players from getting involved.  It’s more complicated than that.

That reinforcement – the side effects and the flaws – plus the inertia of 20+ years of living in a monolithic outsource model meant that change was hard.  Really hard.

What Does That Mean In Practice?
Five years ago, I did some work for a department looking at what it would take to get to the third model, a fully disaggregated service.  The scope looked like this:
Service integration, as I said above, fills in the gaps … but there are a lot of components.  Lots of moving parts for sure.  Many, many millions were spent by departments on Target Operating Models – pastel shaded powerpoints full of artful terms for what the work would look like, how it would be done and what tools were used.  Nearly all of that, I suspect, sits on a shelf, long since abandoned as stale, inflexible and useless.
If they had disaggregated to this level, they would need to sign more than 20 contracts.  That would mean 20 procurements carried out roughly in parallel, with some lagging to allow others to break ground first.  But all would need to complete by the time the contract with the main supplier came up for renewal.  The end date, in other words, was, in theory at least, fixed.  Always a bad place to start.
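A toy backward-scheduling sketch shows why the fixed end date is such a bad place to start.  Every duration below is invented; the point is that each lot’s latest safe start is dictated by the immovable expiry, so any slip comes straight out of its transition window.

```python
# Toy sketch of the fixed-end-date problem. All durations (months) invented.

INCUMBENT_EXPIRY = 36  # months from now: the immovable deadline

# (procurement months, transition months) per lot -- hypothetical values
lots = {
    "WAN":         (9, 6),
    "desktop":     (12, 9),
    "hosting":     (12, 12),
    "app support": (15, 9),
    "legacy apps": (18, 24),
}

for name, (procure, transition) in lots.items():
    latest_start = INCUMBENT_EXPIRY - (procure + transition)
    status = "OK" if latest_start >= 0 else "ALREADY TOO LATE"
    print(f"{name:12} latest start: month {latest_start:3}  [{status}]")
# Run 20 of these in parallel and the odds that nothing slips are not good.
```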
Procurement Challenge
When you are procuring multiple things in parallel, those buying and those selling suffer.  Combining some things would allow a supplier, perhaps, to offer a better deal.  But the supplier doesn’t know what they’ve won and can’t bid on the basis that they will win several parts and so book the benefit of that in their offer (unless they’re prepared to take some possibly outlandish risks).  Likewise, the customer wants variety in the supply chain and wants to encourage bidders to come forward but, at the same time, needs to manage a bid process with a lot of players, avoiding giving any single bidder more work than is optimal (and the customer is unable to influence the outcome of any single bid of course), keeping everyone in the game, staying away from conflicts of interest and so on.
Roadmap Challenge
The transitions are not equally easy (or equally hard).  Replacing WAN connectivity is relatively straightforward – you know where all the buildings are and need to connect them to the backbone, or to the Internet.  Replacing in-office connectivity is a bit harder – you need to survey every office and figure out what the topology of the wireless network is, ripping out the fixed connections (except where they might be needed).  Moving to Office 365 might be harder still, especially if it comes with a new Active Directory and everyone needs to be able to mail everyone else, and not lose any mail, whilst the transition is underway.  None of these is akin to putting astronauts on the moon, but for a department with no astronauts, hard enough.
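The sequencing matters as much as the difficulty: some transitions can’t start until others land.  Here is a minimal sketch with a hypothetical dependency graph echoing the examples above (new WAN before office networking; directory before mail):

```python
# Ordering transitions so prerequisites land first. The dependency graph
# here is hypothetical, echoing the examples in the text.
from graphlib import TopologicalSorter  # Python 3.9+ standard library

deps = {
    "WAN":           set(),              # connect the buildings first
    "office wifi":   {"WAN"},
    "new directory": {"WAN"},
    "Office 365":    {"new directory"},  # mail needs the directory in place
}
print(list(TopologicalSorter(deps).static_order()))
# e.g. ['WAN', 'office wifi', 'new directory', 'Office 365']
```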

We also need to consider that modern services are, for the most part, disaggregated from day one – new cloud services are often procured from an IaaS provider, several development companies, a management company and so on.  What we are talking about here, for the most part, is the legacy applications that have been around a decade or more, the network that connects the dozens or hundreds of offices around the country (or the world), the data centres that are full of hardware and the devices that support the workload of thousands, or tens of thousands, of users.  These services are the backbone of government IT, pending the long promised (and delayed even longer than disaggregation) digital transformation.  They may not be (and, indeed, are not) user led, and they’re certainly not agile – but they handle our tax, pensions, benefits, grants to farmers and so on.

What Does It Really Mean In Practice?

Writing papers for Ministers many years ago, we would often start with two options, stark choices.  The preamble we used to describe these was “Minister, we have two main choices.  The first one will result in nuclear war and everyone will die.  The second will result in all out ground war and nearly everyone will die.  We think we have a third way ahead, it’s a little risky, and there will be some casualties, but nearly everyone will survive.”  Faced with that intro, what choice do you think the Minister will make?

In this context, the story would be something like: “Minister, we have two options.  The first is to largely stay as we are.  We will come under heavy scrutiny, save no money, progress our IT not a jot and deliver none of the benefits you have promised in your various policies.  The second is to disaggregate our services massively, throwing our control over IT into chaos, increasing our costs as we transition and sucking up so many resources that we won’t be able to do any of the other work that you have added to our list since you took office.  Alternatively … we have a third choice.”

Disaggregate a little. Take some baby steps.  Build capability in house, manage more suppliers than we’re used to, but not so many that our integration capability is exhausted before it has a chance.

Remember, all those people in the 90s with “technology”, “IT” or “systems” in their job title had been outsourced.  They were the ones who built and maintained systems and applications.  In their place came people who managed those who built and maintained systems – and all of those people worked for third parties.   There’s a huge difference between managing a contract where one company is tasked with achieving X by Y and managing three companies, none of whom have a formal relationship with each other, to achieve X by Y.
The next iteration tried to make it a bit simpler:
We’re down from more than 20 contracts to about 11. Still a lot, but definitely fewer to manage – though still too many for most departments.  We worked on further models that merged several of the boxes, aiming for 5-7 contracts overall.  A move from just 1 contract to 5-7 is still a big move, but it can be managed with the right team in-house, the right tools and if done at the right pace.
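Why does the contract count matter so much?  Because the integration burden – the “white space” discussed below – grows roughly with the number of supplier pairs, not with the number of suppliers.  A quick sketch:

```python
# The white space between contracts grows with supplier *pairs*: n*(n-1)/2.
for n in (1, 5, 7, 11, 20):
    print(f"{n:2} contracts -> {n * (n - 1) // 2:3} potential supplier-to-supplier gaps")
# 1 -> 0, 5 -> 10, 7 -> 21, 11 -> 55, 20 -> 190.  Moving from one contract
# to twenty doesn't double the integration job; it multiplies it ~200-fold.
```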
The Departmental Challenge
Departments, then, face serious challenges:
– The end date is fixed.  Transition has to be done by the time the contract with the incumbent finishes.  Many seem to be solving that by extending what they have, as they struggle with delays in specification, procurement or whatever.
– Disaggregate as much as is possible.  The smaller the package, the more bidders will play.  But the more disaggregation there is, the more white space there is between the contracts and the greater the management challenge for the departments.  Most departments have not spent the last 5 years preparing for this moment by doubling up on staff – using some staff to manage the existing contract and finding new staff to prepare for the day when they will have to manage suppliers differently.  The result is that they are not disaggregating as much as is possible, but as much as they think they can.
– Write shorter contracts.  Short contracts are good – they let you book a price now in the knowledge that, for commodity items at least, the same thing will be cheaper in two years’ time. It won’t necessarily be, but a short contract at least means you can test the market every two years and see what’s out there – better prices, better service if you aren’t happy with your supplier, new technology etc (a toy illustration follows this list).  The challenge is that the process – the 5 stage business case plus the procurement – probably takes that long for some departments, and they are just not geared up to run fleets of procurements every two years.  Contracts end up longer, then, to allow everyone to do the transition, get it working, make/save some money and then recompete.

– TUPE nearly always applies.  Except when it doesn’t – if you take your email service and move it to Office 365, the staff aren’t moving to Microsoft or to that ubiquitous company known as “the cloud.”  But when it does apply, it’s not a trivial process. Handling which staff transition to which companies (and ensuring that the companies taking on the staff have the capability to do it) is tricky.  Big outsource providers have been doing this for years and have teams of people who understand how the process works.  Smaller companies won’t have that experience and, indeed, may not have the capability to bring in staff on different sets of Ts & Cs.
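On the short-contracts point, a toy illustration of what re-competing every two years is worth for genuinely commoditised services – the price-decline rate is a made-up assumption:

```python
# Illustrative only: value of re-competing commodity services every two
# years, assuming prices fall ~10% a year (the rate is an assumption).
annual_decline = 0.10
for year in range(0, 10, 2):
    price = 100 * (1 - annual_decline) ** year
    print(f"Year {year}: re-competed price ~{price:.0f}")
# A 10-year lock at year-0 prices forgoes every one of those steps down --
# though transition costs at each re-compete eat into the gain.
```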

On top of that, there are smaller challenges on the way to disaggregation, with some mitigations:

– Lack of skills available in the department; need to identify skills and routes for sourcing them early

– Market inability to provide a mature offer; coach the market in what will be wanted so that they have time to prove it

– Too great an uncertainty or risk for the business to take; prove concepts through alpha and beta so risks are truly understood

– Lack of clear return for the investment required; demonstrate delivery and credibility in the delivery approach so that costs are managed and benefits are delivered as promised

– Delays in delivery of key shared services; close management with regular delivery cycles that show progress and allow slips to be visible and dealt with

– Challenges in creating an organisation that can respond to the stimulus of agile, iterative delivery led by user need; start early and prove it, adjust course as lessons are learned, partner closely with the business

What Do We Do?
Departments are on a journey.  They are already disaggregating more than we can see – the evidence of G-Cloud spend suggests that new projects are increasingly being awarded to smaller, newer players who have not often worked with government before. Departments are, therefore, learning what it’s like to integrate multiple suppliers, to manage disparate hosting environments and to deliver projects on an iterative basis.  As with any large population, some are doing that well, some are doing just about ok, and some are finding it really hard and making a real mess of it.  One hopes that those in the former category are teaching those in the latter, but I suspect most are too busy getting on with it to stop and educate others.
The journey plays out in stages – not in three simple stages as I have laid out above, but in a continuum where new providers are coming in and processes are being reformed and refocused on services and users.  Meanwhile, staff in the department are learning what it’s like to “deliver” and “manage” and “integrate” first one service and then many services, rather than “assure” them and check KPIs and SLAs.  Maybe the first jump is from one supplier to four, or five.  A couple of years later, one of those is split into two or three parts.  A year later, another is split.
This is a real change for the way government IT is run.  It’s a change that, in many ways, takes us all the way back to the 1980s when government was leading the way in running IT services – when tax, benefits, pensions and import/export was first computerised.  Back then, everything was run in house.  Now, key things are run in house and others outsourced, and, eventually, dozens of partners will be involved.  If we had our time over again, I think we would have outsourced paper handling (because it was largely static and would eventually decline) and kept IT (because it constantly changed) and customer contact (because that’s essentially what government does, especially when the IT or the paper processing lets it down) in house.
Disaggregation hasn’t happened nearly as fast as many of us hoped, or, indeed, as many of us have worked for in the last few years.  But it is happening.    The side effects, the flaws, inertia, reinforcement and a dominance of “assurance” rather than “delivery” capability, mean it’s hard.

We need to poke and prod and encourage further experimentation.  Suppliers need to make it easy to buy and integrate their services (recognising that even the cheapest commodity needs to be run and operated by someone). And when someone seems to take a short cut and extend a contract, or award to an existing supplier, we need to understand why, and where they are on their journey.  Departments need to be far more transparent about their roadmaps and plans to help with that.

I want to give departments the benefit of the doubt here.  I don’t see them taking the easy way out, though I have seen some monumental cockups badged as efforts to disaggregate.    Staggering amounts of money – in all senses of the word (cash out the door, business stagnation, loss of potential benefits etc) – have been wasted in this effort.  That suggests a more incremental approach will work better, if not as well as we would all want.

That means that departments need to:

  1. Be more open about what their service provision landscape looks like two, three, four and five years out (with decreasing precision over time, not unreasonably). Coach the market so that the market can help, don’t just come to it when you think you are ready.
  2. Lay out the roadmap for legacy technology, which is what is holding back the increasing use of smaller suppliers, shorter contracts and more disaggregation.  There are three roadmap paths – everything goes exactly as you planned and you meet all your deadlines (some would say this is the least likely), a few things go wrong and you fall a little behind, or it all goes horribly wrong and you need a lot more time to migrate away from legacy.  Departments generally consider only the first, though one or two have moved to the second. There’s an odd side effect of the spend control process – HMT requires optimism bias and so on to be included in any business case, spend controls normally strip that out, then departmental controls move any remaining contingency to the centre and hold it there, meaning projects are hamstrung by having no money (subject to approvals anyway) to deal with the inevitable challenges (a toy illustration follows this list).
  3. Share what you are doing with modern projects – just what does your supplier landscape look like today?
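On the spend-control side effect in point 2, a toy illustration – the percentages are invented, the mechanism is the point:

```python
# Toy illustration of the contingency squeeze in point 2 above.
base_cost = 100.0
bias = 0.20                                  # optimism bias added per HMT guidance
business_case = base_cost * (1 + bias)       # 120 goes into the approved case
after_controls = business_case / (1 + bias)  # spend controls strip the bias: back to 100
contingency = after_controls - base_cost     # 0 -- and any remainder would be
                                             # held at the centre, not by the project
print(f"Project budget: {after_controls:.0f}, contingency in hand: {contingency:.0f}")
# The project runs on 100; the 'inevitable challenges' have no funding line.
```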

Mind The Gaps – Nothing New Under The Sun

As we start 2015, a year when several big contracts are approaching their end dates and replacement solutions will need to be in place, here’s a presentation I gave a couple of times last year looking at the challenges of breaking up traditional, single prime IT contracts into potentially lots of smaller, shorter contracts:

Government Draws The Line

On Friday, the Cabinet Office announced (or re-announced, according to Margaret Hodge) that:

  • no IT contract will be allowed over £100 million in value – unless there is an exceptional reason to do so; smaller contracts mean competition from the widest possible range of suppliers
  • companies with a contract for service provision will not be allowed to provide system integration in the same part of government
  • there will be no automatic contract extensions; the government won’t extend existing contracts unless there is a compelling case
  • new hosting contracts will not last for more than 2 years

I was intrigued by the lower case. Almost like I wrote the press release.

These are the new “red lines” then – I don’t think these are re-announcements; they are a firming up of previous guidance.  When the coalition came to power, there was a presumption against projects over £100m in value; now there appears to be a hard limit (albeit with the caveat around exceptional reasons, ditto with extensions where there is a “compelling” case).
On the £100m limit:
There may be a perverse consequence here.  Contracts will be split up and/or made shorter to fit within the limit; or contracts may be undervalued with the rest coming in change controls.  Transitions may occur more regularly, increasing costs over the long term.  Integration of the various suppliers may also cost more.  For 20 years, government has bought its IT in huge, single prime (and occasionally double prime) silos.  That is going to be a hard, but necessary, habit to break.
£100m is, of course, still a lot of money.   Suppliers bidding for £100m contracts are likely the same as those bidding for £500m contracts; they are most likely not the same as those bidding for £1m or £5m contracts.
To understand what the new contract landscape looks like will require a slightly different approach to transparency – instead of individual spends or contracts being reported on, it would give a better view if the aggregate set of contracts to achieve a given outcome were reported.  So if HMRC are building a new Import/Export system (for instance), we should be able to visit a site and see the total set of contracts that are connected with that service (including the amounts, durations and suppliers).
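As a sketch of what that aggregated view might look like in data terms – every name and figure below is invented; the shape of the report is the point:

```python
# Sketch of a per-outcome contract report. All names and figures invented.
import json

service_view = {
    "outcome": "New Import/Export service",
    "department": "HMRC",
    "contracts": [
        {"supplier": "SmallCo A", "scope": "development", "value_gbp": 4_000_000, "months": 18},
        {"supplier": "HostCo B",  "scope": "hosting",     "value_gbp": 1_500_000, "months": 24},
        {"supplier": "NetCo C",   "scope": "networking",  "value_gbp": 2_000_000, "months": 24},
    ],
}
service_view["total_value_gbp"] = sum(c["value_gbp"] for c in service_view["contracts"])
print(json.dumps(service_view, indent=2))
```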
On the “service providers” will not be allowed to carry out “system integration” point:
I’m not sure that I follow this but I take it to mean that competition will be forced into the process so that, per my point above about disaggregated contracts, suppliers will be prevented from winning multiple lots (particularly where hardware and software are provided by the same company).  That, in theory, has the most consequence for companies like Fujitsu and HP, who typically provide their own servers, desktops or laptops when taking on other responsibilities in an outsource deal.
And no more extensions:
Assuming that there isn’t a compelling reason for extension, the contract term is the contract term.  If that rule is going to be rigorously applied to all existing contracts, there are some departments in trouble already who have run out of time for a reprocurement or who will be unable to attract any meaningful competition into such a procurement.  Transparency, again, can help here – which contracts are coming up to their expiry point (let’s look ahead 24 months to start with) and what is happening to each of them (along with what actually happened when push came to shove).  That would also help suppliers, particularly small ones, understand the pipeline.
On limiting hosting contracts to 2 years:
That’s consistent with the G-Cloud contract term (notwithstanding that some suppliers wrote to GDS last week asking for the term to be extended to 3 years).  But it’s also unproven – it’s one thing to “copy and paste” a dozen virtual machines from one data centre to another, it’s another thing to shift a petabyte of data or a set of load-balanced, firewalled, well-routed network connections.  Government is going to have to practise this – so far, moves of hosting providers have taken a year or more and cost millions (without delivering any tangible business benefit, especially given the necessary freezes either side of the move).  It also means trouble for some of the legacy systems that are fragile and hard to move.  The Crown Hosting Service could, at least, limit moves of those kinds of systems to a single transition to their facilities – that would be a big help.
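A quick back-of-envelope on the petabyte point – the link speed and utilisation are assumptions for illustration:

```python
# Moving a petabyte over a dedicated link: back-of-envelope only.
data_bits = 1e15 * 8        # 1 PB in bits
link_bps = 1e9              # assume a dedicated 1 Gbps circuit
utilisation = 0.7           # assume realistic sustained throughput

days = data_bits / (link_bps * utilisation) / 86_400
print(f"~{days:.0f} days of continuous transfer")  # ~132 days
# And that is before re-pointing the load balancers, firewall rules and
# routes -- a 2-year hosting term leaves little slack for moves this size.
```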

If IT Is Broken … Have We Got The Right Fix?

The main problem with Government IT, many will say, is that contracts (and so power, revenue and so on) are concentrated in a small number of very large vendors, mostly in single prime contracts. Even when they’re not in single prime setups they are in dual prime (e.g. MoJ has Atos and Logica, Home Office has Atos and Fujitsu, HMRC has CapGemini and, wait, Fujitsu and so on). So the fix, we are told, is to no longer allow such single prime contracts and, as a consequence of deciding that, break the market open to new players. Job done.

A new model has been crowned that goes by the name of “towers”. In another time, it might have been called best of breed. Essentially, several different suppliers are chosen, each of which will be skilled in a specific tower – where those towers are functions such as hosting, applications development, security, desktop support and so on (some tower models have seven such towers, some as many as thirteen). Such towers sometimes exist within the context of a single prime model but, this time, it is the contracting authority (i.e. the government department) that will own all of those contracts. They will, the thinking goes, have complete visibility of all of the prices and eliminate any “margin on margin” that results from the prime holding subcontracts. The result? A better deal will be had by government.

There is a very special kind of tower though. We might call it the sine qua non of the tower model. It’s variously called the service integration tower or the SIAM (Service Integration and Management). In effect, it’s the old prime contractor (though not necessarily one of the companies that holds those contracts now) re-appearing as the company that manages all of the different towers, but doesn’t actually hold the contracts.
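A sketch of the two contractual shapes, with invented supplier names – the detail that matters is who actually holds the contracts:

```python
# Sketch of the two contracting shapes. All supplier names invented.
prime_model = {
    "department contracts with": ["Prime Co"],
    "Prime Co subcontracts":     ["HostCo", "DeskCo", "NetCo", "AppsCo"],
    "margin on margin":          True,
}
towers_model = {
    "department contracts with": ["HostCo", "DeskCo", "NetCo", "AppsCo", "SIAM Co"],
    "SIAM Co holds contracts":   False,  # manages the towers, owns none of them
    "margin on margin":          False,
}
for name, model in (("prime", prime_model), ("towers", towers_model)):
    print(name, "->", model)
```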

If we know, then, that the prime contractor model was so broken that it needed to be replaced, are we sure, I wonder, that the new model is going to fix all of the problems inherent in the old world? Well, let’s see:

– Lack of transparency … fixed to some degree … the contracting department will now see the cost of each of the components of the service. That doesn’t mean that there is full transparency within each contract of course. Whilst everything is “open book” there are a million definitions of what that means.

– No margin on margin … fixed to some degree … the absence of a prime means that one layer of margin is gone, but there will be other layers within the towers (particularly where small companies are encouraged to play by coming in under the “safe” wing of a bigger company)

– Shorter contracts so more regular competition … fixed to some degree … Whilst G-Cloud encourages one year contracts (the framework forces such contracts, though there is nothing to prevent renewal at the end of the period), the new models seem to encourage durations somewhere between 3 and 7 years. Better than 10 years, but that still gives plenty of room for prices to creep up (the regular iterations of G-Cloud services will provide very useful benchmarking though, as is already being seen)

– Lack of innovation … uncertain … it’s probably true that big IT companies struggle to bring new capabilities to bear but I suspect that it’s equally true that government has struggled to adopt such interesting developments as became available. Splitting contracts into smaller chunks doesn’t necessarily make them lighter weight and easier to change, but it might if government carries on with its plans to change the protective marking of data (and if the 50% of new spend via the public cloud promise is held on to, there is certainly scope for more innovation, but I don’t see that as related, particularly to the towers model)

– Greater involvement of SMEs … Uncertain … Breaking what were very large contracts into smaller, shorter contracts should certainly allow new players into the game, but it’s not clear if small players will make the cut. The recent PSN framework perhaps demonstrates that, with only two small players involved – though that’s still two more than before. Without a wholesale shift away from complexity towards commodity, small players will still struggle to navigate the arcane bureaucracy of most government contracts and so will likely need to shelter under the wings of the bigger players for some time to come

– Better delivery … Uncertain … I guess we have to wait and see. There will, though, be several schools of thought. The large players will say that they can only deliver if they have control of everything; others will say that the more complex the interactions between the contracts, the harder it will be to deliver; still others will say that competitive tension between the suppliers and the knowledge that the contracts are much shorter (or that individual pieces of work can be competed) will improve performance all round. How the SIAM looks and works may turn out to be the key here – how they operate, influence and drive change could make all the difference to delivery (for better or worse)

– Lower risk or, at least, better and clearer risk transfer … Also uncertain … With many more moving parts and overall control resting with the customer, aided in some way yet to be determined by the SIAM, the risk picture certainly looks more complicated from the outset.

The net of that is that, in my view, it’s unclear if this new model solves the problems of the old model. Much of whether it does will be in the detail of the contracts and the behaviour on the ground, by which time it’s too late to do much about it until the end of the first contract term – and at least that is a shorter period than it has historically been.

What worries me most about the new model is not whether it fixes any problems of the old model but how it will actually be put together. There are three stages that need to be got through, each of which will be more challenging than the equivalents ever were in the old model.

1. Buying it all. With the prime model, there were many potential suppliers at the beginning, a few in the middle and 2-3 near the end. Negotiations completed with just one. With the new model there will be multiple, parallel, inter-dependent commercial negotiations underway. That will put a huge burden on the client side buying team. In the past those teams have been heavily supported by external parties; that may not be possible this time, although some will put the SIAM or an equivalent in place first to mitigate that problem. Of course, several parts of government will be doing this at the same time putting pressure on customers, suppliers and potential partners.

2. The transition. There have been relatively few changes of contract over the last ten years. HMRC moving from EDS to Cap is one, for instance. That was largely a one to one transition. With the new models there will, if the point of the model is realised, be multiple transitions to manage – staff will be parcelled up and moved to any one of perhaps a half dozen suppliers (some may go back to the customer), systems will move to any of several data centres, support for apps will move (supplier, location and perhaps even country) and so on. That’s going to take a lot of management. Departments may say “that’s what the SIAM is for”. Suppliers may be giving work away for one contract at the same time as they are taking work on as a result of winning another contract. That could get interesting.

3. Running day to day. All of those moving parts, everyone looking at each other when a problem occurs, many pointing fingers away from themselves. How to diagnose a problem? Who moves first? Who pays service credits? Who proposes, funds and benefits from improvements? What happens in a crisis? All to be figured out. Again, some will give much of that role to the SIAM.

I used “SIAM” in each of those paragraphs deliberately. I get the feeling that it’s the role that everyone thinks will fix the problems of the past. Yet whoever operates there will not have contractual leverage unless they are, actually, the client themselves (that is, the owner of the contracts). At the same time, the SIAM looks a lot like a prime, without the ability to take on / share / divest / pass back risk. It isn’t, in my view, as simple as breaking up the contracts and creating a phantom integrator who somehow brings it together.

I wonder if the analysis – and the sharing of understanding, lessons learned, best practice etc – is in place to support such a comprehensive and largely parallel implementation of the new model. It’s going to take a lot of work from all parties to make it work and, even then, it may turn out to be no better than the old model in some ways. It may be worse in some, better in others. But we’ll be in it, across the board, by then. So best to do all of the thinking now.