Disaggregation Disillusionment

About 15 years ago I wrote a post titled “Websites of Mass Disillusionment”, or maybe it was “Websites of Mass Delusion.”  I can’t recall which and, unusually, I can’t find the original text – I was told, by a somewhat unhappy Minister of the Cabinet Office, to delete the post or lose my job.  At the time I rather liked my job and so I opted to delete the post.  The post explored how, despite there being 1000s of government websites, on which 100s of millions of pounds were being spent, the public, at large, didn’t care about them, weren’t visiting them and saw no need to engage with government (here’s at least a thread of the article, published in August 2009).  I don’t think the Minister disagreed with the content, but he definitely wasn’t keen on the title, coming so soon after the famous missing WMDs in Iraq.
I’m somewhat hesitantly hinting at that title again with this post, though I have less fear of a Minister telling me I will lose my job because of it (I’m not employed by any Ministers) and, anyway, I think this topic, disaggregation, is worth exploring.
It’s worth exploring because the news over recent months has been full of stories about departments extending contracts with existing suppliers, re-scoping and re-awarding contracts to those same suppliers, or moving the pieces around those suppliers, creating the illusion of change but, fundamentally, changing little.
It looks like jobs for the boys again; there’s very little sign of genuine effort at disaggregation; they’re just moving the pieces around
This feels like an unfair accusation – putting to one side the tone of “jobs for the boys” in 2018, it hints at dishonesty or incompetence when, I think, it says more about the challenges departments are facing as they grapple with unwinding contracts that were often put in place 15-20 years ago and that have been “assured” rather than “managed” for all of that time.
But, let’s move on and first establish what we mean, in the context of Public Sector IT, by disaggregation.  We have to wind back a bit to get to that:
IT Outsourcing In The Public Sector (1990 onwards)
In the early 1990s, when departments began to outsource their IT, the playbook was roughly:
Count up everyone with the word “technology”, “information” or “systems” in their job title and draw up a scope of services that encompasses all of that work.
Carry out an extensive procurement exercise to find a third party provider prepared to charge the lowest price to do the same job.  The very nature of government departments meant that these contracts were huge – sometimes £100-200m/year (in the 90s) – and because the procurement process was such hard work, the contracts were long, often 10 years or more.
With them went hardware and software, networks and other gadgets – or, at least, the management of those things.  Whereas the people moved off the payroll, the hardware often stayed on the asset register (and new hardware went on that same asset register, even when purchased through the third party).  This was mostly about capital spending – something with flashing lights went on the books after all.  
There were a lot of moving parts in these deals – the services to be provided, the measures by which performance and quality would be assessed, legal obligations, plans for future exits and so on.  I’ve seen some of the contracts and they easily ran to more than 10,000 pages.
Side Effects
There were four interesting side effects as a result of these outsource deals:
  1. Many departments could now recover VAT on “managed services” but not on hardware purchases.  Departments are good at exploiting such opportunities, and so the outsource vendor would buy the hardware on behalf of the department, sell it back to the department as part of a managed service, and the department would then reclaim the VAT, getting 20% back on the deal.  Those who were around in the early days of G-Cloud will remember the endless loops about whether VAT could be reclaimed – it was some years after G-Cloud started that this was successfully resolved.
  2. Departments now had a route to buying more IT services, or capability, without needing to go through a new procurement, provided the scope of the original procurement was wide enough.  That meant that existing contracts could be used to buy new services.  And, as everyone knows, IT doesn’t stay still, so there were a lot of new services, and nearly all of them went through the original contract.  Those contracts swelled in size, with annual spend often double or triple the original expectation within the first few years.  When e-government, now digital, came along and when departments merged, those numbers often exploded.
  3. Whilst all of the original staff involved transferred, via TUPE, on the package they had in government – salary plus index-linked pensions etc – any new staff brought on, e.g. to replace those who had left (or retired) or for new projects, would come on a deal that was standard for the private sector.  That usually meant that instead of pension contributions being 27-33%, they were more likely 5-7%.  Instantly, that created an easy save for government – it was 20% or more cheaper, even before we talk VAT, to use the existing provider (a rough sketch of this arithmetic, and the VAT point in item 1, follows this list).
  4. Whilst departments have long had an obligation to award business to smaller players, the ease of using the big players with whom they already had contracts made that difficult (in the sense that there was an easy step “write a contract change to award this work to X” versus “Write the spec, go to market, evaluate, negotiate, award, work with new supplier who doesn’t understand us”).  Small players were, unfairly, shut out.
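To make the arithmetic of side effects 1 and 3 concrete, here is a minimal sketch with invented figures: the percentages are the ones quoted above, everything else (salary bill, supplier margin) is illustrative.

```python
# Rough, illustrative arithmetic only: the pension and VAT rates are those
# mentioned above; the salary bill and supplier margin are invented.

def in_house_cost(salary_bill, pension_rate=0.30):
    """Annual cost of keeping the staff: salary plus index-linked pension (~27-33%)."""
    return salary_bill * (1 + pension_rate)

def outsourced_cost(salary_bill, pension_rate=0.06, supplier_margin=0.10):
    """Same work via a supplier: private-sector pensions (~5-7%) plus a margin.
    VAT at 20% is charged on the managed service but reclaimed, so it nets out;
    hardware bought directly by the department would not have been reclaimable."""
    return salary_bill * (1 + pension_rate) * (1 + supplier_margin)

bill = 10_000_000  # hypothetical £10m salary bill
print(f"in house:   £{in_house_cost(bill):,.0f}")   # £13,000,000
print(f"outsourced: £{outsourced_cost(bill):,.0f}")  # £11,660,000, cheaper on paper
```

Even before the VAT asymmetry, the pension difference alone makes staying with, or expanding, the incumbent deal look like an easy saving, which is exactly the reinforcement described below.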
The Major Flaw
There was also a significant flaw:
  • When a department wanted to know what something cost, it was very hard to figure out.  Email, for instance – a few servers for Outlook, some admin people to add and delete users etc – how hard can it be to cost?  It’s a bit like Heisenberg’s Uncertainty Principle – the more closely you study where something is, the less you know about where it’s going.  In other words, if you looked closely at one thing, the money moved around.  If something needed to be cheap to get through, the costs were loaded elsewhere.  If something needed to be expensive to justify continued investment (avoiding the sunk cost fallacy), costs were loaded on to it.  Then, of course, there was the ubiquity of “shared services” – as in “Well, Alan, if you want me to figure out how much email costs, we need to consider some of Bob’s time as he answers the phone for all kinds of problems, a share of the network costs for all that traffic, some of Heidi’s time because email is linked to the directory and without the work she does on the directory, it wouldn’t work” and so on.  Benchmarking was the supposed solution for that – but if you couldn’t break out the costs, how did you know whether it was value for money?  Did suppliers consciously hinder efforts to find the true cost?  I suspect it was a mix of the structure they’d built for themselves – they didn’t, themselves, know how it broke down – and a lack of disciplined chasing by departments … because the side effects and the flaw were self-reinforcing.  A toy sketch of the allocation problem follows.
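By way of illustration, and with entirely invented numbers, here is how the same email service can appear to cost wildly different amounts depending on how the shared services are apportioned:

```python
# Invented figures: the point is that the answer depends on the allocation,
# not on anything real about what email costs.

direct_email_costs = 400_000  # servers, licences, admin effort
shared_costs = {"helpdesk": 1_200_000, "network": 3_000_000, "directory": 800_000}

def email_cost(allocation):
    """Direct costs plus whatever share of each shared service you choose to load on."""
    return direct_email_costs + sum(shared_costs[k] * share for k, share in allocation.items())

lean = {"helpdesk": 0.05, "network": 0.02, "directory": 0.10}
heavy = {"helpdesk": 0.30, "network": 0.15, "directory": 0.50}

print(f"email costs £{email_cost(lean):,.0f} ... or £{email_cost(heavy):,.0f}")
# £600,000 ... or £1,610,000: both "defensible", neither benchmarkable
```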
Reinforcement

Over the 20 years or so from the first outsourcing until Francis Maude part 2 started in 2010, these side effects, and the major flaw, reinforced the outsourcing model.  It was easy to give work to the supplier you already worked with.  It was hard to figure out whether you were over-paying, so you didn’t try to figure that out very often.  The supplier was, on the face of it anyway, cheaper than doing it yourself (because VAT, because cost of transition, because pensions etc).  These aren’t good arguments, but I think they were the arguments that prevailed.


What Do We Mean By Disaggregation?
Disaggregation, then, was the idea of breaking out these monolithic contracts (some departments, to be fair, had a couple of suppliers, but usually as a result of a machinery of government change that merged departments, or broke some apart).
A department coming to the end of its contract period with its seeming partner of the last decade would, instead of looking for a new supplier to take on everything, break its IT services into several component parts: networks, desktop, print, hosting, application support, helpdesk and so on.
There were essentially three ways of attempting this as in the picture below (this, and all of the pictures here, are from various slide decks worked on in 2013/4):
That is (a rough sketch contrasting the three shapes follows the list):
1) A simple horizontal split – perhaps user-facing services and non-user-facing ones.  This was rarely chosen as it didn’t pass the GDS spend controls test and, in reality, didn’t achieve much of the true aim of disaggregation, although it made for a simple model for a department to operate.
2) A “towers based” model with an integration entity or partner working with several towers – for instance hosting, desktop, network and applications support.  This was the model chosen by the early adopters of disaggregation.  Some opted to find a partner as their SIAM (service integration and management) provider, some thought about bringing it in-house, some did a little of both.  The pieces in a tower model are still pretty large, often far out of the reach of small providers, especially if the contract runs over 5 years or more.  Those departments that tried it this way haven’t, for the most part, had a good experience, and the model has fallen out of favour.
3) A fully disaggregated model with a dozen or more suppliers, each best of breed and focused on what they were best at.  Integration, in this case, was more about filling in all of the gaps and, realistically, could only be done in house.  Long ago, and I know it’s a broken record, when we built the Gateway, we were disaggregated – 40+ suppliers working on the development, a hosting provider, an infrastructure builder, an apps support provider, a network provider and so on.  Integration at this level isn’t easy.
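For those who prefer structure to slideware, here is a crude sketch of the three shapes; the service names and lot groupings are purely illustrative, not how any particular department cut its estate:

```python
# Illustrative only: service names and lot groupings are invented.
services = ["desktop", "print", "hosting", "network", "app support",
            "helpdesk", "email", "security", "identity", "data centre"]

models = {
    "1) horizontal split": {
        "user-facing supplier": ["desktop", "print", "helpdesk", "email"],
        "back-end supplier": ["hosting", "network", "app support",
                              "security", "identity", "data centre"],
    },
    "2) towers + SIAM": {
        "SIAM / integrator": [],
        "hosting tower": ["hosting", "data centre"],
        "end user tower": ["desktop", "print", "email"],
        "network tower": ["network"],
        "apps tower": ["app support", "helpdesk", "security", "identity"],
    },
    # 3) fully disaggregated: one supplier per service, integration done in house
    "3) fully disaggregated": {s: [s] for s in services},
}

for name, lots in models.items():
    print(f"{name}: {len(lots)} contracts to let, transition and integrate")
```

The contract count, and the integration burden that goes with it, roughly doubles at each step.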
In the “jobs for the boys” quote above, the claim is really that the department concerned had opted for something close to (2) rather than (3) – that is, deliberately making for large contracts (through aggregation) and preventing smaller players from getting involved.  It’s more complicated than that.

That reinforcement – the side effects and the flaws – plus the inertia of 20+ years of living in a monolithic outsource model meant that change was hard.  Really hard.

What Does That Mean In Practice?
Five years ago, I did some work for a department looking at what it would take to get to the third model, a fully disaggregated service.  The scope looked like this:
Service integration, as I said above, fills in the gaps … but there are a lot of components.  Lots of moving parts for sure.  Many, many millions were spent by departments on Target Operating Models – pastel shaded powerpoints full of artful terms for what the work would look like, how it would be done and what tools were used.  Nearly all of that, I suspect, sits on a shelf, long since abandoned as stale, inflexible and useless.
If they disaggregated to this level, they would need to sign more than 20 contracts.  That would mean 20 procurements carried out roughly in parallel, with some lagging to allow others to break ground first.  But all would need to complete by the time the contract with the main supplier came up for renewal.  The end date, in other words, was, in theory at least, fixed.  Always a bad place to start.
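The fixed end date turns the planning into backwards arithmetic. Here is a minimal sketch, with invented dates and durations, of why the latest safe start date arrives sooner than anyone expects:

```python
from datetime import date, timedelta

# Invented dates and durations: the shape of the calculation is the point.
incumbent_expiry = date(2016, 3, 31)
procurement = timedelta(weeks=40)  # business case, tendering, evaluation, award
transition = timedelta(weeks=26)   # on-boarding, migration, parallel running

latest_start = incumbent_expiry - (procurement + transition)
print(f"latest safe start for each of the 20+ procurements: {latest_start}")
# Any lot that slips past this date forces either an extension with the
# incumbent or a rushed (and risky) transition.
```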
Procurement Challenge
When you are procuring multiple things in parallel, those buying and those selling suffer.  Combining some things would allow a supplier, perhaps, to offer a better deal.  But the supplier doesn’t know what they’ve won and can’t bid on the basis that they will win several parts and so book the benefit of that in their offer (unless they’re prepared to take some possibly outlandish risks).  Likewise, the customer wants variety in the supply chain and wants to encourage bidders to come forward but, at the same time, needs to manage a bid process with a lot of players, avoiding giving any single bidder more work than is optimal (and the customer is unable to influence the outcome of any single bid of course), keeping everyone in the game, staying away from conflicts of interest and so on.
Roadmap Challenge
The transitions are not equally easy (or equally hard).  Replacing WAN connectivity is relatively straightforward – you know where all the buildings are and need to connect them to the backbone, or to the Internet.  Replacing in-office connectivity is a bit harder – you need to survey every office and figure out what the topology of the wireless network will be, ripping out the fixed connections (except where they are still needed).  Moving to Office 365 might be harder still, especially if it comes with a new Active Directory and everyone needs to be able to mail everyone else, and not lose any mail, whilst the transition is underway.  None of these are akin to putting astronauts on the moon, but for a department with no astronauts, they are hard enough.

We also need to consider that modern services are, for the most part, disaggregated from day one – new cloud services are often procured from an IaaS provider, several development companies, a management company and so on.  What we are talking about here, for the most part, is the legacy applications that have been around a decade or more, the network that connects the dozens or hundreds of offices around the country (or the world), the data centres that are full of hardware and the devices that support the workload of thousands, or tens of thousands, of users.  These services are the backbone of government IT, pending the long-promised (and even longer delayed) digital transformation.  They may not be (and indeed are not) user-led, and they’re certainly not agile – but they handle our tax, pensions, benefits, grants to farmers and so on.

What Does It Really Mean In Practice?

Writing papers for Ministers many years ago, we would often start with two options, stark choices.  The preamble we used to describe these was “Minister, we have two main choices.  The first one will result in nuclear war and everyone will die.  The second will result in all out ground war and nearly everyone will die.  We think we have a third way ahead, it’s a little risky, and there will be some casualties, but nearly everyone will survive.”  Faced with that intro, what choice do you think the Minister will make?

In this context, the story would be something like: “Minister, we have two options.  The first is to largely stay as we are.  We will come under heavy scrutiny, save no money, progress our IT not a jot and deliver none of the benefits you have promised in your various policies.  The second is to disaggregate our services massively, throwing our control over IT into chaos, increasing our costs as we transition and sucking up so many resources that we won’t be able to do any of the other work that you have added to our list since you took office.  Alternatively … we have a third choice.”

Disaggregate a little.  Take some baby steps.  Build capability in house, manage more suppliers than we’re used to, but not so many that our integration capability would be exhausted before it had a chance.

Remember, all those people in the 90s who had technology, IT or systems in their job title had been outsourced.  They were the ones who built and maintained systems and applications.  In their place came people who managed those who built and maintained systems – and all of those people worked for third parties.  There’s a huge difference between managing a contract where a company is tasked with achieving X by Y, and managing three companies, none of whom have a formal relationship with each other, to achieve X by Y.
The next iteration tried to make it a bit simpler:
We’re down from more than 20 contracts to about 11.  Still a lot, but definitely fewer to manage – though still too many for most departments.  We worked on further models that merged several of the boxes, aiming for 5-7 contracts overall – a move from just 1 contract to 5-7 is still a big move, but it can be managed with the right team in-house, the right tools and the right pace.
The Departmental Challenge
Departments, then, face serious challenges:
– The end date is fixed.  Transition has to be done by the time the contract with the incumbent finishes.  Many seem to be solving that by extending what they have, as they struggle with delays in specification, procurement or whatever.
– Disaggregate as much as is possible.  The smaller the package, the more bidders will play.  But the more disaggregation there is, the more white space there is between the contracts and the greater the management challenge for the departments.  Most departments have not spent the last 5 years preparing for this moment by doubling up on staff – using some staff to manage the existing contract and finding new staff to prepare for the day when they will have to manage suppliers differently.  The result is that they are not disaggregating as much as is possible, but as much as they think they can.
– Write shorter contracts.  Short contracts are good – they let you book a price now in the full knowledge that, for commodity items at least, the same thing will be cheaper in two years.  It isn’t necessarily cheaper, but it at least means you can test the market every two years and see what’s out there – better prices, better service if you aren’t happy with your supplier, new technology etc.  The challenge is that the process – the 5 stage business case plus the procurement – probably takes that long in some departments, and they are just not geared up to run fleets of procurements every two years.  Contracts end up longer, then, to allow everyone to do the transition, get it working, make/save some money and then recompete.

– TUPE nearly always applies.  Except when it doesn’t – if you take your email service and move it to Office 365, the staff aren’t moving to Microsoft or to that ubiquitous company known as “the cloud.”  But when it does apply, it’s not a trivial process.  Handling which staff transition to which companies (and ensuring that the companies taking on the staff have the capability to do it) is tricky.  Big outsource providers have been doing this for years and have teams of people that understand how the process works.  Smaller companies won’t have that experience and, indeed, may not have the capability to bring in staff on different sets of Ts & Cs.

On top of that, there are smaller challenges on the way to disaggregation, with some mitigations:

– Lack of skills available in the department; need to identify skills and routes for sourcing them early
– Market inability to provide a mature offer; coach the market in what will be wanted so that they have time to prove it
– Too great an uncertainty or risk for the business to take; prove concepts through alpha and beta so risks are truly understood
– Lack of clear return for the investment required; demonstrate delivery and credibility in the delivery approach so that costs are managed and benefits are delivered as promised
– Delays in delivery of key shared services; close management with regular delivery cycles that show progress and allow slips to be visible and dealt with
– Challenges in creating an organisation that can respond to the stimulus of agile, iterative delivery led by user need; start early and prove it, adjust course as lessons are learned, partner closely with the business

What Do We Do?
Departments are on a journey.  They are already disaggregating more than we can see – the evidence of G-Cloud spend suggests that new projects are increasingly being awarded to smaller, newer players who have not often worked with government before.  Departments are, therefore, learning what it’s like to integrate multiple suppliers, to manage disparate hosting environments and to deliver projects on an iterative basis.  As with any large population, some are doing that well, some are doing just about ok, and some are finding it really hard and making a real mess of it.  One hopes that those doing it well are teaching those who are struggling, but I suspect most are too busy getting on with it to stop and educate others.
The journey plays out in stages – not in three simple stages as I have laid out above, but in a continuum where new providers are coming in and processes are being reformed and refocused on services and users.  Meanwhile, staff in the department are learning what it’s like to “deliver” and “manage” and “integrate” first one service and then many services, rather than “assure” them and check KPIs and SLAs.  Maybe the first jump is from one supplier to four, or five.  A couple of years later, one of those is split into two or three parts.  A year later, another is split.
This is a real change for the way government IT is run.  It’s a change that, in many ways, takes us all the way back to the 1980s when government was leading the way in running IT services – when tax, benefits, pensions and import/export were first computerised.  Back then, everything was run in house.  Now, key things are run in house and others outsourced, and, eventually, dozens of partners will be involved.  If we had our time over again, I think we would have outsourced paper handling (because it was largely static and would eventually decline) and kept IT (because it constantly changed) and customer contact (because that’s essentially what government does, especially when the IT or the paper processing lets it down) in house.
Disaggregation hasn’t happened nearly as fast as many of us hoped, or, indeed, as many of us have worked for in the last few years.  But it is happening.    The side effects, the flaws, inertia, reinforcement and a dominance of “assurance” rather than “delivery” capability, mean it’s hard.

We need to poke and prod and encourage further experimentation.  Suppliers need to make it easy to buy and integrate their services (recognising that even the cheapest commodity needs to be run and operated by someone).  And when someone seems to take a short cut and extend a contract, or award to an existing supplier, we need to understand why, and where they are on their journey.  Departments need to be far more transparent about their roadmaps and plans to help that.

I want to give departments the benefit of the doubt here.  I don’t see them taking the easy way out; I have, indeed, seen some monumental cockups badged as efforts to disaggregate.    Staggering amounts of money – in all senses of the word (cash out the door, business stagnation, loss of potential benefits etc) – have been wasted in this effort.  That suggests a more incremental approach will work better, if not as well as we would all want.

That means that departments need to:

  1. Be more open about what their service provision landscape looks like two, three, four and five years out (with decreasing precision over time, not unreasonably). Coach the market so that the market can help, don’t just come to it when you think you are ready.
  2. Lay out the roadmap for legacy technology, which is what is holding back the increasing use of smaller suppliers, shorter contracts and more disaggregation.  There are three roadmap paths – everything goes exactly as you planned for and you meet all your deadlines (some would say this is the least likely), a few things go wrong and you fall a little behind, and it all goes horribly wrong and you need a lot more time to migrate away from legacy.  Departments generally consider only the first, though one or two have moved to the second. There’s an odd side effect of the spend control process – HMT requires optimism bias and so on to be included in any business case, spend controls normally strip that out, then departmental controls move any remaining contingency to the centre and hold it there, meaning projects are hamstrung by having no money (subject to approvals anyway) to deal with the inevitable challenges.
  3. Share what you are doing with modern projects – just what does your supplier landscape look like today?

GDS Disaggregates Data

To judge from the Digerati’s comments, the recent move of Data (capital D) from GDS to DCMS is akin to the beginning of the end of GDS, that is, far beyond the end of the beginning that we were only celebrating a few weeks ago thanks to a brilliant talk by Janet Hughes.


For most in Government IT, disaggregation has been a hot topic and is a live goal for nearly all of them, even those busily extending their contracts with incumbents so that they can buy time to disaggregate properly, as I wrote in June 2013 for instance.


Concentrating power in big, slow moving central organisations has, traditionally, been a bad thing.  As an organisation grows, so does its bureaucracy.  Government has, then, repeatedly broken itself down (Departments and Ministries … agencies and NDPBs) in an effort to separate policy from delivery and get closer to the customer, with varying degrees of success.

Political fiefdoms have, at the same time, been created to satisfy egos (ODPM) or to pretend to the outside world that real change was happening (the story of dti on its journey to the current BEIS for instance).  Alongside that, functions have moved – Child Benefit between DWP and IR (now HMRC) – and Tax Credits, though they are benefits, were sited in HMRC rather than DWP, to the great consternation of HMRC staff on day one (and for many days thereafter).

GDS, perhaps accidentally, perhaps as a result of a flood of cash in the Spending Review, has become that big, slow moving central organisation.  I’m sure it wasn’t intentional – they saw gaps all around them and took on more people to fill those gaps. Before they knew it, they needed a bigger office to fit in the 900+ people in the organisation.  Along the way, they forgot what they were there for, as the NAO said.

On data, all we know for now is:

“Data policy and governance functions of the Government Digital Service (GDS) will transfer from the Cabinet Office to the Department for Digital, Culture, Media and Sport (DCMS). The transfer includes responsibility for data sharing (including coordination of Part 5 of the Digital Economy Act 2017), data ethics, open data and data governance.”

The real issue here is not that “Data”, whatever that is in this context, has moved from GDS to DCMS, but that we lack (still) an executable strategy.  We have a trite “transformation strategy” that is long on words and short on actions (see “No Vision, No Ambition” on this blog), but we have no real framework to evaluate this decision to move “Data” from one department to another.

An executable strategy would lay out not just the what, but the why, the how and the when.  We would be able to see how changes were planned to unfold, whether incremental, revolutionary or transformational … and when a decision such as this was taken, understand the impact on that strategy and whether it was good or bad (and sometimes, decisions with known bad impacts are taken for good reasons).

Mike Bracken, writing in the New Statesman, is emphatic that this is a bad idea – one that runs against what everyone else in the world is doing.  His closing take is that:

“the UK seems to have made government a little bit slower, more siloed, harder to reform and more complex.”

GDS is hardly the rapidly responding, iterative, agile organisation that it set out to be (and that it certainly was in its early days as I’ve said before) … so maybe this little bit of disaggregation will free up the remaining (and still large) part to get moving again.

Over the last two decades we’ve had several goes at this – OeE, eGU, OCIO and then GDS.  Each worked for a while and then got bogged down in themselves.  New leadership came in, threw out some of what was done, took on some different things and did the things that new leaders generally do when they come in (say how rubbish everything was until they came along and then proceed to do much the same as had been done before only a little differently).

I suspect, though, that this isn’t enough of a change.  We need a more fundamental reform of GDS, taking it back to its roots and to what it’s good at.  So maybe it is the beginning of the end and maybe that’s no bad thing.

The Trouble With Transition – DECC and BIS Go First

In a head-scratching story at the end of last week, DECC and BIS made the front page of the Financial Times (registered users/subscribers can access the story).  Given the front page status, you might imagine that the Smart Meter rollout had gone catastrophically wrong, or that we had mistakenly paid billions in grants to scientists who weren’t getting the peer reviews that we wanted, or that we’d suddenly discovered a flaw in our model for climate change, or perhaps that the Technology Strategy Board had made an investment that would forever banish viruses and malware.

The BBC followed the story too.

But, no.  Instead we have two departments having problems with their email.  Several Whitehall wags asked me weeks ago (because, yes, this story has been known about for a month or more) whether anyone would either notice, or care, that there was no email coming to or from these departments.  It is, perhaps, a good question.

Business Secretary Mr Cable and Energy and Climate Change Secretary Mr Davey were reported in the Financial Times to be angry about slow and intermittent emails and network problems at their departments since they started migrating to new systems in May.

The real question, though, is what actually is the story here?


– It appeared to be a barely-veiled attack on the current policy of giving more business to SMEs (insider says “in effect they are not necessarily the best fit for this sort of task” … “an idealistic Tory policy to shake up Whitehall”)

– Or was it about splitting up contracts and taking more responsibility for IT delivery within departments (Mr Cable seemingly fears the combination of cost-cutting and small firms could backfire)?

–  Was the story leaked by Fujitsu who are perhaps sore at losing their £19m per annum, 15 year (yes, 15. 15!) contract?

– Was it really triggered by Ed Davey and Vince Cable complaining to the PM that their email was running slow (“Prime Minister, we need to stop everything – don’t make a single decision on IT until we have resolved the problems with our email”)?

– Is it even vaguely possible that it is some party political spat where the Liberal Democrats, languishing in the polls, have decided that a key area of differentiation is in how they would manage IT contracts in the future?  And that they would go back to big suppliers and single prime contracts?

– Was it the technology people in the department themselves who wish that they could go back to the glory days of managing IT with only one supplier, when SLAs were always met and customers radiated delight at the services they were given?

#unacceptable as Chris Chant would have said.

Richard Holway added his view:

In our view, the pendulum has swung too far. The Cabinet Office refers to legacy ICT contracts as expensive, inflexible and outdated; but moving away from this style of contract does not necessarily mean moving away from the large SIs.

And it appears that it is beginning to dawn on some in UK Government that you can’t do big IT without the big SIs. A mixed economy approach – involving large and small suppliers – is what’s needed.

By pendulum, he means that equilibrium sat with fewer than a dozen suppliers taking more than 75% of the government’s £16bn annual spend on IT.  And that this government, by pushing for SMEs to receive at least 25% of total spend, has somehow swung us all out of kilter, causing or potentially causing chaos.  Of course, 25% of spend is just that – a quarter – it doesn’t mean (based on the procurements carried out so far by the MoJ, the Met Police, DCLG and other departments) that SIs are not welcome.

Transitions, especially in IT, are always challenging – see my last blog on the topic (and many before).  DECC and BIS are pretty much first with a change from the old model (one or two very large prime contracts) to the new model (several – maybe ten – suppliers, with the bulk of the integration responsibility resting with the customer, even when, as in this case, another supplier is nominally given integration responsibility).  Others will be following soon – including departments with 20-30x more users than DECC and BIS.

Upcoming procurements will be fiercely competed by big and small suppliers alike.  What is different this time is that there won’t be:

– 15 year deals that leave departments sitting with Windows XP, Office 2002, IE 6 and dozens of enterprise applications and hardware that is beyond support.

or


– 15 year deals that leave departments paying for laptops and desktops that are three generations behind, that can’t access wireless networks, that can’t be used from different government sites and that take 40 minutes to boot.

or

– 15 year deals that mean that only now, 7 years after iPhone and 4 years after iPad, are departments starting to take advantage of truly mobile devices and services.

With shorter contracts, more competition and access to a wider range of services (through frameworks like G-Cloud), only good things can happen.  Costs will fall, the rate of change will increase and users in departments will increasingly see the kind of IT that they have at home (and maybe they’ll even get to use some of the same kind of tools, devices and services).

To the specific problem at BIS and DECC then.  I know little about what the actual problem is or was, so this is just speculation:


– We know that, one day, the old email/network/whatever service was switched off and a new one, provided by several new suppliers, was turned on.  We don’t know how many suppliers – my guess is a couple, at least one of which is an internal trading fund of government.  But most likely not 5 or 10 suppliers.

– We also know that transitions are rarely carried out as big bang moves.  It’s not a sensible way to do it – and goodness knows government has learned the perils of big bang enough times over the last 15 years (coincidentally the duration of the Fujitsu contract).

– But what triggered the transition?  Of course a new contract had been signed, but why transition at the time they did?  Had the old contract expired?  Was there a drive to reduce costs, something that could only be triggered by the transition?

– Who carried the responsibility for testing?  What was tested?  Was it properly tested?  Who said “that’s it, we’ve done enough testing, let’s go”?  There is, usually, only one entity that can say that – and that’s the government department.  All the more so in this time of increased accountability falling to the customer.

– When someone said “let’s go”, was there an understanding that things would be bumpy?  Was there a risk register entry, flashing at least amber and maybe red, that said “testing has been insufficient”?

In this golden age of transparency, it would be good if DECC and BIS declared – at least to their peer departments – what had gone wrong so that the lessons can be learned.  But my feeling is that the lessons will be all too clear:

– Accountability lies with the customer.  Make decisions knowing that the comeback will be to you.

– Transition will be bumpy.  Practice it, do dry runs, migrate small numbers of users before migrating many.


– Prepare your users for problems, over-communicate about what is happening.  Step up your support processes around the transition period(s).

– Bring all of your supply chain together and step through how key processes and scenarios will work, including when it all goes wrong.

– Have backout processes that you have tested, and know the criteria you will use to put them into action.
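Those last lessons are best written down before the transition starts, not argued about during it. A minimal sketch, with invented wave sizes and thresholds, of what pre-agreed go/no-go and backout criteria might look like:

```python
# Invented thresholds and wave sizes: the point is agreeing them up front.
WAVES = [50, 250, 1_000, 5_000]      # users migrated per wave, smallest first
MAX_FAILED_MAIL_RATE = 0.01          # back out if more than 1% of mail fails
MAX_HELPDESK_CALLS_PER_USER = 0.2    # back out if support load spikes

def wave_ok(failed_mail_rate, helpdesk_calls_per_user):
    """Go/no-go test applied after each wave, before the next one begins."""
    return (failed_mail_rate <= MAX_FAILED_MAIL_RATE
            and helpdesk_calls_per_user <= MAX_HELPDESK_CALLS_PER_USER)

# Example readings from monitoring after a wave (invented figures)
if not wave_ok(failed_mail_rate=0.03, helpdesk_calls_per_user=0.15):
    print("criteria breached: run the tested backout plan, fix, re-run the wave")
```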

Transitions don’t come along very often.  The last one DECC and BIS did seems to have been 15 years ago (recognising that DECC was within Defra and even MAFF back then).  They take practice.  Even moving from big firm A to big firm B.  Even moving from Exchange version x to Exchange version y.

What this story isn’t, in any way, is a signal that there is something wrong with the current policy of disaggregating contracts, of bringing in new players (small and large) and of reducing the cost of IT.

The challenge ahead is definitely high on the ambition scale – many large scale IT contracts were signed at roughly the same time, a decade or more ago, and are expiring over the next 8 months.  Government departments will find that they are, as one, procuring, transitioning and going live with multiple new providers.  They will be competing for talent in a market where, with the economy growing, there is already plenty of competition.  Suppliers will be evaluating which contracts to bid for and where they, too, can find the people they need – and will be looking for much the same talent as the government departments are.  There are interesting times ahead.

There will be more stories about transition, and how hard it is, from here on in.  What angle the reporting takes in the future will be quite fascinating.