The 10 Year Strategy

In May 2008, on this blog, I wrote about Chateau Palmer (a fine Bordeaux wine) and, specifically, about how making wine forces a long term strategy – vines take years before they produce a yield that is worth bottling (my friends in the business say that the way to make a small fortune in wine is to start with a large one), more years can go by before the wine in the bottle is drunk by most consumers, and yet, every year the process repeats (with some variation, much of it caused by the weather).  It’s definitely a long game.

I wondered what would happen if you could only make decisions about your IT investment every 10 years, and then made a couple of predictions.  I said:
Cloud computing – This is going to be increasingly talked about until you can’t remember a time when people didn’t talk about it and then, finally, people are going to do it. [If you read only this bit then perhaps I am a visionary strategist; if you read the whole of it, I got most of the rest wrong]
Application rationalisation – Taken across a single country’s government as a whole, the total number of applications will be a frightening number, as will the total cost to support them all. There are several layers of consolidation, ranging from declaring “end of life” for small systems and cutting their budgets to zero (and waiting for them to wither and die – this might take eons) to a more strategic “let’s use only one platform (SAP, Oracle etc) from here on in and migrate everything to that single platform” (this too could take eons).
It feels, 11 years on, that we are still talking about cloud computing and that, whilst many are doing it, we are a long way from all in.  And the same for application rationalisation – many have rationalised, but key systems are still creaking, supported by an ever decreasing number of specialists, and handling workloads far beyond their original design principles.
Did we devise a strategy and stick to it? Or did we bend with the wind and change year to year, rewriting as new people came and went? Perhaps we focused on business as usual and forgot the big levers of change?

Disaggregation Disillusionment

About 15 years ago I wrote a post titled “Websites of Mass Disillusionment”, or maybe it was “Websites of Mass Delusion.”  I can’t recall which and, unusually, I can’t find the original text – I was told, by a somewhat unhappy Minister of the Cabinet Office, to delete the post or lose my job.  At the time I rather liked my job and so I opted to delete the post.  The post explored how, despite there being 1000s of government websites, on which 100s of millions of pounds were being spent, the public, at large, didn’t care about them, weren’t visiting them and saw no need to engage with government (here’s at least a thread of the article, published in August 2009).  I don’t think the Minister disagreed with the content, but he definitely wasn’t keen on the title, coming so soon after the famous missing WMDs in Iraq.
I’m somewhat hesitantly hinting at that title again with this post, though I have less fear of a Minister telling me I will lose my job because of it (I’m not employed by any Ministers) and, anyway, I think this topic, disaggregation, is worth exploring.
It’s worth exploring because the news over recent months has been full of stories about departments extending contracts with existing suppliers, re-scoping and re-awarding contracts to those same suppliers, or moving the pieces around between those suppliers – creating the illusion of change but, fundamentally, changing little.
It looks like jobs for the boys again; there’s very little sign of genuine effort at disaggregation; they’re just moving the pieces around
This feels like a poor accusation – putting to one side the tone of “jobs for the boys” in 2018, it hints at dishonesty or incompetence when, I think, it says more about the challenges departments are facing as they grapple with unwinding contracts that were often put in place 15-20 years ago and that have been “assured” rather than “managed” for all of that time.
But, let’s move on and first establish what we mean, in the context of Public Sector IT, by disaggregation.  We have to wind back a bit to get to that:
IT Outsourcing In The Public Sector (1990 onwards)
In the early 1990s, when departments began to outsource their IT, the playbook was roughly:
Count up everyone with the word “technology”, “information” or “systems” in their job title and draw up a scope of services that encompasses all of that work.
Carry out an extensive procurement process to find a third party provider prepared to do the same job at the lowest price.  The very nature of government departments meant that these contracts were huge – sometimes £100-200m/year (in the 90s) – and because the procurement process was such hard work, the contracts were long, often 10 years or more.
With them went hardware and software, networks and other gadgets – or, at least, the management of those things.  Whereas the people moved off the payroll, the hardware often stayed on the asset register (and new hardware went on that same asset register, even when purchased through the third party).  This was mostly about capital spending – something with flashing lights went on the books after all.  
There were a lot of moving parts in these deals – the services to be provided, the measures by which performance and quality would be assessed, legal obligations, plans for future exits and so on.  I’ve seen some of the contracts and they easily ran to more than 10,000 pages.
Side Effects
There were four interesting side effects of these outsource deals:
  1. Many departments could now recover VAT on “managed services” but not on hardware purchases.  Departments are good at exploiting such opportunities and so the outsource vendor would buy the hardware on behalf of the department, sell it back to the department as part of a managed service, and the department would then reclaim the VAT, getting 20% back on the deal (there’s a rough sketch of the arithmetic after this list).   Those who were around in the early days of G-Cloud will remember the endless loops about whether VAT could be reclaimed – it was some years after G-Cloud started that this was successfully resolved.
  2. Departments now had a route to buying more IT services, or capability, without needing to go through a new procurement, provided the scope of the original procurement was wide enough.  That meant that existing contracts could be used to buy new services.  And, as everyone knows, IT doesn’t stay still, so there were a lot of new services, and nearly all of them went through the original contract.  Those contracts swelled in size, with annual spend often double or triple the original expectation within the first few years.  When e-government, now digital, came along and when departments merged, those numbers often exploded.
  3. Whilst all of the original staff involved transferred, via TUPE, on the package they had in government – salary plus index linked pensions etc – any new staff brought on, e.g. to replace those who had left (or retired) or for new projects, would come in on a deal that was standard for the private sector.  That usually meant that instead of pension contributions being 27-33%, they were more likely 5-7%.  Instantly, that created an easy save for government – it was 20% or more cheaper, even before we talk VAT, to use the existing provider (again, see the sketch after this list).
  4. Whilst departments have long had an obligation to award business to smaller players, the ease of using the big players with whom they already had contracts made that difficult (in the sense that there was an easy step “write a contract change to award this work to X” versus “Write the spec, go to market, evaluate, negotiate, award, work with new supplier who doesn’t understand us”).  Small players were, unfairly, shut out.
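To put rough numbers on the VAT and pension effects, here’s a minimal sketch – every figure in it (salary, pension rates, hardware cost) is invented for illustration, not actual contract data:

```python
# Illustrative only: why the incumbent looked cheaper on paper.
VAT = 0.20

# Side effect 1: hardware bought directly carries VAT the department cannot
# recover; the same kit wrapped into a managed service lets it reclaim the VAT.
hardware = 1_000_000
bought_directly = hardware * (1 + VAT)   # £1,200,000 - VAT stays with the department
via_managed_service = hardware           # £1,000,000 net, once the VAT is reclaimed

# Side effect 3: a TUPE'd employee on government terms vs a standard
# private sector replacement (assumed pension rates from the ranges above).
salary = 40_000
legacy_terms = salary * (1 + 0.30)       # ~27-33% pension contribution
private_terms = salary * (1 + 0.06)      # ~5-7% pension contribution

print(f"hardware: £{bought_directly:,.0f} direct vs £{via_managed_service:,.0f} as a managed service")
print(f"people:   £{legacy_terms:,.0f} on legacy terms vs £{private_terms:,.0f} on new terms")
```

On these toy numbers the existing provider is roughly 18% cheaper on people alone, before the hardware VAT effect is counted – the “easy save” described above.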
The Major Flaw
There was also a significant flaw:
  • When a department wanted to know what something cost, it was very hard to figure out.  Email, for instance – a few servers for Outlook, some admin people to add and delete users etc – how hard can it be to cost?  It turned out to be a bit like Heisenberg’s Uncertainty Principle – the more precisely you pinned down where the money was, the less you knew about where it was going.  In other words, if you looked closely at one thing, the money moved around.  If something needed to be cheap to get through, the costs were loaded elsewhere.  If something needed to be expensive to justify continued investment (avoiding the sunk cost fallacy), costs were loaded on to it.  Then, of course, there was the ubiquity of “shared services” – as in “Well, Alan, if you want me to figure out how much email costs, we need to consider some of Bob’s time as he answers the phone for all kinds of problems, a share of the network costs for all that traffic, some of Heidi’s time because email is linked to the directory and without the work she does on the directory, it wouldn’t work” and so on.  Benchmarking was the supposed solution for that – but if you couldn’t break out the costs, how did you know it was value for money?  Or not?  Did suppliers consciously hinder efforts to find true cost?  I suspect it was a mix of the structure they’d built for themselves – they didn’t, themselves, know how it broke down – and a lack of disciplined chasing by departments … because the side effects and the flaw reinforced each other.
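A toy version of that costing problem – names and numbers all invented – shows why “what does email cost?” had no stable answer:

```python
# The pooled costs are real, but the share of them that "email" carries
# depends entirely on the allocation rule chosen.
pooled = {"bob_helpdesk": 60_000, "network": 250_000, "heidi_directory": 80_000}
email_direct = 120_000  # servers plus dedicated admin effort

for rule, share in [("email stands alone", 0.00),
                    ("email takes a 'fair' slice", 0.25),
                    ("email carries the shared stack", 0.60)]:
    total = email_direct + share * sum(pooled.values())
    print(f"{rule:31s} -> £{total:,.0f} a year")
```

Three defensible answers, from £120,000 to £354,000, for the same service – which is why benchmarking against a total that couldn’t be broken out got nowhere.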
Reinforcement

Over the 20 years or so from the first outsourcing until Francis Maude part 2 started, in 2010, these side effects, and the major flaw, reinforced the outsourcing model.  It was easy to give work to the supplier you already worked with.  It was hard to figure out whether you were over-paying, so you didn’t try to figure that out very often.  The supplier was, on the face of it anyway, cheaper than doing it yourself (because VAT, because cost of transition, because pensions etc).  These aren’t good arguments, but I think they are the arguments that were actually made.


What Do We Mean By Disaggregation?
Disaggregation, then, was the idea of breaking out these monolithic contracts (some departments, to be fair, had a couple of suppliers, but usually as a result of a machinery of government change that merged departments, or broke some apart).
A department coming to the end of its contract period with its seeming partner of the last decade would, instead of looking for a new supplier to take on everything, break its IT services into several component parts: networks, desktop, print, hosting, application support, helpdesk and so on.
There were essentially three ways of attempting this, as in the picture below (this, and all of the pictures here, are from various slide decks worked on in 2013/4):
That is:
1) A simple horizontal split – perhaps user facing services and non-user facing.  This was rarely chosen as it didn’t pass the GDS spend controls test and, in reality, didn’t achieve much of the true aim of disaggregation, though it made for a simple model for a department to operate.
2) A “towers based” model with an integration entity or partner working with several towers, for instance hosting, desktop, network and applications support.  This was the model chosen by the early adopters of disaggregation.  Some opted to find a partner to act as their SIAM (service integration and management), some thought about bringing it in house, some did a little of both.  The pieces in a tower model are still pretty large, often far out of the reach of small providers, especially if the contract runs over 5 years or more.  Those departments that tried it this way haven’t, for the most part, had a good experience, and the model has fallen out of favour.
3) A fully disaggregated model with a dozen or more suppliers, each best of breed and focused on what they were best at.  Integration, in this case, was more about filling in all of the gaps and, realistically, could only be done in house.  Long ago – and I know it’s a broken record – when we built the Gateway, we were disaggregated: 40+ suppliers working on the development, a hosting provider, an infrastructure builder, an apps support provider, a network provider and so on.  Integration at this level isn’t easy.
In the “jobs for the boys” quote above, the claim is really that the department concerned had opted for something close to (2) rather than (3) – that is, deliberately making for large contracts (through aggregation) and preventing smaller players from getting involved.  It’s more complicated than that.

That reinforcement – the side effects and the flaws – plus the inertia of 20+ years of living in a monolithic outsource model meant that change was hard.  Really hard.

What Does That Mean In Practice?
Five years ago, I did some work for a department looking at what it would take to get to the third model, a fully disaggregated service.  The scope looked like this:
Service integration, as I said above, fills in the gaps … but there are a lot of components.  Lots of moving parts for sure.  Many, many millions were spent by departments on Target Operating Models – pastel shaded PowerPoints full of artful terms for what the work would look like, how it would be done and what tools would be used.  Nearly all of that, I suspect, sits on a shelf, long since abandoned as stale, inflexible and useless.
If they had disaggregated to this level, they would have needed to sign more than 20 contracts.  That would mean 20 procurements carried out roughly in parallel, with some lagging to allow others to break ground first.  But all would need to complete by the time the contract with the main supplier came up for renewal.  The end date, in other words, was, in theory at least, fixed.  Always a bad place to start.
Procurement Challenge
When you are procuring multiple things in parallel, those buying and those selling both suffer.  Combining some things would allow a supplier, perhaps, to offer a better deal.  But a supplier doesn’t know what it has won and can’t bid on the basis that it will win several parts, booking the benefit of that in its offer (unless it’s prepared to take some possibly outlandish risks).  Likewise, the customer wants variety in the supply chain and wants to encourage bidders to come forward but, at the same time, needs to manage a bid process with a lot of players – avoiding giving any single bidder more work than is optimal (while being unable to influence the outcome of any single bid, of course), keeping everyone in the game, staying away from conflicts of interest and so on.
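A toy expected-value sketch of that bidding dilemma – the probabilities and the synergy figure are invented for illustration:

```python
# Synergy between two lots only materialises if both bids are won,
# so a bidder cannot safely price it into either offer.
p_win = 0.4           # assumed chance of winning each of two lots
synergy = 2_000_000   # saving that exists only if both lots are won

expected_synergy = p_win ** 2 * synergy
print(f"expected value of the synergy: £{expected_synergy:,.0f} of £{synergy:,.0f}")
```

Only £320,000 in expectation: pricing the full £2m into a bid is a bet that both bids land – the “possibly outlandish risk” above.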
Roadmap Challenge
The transitions are not equally easy (or equally hard).  Replacing WAN connectivity is relatively straightforward – you know where all the buildings are and need to connect them to the backbone, or to the Internet.  Replacing in-office connectivity is a bit harder – you need to survey every office and figure out the topology of the wireless network, ripping out the fixed connections (except where they might be needed).  Moving to Office 365 might be harder still, especially if it comes with a new Active Directory and everyone needs to be able to mail everyone else, and not lose any mail, whilst the transition is underway.  None of these is akin to putting astronauts on the moon but, for a department with no astronauts, hard enough.

We also need to consider that modern services are, for the most part, disaggregated from day one – new cloud services are often procured from an IaaS provider, several development companies, a management company and so on.  What we are talking about here, for the most part, is the legacy applications that have been around a decade or more, the network that connects the dozens or hundreds of offices around the country (or the world), the data centres that are full of hardware and the devices that support the workload of thousands, or tens of thousands, of users.  These services are the backbone of government IT, pending the long promised (and delayed even longer than disaggregation) digital transformation.  They may not be (and indeed are not) user led, and they’re certainly not agile – but they handle our tax, pensions, benefits, grants to farmers and so on.

What Does It Really Mean In Practice?

Writing papers for Ministers many years ago, we would often start with two options, stark choices.  The preamble we used to describe these was “Minister, we have two main choices.  The first one will result in nuclear war and everyone will die.  The second will result in all out ground war and nearly everyone will die.  We think we have a third way ahead, it’s a little risky, and there will be some casualties, but nearly everyone will survive.”  Faced with that intro, what choice do you think the Minister will make?

In this context, the story would be something like: “Minister, we have two options.  The first is to largely stay as we are.  We will come under heavy scrutiny, save no money, progress our IT not a jot and deliver none of the benefits you have promised in your various policies.  The second is to disaggregate our services massively, throwing our control over IT into chaos, increasing our costs as we transition and sucking up so many resources that we won’t be able to do any of the other work that you have added to our list since you took office. Alternatively … we have a third choice”

Disaggregate a little. Take some baby steps.  Build capability in house, manage more suppliers than we’re used to, but not so many that our integration capability is exhausted before it has a chance.

Remember, all those people in the 90s with “technology”, “IT” or “systems” in their job title had been outsourced.  They were the ones who built and maintained systems and applications.  In their place came people who managed those who built and maintained systems – and all of those people worked for third parties.   There’s a huge difference between managing a contract where one company is tasked with achieving X by Y, and managing three companies, none of whom have a formal relationship with each other, to achieve X by Y.
The next iteration tried to make it a bit simpler:
We’re down from more than 20 contracts to about 11.  Still a lot, but definitely less to manage – though still too much for most departments.  We worked on further models that merged several of the boxes, aiming for 5-7 contracts overall.  A move from just 1 contract to 5-7 is still a big move, but it can be managed with the right team in house, the right tools and the right pace.
The Departmental Challenge
Departments, then, face serious challenges:
– The end date is fixed.  Transition has to be done by the time the contract with the incumbent finishes.  Many seem to be solving that by extending what they have, as they struggle with delays in specification, procurement or whatever.
– Disaggregate as much as is possible.  The smaller the package, the more bidders will play.  But the more disaggregation there is, the more white space there is between the contracts and the greater the management challenge for the department.  Most departments have not spent the last 5 years preparing for this moment by doubling up on staff – using some staff to manage the existing contract and finding new staff to prepare for the day when they will have to manage suppliers differently.  The result is that they are not disaggregating as much as is possible, but as much as they think they can.
– Write shorter contracts.  Short contracts are good – they let you book a price now in the full knowledge that, for commodity items at least, the same thing will be cheaper in two years. It isn’t necessarily cheaper, but a short contract at least means you can test the market every two years and see what’s out there – better prices, better service if you aren’t happy with your supplier, new technology etc.  The challenge is that the process – the 5 stage business case plus the procurement – probably takes that long for some departments, and they are just not geared up to run fleets of procurements every two years.  Contracts end up longer, then, to allow everyone to do the transition, get it working, make/save some money and then recompete.

– TUPE nearly always applies.  Except when it doesn’t – if you take your email service and move it to Office 365, the staff aren’t moving to Microsoft or to that ubiquitous company known as “the cloud.”  But when it does apply, it’s not a trivial process. Handling which staff transition to which companies (and ensuring that the companies taking on the staff have the capability to do it) is tricky.  Big outsource providers have been doing this for years and have teams of people who understand how the process works.  Smaller companies won’t have that experience and, indeed, may not have the capability to bring in staff on different sets of Ts & Cs.

On top of that, there are smaller challenges on the way to disaggregation, with some mitigations:

– Lack of skills available in the department; identify skills and routes for sourcing them early

– Market inability to provide a mature offer; coach the market in what will be wanted so that suppliers have time to prove it

– Too great an uncertainty or risk for the business to take; prove concepts through alpha and beta so risks are truly understood

– Lack of clear return for the investment required; demonstrate delivery and credibility in the delivery approach so that costs are managed and benefits are delivered as promised

– Delays in delivery of key shared services; close management with regular delivery cycles that show progress and allow slips to be visible and dealt with

– Challenges in creating an organisation that can respond to the stimulus of agile, iterative delivery led by user need; start early and prove it, adjust course as lessons are learned, partner closely with the business

What Do We Do?
Departments are on a journey.  They are already disaggregating more than we can see – the evidence of G-Cloud spend suggests that new projects are increasingly being awarded to smaller, newer players who have not often worked with government before. Departments are, therefore, learning what it’s like to integrate multiple suppliers, to manage disparate hosting environments and to deliver projects on an iterative basis.  As with any large population, some are doing that well, some are doing just about ok, and some are finding it really hard and making a real mess of it.  One hopes that those in the former category are teaching those in the latter, but I suspect most are too busy getting on with it to stop and educate others.
The journey plays out in stages – not in three simple stages as I have laid out above, but in a continuum where new providers are coming in and processes are being reformed and refocused on services and users.  Meanwhile, staff in the department are learning what it’s like to “deliver” and “manage” and “integrate” first one service and then many services, rather than “assure” them and check KPIs and SLAs.  Maybe the first jump is from one supplier to four, or five.  A couple of years later, one of those is split into two or three parts.  A year later, another is split.
This is a real change for the way government IT is run.  It’s a change that, in many ways, takes us all the way back to the 1980s, when government was leading the way in running IT services – when tax, benefits, pensions and import/export were first computerised.  Back then, everything was run in house.  Now, key things are run in house and others outsourced, and, eventually, dozens of partners will be involved.  If we had our time over again, I think we would have outsourced paper handling (because it was largely static and would eventually decline) and kept IT (because it constantly changed) and customer contact (because that’s essentially what government does, especially when the IT or the paper processing lets it down) in house.
Disaggregation hasn’t happened nearly as fast as many of us hoped or, indeed, as many of us have worked for in the last few years.  But it is happening.  The side effects, the flaws, inertia, reinforcement and a dominance of “assurance” rather than “delivery” capability mean it’s hard.

We need to poke and prod and encourage further experimentation.  Suppliers need to make it easy to buy and integrate their services (recognising that even the cheapest commodity needs to be run and operated by someone).  And when someone seems to take a short cut and extend a contract, or award to an existing supplier, we need to understand why, and where they are on their journey.  Departments need to be far more transparent about their roadmaps and plans to help that.

I want to give departments the benefit of the doubt here.  I don’t see them taking the easy way out, though I have, indeed, seen some monumental cockups badged as efforts to disaggregate.  Staggering amounts of money – in all senses of the word (cash out the door, business stagnation, loss of potential benefits etc) – have been wasted in this effort.  That suggests a more incremental approach will work better, if not as well as we would all want.

That means that departments need to:

  1. Be more open about what their service provision landscape looks like two, three, four and five years out (with decreasing precision over time, not unreasonably). Coach the market so that the market can help, don’t just come to it when you think you are ready.
  2. Lay out the roadmap for legacy technology, which is what is holding back the increased use of smaller suppliers, shorter contracts and more disaggregation.  There are three roadmap paths: everything goes exactly as you planned and you meet all your deadlines (some would say this is the least likely); a few things go wrong and you fall a little behind; and it all goes horribly wrong and you need a lot more time to migrate away from legacy.  Departments generally consider only the first, though one or two have moved to the second.  There’s an odd side effect of the spend control process here – HMT requires optimism bias and so on to be included in any business case, spend controls normally strip that out, and then departmental controls move any remaining contingency to the centre and hold it there, meaning projects are hamstrung by having no money (subject to approvals, anyway) to deal with the inevitable challenges.
  3. Share what you are doing with modern projects – just what does your supplier landscape look like today?

G-Cloud – A Whole Lot of "G", Not Much "Cloud"

It’s been nearly two years since I last looked at G-Cloud expenditure – when the total spend crossed £1bn at the end of 2015.  Well, as of July 2017, spend reached a little under £2.5bn, so I figured it was time to look again.  I am, as always, indebted to Dan Harrison for his data analysis – his Tableau work is second to none and, really, it should be taken up by GDS and used as their default reporting tool (obviously they should hire Dan to do this for them).

As an aside, the raw data has been persistently poor and is not improving.  Date formats are mixed up, fields are missing, the recent change to combine lots means that there are some mixed up numbers and, interestingly, the project field has been removed – I’d looked at this before and queried whether many projects were actually cloud related (along with the fact that something like 20% of projects were listed as “null” – I can understand that it’s embarrassing having empty data, but removing the field doesn’t make the data qualitatively better, it just makes me think something is being hidden).
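For what it’s worth, here’s a minimal sketch of the cleaning needed before the published CSV can sensibly be analysed.  The column names (“EvidencedSpendDate”, “Project”) are assumptions for illustration, not the actual schema:

```python
import pandas as pd

df = pd.read_csv("gcloud_spend.csv", dtype=str)

# Dates arrive in mixed formats: coerce rather than crash, then count the fallout.
df["date"] = pd.to_datetime(df["EvidencedSpendDate"], dayfirst=True, errors="coerce")
print(f"unparseable dates: {df['date'].isna().sum()}")

# The project field, while it existed, was often the literal string "null".
if "Project" in df.columns:
    null_like = df["Project"].fillna("").str.strip().str.lower().isin(["", "null"])
    print(f"null-ish projects: {null_like.mean():.0%}")
```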

Recall this, from June 2014, for instance:

Scanning through the sales by line item, there are far too many descriptions that say simply “project manager”, “tester”, “IT project manager” etc.  There are even line items (not in Lot 4) that say “expenses – 4gb memory stick” – a whole new meaning to the phrase “cloud storage” perhaps.

Here’s the graph of spend over the 5 1/2 years that G-Cloud has been around:

The main conclusions I reach are much the same as before:

– 77% of spend continues to be in “Cloud Support” (previously known as “Specialist Cloud Services”).  It’s actually a little higher than that – now that PaaS and SaaS have been merged to create a category of “Cloud Software”, and Lot 4 has become Lot 3, both old and new categories are reported in the data.  It’s early days for Cloud Software – it would be good if GDS cleaned up the data so that historic lots reflected current lots (the sketch after these bullets shows one way to do that).

– 2017 spend looks like it will be slightly higher than 2016, but not by much.  If the idea was to move work from “People As a Service”, i.e. Cloud Support, to other frameworks, it’s not obvious that it’s happened in a meaningful way, but it may be damping spend a little.

– IaaS spend, now known as Cloud Hosting, has reached £205m. I seem to remember from the early days of the Crown Hosting Service business case that there were estimates that government spent some £400m annually on addressable hosting charges (i.e. systems that could be moved to the cloud).  At the moment Cloud Hosting is running at a reasonably flat £6m/month, or around £70m/year. It’s very possible that there’s a 1:10 saving in cloud versus legacy, but everything in me says that much of this cloud hosting is new spend, not reduced spend following migration to the cloud.  That’s good in that it avoids a much higher old-style, asset rich infrastructure, but I don’t think it shows much of a true migration to the cloud.
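Folding the old lot labels into the new ones is straightforward once the data is clean – a sketch continuing the one above, with the label strings assumed rather than taken from the data:

```python
import pandas as pd

df = pd.read_csv("gcloud_spend.csv", dtype=str)
df["spend"] = pd.to_numeric(df["EvidencedSpend"], errors="coerce")

# Assumed mapping from historic lot names to the current ones.
LOT_MAP = {
    "IaaS": "Cloud Hosting",
    "PaaS": "Cloud Software",
    "SaaS": "Cloud Software",
    "Specialist Cloud Services": "Cloud Support",
}
df["lot"] = df["Lot"].map(LOT_MAP).fillna(df["Lot"])

shares = df.groupby("lot")["spend"].sum().pipe(lambda s: s / s.sum())
print(shares.sort_values(ascending=False).map("{:.0%}".format))
```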

28% of spend by the top 5 customers.  


In the past I’ve looked at the top spending customers and top earning suppliers, specifically in Lot 4 (now a combination of Lot 4 and the new Lot 3).  There are a couple of changes here:

– Back then, for customers … Home Office, MoJ, DVLA, DSA and HMRC were the highest spending departments, with around £150m between them.  Today … Home Office, MoJ, HMRC, Cabinet Office and DSA (DVLA dropped to 7th place) have spent nearly £800m in Lot 4 (total spend across all lots by the top 5 customers is only £100m higher, at £925m, which shows the true dominance of support services at the top end).  £925m out of £2.5bn from just 5 customers.  £1.25bn (51%) is from the top 10 customers.

– And for suppliers, Mastek, Deloitte, Cap Gemini, ValTech and Methods were the top 5, with a combined revenue (again in Lot 4) of £67m.  Today it’s Equal Experts, Deloitte, Cap Gemini, BJSS and PA Consulting, with revenue of £335m (total spend across all lots for the top 5 suppliers is £348m – that makes sense given few of the top suppliers are active across multiple lots; maybe Cap Gemini is the odd one out, getting some revenue for hosting or SaaS).  It takes the top 10 suppliers to make up 25% of the spend.  I don’t think that was the intention of G-Cloud – that it would be dominated by a small number of suppliers – though, at the same time, some of those companies – UKCloud (£64m) for instance – are still small companies and, without G-Cloud, might not exist or might not have reached such revenues if they did.

A couple of years ago I offered the observation that

“once a customer starts spending money with G-Cloud, they are more likely to continue than not.  And once a supplier starts seeing revenue, they are more likely to continue to see it than not.”

That seems to be exactly the case.  Here’s a picture showing the departments with contracts that have run for more than 24 months (and up to 50 months – nearly as long as G-Cloud has been around):

If anything, this is busier than might be expected given the preponderance of Lot 4 – it might be reasonable to expect that support services would be short term and focused on a specific project, such as migrating locally hosted email to Office 365 or to Gmail, or setting up the capability to manage cloud infrastructure.  What we see, instead, is many long term resource contracts.
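Measuring those long-running relationships is simple enough once the data is clean – a sketch with the same assumed schema as above:

```python
import pandas as pd

df = pd.read_csv("gcloud_spend.csv", dtype=str)
df["date"] = pd.to_datetime(df["EvidencedSpendDate"], dayfirst=True, errors="coerce")
df["month"] = df["date"].dt.to_period("M")

# Months between first and last recorded spend for each customer-supplier pair.
months = (df.groupby(["CustomerName", "SupplierName"])["month"]
            .agg(lambda m: (m.max() - m.min()).n + 1))
print(months[months > 24].sort_values(ascending=False).head(10))
```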
What should we really conclude?  And what can we do?
In 2012, with G-Cloud not even a year old, I asked whether it could ever be more than a hobby for government.   I wondered about some interim targets (at the time the plan was for a “cloud first” approach with “50% of spend in the cloud” – that should all have happened by now).  There is an absence of strategy, or overall plan, for further cloud adoption – with GDS neutered and spend controls licking their wounds from the NAO’s criticism that they spent far more time than they should have done looking at projects spending less than £1m, it’s not clear who will take up the mantle of driving the change away from long term contracts towards shorter, more cash intensive (as opposed to capital driven) contracts (be they with big or small suppliers).  Perhaps it’s time for Chris Chant and Denise McDonagh to come back?
  • Should there be a spend control review of “Cloud Support” contracts to determine what they’re aiming to achieve, and then assess whether there really has been a reduction in costs, a migration to the cloud, or a change in the contracting model for the service?  If we were to do a show of hands across departmental CIOs now and ask how many were running their email in the cloud (the true cloud, not one they’ve made up and badged as cloud that morning), what would the response be?  If we were to make it harder and ask about directory services (such as Active Directory), what would the answer be?  If we were to look at historic Lot 4 and test how much had been spent in pursuit of such migrations, what would the answer be?
  • What incentives could we put in place to encourage departments to make the move to cloud?  Departments have control over their budgets, of course, and lots of other things to spend the money on, but could we create a true central capability (key people drawn from departments and suppliers with a brief to build a cloud transition plan) that was architecture agnostic and delivery focused that would support departments in the transition – and that would be accountable (and quite literally held to account) for delivering on the promise of cloud transition?  If that was in place, could departments focus on their legacy systems and how to move those to more flexible platforms, in readiness for future cloud moves (or future enhancements to cope with Brexit)?
  • What more could we do to encourage UK based cloud companies (as opposed to overseas companies with UK bases) to excel?  Plainly they have to compete in a global market – and if I were a UK hosting company, I would be watching Amazon very closely and wondering whether I would still have a business in a few months – but that doesn’t mean we shouldn’t encourage a local capability across all lots.  What would they need to know to encourage them to invest in the services that will be needed in the future? How could that information be made available so that a level playing field was maintained?  Do we want to encourage such a capability in the UK, or should we publish the overall plans and transition maps and let the chips fall where they may?
  • Are there changes that need to be made to the procurement model so that every supplier can see what every department is looking for rather than the somewhat peculiar approach now where suppliers may not even know a department is looking to make a purchase?  What would that add to the timeline?  Would it result in better competition?  Would customers benefit as well as suppliers?  Could we try it and see – you know that whole alpha, beta, A/B testing thing?
GDS have long since gone quiet on grand, or indeed any, plans for transition to the cloud (and on many other things too).   Instead of a cloud first strategy, it looks like we have contracts being extended and delays to existing projects. IR35 likely resulted in some unexpected cost savings as the headcount of contractors and interims reduced almost overnight, but that also meant that projects were suddenly understaffed and further delayed.
Energy and Vision
We need a re-injection of energy and vision in the government IT world.  Not one where the centre dictates and micro-controls every action departments want to take, resulting in lengthy process, avoidance of spend that might be scrutinised and cancellation/delays to projects that could make a difference … but one where the centre actively facilitates and helps drive the changes that departments want to make, measuring them for logical consistency against an overall architectural plan and transition map rather than getting theological about code standards or architectures.
A Strategy And A Plan
At the same time we need to recommit to a strategy and a plan for delivering that strategy.  In terms of the cloud that means:
– Setting a cloud transition goal.  In the same way that we have set a goal to give increased business to SMEs (which G-Cloud is underpinning), we should set a goal to move government to commodity, i.e. cloud-based, IT where it makes sense: 10% of the total budget (including capex and opex, or CDEL and RDEL if you prefer) in the first year, increasing to 25% within 2 years and 50% within 5 years, say.
– Reviewing the long (36 month plus) contracts and testing them for value, current performance and overall delivery.  Are they supporting migration to the cloud?  Is the right framework being used (if it’s not cloud but it is delivering, then use the right framework or other procurement option)?  It doesn’t matter, in my view, whether it was valid in the first place or how the process was or wasn’t followed originally, it matters whether there is value today and whether there are better options that will support the overall plan.  If it’s not cloud, let’s not call it cloud and let’s get to the root of what is really going on with commodity technology in government.
– Overwhelmingly adopting an architecture founded on multiple shared and secure infrastructures. There’s no need for a single architecture when the market provides so many commodity options – and spreading the business will foster innovation, increase the access points (and improve security through distributing data) and ensure that there is continued competitive tension.  Some of that infrastructure will be pure public cloud, some of it will be a shared government cloud (in the US, cloud providers maintain clones of their public infrastructure for federal government use – that may be one answer for specific areas; importantly, what I am not suggesting is that a department set up its own infrastructure and call it a cloud, though there may be specific instances, in the security services, say, where data classifications mean that’s the only option).
– Migrating all of government’s commodity services to the cloud.  Commodity means email, directories, collaboration, HR, finance, service support, asset management and so on.  This doesn’t have to be a wholesale “move now” approach, but one that looks at when it’s sensible to close down existing applications and make the move.  No new applications should be built or deployed without first assessing whether there is a cloud alternative – this is a perfect place for a spending team to look at who is doing what and act as a hub for sharing what is going on across central and local government.  
  • I’ve been on the record for a long time as saying government should recognise that it doesn’t collaborate with itself – having collaboration services inside the department’s own firewall isn’t collaboration, it’s talking to yourself.  I believe that I even once suggested using a clone of Facebook for such collaboration.  Government doesn’t need lots of collaboration tools – it needs one or two where everyone, including suppliers and even customers, can get to with appropriate segregation and reviews to make sure people can only see what they’re supposed to see.  Whatever happened to Civil Pages I wonder?
– Putting in place a new test for Lot 3 (the old Lot 4) services to measure what is being purchased against its contribution to the department’s cloud migration strategy.  This is a “cloud first” test – are you really using this capability to help you move to the cloud?  What is the objective, what are the milestones?  A follow on test to see how delivery is progressing will then allow a regular state of the cloud nation report to be published to see what is and isn’t moving.  
– Working with local government, Devolved Administrations, the Health Service and others to see what they are doing in cloud.  With 84% of G-Cloud spend in central government, maybe the other folks are doing something different – maybe it’s good, maybe it’s not so good, but there are likely lessons to be learned.

10 Years After 10 Years After

Strictly speaking, this is a little more than 10 years after the 10 year mark.  In late 2005,  Public Sector Forums asked me to do a review of the first 10 years of e-government; in May 2006, I published that same review on this blog.  It’s now time, I think, to look at what has happened in the 10 years (or more) since that piece, reviewing, particularly, digital government as opposed to e-government.

Here’s a quick recap of the original “10 years of e-government” piece, pulling out the key points from each of the posts that made up the full piece:

Part 1 – Let’s get it all online

At the Labour Party conference in 1997, the Prime Minister had announced his plans for ‘simple government’ with a short paragraph in his first conference speech since taking charge of the country: 
“We will publish a White Paper in the new year for what we call Simple Government, to cut the bureaucracy of Government and improve its service. We are setting a target that within five years, one quarter of dealings with Government can be done by a member of the public electronically through their television, telephone or computer.”
Some time later he went further:
“I am determined that Government should play its part, so I am bringing forward our target for getting all Government services online, from 2008 to 2005”

It’s easy to pick holes in a strategy (or perhaps the absence of one) that’s resulted in more than 4,000 individual websites, dozens of inconsistent and incompatible services and a level of take-up that, for the most popular services, is perhaps 25% at best.
After all, in a world where most people have 10-12 sites they visit regularly, it’s unlikely even one of those would be a government site – most interactions with government are, at best, annual and so there’s little incentive to store a list of government sites you might visit. As the count of government websites rose inexorably – from 1,600 in mid-2002 to 2,500 a year later and nearly 4,000 by mid-2005 – citizen interest in all but a few moved in the opposite direction.
Over 80% of the cost of any given website was spent on technology – content management tools, web server software, servers themselves – as technology buyers and their business unit partners became easy pickings for salesmen with 2 car families to support. Too often, design meant flashy graphics, complicated pages, too much information on a page and confusing navigation. 
Accessibility meant, simply, the site wasn’t.
In short, services were supply-led by the government, not demand-led by the consumer. But where was the demand? Was the demand even there? Should it be up to the citizen to scream for the services they want and, if they did, would they – as Henry Ford claimed before producing the Model T – just want ‘faster horses’, or more of the same they’d always had performed a little quicker? 
We have government for government, not government for the citizen. With so many services available, you’d perhaps think that usage should be higher. Early on, the argument was often made (I believe I made it too) that it wasn’t worth going online just to do one service – the overhead was too high – and that we needed to have a full range of services on offer – ones that could be used weekly and monthly as well as annually. That way, people would get used to dealing online with government and we’d have a shot at passing the ‘neighbour test’ (i.e. no service will get truly high usage until people are willing to tell their neighbour that they used, say, ‘that new tax credits service online’ and got their money in 4 days flat, encouraging their friends to do likewise).
A new plan
 • Rationalise massively the number of government websites. In a 2002 April Fool email sent widely around government, I announced the e-Envoy’s department had seized control of government’s domain name registry and routed all website URLs to UKonline.gov.uk and was in the process of moving all content to that same site. Many people reading the mail a few days later applauded the initiative. Something similar is needed. The only reason to have a website is if someone else isn’t already doing it. Even if someone isn’t, there’s rarely a need for a new site and a new brand for every new idea.
• Engage forcefully with the private sector. The banks, building societies, pension and insurance companies need to tie their services into those offered by government. Want a pension forecast? Why go to government – what you really want to know is how much will you need to live on when you’re 65 (67?) and how you’ll put that much money away in time. Government can’t and won’t tell you that. Similarly, authentication services need to be provided that can be used across both public and private sectors – speeding the registration process in either direction. With Tesco more trusted than government, why shouldn’t it work this way? The Government Gateway, with over 7 million registered users, has much to offer the private sector – and they, in turn, could accelerate the usage of hardware tokens for authentication (to rid us of the problems of phishing) and so on.
• Open up every service. The folks at mySociety, Public Whip and theyworkforyou.com have shown what can be done by a small, dedicated (in the sense of passionate) team. No-one should ever need to visit the absurdly difficult to use Hansard site when it’s much easier through the services these folks have created. Incentives for small third parties to offer services should be created.
• Build services based on what people need to do. We know every year there are some 38 million tax discs issued for cars and that nearly everyone shows up at a post office with a tax disc, insurance form and MOT. For years, people in government have been talking about insurance companies issuing discs – but it still hasn’t happened. Bring together disparate services that have the same basic data requirements – tax credits and child benefit, housing benefit and council tax benefit etc.
• Increase the use of intermediaries. For the 45% of people who aren’t using the Internet and aren’t likely to any time soon, web-enabled services are so much hocus pocus. There needs to be a drive to take services to where people use them. Andrew Pinder, the former e-Envoy, used to talk about kiosks in pubs. He may have been speaking half in jest, but he probably wasn’t wrong. If that’s where people in a small village in Shropshire are to be found (and with Post Offices diminishing, it’s probably the only place to get access to the locals), that’s where the services need to be available. Government needs to be in the wholesale market if it’s to be efficient – there are far smarter, more fleet of foot retail providers that can deliver the individual transactions.
• Clean up the data. One of the reasons why government is probably afraid to join up services is that they know the data held on any given citizen is wildly out of date or just plain wrong. Joining up services would expose this. When I first took the business plan for the Government Gateway to a minister outside the Cabinet Office, this problem was quickly identified and seen as a huge impediment to progress.

More to come.

The Billion Pound G-Cloud

Sometime in the next few weeks, spend through the G-Cloud framework will cross £1 billion.  Yep, a cool billion.  A billion here and a billion there and pretty soon you’re talking real money.

Does that mean G-Cloud has been successful?  Has it achieved what it was set up for? Has it broken the mould?  I guess we could say this is a story in four lots.

Well, that depends:

1) The Trend

Let’s start with this chart showing the monthly spend since inception.

It shows 400-fold growth since day one, but spend looks pretty flat over the last year or so, despite that peak 3 months ago. Given that this framework had a standing start, for both customers and suppliers, it looks pretty good.  It took time for potential customers (and suppliers) to get their heads round it.  Some still haven’t. And perhaps that’s why things seem to have stalled?

Total spend to date is a little over £903m.  At roughly £40m a month (based on the November figures), £1bn should be reached before the end of February, maybe sooner. And then the bollard budget might swing into action and we’ll see a year end boost (contrary to the principles of pay as you go cloud services though that would be).
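The arithmetic behind that projection is a simple back-of-envelope check, using the figures above:

```python
total_to_date = 903   # £m, spend through the framework so far
run_rate = 40         # £m per month, based on the November figures

months_to_1bn = (1_000 - total_to_date) / run_rate
print(f"about {months_to_1bn:.1f} months to £1bn")   # ~2.4 months: late February
```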

Government no longer publishes total IT spend figures but, in the past, it’s been estimated at somewhere between £10bn and £16bn per year.  G-Cloud’s annual spend, then, is a tiny part of that overall spend.  G-Cloud fans have, though, suggested that £1 spent on G-Cloud is equivalent to £10 or even £50 spent the old way – that may be the case for hosting costs, but it certainly isn’t the case for Lot 4 costs (though I am quite sure there has been some reduction in rates simply from the real innovation that G-Cloud brought – transparency on prices).

2) The Overall Composition

Up until 18 months ago, I used to publish regular analysis showing where G-Cloud spend was going.  The headline observation then was that some 80% was being spent in Lot 4 – Specialist Cloud Services, or perhaps Specialist Consultancy Services.  To date, of our £903m, some £715m, or 79%, has been spent through Lot 4 (the red bars on the chart above).  That’s a lot of cloud consultancy.

(post updated 19th Jan 2016 with the above graph to show more clearly the percentage that is spent on Lot 4).

With all that spent on cloud consultancy, surely we would see an increase in spend in the other lots?  Lot 4 was created to give customers a vehicle to buy expertise that would explain to them how to migrate from their stale, high capital, high cost legacy services to sleek, shiny, pay as you go cloud services.

Well, maybe.  Spend on IaaS (the blue bars), or Lot 1, is hovering around £4m-£5m a month, though it has increased substantially from the early days.  Let’s call it £60m/year at the current run rate (we’re at £47m now) – if it hits that number it will be double the spend of last year, good growth for sure, and that IaaS spend has helped create some new businesses from scratch.  But they probably aren’t coining it just yet.

Perhaps the Crown Hosting Service has, ummm, stolen the crown and taken all of the easy business.  Government apparently spends £1.6bn per year on hosting, with £700m of that on facilities and infrastructure, and the CHS was predicted to save some £530m of that once it was running (that looks to be a save through the end of 2017/18 rather than an annual save).  But CHS is not designed for cloud hosting, it’s designed for legacy systems – call it the Marie Celeste, or the Ship of the Doomed.  You send your legacy apps there and never have to move them again – though, ideally, you migrate them to cloud at some point. We had a similar idea to CHS back in 2002, called True North; it ended badly.

A more positive way to look at this is that Government’s hosting costs would have increased if G-Cloud wasn’t there – so the £47m spent this year would actually have been £470m or £2.5bn if the money had been spent the old way.  There is no way of knowing, of course – it could be that much of this money is being spent on servers that are idling because people spin them up but don’t spin them down, or it could be that more projects are underway at the same time than was previously possible because the cost of hosting is so much lower.

But really, G-Cloud is all about Lot 4.  A persistent and consistent 80% of the monthly spend is going on people, not on servers, software or platforms.  PaaS may well be People As A Service as far as Lot 4 is concerned.

3) Lot 4 Specifically

Let’s narrow Lot 4 down to this year only, so that we are not looking at old data.  We have £356m of spend to look at, 80% of which is made by central government.  There’s a roughly 50/50 split between small and large companies – though I suspect one or two previously small companies have now become very much larger since G-Cloud arrived (though on these revenues, they have not yet become “large”).

If we knew which projects that spend had been committed to, we would soon know what kind of cloud work government was doing, right?

Sadly, £160m is recorded against “Project Null”.  Let’s hope it’s successful; there’s a lot of cash riding on it not becoming void too.

Here are the Top 10 Lot 4 spenders (for this calendar year to date only):

And the Top 10 suppliers:


Cloud companies?  Well, possibly.  Or perhaps, more likely, companies with available (and, obviously, agile) resource for development projects that might, or might not, be deployed to the cloud.  It’s also possible that all of these companies are breaking down the legacy systems into components that can be deployed into the cloud starting as soon as this new financial year; we will soon see if that’s the case.

To help understand what is most likely, here’s another way of looking at the same data.  This plots the length of an engagement (along the X-axis) against the total spend (Y-axis) and shows a dot with the customer and supplier name.

A cloud-related contract under G-Cloud might be expected to be short and sharp – a few months, perhaps, to understand the need, develop the strategy and then ready it for implementation.  With G-Cloud contracts lasting a maximum of two years, you might expect to see no relationship last longer than twenty four months.
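For the curious, here’s a sketch of how such a plot can be built from the published data – the column names are assumptions for illustration, not the actual schema:

```python
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("gcloud_spend.csv", dtype=str)
df["date"] = pd.to_datetime(df["EvidencedSpendDate"], dayfirst=True, errors="coerce")
df["spend"] = pd.to_numeric(df["EvidencedSpend"], errors="coerce")
df["month"] = df["date"].dt.to_period("M")

# One point per customer-supplier pair: engagement length vs total spend.
g = df.groupby(["CustomerName", "SupplierName"]).agg(
    months=("month", lambda m: (m.max() - m.min()).n + 1),
    total=("spend", "sum"),
)

plt.scatter(g["months"], g["total"] / 1e6)
plt.axvline(24, linestyle="--")  # the nominal two-year contract limit
plt.xlabel("engagement length (months)")
plt.ylabel("total spend (£m)")
plt.show()
```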

But there are some big contracts here that appear to have been running for far longer than twenty four months.  And, whilst it’s very clear that G-Cloud has enabled far greater access to SME capability than any previous framework, there are some old familiar names here.

4) Conclusions

G-Cloud without Lot 4 would look far less impressive, even if the spend it is replacing was 10x higher.  It’s clear that we need:

– Transparency. What is the Lot 4 spend going to?

– Telegraphing of need.  What will government entities come to market for over the next 6-12 months?

– Targets.  The old target was that 50% of new IT spend would be on cloud.  Little has been said about that in a long time.  Little has, in fact, been said about plans.  What are the new targets?

Most of those points are not new – I’ve said them before, for instance in a previous post about G-Cloud as a Hobby and also here about how to take G-Cloud Further Forward.

In short, Lot 4 needs to be looked at hard – and government needs to get serious about the opportunity that this framework (which broke new ground at inception but has been allowed to fester somewhat) presents for restructuring how IT is delivered.

Acknowledgements

I’m indebted, as ever, to Dan Harrison for taking the raw G-Cloud data and producing these far simpler to follow graphs and tables.  I maintain that GDS should long ago have hired him to do their data analysis.  I’m all for open data, but without presentation, the consequences of the data go unremarked.

Performance Dashboard July 2003 – The Steep Hill of Adoption

With gov.uk’s Verify appearing on the Performance Dashboard for the first time, I was taken all the way back to the early 2000s when we published our own dashboards for the Government Gateway, Direct.gov.uk and our other services.  Here’s one from July 2003 – there must have been earlier ones but I don’t have them to hand:

This is the graph that particularly resonated:

With the equivalent from back then being:

After 4 years of effort on the Identity programme (now called Verify), the figures make pretty dismal reading – low usage, low ability to authenticate first time, low number of services using it – but, you know what, the data is right there for everyone to see, and it’s plain that no one is going to give up on this, so gradually the issues will be sorted, people will authenticate more easily and more services will be added.  It’s a very steep hill to climb though.

We started the Gateway with just the Inland Revenue, HM Customs and MAFF (all department names that have long since fallen away) – and adding more was a long and painful process.  So I feel for the Verify team – I wouldn’t have approached things the way they have, but it’s for each iteration to pick its path.  There were, though, plenty of lessons to learn that would have made things easier.

There is, though, a big hill to climb for Verify.  It will be interesting to watch.

Mind The Gaps – Nothing New Under The Sun

As we start 2015, a year when several big contracts are approaching their end dates and replacement solutions will need to be in place, here’s a presentation I gave a couple of times last year looking at the challenges of breaking up traditional, single prime IT contracts into potentially lots of smaller, shorter contracts: