Getting A Grip: Special Report on the AWS Special Report

Late last week a seemingly comprehensive takedown of Amazon, titled “Amazon’s extraordinary grip on British data”, appeared in the Telegraph, written by Harry de Quetteville.

Read quickly, it would suggest that Amazon, through means fair and perhaps foul, has secured too great a share of the UK Government’s cloud business and that this poses an increasingly systemic risk to digital services and, inevitably, to consumer data.

Read more slowly, the article brings together some old allegations and some truths and joins them together so as to get to the point where I ask “ok, so what do you want to do about it?”, but it doesn’t suggest any particular action. That’s not to say that there’s no need for action, just that this isn’t the place to find the argument.

The main points of the Telegraph’s case are seemingly based on “figures leaked” (as far as I know, all of this data is public) to the newspaper:

  • Amazon doesn’t pay tax (figures from 2018 are quoted showing it paid £10m in tax on £1.9bn of revenues, using offshore (Luxembourg) vehicles). For comparison, the article says, AWS apparently sold £15m of cloud services to HMRC.
  • There is a “revolving door” where senior civil servants move to work for Amazon “within months of overseeing government cloud contracts.” Three people are referenced: Liam Maxwell (former Government deputy CIO and CTO), Norman Driskell (Home Office CDO) and Alex Holmes (DD Cyber at DCMS).
  • Amazon lowballs prices which then spiral … and “even become a bar to medical research.” This is backed up by a beautifully drawn Amazon smile graphic showing that DCLG signed a contract in 2017 estimated at £959,593 that turned out to cost £2,611,563 (an uplift of 172%).
  • There is a government bias towards AWS giving it “an unfair competitive advantage that has deprived British companies of contracts and cost job[s].”
  • A neat infographic says that “1/3 of government information is stored on AWS” (including sensitive biometric details and tax records) and that 80% of cloud contracts are “won by large firms like AWS”.
  • Amazon’s “leading position with … departments like the Home Office, DWP, Cabinet Office, NHS Digital and the NCA is also entrenched.”
  • Figures obtained by the Sunday Telegraph suggest that AWS has captured more than a third of the UK public sector market with revenues of more than £100m in the last financial year.

Let’s start by setting out the wider context of the cloud market:

  • AWS is a fast growing business, roughly 13% of Amazon’s total sales (as of fiscal Q1 2019). Just 15 years old, it has quickly come to represent the bulk of Amazon’s profits (and is sometimes the only part of Amazon that is in profit – though Amazon would say that they choose not to make the retail business profitable, preferring to reinvest).
  • Microsoft’s Azure is regularly referred to as a smaller, but faster-growing, business than AWS. Google is smaller still. It’s hard to be sure though – getting like-for-like comparisons is difficult. AWS’ revenues in Q2 2019 were $7.7bn, and Microsoft’s cloud (which includes Office 365 and other products) had $9.6bn in revenues. AWS’ growth rate was 41%, Azure’s was 73% – both rates are down year on year. Google’s cloud (known as GCP) revenue isn’t broken out separately but is included in a line that also covers G-Suite, Google Play and Nest, totalling $5.45bn, up 25%.
  • Amazon, as first mover, has built quite the lead, with various figures published, including those in the Telegraph article, suggesting it has as much as 50% of the nascent cloud market. Other sources quote Azure at between 22 and 30% and Google at less than 10%.

There’s an almost “by the by” figure quoted that I can’t source, where Lloyd’s of London apparently said that “even a temporary shutdown at a major cloud provider like AWS could wreak almost $20bn in business losses.” The Lloyd’s report I downloaded says:

  • A cyber incident that took a “top three cloud provider” offline in the US for 3-6 days would cost between $6.9bn and $14.7bn (much of which is uninsured, with insured losses running $1.5-2.8bn)

What’s clear from all of the figures is that the cloud market is expanding quickly, that Amazon has seized a large share of that market but is under pressure from growing rivals, and that there is an increasing concentration of workloads deployed to the cloud.

It’s also true that governments generally, but particularly the UK government, are a long way from a wholesale move to the cloud, with few front-line, transactional services deployed. Most of those services are still stuck in traditional data centres, anchored by legacy systems that are slow to change and that will resist, for years to come, a move to a cloud environment. Instead, work will likely be sliced away from them, a little at a time, as new applications are built and the various transformation projects see at least some success.

The Crux

When the move to cloud started, government was still clinging to the idea that its data somehow needed protection beyond that used by banks, supermarkets and retailers. There was a vast industry propping up the IL3 / Restricted classification (where perhaps 75-80% of government data sat, mostly emails asking “what’s for lunch?”). This classification made cloud practically impossible – IL3 data could not sit on the same servers or storage as lower (or higher) classified data, it needed to be in the UK and secured in data centres that Tom Cruise and the rest of the Mission Impossible team couldn’t get into. Let’s not even get into IL4. And, yes, I recognise that the use of IL3 and IL4 in regards to data isn’t quite right, but it was by far the most used way of referring to that data.

Then, in 2014, after some years of work, government made a relatively sudden, and dramatic, switch. 95% of data was “Official” and could be handled with commercial products and security. A small part was “Official Sensitive” which required additional handling controls, but no change in the technical environment.

And so the public cloud market became a viable option for government systems – all of them, not just websites and transactional front ends but potentially anything that government did (that didn’t fall into the 5% of things that are secret and above).

Government was relatively slow to recognise this – after all, there was a vast army of people who had been brought up to think about data in terms of the “restricted” classification, and such a seismic change would take time. There are still some departments that insist on a UK presence, but there are many who say “official is official” and anywhere in the UK is fine.

It was this, more than anything, that blew the doors off the G-Cloud market. You can see the rise in Lot 1/IaaS cloud spend from April 2014 onwards. That was not just broad awareness of cloud as an option, but the recognition that the old rules no longer applied.

The UK’s small and medium companies had built infrastructures based around the IL3 model. It was more expensive, took longer, and forced them through the formal accreditation model. Few made it through; only those with strong engineering standards and good process discipline and, perhaps, relatively deep pockets. But once “official” came along, much of that work was over the top, driving cost and overhead into the model and it wasn’t enough of a moat to keep the scale players out.

TL;DR

I’ve let contracts worth several hundred million pounds in total and worked with people who have done 5, 10 or 20x that amount. I’ve never met anyone in government who bought something because of a relationship with a former colleague or because of any bias for or against any supplier. Competition is fearsome. Big players can outspend small players. They can compete on price and features. Small players can still win. Small players can become big players. Skate where the puck is going, not where it was.

How does a government department choose a cloud provider?

Whilst the original aim of G-Cloud was to be able to type in a specification of what was wanted and have the system spit out some costs (along with iTunes style reviews), the reality is that getting a quote is more complicated than that. The assumption, then, was perhaps that cloud services would be true commodity, paying by the minute, hour or day for servers, storage and networks. That largely isn’t the case today.

There are three components to a typical evaluation:

1) How much will it cost?

2) What is the range of products that I can deploy and how easily can I make that happen? Is the supplier seen by independent bodies as a leader or a laggard?

3) Do I, or my existing partners, already have the skills needed to manage this environment?

Most customers will likely start with (3), move to (2) and then evaluate (1) for the suppliers that make it through.

Is there a bias here? With AWS having close to 50% market share of the entire cloud market, the market will be full of people with AWS skills, followed closely by those with Azure skills (given the predominance of Microsoft environments for e.g. Active Directory, email etc in government). Departments will look at their existing staff, or that of their suppliers, or who they can recruit, and pick their strategy based on the available talent.

Departments will also look at Gartner, or Forrester, and see who is in the lead. They will talk to a range of supplier partners and see who is using what. They will consult their peers and see who is doing what.

But there’s no bias against, or for, any given supplier. We can see that when we read about companies who have been hauled over the coals by one department and the very next week they get a new contract from a different department. Don’t read conspiracy into anything government ever does; it’s far more likely to be cockup.

Is there a revolving door?

People come into government from the outside world and people leave government to go to the outside world. In the mid-2000s there was a large influx of very senior Accenture people joining government; did Accenture benefit? If anything, they probably lost out as the newcomers were overcautious rather than overzealous.

Government departments don’t choose a provider because a former colleague or Cabinet Office power broker is employed by the supplier. As anywhere, relationships persist for a period – not as long as you would think – and so some suppliers are better able to inform potential customers of the range of their offer, but this is not a simple relationship. Some people are well liked, some are well respected and some are neither. There are 17,000 people in government IT. They all play a role. Some will stay, some will go. Some make decisions, some don’t.

Also, a bid informed by a former colleague could be better written than one uninformed. This advantage doesn’t last beyond a few weeks. I’ve worked on a lot of bids (both as buyer and seller) and I’m still amazed how many suppliers fail to answer the question, don’t address the scoring criteria, or waffle away beyond the word count. If you’ve been a buyer, you will likely be able to teach a supplier how to write a bid; but there are any number of people who can do that.

There is little in the way of inside information about what government is or isn’t doing or what its strategy will look like. Spend a couple of hours with an architect or bid manager in any Systems Integrator that has worked for several departments and you will know as much about government IT strategy as anyone on the inside.

Do costs escalate (and are suppliers lowballing)?

Once a contract is signed, and proved to be working, it would be unusual if more work was not put through that same contract.

What’s different about cloud is mostly a function of the shift from capex to opex. Servers largely sit there and rust. The cost is the cost. Maybe they’re 10% used for most of their lives, with occasional higher spikes. But the cost for them doesn’t change. Any fluctuations in power are wrapped into a giant overhead number that isn’t probed too closely.

Cloud environments consume cash all the time though. Spin up a server and forget to spin it down and it will cost you money. Fire up more capacity than you need, and it will cost you money. Set up a development environment for a project and, when the project start is delayed by governance questions, don’t spin it down, and it will cost you money. Plan for more capacity than you needed and don’t dynamically adjust it, and it will cost you money. Need some more security? That’s extra. Different products? That’s more as well. If you don’t know what you need when you set out, it will certainly cost more than you expected when you’re done.
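Much of that discipline can be automated, though. Here is a minimal sketch (assuming Python with boto3 and configured AWS credentials; the 14-day threshold and the “Owner” tag are my own illustrative conventions, not anything AWS mandates) of the sort of check that catches machines left running:

```python
# A rough hygiene check, not a cost-management product: flag EC2 instances
# that have been running longer than a chosen threshold. Assumes boto3 is
# installed and AWS credentials are configured; the 14-day threshold and the
# "Owner" tag are illustrative conventions, not AWS defaults.
from datetime import datetime, timedelta, timezone

import boto3

THRESHOLD = timedelta(days=14)

ec2 = boto3.client("ec2")
now = datetime.now(timezone.utc)

# Only running instances are accruing compute charges.
pages = ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)

for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            age = now - instance["LaunchTime"]  # LaunchTime is timezone-aware
            if age > THRESHOLD:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                print(
                    f"{instance['InstanceId']} ({instance['InstanceType']}) "
                    f"running {age.days} days – owner: {tags.get('Owner', 'unknown')}"
                )
```

The same idea applies to unattached storage, idle databases and over-sized instances: in the cloud, cost hygiene is a continuous job rather than an annual audit.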

Many departments will have woken up to this new cost model when they received their first bill and it was 3x or 5x what they expected. Cost disciplines will then have been imposed, probably unsuccessfully. Over time, these will be improving, but there are still going to be plenty of cases of sticker shock, both for new and existing cloud customers, I’m sure.

But if the service is working, more projects will be put through the same vehicle, sometimes with additional procurement checks, sometimes without. The Inland Revenue’s original contract with EDS was valued, in 1992, at some £200m/year. 10 years later it was £400m and not long after that, with the addition of HMCE (to form HMRC), and the transition to CapGemini, it was easily £1bn.

Did EDS lowball the cost? Probably. And it probably hurt them for a while until new business began to flow through the contract – in 1992, the IR did not have a position on Internet services, but as it began to add them in the late 90s, its costs would have gone up, without offsetting reductions elsewhere.

Do suppliers lowball the cost today? Far less so: the old adage “price it low and make it up on change control” is difficult to pull off now, and with unit costs published and many services or goods bought at a unit cost rate, it would be difficult to pull the wool over the eyes of a buyer.

Is tax paid part of the evaluation?

For thirty years until the cloud came along, most big departments relied on their outsourced suppliers to handle technology – they bought servers, cabled them up, deployed products, patched them (sometimes) and fed and watered them. Many costs were capitalised and nearly everything was bought through a managed services deal because VAT could be reclaimed that way.

Existing contracts were used because it avoided new procurements and ensured that there was “one throat to choke”, i.e. one supplier on the hook for any problems. Most of these technology suppliers were (and are) based outside of the UK and their tax affairs are not considered in the evaluation of their offers.

HMRC, some will recall, did a deal with a property company registered in Bermuda, called Mapeley, that doesn’t pay tax in the UK.

Tax just isn’t part of the evaluation, for any kind of contract. Supplier finances are – that is, the ability of a company to scale to support a government customer, or to withstand the loss of a large customer.

Is 1/3rd of government information stored in AWS?

No. Next question.

IaaS expenditure is perhaps £10-12m/month (through end of 2018) – call it £150m/year at most. Total government IT spend, as I’ve covered here before, is somewhere between £7bn and £14bn/year, so cloud hosting is roughly 2% of even the lower estimate. In the early days of the Crown Hosting business case, hosting costs were reckoned to be up to 25% of that cost. Some 70% of the spend is “keep the lights on” for existing systems.

Most government data is still stored on servers and storage owned by government or its integrators and sits in data centres, some owned by government, but most owned by those integrators. Web front ends, email, development and test environments are increasingly moving to the cloud, but the real data is still a long way from being cloud ready.

Are 80% of contracts won by large providers?

Historically, no. UKCloud revenues over the life of G-Cloud are £86m, with AWS at around £63m (through end of 2018). AWS’ share is plainly growing fast though – because of skills in the marketplace, independent views of the range of products and supportability, and because of price.

Momentum suggests that existing contracts will get larger and it will be harder (and harder) for contracts to move between providers, because of the risk of disruption during transition, the lack of skill and the difficulty of making a benefits case for incurring the cost of transition when the savings probably won’t offset that cost.

So what should we do?

It’s easy to say “nothing.” Government doesn’t pick winners and has rarely been successful in trying to skew the market. The cloud market is still new, but growing fast, and it’s hard to say whether today’s winners will still be there tomorrow.

G-Cloud contracts last only two years and, in theory, there is an opportunity to recompete then – see what’s new in the market, explore new pricing options and transition to the new best in class (or Most Economically Advantageous Tender, as it’s known).

But transition is hard, as I wrote here in March 2014. And see this one, talking about mobile phones, from 2009 (with excerpts from a 2003 piece). If services aren’t designed to transition, then it’s unlikely to ever happen.

That suggests that we, as government customers, should:

1) Consciously design services to be portable, recognising that will likely increase costs up front (which will make the business case harder to get through), but that future payback could offset those costs; if the supplier knows you can’t transition, you’re in a worse position than if you have choices

2) Build tools and capabilities that support multiple cloud environments so that we can pick the right cloud for the problem we are trying to solve. If you have all of your workloads in one supplier and in one region, you are at risk if there is a problem there, be it fat fingers or a lightning strike (a sketch of what provider-agnostic tooling can look like follows this list).

3) Train our existing teams and keep them up to date with new technologies and services. Encourage them to be curious about what else is out there. Of course they will be more valuable to others, including cloud companies, when you do this, but that’s a fact of life. You will lose people (to other departments and to suppliers) and also gain people (from other departments and from suppliers).
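To make the portability point concrete, here is a minimal sketch (assuming Python with boto3 and azure-storage-blob; the interface, the names and the case-file example are illustrative, not any department’s actual design) of what keeping the provider behind an interface might look like:

```python
# A minimal sketch of "portable by design": the application talks to a small
# interface, and the choice of cloud sits behind it. Class names, the bucket
# and container parameters and the case-file example are illustrative only.
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None:
        ...


class S3Store(ObjectStore):
    def __init__(self, bucket: str):
        import boto3  # imported here so the module doesn't require every SDK
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)


class AzureBlobStore(ObjectStore):
    def __init__(self, connection_string: str, container: str):
        from azure.storage.blob import BlobServiceClient
        self._client = BlobServiceClient.from_connection_string(connection_string)
        self._container = container

    def put(self, key: str, data: bytes) -> None:
        blob = self._client.get_blob_client(container=self._container, blob=key)
        blob.upload_blob(data, overwrite=True)


# The service depends only on ObjectStore; which provider it gets is a
# deployment decision rather than a rewrite.
def archive_case_file(store: ObjectStore, case_id: str, payload: bytes) -> None:
    store.put(f"cases/{case_id}.json", payload)
```

The abstraction isn’t free – building and maintaining it is exactly the up-front cost referred to in (1) – but it is what keeps the exit door open.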

And, as government suppliers, we should:

1) Recognise that big players exist in big markets and that special treatment is rarely available. They may not pay tax in this jurisdiction, but that’s a matter for law, not procurement. They may hire people from government; you have already done the same and you will continue to look out for the opportunity. Don’t bleat, compete.

2) Go where the big players aren’t going. Offer more, for less, or at least for the same. Provide products that compound your customers’ investment – they’re no longer buying assets for capex, but they will want increased benefit for their spend, so offer new things.

3) Move up the stack. IaaS was always going to be a tough business to compete in. With big players able to sweat their assets 24/7, anyone not able to swap workloads between regions and attract customers from multiple sectors that can better overlap peak workloads is going to struggle. So don’t go there, go where the bigger opportunities are. Government departments aren’t often buying Dropbox, for instance, so what’s your equivalent?

But, don’t:

1) Expect government to intervene and give you preferential treatment because you are small and in the UK. Expect such treatment only if you have a better product, at a better price, that gets closest to solving the specific problem that the customer has.

2) Expect government to break up a bigger business, or change its structure so that you can better compete. It might happen, sure, but your servers will have long since rusted away by the time that happens.

How Much is IT?

Maybe 15 or 16 years ago I sat in a room with a few people from the e-Delivery team and we tried to figure out how much the total IT spend in central government was. All the departments and agencies (including the NHS) were listed on a board and we plugged in the numbers that we knew about (based on contracts that had been let or spend that we were familiar with). Some we proxied based on “roughly the same size as.”

After a couple of hours work, we came up with a total of about £14bn. That’s an annual spend figure. Of course, some would be capital, and some operating costs, but we didn’t include major projects (which would tend towards capital) so it’s likely that 70-80% of that spend was “keep the lights on”, i.e. servers/hosting, operational support, maintenance changes and refreshes.

That number may be wrong today given 10 years of tighter budget management and significant reductions in the staff counts for many large departments. It might be that the £6.3bn in 2014/15 published in an April 2016 report is now more accurate (total government spend that year was c£745bn). A 2011 report suggests £7.5bn. Much depends on the definition of central government (is the NHS in? MoD? Agencies such as RPA, Natural England etc?) and what’s included in the spend total (steady state versus project, pure IT versus consultancy on IT transformation and change projects).

Maybe our number was wrong, maybe the cost has fallen as departments have shrunk. Or maybe it’s hard to get to the right number.

IT is both “a lot” of money and “not much” – public pensions will be some £160bn this year, health and social care roughly the same, Defence as much as £50bn and Social Security perhaps £125bn.

But how much is the right number? It’s useful to know how much is being spent for at least a couple of reasons:

  1. Are we getting more efficient at spending, reducing the cost of keeping the lights on and “getting more, or at least the same, for less”?
  2. Are we pushing the planned 25% (or 33% for the 2020 target) of spend towards SMEs?

It would be more useful to know what the breakdown of spending was, e.g. how much are we spending on:

  • Hosting?
  • Legacy system support?
  • Infrastructure refreshes?
  • Application maintenance?
  • Application development?
  • And so on

Knowing those figures, department by department, would let us explore some more interesting topics:

  • How much are we spending overall and how does that number sit versus other expenses? And versus private sector companies?
  • How are we doing with the migration to cloud (and the cloud first policy) and how much is there left to do?
  • What are our legacy systems really costing us to host, support and enhance? And when we compare those hosting costs with cloud costs, is there a strong case for making the switch (sooner rather than later)?
  • What is the opportunity available if we close down some legacy systems and replace them with more modern systems (with the aim of reducing costs to host, upgrade and refresh, as well as the future cost of new policy introduction)?
  • If we don’t take any action and replace some of our old systems, what kind of costs are we in for over the next 5 and 10 years and does that help frame the debate about the best way ahead?

Lindsay Smith, aka @Insadly, produces some detailed and useful insight on G-Cloud spend. For instance, based on data for April to July 2019, spend on cloud hosting appears to have fallen from £94m in the same quarter in 2018 to £78m this year (he notes that there are some data anomalies that may make this data not so useful – I’ve commented on the problems with the G-Cloud data before and agree with him that it can be unhelpful).
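That sort of comparison is straightforward to reproduce from the published spend data. A minimal sketch (assuming a CSV extract with columns along the lines of “SalesMonth”, “Lot” and “Spend” – the real headers have changed between releases, which is part of the problem):

```python
# A minimal sketch of the quarter-on-quarter comparison described above.
# Column names ("SalesMonth", "Lot", "Spend") are illustrative; the published
# G-Cloud spend files have used different headers (and date formats) over time.
import pandas as pd

df = pd.read_csv("gcloud_spend.csv")
df["SalesMonth"] = pd.to_datetime(df["SalesMonth"], errors="coerce")

hosting = df[df["Lot"] == "Cloud Hosting"]

# April to July totals for 2018 and 2019, mirroring the comparison above.
for year in (2018, 2019):
    window = hosting[
        (hosting["SalesMonth"] >= f"{year}-04-01")
        & (hosting["SalesMonth"] < f"{year}-08-01")
    ]
    print(year, f"£{window['Spend'].sum():,.0f}")
```

Anyone repeating the exercise will hit the same anomalies he describes, but the headline fall is worth trying to explain.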

It’s possible that this is a sign of departments getting smart about their hosting – spinning down machines that are unused, using cloud capacity to deal with peaks and then reverting to a lower base capacity, consolidating environments and using better tools to manage different workloads. It could also be a reflection of seeking lower cost suppliers.

Or it could be a sign that there are fewer new projects starting that are using the cloud from day one (because my overall sense is that the bulk of cloud projects are new, not migrations of existing systems), or that departments are struggling to manage cloud environments and so have experimented and pulled back. Alternatively, it could be that departments are Capex rich and because cloud hosting is an Opex spend, they’re actually buying servers again.

Some broad analysis that showed the trends in spending across departments would improve transparency, highlight areas that need attention, help suppliers figure out where to make product investments and help departmental CIOs figure out where their spending was different from their peers. On the journey away from legacy it would also show where the work needed to be done.

The 10 Year Strategy

In May 2008, on this blog, I wrote about Chateau Palmer (a fine Bordeaux wine) and, specifically, about how making wine forces a long term strategy – vines take years before they produce a yield that is worth bottling (my friends in the business say that the way to make a small fortune in wine is to start with a large one), more years can go by before the wine in the bottle is drunk by most consumers, and yet, every year the process repeats (with some variety, much caused by the weather).  It’s definitely a long game.

I wondered what would happen if you could only make decisions about your IT investment every 10 years, and then made a couple of predictions.  I said:
Cloud computing – This is going to be increasingly talked about until you can’t remember when people didn’t talk about it and then, finally, people are going to do it. [If you read only this bit then perhaps I am a visionary strategist; if you read the whole of it, I got most of the rest wrong]
Application rationalisation – Taken across a single country’s government as a whole, the total number of applications will be a frightening number, as will the total cost to support them all. There are several layers of consolidation, ranging from declaring “end of life” for small systems and cutting their budgets to zero (and waiting for them to wither and die – this might take eons) to a more strategic, let’s use only one platform (SAP, Oracle etc) from here on in and migrate everything to that single platform (this too could take eons)
It feels, 11 years on, that we are still talking about cloud computing and that, whilst many are doing it, we are a long way from all in.  And the same for application rationalisation – many have rationalised, but key systems are still creaking, supported by an ever decreasing number of specialists, and handling workloads far beyond their original design principles.
Did we devise a strategy and stick to it? Or did we bend with the wind and change year to year, rewriting as new people came and went? Perhaps we focused on business as usual and forgot the big levers of change?

Disaggregation Disillusionment

About 15 years ago I wrote a post titled “Websites of Mass Disillusionment”, or maybe it was “Websites of Mass Delusion.”  I can’t recall which and, unusually, I can’t find the original text – I was told, by a somewhat unhappy Minister of the Cabinet Office, to delete the post or lose my job.  At the time I rather liked my job and so I opted to delete the post.  The post explored how, despite there being 1000s of government websites, on which 100s of millions of pounds were being spent, the public, at large, didn’t care about them, weren’t visiting them and saw no need to engage with government (here’s at least a thread of the article, published in August 2009).  I don’t think the Minister disagreed with the content, but he definitely wasn’t keen on the title, coming so soon after the famous missing WMDs in Iraq.
I’m somewhat hesitantly hinting at that title again with this post, though I have less fear of a Minister telling me I will lose my job because of it (I’m not employed by any Ministers) and, anyway, I think this topic, disaggregation, is worth exploring.
It’s worth exploring because the news over recent months has been full of stories about departments extending contracts with existing suppliers, re-scoping and re-awarding contracts to those same suppliers or moving the pieces around those suppliers creating the illusion of change but, fundamentally, changing little.
“It looks like jobs for the boys again; there’s very little sign of genuine effort at disaggregation; they’re just moving the pieces around.”
This feels like a poor accusation – putting to one side the tone of “jobs for the boys” in 2018, it hints at dishonesty or incompetence when, I think, it says more about the challenges departments are facing as they grapple with unwinding contracts that were often put in place 15-20 years ago and that have been “assured” rather than “managed” for all of that time.
But, let’s move on and first establish what we mean, in the context of Public Sector IT, by disaggregation.  We have to wind back a bit to get to that:

IT Outsourcing In The Public Sector (1990 onwards)

In the early 1990s, when departments began to outsource their IT, the playbook was roughly:
  • Count up everyone with the word “technology”, “information” or “systems” in their job title and draw up a scope of services that encompassed all of that work.
  • Carry out an extensive procurement to find a third party provider prepared to charge the lowest price to do the same job. The very nature of government departments meant that these contracts were huge – sometimes £100-200m/year (in the 90s) – and, because it was such hard work to carry out all of the procurement process, the contracts were long, often 10 years or more.

With them went hardware and software, networks and other gadgets – or, at least, the management of those things. Whereas the people moved off the payroll, the hardware often stayed on the asset register (and new hardware went on that same asset register, even when purchased through the third party). This was mostly about capital spending – something with flashing lights went on the books, after all.

There were a lot of moving parts in these deals – the services to be provided, the measures by which performance and quality would be assessed, legal obligations, plans for future exits and so on. I’ve seen some of the contracts and they easily ran to more than 10,000 pages.

Side Effects

There were four interesting side effects as a result of these outsource deals:
  1. Many departments could now recover VAT on “managed services” but not on hardware purchases. Departments are good at exploiting such opportunities and so the outsource vendor would buy the hardware on behalf of the department, sell it back to the department as part of a managed service, and the department would then reclaim the VAT, getting 20% back on the deal. Those who were around in the early days of G-Cloud will remember the endless loops about whether VAT could be reclaimed – it was some years after G-Cloud started that this was successfully resolved.
  2. Departments now had a route to buying more IT services, or capability, without needing to go through a new procurement, provided the scope of the original procurement was wide enough.  That meant that existing contracts could be used to buy new services.  And, as everyone knows, IT doesn’t stay still, so there were a lot of new services, and nearly all of them went through the original contract.  Those contracts swelled in size, with annual spend often double or triple the original expectation within the first few years.  When e-government, now digital, came along and when departments merged, those numbers often exploded.
  3. Whilst all of the original staff involved transferred, via TUPE, on the package they had in government – salary plus index linked pensions etc – any new staff brought on e.g. to replace those who had left (or retired) or for new projects, would come on with a deal that was standard for the private sector.  That usually meant that instead of pension contributions being 27-33%, they were more likely 5-7%.  Instantly, that created an easy save for government – it was 20% or more cheaper, even before we talk VAT, to use the existing provider.
  4. Whilst departments have long had an obligation to award business to smaller players, the ease of using the big players with whom they already had contracts made that difficult (in the sense that there was an easy step “write a contract change to award this work to X” versus “Write the spec, go to market, evaluate, negotiate, award, work with new supplier who doesn’t understand us”).  Small players were, unfairly, shut out.

The Major Flaw

There was also a significant flaw:
  • When a department wanted to know what something cost, it was very hard to figure out. Email, for instance – a few servers for Outlook, some admin people to add and delete users etc – how hard can it be to cost? That’s a bit like Heisenberg’s Uncertainty Principle – the more you study where something is, the less you know about where it’s going. In other words, if you looked closely at one thing, the money moved around. If something needed to be cheap to get through, the costs were loaded elsewhere. If something needed to be expensive to justify continued investment (avoiding the sunk cost fallacy), costs were loaded on to it. Then, of course, there was the ubiquity of “shared services” – as in “Well, Alan, if you want me to figure out how much email costs, we need to consider some of Bob’s time as he answers the phone for all kinds of problems, a share of the network costs for all that traffic, some of Heidi’s time because email is linked to the directory and without the work she does on the directory, it wouldn’t work” and so on. Benchmarking was the supposed solution for that – but if you couldn’t break out the costs, how did you know it was value for money? Or not? Did suppliers consciously hinder efforts to find true cost? I suspect it was a mix of the structure they’d built for themselves – they didn’t, themselves, know how it broke down – and a lack of disciplined chasing by departments … because the side effects and the flaws self-reinforced.

Reinforcement

Over the 20 years or so from the first outsourcing until Francis Maude part 2 started, in 2010, these side-effects, and the major flaw, reinforced the outsourcing model.  It was easy to give work to the supplier you already worked with.  It was hard to figure out whether you were over-paying, so you didn’t try to figure that out very often.  The supplier was, on the face of it, anyway, cheaper than you could do it (because VAT, because cost of transition, because pensions etc).  These aren’t good arguments, but I think they are the argument.


What Do We Mean By Disaggregation?

Disaggregation, then, was the idea of breaking out these monolithic contracts (some departments, to be fair, had a couple of suppliers, but usually as a result of a machinery of government change that merged departments, or broke some apart).
A department coming to the end of its contract period with its seeming partner of the last decade would, instead of looking for a new supplier to take on everything, break their IT services into several component parts: networks, desktop, print, hosting, application support, Helpdesk and so on.
There were essentially three ways of attempting this as in the picture below (this, and all of the pictures here, are from various slide decks worked on in 2013/4):
That is:
1) A simple horizontal split – perhaps user facing services, and non-user facing.   This was rarely chosen as it didn’t pass the GDS spend controls test and, in reality, didn’t really achieve much of the true aim of disaggregation, albeit it made for a simple model for a department to operate.
2) A “towers based” model with an integration entity or partner working with several towers, for instance, hosting, desktop, network and applications support.  This was the model chosen by the early adopters of disaggregation.  Some opted to find a partner as their SIAM, some thought about bringing it inhouse, some did a little of both.  The pieces in a tower model are still pretty large, often far out of the reach of small providers, especially if the contract runs over 5 years or more.  Those departments that tried it this way, haven’t had a good experience for the most part, and the model has fallen out of favour.
3) A fully disaggregated model with a dozen or more suppliers, each best of breed and focused on what they were best at.  Integration, in this case, was more about filling in all of the gaps and, realistically, could only be done in house.  Long ago, and I know it’s a broken record, when we built the Gateway, we were disaggregated – 40+ suppliers working on the development, a hosting provider, an infrastructure builder, an apps support provider, a network provider and so on.  Integration at this level isn’t easy.
In the “jobs for the boys” quote above, the claim is really that the department concerned had opted for something close to (2) rather than (3) – that is, deliberately making for large contracts (through aggregation) and preventing smaller players from getting involved.  It’s more complicated than that.

That reinforcement – the side effects and the flaws – plus the inertia of 20+ years of living in a monolithic outsource model meant that change was hard.  Really hard.

What Does That Mean In Practice?

Five years ago, I did some work for a department looking at what it would take to get to the third model, a fully disaggregated service.  The scope looked like this:
Service integration, as I said above, fills in the gaps … but there are a lot of components.  Lots of moving parts for sure.  Many, many millions were spent by departments on Target Operating Models – pastel shaded powerpoints full of artful terms for what the work would look like, how it would be done and what tools were used.  Nearly all of that, I suspect, sits on a shelf, long since abandoned as stale, inflexible and useless.
If they had disaggregated to this level, they would need to sign more than 20 contracts.  That would mean 20 procurements carried out roughly in parallel, with some lagging to allow others to break ground first.  But all would need to complete by the time the contract with the main supplier came up for renewal.  The end date, in other words was, in theory at least, fixed.  Always a bad place to start.

Procurement Challenge

When you are procuring multiple things in parallel, those buying and those selling suffer.  Combining some things would allow a supplier, perhaps, to offer a better deal.  But the supplier doesn’t know what they’ve won and can’t bid on the basis that they will win several parts and so book the benefit of that in their offer (unless they’re prepared to take some possibly outlandish risks).  Likewise, the customer wants variety in the supply chain and wants to encourage bidders to come forward but, at the same time, needs to manage a bid process with a lot of players, avoiding giving any single bidder more work than is optimal (and the customer is unable to influence the outcome of any single bid of course), keeping everyone in the game, staying away from conflicts of interest and so on.

Roadmap Challenge

The transitions are not equally easy (or equally hard). Replacing WAN connectivity is relatively straightforward – you know where all the buildings are and need to connect them to the backbone, or to the Internet. Replacing in-office connectivity is a bit harder – you need to survey every office and figure out what the topology of the wireless network is, ripping out the fixed connections (except where they might be needed). Moving to Office 365 might be a bit harder still, especially if it comes with a new Active Directory and everyone needs to be able to mail everyone else, and not lose any mail, whilst the transition is underway. None of these are akin to putting astronauts on the moon, but for a department with no astronauts, hard enough.

We also need to consider that modern services are, for the most part, disaggregated from day one – new cloud services are often procured from an IaaS provider, several development companies, a management company and so on. What we are talking about here, for the most part, is the legacy applications that have been around a decade or more, the network that connects the dozens or hundreds of offices around the country (or the world), the data centres that are full of hardware and the devices that support the workload of thousands, or tens of thousands, of users. These services are the backbone of government IT, pending the long-promised (and delayed even longer than disaggregation) digital transformation. They may not be (and indeed, are not) user-led, and they’re certainly not agile – but they handle our tax, pensions, benefits, grants to farmers and so on.

What Does It Really Mean In Practice?

Writing papers for Ministers many years ago, we would often start with two options, stark choices.  The preamble we used to describe these was “Minister, we have two main choices.  The first one will result in nuclear war and everyone will die.  The second will result in all out ground war and nearly everyone will die.  We think we have a third way ahead, it’s a little risky, and there will be some casualties, but nearly everyone will survive.”  Faced with that intro, what choice do you think the Minister will make?

In this context, the story would be something like: “Minister, we have two options.  The first is to largely stay as we are.  We will come under heavy scrutiny, save no money, progress our IT not a jot and deliver none of the benefits you have promised in your various policies.  The second is to disaggregate our services massively, throwing our control over IT into chaos, increasing our costs as we transition and sucking up so many resources that we won’t be able to do any of the other work that you have added to our list since you took office. Alternatively … we have a third choice”

Disaggregate a little. Take some baby steps. Build capability in house, manage more suppliers than we’re used to, but not so many that our integration capability would be exhausted before it had a chance.

Remember, all those people in the 90s who had technology, IT or systems in their job title had been outsourced. They were the ones who built and maintained systems and applications. In their place came people who managed those who built and maintained systems – and all of those people worked for third parties. There’s a huge difference between managing a contract where a company is tasked with achieving X by Y and managing three companies, none of whom have a formal relationship with each other, to achieve X by Y.
The next iteration tried to make it a bit simpler:
We’re down from more than 20 contracts to about 11. Still a lot, but definitely less to manage.  Too much to manage for most departments though.  We worked on further models that merged several of the boxes, aiming for 5-7 contracts overall – a move from just 1 contract to 5-7 is still a big move, but it can be managed with the right team in-house, the right tools and if done at the right pace.

The Departmental Challenge

Departments, then, face serious challenges:
– The end date is fixed.  Transition has to be done by the time the contract with the incumbent finishes.  Many seem to be solving that by extending what they have, as they struggle with delays in specification, procurement or whatever.
– Disaggregate as much as is possible. The smaller the package, the more bidders will play. But the more disaggregation there is, the more white space there is between the contracts and the greater the management challenge for the departments. Most departments have not spent the last 5 years preparing for this moment by doubling up on staff – using some staff to manage the existing contract and finding new staff to prepare for the day when they will have to manage suppliers differently. The result is that they are not disaggregating as much as is possible, but as much as they think they can.
– Write shorter contracts. Short contracts are good – they let you book a price now in the full knowledge that, for commodity items at least, the same thing will be cheaper in two years. It won’t necessarily be cheaper, but it at least means you can test the market every two years and see what’s out there – better prices, better service if you aren’t happy with your supplier, new technology etc. The challenge is that the process – the 5 stage business case plus the procurement – is probably that long for some departments, and they are just not geared up to run fleets of procurements every two years. Contracts are longer, then, to allow everyone to do the transition, get it working, make/save some money and then recompete.

– TUPE nearly always applies. Except when it doesn’t – if you take your email service and move it to Office 365, the staff aren’t moving to Microsoft or to that ubiquitous company known as “the cloud.” But when it does apply, it’s not a trivial process. Handling which staff transition to which companies (and ensuring that the companies taking on the staff have the capability to do it) is tricky. Big outsource providers have been doing this for years and have teams of people that understand how the process works. Smaller companies won’t have that experience and, indeed, may not have the capability to bring in staff on different sets of Ts & Cs.

On top of that, there are smaller challenges on the way to disaggregation, with some mitigations:

  • Lack of skills available in the department; need to identify skills and routes for sourcing them early
  • Market inability to provide a mature offer; coach the market in what will be wanted so that they have time to prove it
  • Too great an uncertainty or risk for the business to take; prove concepts through alpha and beta so risks are truly understood
  • Lack of clear return for the investment required; demonstrate delivery and credibility in the delivery approach so that costs are managed and benefits are delivered as promised
  • Delays in delivery of key shared services; close management with regular delivery cycles that show progress and allow slips to be visible and dealt with
  • Challenges in creating an organisation that can respond to the stimulus of agile, iterative delivery led by user need; start early and prove it, adjust course as lessons are learned, partner closely with the business

What Do We Do?

Departments are on a journey.  They are already disaggregating more than we can see – the evidence of G-Cloud spend suggests that new projects are increasingly being awarded to smaller, newer players who have not often worked with government before. Departments are, therefore, learning what it’s like to integrate multiple suppliers, to manage disparate hosting environments and to deliver projects on an iterative basis.  As with any large population, some are doing that well, some are doing just about ok, and some are finding it really hard and making a real mess of it.  One hopes that those in the former category are teaching those in the latter, but I suspect most are too busy getting on with it to stop and educate others.
The journey plays out in stages – not in three simple stages as I have laid out above, but in a continuum where new providers are coming in and processes are being reformed and refocused on services and users.  Meanwhile, staff in the department are learning what it’s like to “deliver” and “manage” and “integrate” first one service and then many services, rather than “assure” them and check KPIs and SLAs.  Maybe the first jump is from one supplier to four, or five.  A couple of years later, one of those is split into two or three parts.  A year later, another is split.
This is a real change for the way government IT is run.  It’s a change that, in many ways, takes us all the way back to the 1980s when government was leading the way in running IT services – when tax, benefits, pensions and import/export was first computerised.  Back then, everything was run in house.  Now, key things are run in house and others outsourced, and, eventually, dozens of partners will be involved.  If we had our time over again, I think we would have outsourced paper handling (because it was largely static and would eventually decline) and kept IT (because it constantly changed) and customer contact (because that’s essentially what government does, especially when the IT or the paper processing lets it down) in house.
Disaggregation hasn’t happened nearly as fast as many of us hoped, or, indeed, as many of us have worked for in the last few years.  But it is happening.    The side effects, the flaws, inertia, reinforcement and a dominance of “assurance” rather than “delivery” capability, mean it’s hard.

We need to poke and prod and encourage further experimentation. Suppliers need to make it easy to buy and integrate their services (recognising that even the cheapest commodity needs to be run and operated by someone). And when someone seems to take a short cut and extend a contract, or award to an existing supplier, we need to understand why, and where they are on their journey. Departments need to be far more transparent about their roadmap and plans to help that.

I want to give departments the benefit of the doubt here.  I don’t see them taking the easy way out; I have, indeed, seen some monumental cockups badged as efforts to disaggregate.    Staggering amounts of money – in all senses of the word (cash out the door, business stagnation, loss of potential benefits etc) – have been wasted in this effort.  That suggests a more incremental approach will work better, if not as well as we would all want.

That means that departments need to:

  1. Be more open about what their service provision landscape looks like two, three, four and five years out (with decreasing precision over time, not unreasonably). Coach the market so that the market can help, don’t just come to it when you think you are ready.
  2. Lay out the roadmap for legacy technology, which is what is holding back the increasing use of smaller suppliers, shorter contracts and more disaggregation.  There are three roadmap paths – everything goes exactly as you planned for and you meet all your deadlines (some would say this is the least likely), a few things go wrong and you fall a little behind, and it all goes horribly wrong and you need a lot more time to migrate away from legacy.  Departments generally consider only the first, though one or two have moved to the second. There’s an odd side effect of the spend control process – HMT requires optimism bias and so on to be included in any business case, spend controls normally strip that out, then departmental controls move any remaining contingency to the centre and hold it there, meaning projects are hamstrung by having no money (subject to approvals anyway) to deal with the inevitable challenges.
  3. Share what you are doing with modern projects – just what does your supplier landscape look like today?

G-Cloud – A Whole Lot of "G", Not Much "Cloud"

It’s been nearly two years since I last looked at G-Cloud expenditure – when the total spend crossed £1bn at the end of 2015.  Well, as of July 2017, spend reached a little under £2.5bn, so I figured it was time to look again.  I am, as always, indebted to Dan Harrison for his data analysis – his Tableau work is second to none and it, really, should be taken up by GDS and used as their default reporting tool (obviously they should hire Dan to do this for them).

As an aside, the raw data has been persistently poor and is not improving.  Date formats are mixed up, fields are missing, the recent change to combine lots means that there are some mixed up numbers and, interestingly, the project field has been removed – I’d looked at this before and queried whether many projects were actually cloud related (along with the fact that something like 20% of projects were listed as “null” – I can understand that it’s embarrassing having empty data, but removing the field doesn’t make the data qualitatively better, it just makes me think something is being hidden).

Recall this, from June 2014, for instance:

Scanning through the sales by line item, there are far too many descriptions that say simply “project manager”, “tester”, “IT project manager” etc.  There are even line items (not in Lot 4) that say “expenses – 4gb memory stick” – a whole new meaning to the phrase “cloud storage” perhaps.
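None of this needs Tableau to spot. A minimal sketch of the basic sanity checks (again with illustrative column names, since the real headers have changed several times) shows how quickly the problems surface:

```python
# A minimal sketch of the sanity checks the raw data needs before it's usable.
# Column names are illustrative; the real files have changed shape repeatedly.
import pandas as pd

df = pd.read_csv("gcloud_spend.csv", dtype=str)

# Mixed date formats: coerce what we can, count what we can't.
dates = pd.to_datetime(df["SalesMonth"], errors="coerce", dayfirst=True)
print("unparseable dates:", int(dates.isna().sum()))

# Missing fields: how much of each column is empty or literally "null"?
for col in df.columns:
    missing = df[col].isna() | df[col].str.strip().str.lower().eq("null")
    print(f"{col}: {missing.mean():.1%} missing")
```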

Here’s the graph of spend over the 5 1/2 years that G-Cloud has been around:

The main conclusions I reach are much the same as before:

– 77% of spend continues to be in “Cloud Support” (previously known as “Specialist Services”). It’s actually a little higher than that – now that PaaS and SaaS have been merged (to create a category of “Cloud Software”), Lot 4 has become Lot 3, but both categories are reported in the data. It’s early days for Cloud Software – it would be good if GDS cleaned up the data so that historic lots reflected current lots.

– 2017 spend looks like it will be slightly higher than 2016, but not by much.  If the idea was to move work from “People As a Service”, i.e. Cloud Support, to other frameworks, it’s not obvious that it’s happened in a meaningful way, but it may be damping spend a little.

– IaaS spend, now known as Cloud Hosting, has reached £205m. I seem to remember from the early days of the Crown Hosting Service business case that there were estimates that government spent some £400m annually on addressable hosting charges (i.e. systems that could be moved to the cloud).  At the moment Cloud Hosting is a reasonably flat £6m/month, or £70m/year. It’s very possible that there’s a 1:10 saving in cloud versus legacy, but everything in me says that much of this cloud hosting is new spend, not reduced spend following migration to the cloud.  That’s good in that it avoids a much higher old-style asset rich infrastructure, but I don’t think it shows much of a true migration to the cloud.

28% of spend by the top 5 customers.  


In the past I’ve looked at the top spending customers and top earning suppliers, specifically in Lot 4 (now a combination of Lot 4 and the new Lot 3).  There are a couple of changes here:

– Back then, for customers … Home Office, MoJ, DVLA, DSA and HMRC were the highest spending departments with around £150m between them.  Today … Home Office, MoJ, HMRC, Cabinet Office and DSA (DVLA dropped to 7th place) have spent nearly £800m (total spend across all lots by the top 5 customers is only £100m higher at £925m which shows the true dominance of support services at the top end).  £925m out of £2.5bn in just 5 customers.  £1.25bn (51%) is from the top 10 customers.

– And for suppliers, Mastek, Deloitte, Cap Gemini, ValTech and Methods were the top 5 with a combined revenue (again in Lot 4) of £67m.  Today it’s Equal Experts, Deloitte, Cap Gemini, BJSS and PA Consulting with revenue of £335m (total spend across all lots for the top 5 suppliers is £348m – that makes sense given few of the top suppliers are active across multiple lots – maybe Cap Gemini is the odd one out, getting some revenue for hosting or SaaS).  It takes the top 10 suppliers to make for 25% of the spend.  I don’t think that was the intention of G-Cloud – that it would be dominated by a small number of suppliers, though, at the same time, some of those companies – UKCloud (£64m) for instance – are still small companies and, without G-Cloud, might not exist or have reached such revenues if they did exist.

A couple of years ago I offered the observation that

“once a customer starts spending money with G-Cloud, they are more likely to continue than not. And once a supplier starts seeing revenue, they are more likely to continue to see it than not.”

That seems to be exactly the case, here’s a picture showing the departments who have contracts that have run for more than 24 months (and up to 50 months – nearly as long as G-Cloud has been around):

If anything, this is busier than might be expected given the preponderance of Lot 4 – it might be reasonable to expect that support services would be short term and focused on a specific project, such as migrating locally hosted email to Office 365 or to Gmail, or setting up the capability to manage cloud infrastructure.  What we see, instead, is many long term resource contracts.

What should we really conclude? And what can we do?

In 2012, with G-Cloud not even a year old, I asked whether it could ever be more than a hobby for government. I wondered about some interim targets (at the time the plan was for a “cloud first” approach with “50% of spend in the cloud” – that should all have happened by now). There is an absence of strategy, or overall plan, for further cloud realisation – with GDS neutered and spend controls licking their wounds from the NAO’s criticism that they spent far more time than they should have done looking at projects spending less than £1m, it’s not clear who will take up the mantle of driving the change away from long term contracts towards shorter, more cash intensive (as opposed to capital driven) contracts (be they with big or small suppliers). Perhaps it’s time for Chris Chant and Denise McDonagh to come back?
  • Should there be a spend control review of “Cloud Support” contracts to determine what they’re aiming to achieve, and then to assess whether there really has been a reduction in costs, a migration to the cloud, or a change in the contracting model for the service?  If we were to do a show of hands across departmental CIOs now and ask how many were running their email in the cloud (the true cloud, not one they’ve made up and badged as cloud that morning), what would the response be?  If we were to make it harder and ask about directory services (such as Active Directory), what would the answer be?  And if we were to look at historic Lot 4 spend and test how much had gone towards such migrations, what would the answer be?
  • What incentives could we put in place to encourage departments to make the move to cloud?  Departments have control over their budgets, of course, and lots of other things to spend the money on, but could we create a true central capability (key people drawn from departments and suppliers with a brief to build a cloud transition plan) that was architecture-agnostic and delivery-focused, that would support departments in the transition – and that would be accountable (and quite literally held to account) for delivering on the promise of cloud transition?  If that were in place, could departments focus on their legacy systems and how to move those to more flexible platforms, in readiness for future cloud moves (or future enhancements to cope with Brexit)?
  • What more could we do to encourage UK-based cloud companies (as opposed to overseas companies with UK bases) to excel?  Plainly they have to compete in a global market – and if I were a UK hosting company, I would be watching Amazon very closely and wondering whether I would still have a business in a few months – but that doesn’t mean we shouldn’t want to encourage a local capability across all lots.  What would those companies need to know to encourage them to invest in the services that will be needed in the future? How could that information be made available so that a level playing field was maintained?  Do we want to encourage such a capability in the UK, or should we publish the overall plans and transition maps and let the chips fall where they may?
  • Are there changes that need to be made to the procurement model so that every supplier can see what every department is looking for rather than the somewhat peculiar approach now where suppliers may not even know a department is looking to make a purchase?  What would that add to the timeline?  Would it result in better competition?  Would customers benefit as well as suppliers?  Could we try it and see – you know that whole alpha, beta, A/B testing thing?
GDS has long been quiet on grand, or indeed any, plans for transition to the cloud (and on many other things too).  Instead of a cloud-first strategy, it looks like we have contracts being extended and delays to existing projects. IR35 likely resulted in some unexpected cost savings as the headcount of contractors and interims reduced almost overnight, but it also meant that projects were suddenly understaffed and further delayed.
Energy and Vision
We need a re-injection of energy and vision into the government IT world.  Not one where the centre dictates and micro-controls every action departments want to take, resulting in lengthy processes, avoidance of spend that might be scrutinised and cancellations of, or delays to, projects that could make a difference … but one where the centre actively facilitates and helps drive the changes that departments want to make, measuring them for logical consistency against an overall architectural plan and transition map rather than getting theological about code standards or architectures.
A Strategy And A Plan
At the same time we need to recommit to a strategy and a plan for delivering that strategy.  In terms of the cloud that means:
– Setting a cloud transition goal.  In the same way that we have set a goal to give increased business to SMEs (which G-Cloud is underpinning), we should set an equivalent goal to move government to commodity, i.e. cloud-based, IT where it makes sense.  10% of the total budget (including Capex and Opex, or CDEL and RDEL if you prefer) in the first year, increasing from there to 25% within 2 years and 50% within 5 years, say (a rough sense of what that means in cash terms is sketched after this list).
– Reviewing the long (36-month-plus) contracts and testing them for value, current performance and overall delivery.  Are they supporting migration to the cloud?  Is the right framework being used (if it’s not cloud but it is delivering, then use the right framework or other procurement option)?  It doesn’t matter, in my view, whether it was valid in the first place or how the process was or wasn’t followed originally; it matters whether there is value today and whether there are better options that will support the overall plan.  If it’s not cloud, let’s not call it cloud, and let’s get to the root of what is really going on with commodity technology in government.
– Overwhelmingly adopting an architecture founded on multiple shared and secure infrastructures. There’s no need for a single architecture when the market provides so many commodity options – and spreading the business will foster innovation, increase the access points (and improve security by distributing data) and ensure that there is continued competitive tension.  Some of that infrastructure will be pure public cloud, some of it will be a shared government cloud (in the US, cloud providers maintain clones of their public infrastructure for federal government use – that may be one answer for specific areas; importantly, what I am not suggesting is that a department sets up its own infrastructure and calls it a cloud, though there may be specific instances, in the security services, say, where data classifications mean that’s the only option).
– Migrating all of government’s commodity services to the cloud.  Commodity means email, directories, collaboration, HR, finance, service support, asset management and so on.  This doesn’t have to be a wholesale “move now” approach, but one that looks at when it’s sensible to close down existing applications and make the move.  No new applications should be built or deployed without first assessing whether there is a cloud alternative – this is a perfect place for a spending team to look at who is doing what and act as a hub for sharing what is going on across central and local government.  
  • I’ve been on the record for a long time as saying government should recognise that it doesn’t collaborate with itself – having collaboration services inside the department’s own firewall isn’t collaboration, it’s talking to yourself.  I believe I even once suggested using a clone of Facebook for such collaboration.  Government doesn’t need lots of collaboration tools – it needs one or two that everyone, including suppliers and even customers, can get to, with appropriate segregation and reviews to make sure people can only see what they’re supposed to see.  Whatever happened to Civil Pages, I wonder?
– Putting in place a new test for Lot 3 (the old Lot 4) services to measure what is being purchased against its contribution to the department’s cloud migration strategy.  This is a “cloud first” test – are you really using this capability to help you move to the cloud?  What is the objective, and what are the milestones?  A follow-on test to see how delivery is progressing would then allow a regular state-of-the-cloud-nation report to be published, showing what is and isn’t moving.
– Working with local government, Devolved Administrations, the Health Service and others to see what they are doing in cloud.  With 84% of G-Cloud spend in central government, maybe the other folks are doing something different – maybe it’s good, maybe it’s not so good, but there are likely lessons to be learned.
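To give a rough sense of scale for the goal in the first bullet: government no longer publishes total IT spend, but the £10bn-£16bn per year estimate quoted later in this piece gives a range to work with. A back-of-the-envelope sketch (the budget figures are estimates and the percentages are the proposed targets, not policy):

```python
# Back-of-the-envelope only: the £10bn-£16bn total IT spend figure is an estimate,
# and the percentages are the proposed transition targets, not official policy.
budget_low, budget_high = 10_000, 16_000      # £m per year
targets = {"year 1": 0.10, "year 2": 0.25, "year 5": 0.50}

for year, share in targets.items():
    print(f"{year}: £{budget_low * share:,.0f}m to £{budget_high * share:,.0f}m of IT spend in the cloud")
```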

10 Years After 10 Years After

Strictly speaking, this is a little more than 10 years after the 10-year mark.  In late 2005, Public Sector Forums asked me to do a review of the first 10 years of e-government; in May 2006, I published that same review on this blog.  It’s now time, I think, to look at what has happened in the 10 years (or more) since that piece, reviewing, particularly, digital government as opposed to e-government.

Here’s a quick recap of the original “10 years of e-government” piece, pulling out the key points from each of the posts that made up the full piece:

Part 1 – Let’s get it all online

At the Labour Party conference in 1997, the Prime Minister had announced his plans for ‘simple government’ with a short paragraph in his first conference speech since taking charge of the country: 
“We will publish a White Paper in the new year for what we call Simple Government, to cut the bureaucracy of Government and improve its service. We are setting a target that within five years, one quarter of dealings with Government can be done by a member of the public electronically through their television, telephone or computer.”
Some time later he went further:
“I am determined that Government should play its part, so I am bringing forward our target for getting all Government services online, from 2008 to 2005”

It’s easy to pick holes in a strategy (or perhaps the absence of one) that’s resulted in more than 4,000 individual websites, dozens of inconsistent and incompatible services and a level of take-up that, for the most popular services, is perhaps 25% at best.
After all, in a world where most people have 10-12 sites they visit regularly, it’s unlikely even one of those would be a government site – most interactions with government are, at best, annual and so there’s little incentive to store a list of government sites you might visit. As the count of government websites rose inexorably – from 1,600 in mid-2002 to 2,500 a year later and nearly 4,000 by mid-2005 – citizen interest in all but a few moved in the opposite direction.
Over 80% of the cost of any given website was spent on technology – content management tools, web server software, the servers themselves – as technology buyers and their business unit partners became easy pickings for salesmen with two-car families to support. Too often, design meant flashy graphics, complicated pages, too much information on a page and confusing navigation.
Accessibility meant, simply, the site wasn’t.
In short, services were supply-led by the government, not demand-led by the consumer. But where was the demand? Was the demand even there? Should it be up to the citizen to scream for the services they want and, if they did, would they – as Henry Ford claimed before producing the Model T – just want ‘faster horses’, or more of the same they’d always had performed a little quicker? 
We have government for government, not government for the citizen. With so many services available, you’d perhaps think that usage should be higher. Early on, the argument was often made (I believe I made it too) that it wasn’t worth going online just to do one service – the overhead was too high – and that we needed to have a full range of services on offer – ones that could be used weekly and monthly as well as annually. That way, people would get used to dealing online with government and we’d have a shot at passing the ‘neighbour test’ (i.e. no service will get truly high usage until people are willing to tell their neighbour that they used, say, ‘that new tax credits service online’ and got their money in 4 days flat, encouraging their friends to do likewise).
A new plan
 • Rationalise massively the number of government websites. In a 2002 April Fool email sent widely around government, I announced the e-Envoy’s department had seized control of government’s domain name registry and routed all website URLs to UKonline.gov.uk and was in the process of moving all content to that same site. Many people reading the mail a few days later applauded the initiative. Something similar is needed. The only reason to have a website is if someone else isn’t already doing it. Even if someone isn’t, there’s rarely a need for a new site and a new brand for every new idea.
• Engage forcefully with the private sector. The banks, building societies, pension and insurance companies need to tie their services into those offered by government. Want a pension forecast? Why go to government – what you really want to know is how much will you need to live on when you’re 65 (67?) and how you’ll put that much money away in time. Government can’t and won’t tell you that. Similarly, authentication services need to be provided that can be used across both public and private sectors – speeding the registration process in either direction. With Tesco more trusted than government, why shouldn’t it work this way? The Government Gateway, with over 7 million registered users, has much to offer the private sector – and they, in turn, could accelerate the usage of hardware tokens for authentication (to rid us of the problems of phishing) and so on.
• Open up every service. The folks at mySociety, Public Whip and theyworkforyou.com have shown what can be done by a small, dedicated (in the sense of passionate) team. No-one should ever need to visit the absurdly difficult-to-use Hansard site when it’s much easier through the services these folks have created. Incentives should be created for small third parties to offer services.
• Build services based on what people need to do. We know every year there are some 38 million tax discs issued for cars and that nearly everyone shows up at a post office with a tax disc, insurance form and MOT. For years, people in government have been talking about insurance companies issuing discs – but it still hasn’t happened. Bring together disparate services that have the same basic data requirements – tax credits and child benefit, housing benefit and council tax benefit etc.
• Increase the use of intermediaries. For the 45% of people who aren’t using the Internet and aren’t likely to any time soon, web-enabled services are so much hocus pocus. There needs to be a drive to take services to where people use them. Andrew Pinder, the former e-Envoy, used to talk about kiosks in pubs. He may have been speaking half in jest, but he probably wasn’t wrong. If that’s where people in a small village in Shropshire are to be found (and with Post Offices diminishing, it’s probably the only place to get access to the locals), that’s where the services need to be available. Government needs to be in the wholesale market if it’s to be efficient – there are far smarter, more fleet of foot retail providers that can deliver the individual transactions.
• Clean up the data. One of the reasons why government is probably afraid to join up services is that they know the data held on any given citizen is wildly out of date or just plain wrong. Joining up services would expose this. When I first took the business plan for the Government Gateway to a minister outside the Cabinet Office, this problem was quickly identified and seen as a huge impediment to progress.

More to come.

The Billion Pound G-Cloud

Sometime in the next few weeks, spend through the G-Cloud framework will cross £1 billion. Yep, a cool billion. A billion here and a billion there and pretty soon you’re talking real money.

Does that mean G-Cloud has been successful? Has it achieved what it was set up for? Has it broken the mould? I guess we could say this is a story in four lots.

Well, that depends:

1) The Trend

Let’s start with this chart showing the monthly spend since inception.

It shows 400-fold growth since day one, but spend looks pretty flat over the last year or so, despite that peak 3 months ago. Given that this framework had a standing start, for both customers and suppliers, it looks pretty good. It took time for potential customers (and suppliers) to get their heads round it. Some still haven’t. And perhaps that’s why things seem to have stalled?

Total spend to date is a little over £903m. At roughly £40m a month (based on the November figures), £1bn should be reached before the end of February, maybe sooner. And then the bollard budget might swing into action and we’ll see a year-end boost (contrary to the principles of pay-as-you-go cloud services though that would be).
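The projection is simple arithmetic from the figures above; here’s a quick sketch of it, for anyone who wants to rerun it as new months of data appear:

```python
# Simple run-rate projection using the figures quoted above.
total_to_date = 903        # £m spent through G-Cloud so far
monthly_run_rate = 40      # £m per month, based on the November figures

months_to_billion = (1000 - total_to_date) / monthly_run_rate
print(months_to_billion)   # roughly 2.4 months, i.e. before the end of February
```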

Government no longer publishes total IT spend figures but, in the past, it’s been estimated to be somewhere between £10bn and £16bn per year. G-Cloud’s annual spend, then, is a tiny part of that overall spend. G-Cloud fans have, though, suggested that £1 spent on G-Cloud is equivalent to £10 or even £50 spent the old way – that may be the case for hosting costs, but it certainly isn’t the case for Lot 4 costs (though I am quite sure there has been some reduction in rates simply from the real innovation that G-Cloud brought – transparency on prices).

2) The Overall Composition

Up until 18 months ago, I used to publish regular analysis showing where G-Cloud spend was going. The headline observation then was that some 80% was being spent in Lot 4 – Specialist Cloud Services, or perhaps Specialist Consultancy Services. To date, of our £903m, some £715m, or 79%, has been spent through Lot 4 (the red bars on the chart above). That’s a lot of cloud consultancy.

 
(post updated 19th Jan 2016 with the above graph to show more clearly the percentage that is spent on Lot 4).
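That 79% is straightforward to recompute from the published spend data as new months are added. A sketch, with assumed column names (‘Lot’, ‘EvidencedSpend’), not the real schema:

```python
import pandas as pd

# Sketch: each lot's share of total evidenced spend through G-Cloud.
# Column names are assumptions about the published CSV, used for illustration.
spend = pd.read_csv("gcloud_spend.csv")

by_lot = spend.groupby("Lot")["EvidencedSpend"].sum()
shares = (by_lot / by_lot.sum() * 100).round(1).sort_values(ascending=False)
print(shares)   # expect Lot 4 (Specialist Cloud Services) at around 80%
```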

With all that spent on cloud consultancy, surely we would see an increase in spend in the other lots? Lot 4 was created to give customers a vehicle to buy expertise that would explain to them how to migrate from their stale, high-capital, high-cost legacy services to sleek, shiny, pay-as-you-go cloud services.

Well, maybe. Spend on IaaS (the blue bars), or Lot 1, is hovering around £4m-£5m a month, though it has increased substantially from the early days. Let’s call it £60m/year at the current run rate (we’re at £47m now) – if it hits that number it will be double the spend last year, good growth for sure, and that IaaS spend has helped create some new businesses from scratch. But they probably aren’t coining it just yet.

Perhaps the Crown Hosting Service has, ummm, stolen the crown and taken all of the easy business. Government apparently spends £1.6bn per year on hosting, with £700m of that on facilities and infrastructure, and the CHS was predicted to save some £530m of that once it was running (that looks to be a saving through the end of 2017/18 rather than an annual saving). But CHS is not designed for cloud hosting, it’s designed for legacy systems – call it the Marie Celeste, or the Ship of the Doomed. You send your legacy apps there and never have to move them again – though, ideally, you migrate them to cloud at some point. We had a similar idea to CHS back in 2002, called True North; it ended badly.

A more positive way to look at this is that Government’s hosting costs would have increased if G-Cloud wasn’t there – so the £47m spent this year would actually have been £470m or even £2.5bn if the money had been spent the old way. There is no way of knowing, of course – it could be that much of this money is being spent on servers that are idling because people spin them up but don’t spin them down; it could be that more projects are underway at the same time than was previously possible because the cost of hosting is so much lower.

But really, G-Cloud is all about Lot 4. A persistent and consistent 80% of the monthly spend is going on people, not on servers, software or platforms. PaaS may well be People As A Service as far as Lot 4 is concerned.

3) Lot 4 Specifically

Let’s narrow Lot 4 down to this year only, so that we are not looking at old data. We have £356m of spend to look at, 80% of which is made by central government. There’s a roughly 50/50 split between small and large companies – though I suspect one or two previously small companies have now become very much larger since G-Cloud arrived (though on these revenues, they have not yet become “large”).

If we knew which projects that spend had been committed to, we would soon know what kind of cloud work government was doing, right?

Sadly, £160m is recorded against “Project Null”. Let’s hope it’s successful; there’s a lot of cash riding on it not becoming void too.
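Checking how much spend carries no project attribution is a one-liner once you have the data. A sketch, assuming (hypothetically) a ‘Project’ column alongside ‘Lot’ and ‘EvidencedSpend’:

```python
import pandas as pd

# Sketch: how much Lot 4 spend has no meaningful project recorded against it?
# 'Project', 'Lot' and 'EvidencedSpend' are assumed column names, for illustration.
spend = pd.read_csv("gcloud_spend.csv")
lot4 = spend[spend["Lot"] == "Specialist Cloud Services"]

project = lot4["Project"].fillna("").str.strip().str.lower()
unattributed = lot4[project.isin(["", "null", "project null"])]

print(unattributed["EvidencedSpend"].sum())                            # spend with no real project
print(lot4.groupby("Project")["EvidencedSpend"].sum().nlargest(10))    # top named projects
```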

Here are the Top 10 Lot 4 spenders (for this calendar year to date only):

 
 And the Top 10 suppliers:


Cloud companies? Well, possibly. Or perhaps, more likely, companies with available (and, obviously, agile) resource for development projects that might, or might not, be deployed to the cloud. It’s also possible that all of these companies are breaking down the legacy systems into components that can be deployed into the cloud starting as soon as this new financial year; we will soon see if that’s the case.

To help understand what is most likely, here’s another way of looking at the same data. This plots the length of an engagement (along the X-axis) against the total spend (Y-axis) and shows a dot with the customer and supplier name.
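For anyone wanting to redraw this kind of chart from the raw data, here’s a minimal sketch of the idea. The column names are assumptions, and the labelling is limited to the largest relationships so the chart stays readable:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Sketch: engagement length (months) vs. total spend per customer/supplier pair.
# 'Month', 'CustomerName', 'SupplierName', 'EvidencedSpend' are assumed column names.
spend = pd.read_csv("gcloud_spend.csv", parse_dates=["Month"])

rel = spend.groupby(["CustomerName", "SupplierName"]).agg(
    first=("Month", "min"), last=("Month", "max"), total=("EvidencedSpend", "sum"))
rel["months"] = ((rel["last"].dt.year - rel["first"].dt.year) * 12
                 + (rel["last"].dt.month - rel["first"].dt.month) + 1)

plt.scatter(rel["months"], rel["total"], s=10)
for (customer, supplier), row in rel.nlargest(15, "total").iterrows():
    plt.annotate(f"{customer} / {supplier}", (row["months"], row["total"]), fontsize=7)
plt.xlabel("Engagement length (months)")
plt.ylabel("Total spend (£)")
plt.show()
```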

A cloud-related contract under G-Cloud might be expected to be short and sharp – a few months, perhaps, to understand the need, develop the strategy and then ready it for implementation. With G-Cloud contracts lasting a maximum of two years, you might expect to see no relationship last longer than twenty-four months.

But there are some big contracts here that appear to have been running for far longer than twenty-four months. And, whilst it’s very clear that G-Cloud has enabled far greater access to SME capability than any previous framework, there are some old familiar names here.

4) Conclusions

G-Cloud without Lot 4 would look far less impressive, even if the spend it is replacing was 10x higher. It’s clear that we need:

– Transparency. What is the Lot 4 spend going to?

– Telegraphing of need.  What will government entities come to market for over the next 6-12 months?

– Targets.  The old target was that 50% of new IT spend would be on cloud.  Little has been said about that in a long time.  Little has, in fact, been said about plans.  What are the new targets?

Most of those points are not new – I’ve said them before, for instance in a previous post about G-Cloud as a Hobby and also here about how to take G-Cloud Further Forward.

In short, Lot 4 needs to be looked at hard – and government needs to get serious about the opportunity that this framework (which broke new ground at inception but has been allowed to fester somewhat) presents for restructuring how IT is delivered.

Acknowledgements

I’m indebted, as ever, to Dan Harrison for taking the raw G-Cloud data and producing these far simpler-to-follow graphs and tables. I maintain that GDS should long ago have hired him to do their data analysis. I’m all for open data, but without presentation, the consequences of the data go unremarked.

Performance Dashboard July 2003 – The Steep Hill of Adoption

With gov.uk’s Verify appearing on the Performance Dashboard for the first time, I was taken all the way back to the early 2000s when we published our own dashboards for the Government Gateway, Direct.gov.uk and our other services.  Here’s one from July 2003 – there must have been earlier ones but I don’t have them to hand:

This is the graph that particularly resonated:

With the equivalent from back then being:

After 4 years of effort on the Identity programme (now called Verify), the figures make pretty dismal reading – low usage, low ability to authenticate first time, a low number of services using it – but, you know what, the data is right there for everyone to see, and it’s plain that no one is going to give up on this, so gradually the issues will be sorted, people will authenticate more easily and more services will be added.  It’s a very steep hill to climb though.

We started the Gateway with just the Inland Revenue, HM Customs and MAFF (all department names that have long since fallen away) – and adding more was a long and painful process.  So I feel for the Verify team – I wouldn’t have approached things the way they have, but it’s for each iteration to pick its path.  There were, though, plenty of lessons to learn that would have made things easier.

There is, though, a big hill to climb for Verify.  It will be interesting to watch.

Mind The Gaps – Nothing New Under The Sun

As we start 2015, a year when several big contracts are approaching their end dates and replacement solutions will need to be in place, here’s a presentation I gave a couple of times last year looking at the challenges of breaking up traditional, single prime IT contracts into potentially lots of smaller, shorter contracts:

G-Cloud By The Numbers (To End June 2014)

With Dan’s Tableau version of the G-Cloud spend data, interested folks need never download the CSV file provided by Cabinet Office ever again.  Cabinet Office should subcontract all of its open data publication work to him.

The headlines for G-Cloud spend to the end of June 2014 are:

– No news on the split between lots.  80% of spend continues to be in Lot 4, Specialist Cloud Services

– 50% of the spend is with 10 customers, 80% is with 38 customers

– Spend in June was the lowest since February 2014.  I suspect that is still an artefact of the year-end budget clearout boost (and perhaps of some effort to move spend out of Lot 4 onto other frameworks)

– 24 suppliers have 50% of the spend, 72 have 80%.  So relatively concentrated customer spend is being spread across a wider group of suppliers.  That can only be a good thing (a quick way to compute this kind of concentration from the raw data is sketched after this list)

– 5 suppliers have invoiced less than £1,000. 34 less than £10,000

– 10 customers have spent less than £1,000. 122 less than £10,000.  How that squares with the bullet immediately above, I’m not sure

– 524 customers (up from 489 last month) have now used the framework, commissioning 342 suppliers.  80% of the spend is from central government (unsurprising, perhaps, given the top 3 customers – HO, MoJ, CO – account for 31% of the spend)

– 36 customers have spent more than £1m.  56 suppliers have billed more than £1m (up from 51).  This time next year, Rodney, we’ll be millionaires.

– Top spending customers stay the same but there’s a change in the top 3 suppliers (BJSS, Methods stay the same and Equal Experts squeaks in above IBM to claim the 3rd spot)
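For those who prefer to check the concentration figures themselves rather than trust my arithmetic, here’s a minimal sketch against the raw spend data (column names are assumptions, as throughout):

```python
import pandas as pd

# Sketch: smallest number of customers/suppliers accounting for 50% and 80% of spend.
# 'CustomerName', 'SupplierName', 'EvidencedSpend' are assumed column names.
spend = pd.read_csv("gcloud_spend.csv")

def entities_for_share(df, column, share):
    """How many entities, largest first, are needed to reach the given share of spend?"""
    totals = df.groupby(column)["EvidencedSpend"].sum().sort_values(ascending=False)
    cumulative = totals.cumsum() / totals.sum()
    return int((cumulative < share).sum()) + 1

for column in ["CustomerName", "SupplierName"]:
    print(column, entities_for_share(spend, column, 0.5), entities_for_share(spend, column, 0.8))
```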

One point I will venture, though not terribly well researched, is that once a customer starts spending money with G-Cloud, they are more likely to continue than not.  And once a supplier starts seeing revenue, they are more likely to continue to see it than not.  So effort on the first sale is likely to be rewarded with continued business.