Getting A Grip: Special Report on the AWS Special Report

Late last week a seemingly comprehensive takedown of Amazon, titled “Amazon’s extraordinary grip on British data”, appeared in the Telegraph, written by Harry de Quetteville.

Read quickly it would suggest that Amazon, through perhaps fair and foul means, has secured too great a share of UK Government’s cloud business and that this poses an increasingly systemic risk to digital services and, inevitably, to consumer data.

Read more slowly, the article brings together some old allegations and some truths, joining them so as to get to the point where I ask “ok, so what do you want to do about it?”, but it doesn’t suggest any particular action. That’s not to say that there’s no need for action, just that this isn’t the place to find the argument.

The main points of the Telegraph’s case are seemingly based on “figures leaked” (as far as I know, all of this data is public) to the newspaper:

  • Amazon doesn’t pay tax (figures from 2018 are quoted showing it paid £10m of tax on £1.9bn of revenues, using offshore (Luxembourg) vehicles). For comparison, the article says, AWS apparently sold £15m of cloud services to HMRC.
  • There is a “revolving door” where senior civil servants move to work for Amazon “within months of overseeing government cloud contracts.” Three people are referenced, Liam Maxwell (former Government deputy CIO and CTO), Norman Driskell (Home Office CDO) and Alex Holmes (DD Cyber at DCMS).
  • Amazon lowballs prices which then spiral … and “even become a bar to medical research.” This is backed up by a beautifully done Amazon-smile graphic showing that DCLG signed a contract in 2017 estimated at £959,593 that turned out to cost £2,611,563 (an uplift of 172% – see the quick check after this list).
  • There is a government bias towards AWS giving it “an unfair competitive advantage that has deprived British companies of contracts and cost job[s]”.
  • A neat infographic says that “1/3 of government information is stored on AWS (including sensitive biometric details and tax records)” and that 80% of cloud contracts are “won by large firms like AWS”.
  • Amazon’s “leading position with … departments like the Home Office, DWP, Cabinet Office, NHS Digital and the NCA” is also entrenched.
  • Figures obtained by the Sunday Telegraph suggest that AWS has captured more than a third of the UK public sector market with revenues of more than £100m in the last financial year.
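
A quick sanity check on that DCLG uplift figure, using only the two contract values quoted in the article (a minimal sketch, nothing more):

```python
# Contract values as quoted in the Telegraph piece (see the DCLG bullet above)
estimated = 959_593      # 2017 estimate, £
actual = 2_611_563       # reported eventual cost, £

uplift = (actual - estimated) / estimated
print(f"Uplift: {uplift:.0%}")   # ~172%, which matches the figure quoted
```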

Let’s start by setting out the wider context of the cloud market:

  • AWS is a fast growing business, roughly 13% of Amazon’s total sales (as of fiscal Q1 2019). Just 15 years old, it has quickly come to represent the bulk of Amazon’s profits (and is sometimes the only part of Amazon that is in profit – though Amazon would say that they choose not to make the retail business profitable, preferring to reinvest).
  • Microsoft’s Azure is regularly referred to as a smaller, but faster growing business than AWS. Google is smaller still. It’s hard to be sure though – getting like for like comparisons is difficult. AWS’ revenues in Q2 2019 were $7.7bn, and Microsoft’s cloud (which includes Office 365 and other products) had $9.6bn in revenues. AWS’ growth rate was 41%, Azure’s was 73% – both rates are down year on year. Google’s cloud (known as GCP) revenue isn’t broken out separately but is included in a line that includes G-Suite, Google Play and Nest, totalling $5.45bn, up 25%
  • Amazon, as first mover, has built quite the lead, with various figures published, including those in the Telegraph article, suggesting it has as much as 50% of the nascent cloud market. Other sources quote Azure at between 22 and 30% and Google at less than 10%.

There’s an almost “by the by” figure quoted that I can’t source, where Lloyd’s of London apparently said that “even a temporary shutdown at a major cloud provider like AWS could wreak almost $20bn in business losses.” The Lloyd’s report I downloaded says:

  • A cyber incident that took a “top three cloud provider” offline in the US for 3-6 days would cost between $6.9bn and $14.7bn (much of which is uninsured, with insured losses running $1.5-2.8bn)

What’s clear from all of the figures is that the cloud market is expanding quickly, that Amazon has seized a large share of that market but is under pressure from growing rivals, and that there is an increasing concentration of workloads deployed to the cloud.

It’s also true that governments generally, but particularly UK government, are a long way from a wholesale move to the cloud, with few front line, transactional services deployed. Most of those services are still stuck in traditional data centres, anchored by legacy systems that are slow to change and that will resist, for years to come, a move to a cloud environment. Instead, work will likely be sliced away from them, a little at a time, as new applications are built and the various transformation projects see at least some success.

The Crux

When the move to cloud started, government was still clinging to the idea that its data somehow needed protection beyond that used by banks, supermarkets and retailers. There was a vast industry propping up the IL3 / Restricted classification (where perhaps 75-80% of government data sat, mostly emails asking “what’s for lunch?”). This classification made cloud practically impossible – IL3 data could not sit on the same servers or storage as lower (or higher) classified data, it needed to be in the UK and secured in data centres that Tom Cruise and the rest of the Mission Impossible team couldn’t get into. Let’s not even get into IL4. And, yes, I recognise that the use of IL3 and IL4 in regards to data isn’t quite right, but it was by far the most used way of referring to that data.

Then, in 2014, after some years of work, government made a relatively sudden, and dramatic, switch. 95% of data was “Official” and could be handled with commercial products and security. A small part was “Official Sensitive” which required additional handling controls, but no change in the technical environment.

And so the public cloud market became a viable option for government systems – all of them, not just websites and transactional front ends but potentially anything that government did (that didn’t fall into the 5% of things that are secret and above).

Government was relatively slow to recognise this – after all, there was a vast army of people who had been brought up to think about data in terms of the “restricted” classification, and such a seismic change would take time. There are still some departments that insist on a UK presence, but there are many who say “official is official” and anywhere in the UK is fine.

It was this, more than anything, that blew the doors off the G-Cloud market. You can see the rise in Lot 1/IaaS cloud spend from April 2014 onwards. That was not just broad awareness of cloud as an option, but the recognition that the old rules no longer applied.

The UK’s small and medium companies had built infrastructures based around the IL3 model. It was more expensive, took longer, and forced them through the formal accreditation model. Few made it through; only those with strong engineering standards, good process discipline and, perhaps, relatively deep pockets. But once “official” came along, much of that work was over the top, driving cost and overhead into the model, and it wasn’t enough of a moat to keep the scale players out.

TL;DR

I’ve let contracts worth several hundred million pounds in total and worked with people who have done 5, 10 or 20x that amount. I’ve never met anyone in government who bought something because of a relationship with a former colleague or because of any bias for or against any supplier. Competition is fearsome. Big players can outspend small players. They can compete on price and features. Small players can still win. Small players can become big players. Skate where the puck is going, not where it was.

How does a government department choose a cloud provider?

Whilst the original aim of G-Cloud was to be able to type in a specification of what was wanted and have the system spit out some costs (along with iTunes-style reviews), the reality is that getting a quote is more complicated than that. The assumption, then, was perhaps that cloud services would be a true commodity, paying by the minute, hour or day for servers, storage and networks. That largely isn’t the case today.

There are three components to a typical evaluation:

1) How much will it cost?

2) What is the range of products that I can deploy and how easily can I make that happen? Is the supplier seen by independent bodies as a leader or a laggard?

3) Do I, or my existing partners, already have the skills needed to manage this environment?

Most customers will likely start with (3), move to (2) and then evaluate (1) for the suppliers that make it through.
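
As a purely illustrative sketch of that 3 → 2 → 1 order, with invented provider names, ratings and prices (not any real evaluation):

```python
# Hypothetical shortlisting flow: skills first, then analyst standing, then price.
candidates = [
    {"name": "Provider A", "skills_available": True,  "analyst_leader": True,  "est_monthly_cost": 52_000},
    {"name": "Provider B", "skills_available": True,  "analyst_leader": True,  "est_monthly_cost": 48_000},
    {"name": "Provider C", "skills_available": False, "analyst_leader": True,  "est_monthly_cost": 41_000},
]

# (3) Do we, or our existing partners, have the skills to run it?
shortlist = [c for c in candidates if c["skills_available"]]
# (2) Is the supplier seen by independent bodies as a leader?
shortlist = [c for c in shortlist if c["analyst_leader"]]
# (1) Of those left standing, which costs least?
shortlist.sort(key=lambda c: c["est_monthly_cost"])

print([c["name"] for c in shortlist])  # cheapest qualifying provider first
```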

Is there a bias here? With AWS having close to 50% market share of the entire cloud market, the market will be full of people with AWS skills, followed closely by those with Azure skills (given the predominance of Microsoft environments for e.g. Active Directory, email etc in government). Departments will look at their existing staff, or that of their suppliers, or who they can recruit, and pick their strategy based on the available talent.

Departments will also look at Gartner, or Forrester, and see who is in the lead. They will talk to a range of supplier partners and see who is using what. They will consult their peers and see who is doing what.

But there’s no bias against, or for, any given supplier. We can see that when we read about companies who have been hauled over the coals by one department and the very next week they get a new contract from a different department. Don’t read conspiracy into anything government ever does; it’s far more likely to be cockup.

Is there a revolving door?

People come into government from the outside world and people leave government to go to the outside world. In the mid-2000s there was a large influx of very senior Accenture people joining government; did Accenture benefit? If anything, they probably lost out as the newcomers were overcautious rather than overzealous.

Government departments don’t choose a provider because a former colleague or Cabinet Office power broker is employed by the supplier. As anywhere, relationships persist for a period – not as long as you would think – and so some suppliers are better able to inform potential customers of the range of their offer, but this is not a simple relationship. Some people are well liked, some are well respected and some are neither. There are 17,000 people in government IT. They all play a role. Some will stay, some will go. Some make decisions, some don’t.

Also, a bid informed by a former colleague could be better written than one uninformed. This advantage doesn’t last beyond a few weeks. I’ve worked on a lot of bids (both as buyer and seller) and I’m still amazed how many suppliers fail to answer the question, don’t address the scoring criteria, or waffle away beyond the word count. If you’ve been a buyer, you will likely be able to teach a supplier how to write a bid; but there are any number of people who can do that.

There is little in the way of inside information about what government is or isn’t doing or what its strategy will look like. Spend a couple of hours with an architect or bid manager in any Systems Integrator that has worked for several departments and you will know as much about government IT strategy as anyone on the inside.

Do costs escalate (and are suppliers lowballing)?

Once a contract is signed, and proved to be working, it would be unusual if more work was not put through that same contract.

What’s different about cloud is mostly a function of the shift from capex to opex. Servers largely sit there and rust. The cost is the cost. Maybe they’re 10% used for most of their lives, with occasional higher spikes. But the cost for them doesn’t change. Any fluctuations in power are wrapped into a giant overhead number that isn’t probed too closely.

Cloud environments consume cash all the time though. Spin up a server and forget to spin it down and it will cost you money. Fire up more capacity than you need, and it will cost you money. Set up a development environment for a project and, when the project start is delayed by governance questions, don’t spin it down, and it will cost you money. Plan for more capacity than you needed and don’t dynamically adjust it, and it will cost you money. Need some more security, that’s extra? Different products, that’s more as well. If you don’t know what you need when you set out, it will certainly cost more than you expected when you’re done.
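
To make that concrete, here is a minimal sketch of how an idle environment quietly accrues cost; the rates and sizes are hypothetical, not any particular provider’s pricing:

```python
# Hypothetical development environment left running while the project waits on governance.
HOURS_PER_MONTH = 24 * 30

servers = 8                  # VMs nobody remembered to spin down
hourly_rate = 0.35           # assumed cost per VM per hour, £
storage_tb = 2
storage_rate_per_tb = 90     # assumed cost per TB per month, £

monthly_cost = servers * hourly_rate * HOURS_PER_MONTH + storage_tb * storage_rate_per_tb
print(f"Roughly £{monthly_cost:,.0f} a month for an environment doing nothing")
# 8 * 0.35 * 720 + 180 = £2,196/month - small per project, not small across a department
```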

Many departments will have woken up to this new cost model when they received their first bill and it was 3x or 5x what they expected. Cost disciplines will then have been imposed, probably unsuccessfully. Over time, these will be improving, but there are still going to be plenty of cases of sticker shock, both for new and existing cloud customers, I’m sure.

But if the service is working, more projects will be put through the same vehicle, sometimes with additional procurement checks, sometimes without. The Inland Revenue’s original contract with EDS was valued, in 1992, at some £200m/year. 10 years later it was £400m and not long after that, with the addition of HMCE (to form HMRC), and the transition to CapGemini, it was easily £1bn.

Did EDS lowball the cost? Probably. And it probably hurt them for a while until new business began to flow through the contract – in 1992, the IR did not have a position on Internet services, but as it began to add them in the late 90s, its costs would have gone up, without offsetting reductions elsewhere.

Do suppliers lowball the cost today? Far less so, because the old adage “price it low and make it up on change control” is difficult to pull off now: with unit costs available and many services or goods bought at a unit cost rate, it would be difficult to pull the wool over a buyer’s eyes.

Is tax paid part of the evaluation?

For thirty years until the cloud came along, most big departments relied on their outsourced suppliers to handle technology – they bought servers, cabled them up, deployed products, patched them (sometimes) and fed and watered them. Many costs were capitalised and nearly everything was bought through a managed services deal because VAT could be reclaimed that way.

Existing contracts were used because it avoided new procurements and ensured that there was “one throat to choke”, i.e. one supplier on the hook for any problems. Most of these technology suppliers were (and are) based outside of the UK and their tax affairs are not considered in the evaluation of their offers.

HMRC, some will recall, did a deal with a property company registered in Bermuda, called Mapeley, that doesn’t pay tax in the UK.

Tax just isn’t part of the evaluation, for any kind of contract. Supplier finances are – that is, the ability of a company to scale to support a government customer, or to withstand the loss of a large customer.

Is 1/3rd of government information stored in AWS?

No. Next question.

IaaS expenditure is perhaps £10-12m/month (through end of 2018). Total government IT spend, as I’ve covered here before, is somewhere between £7bn and £14bn/year. In the early days of the Crown Hosting business case, hosting costs were reckoned to be up to 25% of that cost. Some 70% of the spend is “keep the lights on” for existing systems.
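
A back-of-the-envelope check using just the figures above (all approximate, as stated):

```python
# IaaS spend versus total government IT spend, using the ranges quoted above.
iaas_monthly = (10e6, 12e6)       # £10-12m/month through end of 2018
total_it_annual = (7e9, 14e9)     # £7bn-£14bn/year total IT spend

iaas_annual = (iaas_monthly[0] * 12, iaas_monthly[1] * 12)   # ~£120m-£144m/year
share_low = iaas_annual[0] / total_it_annual[1]              # most pessimistic share
share_high = iaas_annual[1] / total_it_annual[0]             # most optimistic share

print(f"Cloud IaaS is roughly {share_low:.1%} to {share_high:.1%} of IT spend")
# ~0.9% to ~2.1% of spend - spend isn't data, but it's nowhere near a third of anything
```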

Most government data is still stored on servers and storage owned by government or its integrators and sits in data centres, some owned by government, but most owned by those integrators. Web front ends, email, development and test environments are increasingly moving to the cloud, but the real data is still a long way from being cloud ready.

Are 80% of contracts won by large providers?

Historically, no. UKCloud revenues over the life of G-Cloud are £86m with AWS at around £63m (through end of 2018). AWS’ share is plainly growing fast though – because of skills in the marketplace, independent views of the range of products and supportability, and because of price.

Momentum suggests that existing contracts will get larger and it will be harder (and harder) for contracts to move between providers, because of the risk of disruption during transition, the lack of skill and the difficulty of making a benefits case for incurring the cost of transition when the savings probably won’t offset that cost.
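
A minimal sketch of that benefits-case arithmetic, with entirely hypothetical numbers, shows why transitions rarely get approved:

```python
# Hypothetical transition business case: does the saving pay back within the contract?
transition_cost = 400_000     # assumed one-off cost to migrate, re-test and re-assure, £
current_monthly = 60_000      # assumed spend with the incumbent provider, £
new_monthly = 50_000          # assumed spend with the cheaper provider, £
contract_months = 24          # G-Cloud call-off contracts run for two years

monthly_saving = current_monthly - new_monthly
payback_months = transition_cost / monthly_saving
net_over_contract = monthly_saving * contract_months - transition_cost

print(f"Payback in {payback_months:.0f} months; net over the contract: £{net_over_contract:,}")
# Payback in 40 months against a 24-month contract - the case fails, so nobody moves.
```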

So what should we do?

It’s easy to say “nothing.” Government doesn’t pick winners and has rarely been successful in trying to skew the market. The cloud market is still new, but growing fast, and it’s hard to say whether today’s winners will still be there tomorrow.

G-Cloud contracts last only two years and, in theory, there is an opportunity to recompete then – see what’s new in the market, explore new pricing options and transition to the new best in class (or Most Economically Advantageous Tender as it’s known).

But transition is hard, as I wrote here in March 2014. And see this one, talking about mobile phones, from 2009 (with excerpts from a 2003 piece). If services aren’t designed to transition, then it’s unlikely to ever happen.

That suggests that we, as government customers, should:

1) Consciously design services to be portable, recognising that will likely increase costs up front (which will make the business case harder to get through), but that future payback could offset those costs; if the supplier knows you can’t transition, you’re in a worse position than if you have choices

2) Build tools and capabilities that support multiple cloud environments so that we can pick the right cloud for the problem we are trying to solve (see the sketch after this list). If you have all of your workloads in one supplier and in one region, you are at risk if there is a problem there, be it fat fingers or a lightning strike.

3) Train our existing teams and keep them up to date with new technologies and services. Encourage them to be curious about what else is out there. Of course they will be more valuable to others, including cloud companies, when you do this, but that’s a fact of life. You will lose people (to other departments and to suppliers) and also gain people (from other departments and from suppliers).
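
A minimal sketch of what “designed to be portable” can mean in practice: application code written against a small storage interface of our own rather than any one provider’s SDK. The interface and the local adapter below are illustrative inventions; real S3, Azure or GCS adapters would implement the same three methods using those providers’ SDKs.

```python
from pathlib import Path
from typing import Protocol

class ObjectStore(Protocol):
    """The only storage operations application code is allowed to use."""
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...
    def delete(self, key: str) -> None: ...

class LocalStore:
    """Runnable stand-in; an S3Store / AzureStore / GCSStore adapter would expose the same methods."""
    def __init__(self, root: str) -> None:
        self.root = Path(root)

    def _path(self, key: str) -> Path:
        p = self.root / key
        p.parent.mkdir(parents=True, exist_ok=True)
        return p

    def put(self, key: str, data: bytes) -> None:
        self._path(key).write_bytes(data)

    def get(self, key: str) -> bytes:
        return self._path(key).read_bytes()

    def delete(self, key: str) -> None:
        self._path(key).unlink()

def archive_case_file(store: ObjectStore, case_id: str, document: bytes) -> None:
    # Application code depends only on ObjectStore, so changing provider means
    # writing one new adapter, not rewriting every service that touches storage.
    store.put(f"cases/{case_id}.pdf", document)

archive_case_file(LocalStore("/tmp/demo-store"), "12345", b"example document")
```

The particular interface doesn’t matter; what matters is that the seam exists before it is needed, because retrofitting one in the middle of a live transition is exactly where the cost and the risk land.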

And, as government suppliers, we should:

1) Recognise that big players exist in big markets and that special treatment is rarely available. They may not pay tax in this jurisdiction, but that’s a matter for law, not procurement. They may hire people from government; you have already done the same and you will continue to look out for the opportunity. Don’t bleat, compete.

2) Go where the big players aren’t going. Offer more, for less, or at least for the same. Provide products that compound your customers’ investment – they’re no longer buying assets for capex, but they will want increased benefit for their spend, so offer new things.

3) Move up the stack. IaaS was always going to be a tough business to compete in. With big players able to sweat their assets 24/7, anyone not able to swap workloads between regions and attract customers from multiple sectors that can better overlap peak workloads is going to struggle. So don’t go there, go where the bigger opportunities are. Government departments aren’t often buying Dropbox, for instance – so what’s your equivalent?

But, don’t

1) Expect government to intervene and give you preferential treatment because you are small and in the UK. Expect such preferential treatment if you have a better product, at a better price that gets closest to solving the specific problem that the customer has.

2) Expect government to break up a bigger business, or change its structure so that you can better compete. It might happen, sure, but your servers will have long since rusted away by the time that happens.

How Much is IT?

Maybe 15 or 16 years ago I sat in a room with a few people from the e-Delivery team and we tried to figure out how much the total IT spend in central government was. All the departments and agencies (including the NHS) were listed on a board and we plugged in the numbers that we knew about (based on contracts that had been let or spend that we were familiar with). Some we proxied based on “roughly the same size as.”

After a couple of hours work, we came up with a total of about £14bn. That’s an annual spend figure. Of course, some would be capital, and some operating costs, but we didn’t include major projects (which would tend towards capital) so it’s likely that 70-80% of that spend was “keep the lights on”, i.e. servers/hosting, operational support, maintenance changes and refreshes.

That number may be wrong today given 10 years of tighter budget management and significant reductions in the staff counts for many large departments. It might be that the £6.3bn in 2014/15 published in an April 2016 report is now more accurate (total government spend that year was c£745bn). A 2011 report suggests £7.5bn. Much depends on the definition of central government (is the NHS in? MoD? Agencies such as RPA, Natural England etc?) and what’s included in the spend total (steady state versus project, pure IT versus consultancy on IT transformation and change projects).

Maybe our number was wrong, maybe the cost has fallen as departments have shrunk. Or maybe it’s hard to get to the right number.

IT is both “a lot” of money and “not much” – public pensions will be some £160bn this year, health and social care roughly the same, Defence as much as £50bn and Social Security perhaps £125bn.

But how much is the right number? It’s useful to know how much is being spent, for at least a couple of reasons:

  1. Are we getting more efficient at spending, reducing the cost of keeping the lights on and “getting more, or at least the same, for less”?
  2. Are we pushing the planned 25% (or 33% for the 2020 target) of spend towards SMEs?

It would be more useful to know what the breakdown of spending was, e.g. how much are we spending on

  • Hosting?
  • Legacy system support?
  • Infrastructure refreshes?
  • Application maintenance?
  • Application development?
  • And so on

Knowing those figures, department by department, would let us explore some more interesting topics

  • How much are we spending overall and how does that number sit versus other expenses? And versus private sector companies?
  • How are we doing with the migration to cloud (and the cloud first policy) and how much is there left to do?
  • What are our legacy systems really costing us to host, support and enhance? And when we compare those hosting costs with cloud costs, is there a strong case for making the switch (sooner rather than later)?
  • What is the opportunity available if we close down some legacy systems and replace them with more modern systems (with the aim of reducing costs to host, upgrade and refresh, as well as the future cost of new policy introduction)?
  • If we don’t take any action and replace some of our old systems, what kind of costs are we in for over the next 5 and 10 years and does that help frame the debate about the best way ahead?

Linday Smith, aka @Insadly, produces some detailed and useful insight on G-Cloud spend, for instance, that tells us, based on data for April to July 2019 that spend on cloud hosting appears to have fallen from £94m in the same quarter in 2018 to £78m this year (he notes that there are some data anomalies that may make this data not so useful – I’ve commented on the problems with the G-Cloud data before and agree with him that it can be unhelpful).

It’s possible that this is a sign of departments getting smart about their hosting – spinning down machines that are unused, using cloud capacity to deal with peaks and then reverting to a lower base capacity, consolidating environments and using better tools to manage different workloads. It could also be a reflection of seeking lower cost suppliers.

Or it could be a sign that there are fewer new projects starting that are using the cloud from day one (because my overall sense is that the bulk of cloud projects are new, not migrations of existing systems), or that departments are struggling to manage cloud environments and so have experimented and pulled back. Alternatively, it could be that departments are Capex rich and because cloud hosting is an Opex spend, they’re actually buying servers again.

Some broad analysis that showed the trends in spending across departments would improve transparency, highlight areas that need attention, help suppliers figure out where to make product investments and help departmental CIOs figure out where their spending was different from their peers. On the journey away from legacy it would also show where the work needed to be done.
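
As a sketch of the sort of broad analysis meant here, something like the following would be a starting point against the published G-Cloud spend data. The column names (Customer, Lot, EvidencedSpend, Month) are assumptions for illustration, not the real file’s headers, and the data anomalies noted above still apply:

```python
import pandas as pd

# Illustrative only: column names are assumed, not the Cabinet Office file's actual headers.
spend = pd.read_csv("gcloud_spend.csv", parse_dates=["Month"])
spend["Quarter"] = spend["Month"].dt.to_period("Q")

# Spend per department per quarter - who is spending what, and which way it is heading.
by_dept = spend.groupby(["Customer", "Quarter"])["EvidencedSpend"].sum().unstack(fill_value=0)
print(by_dept.tail(10))

# Quarter-on-quarter change for hosting only - roughly the comparison made above.
hosting = spend[spend["Lot"] == "Cloud Hosting"]
print(hosting.groupby("Quarter")["EvidencedSpend"].sum().pct_change())
```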

The 10 Year Strategy

In May 2008, on this blog, I wrote about Chateau Palmer (a fine Bordeaux wine) and, specifically, about how making wine forces a long term strategy – vines take years before they produce a yield that is worth bottling (my friends in the business say that the way to make a small fortune in wine is to start with a large one), more years can go by before the wine in the bottle is drunk by most consumers, and yet, every year the process repeats (with some variety, much caused by the weather).  It’s definitely a long game.

I wondered what would happen if you could only make decisions about your IT investment every 10 years, and then made a couple of predictions.  I said:
Cloud computing – This is going to be increasingly talked about until you can’t remember when people didn’t talk about it and then, finally, people are going to do it. [If you read only this bit then perhaps I am a visionary strategist; if you read the whole of it, I got most of the rest wrong]
Application rationalisation – Taken across a single country’s government as a whole, the total number of applications will be a frightening number, as will the total cost to support them all. There are several layers of consolidation, ranging from declaring “end of life” for small systems and cutting their budgets to zero (and waiting for them to wither and die – this might take eons) to a more strategic, let’s use only one platform (SAP, Oracle etc) from here on in and migrate everything to that single platform (this too could take eons)
It feels, 11 years on, that we are still talking about cloud computing and that, whilst many are doing it, we are a long way from all in.  And the same for application rationalisation – many have rationalised, but key systems are still creaking, supported by an ever decreasing number of specialists, and handling workloads far beyond their original design principles.
Did we devise a strategy and stick to it? Or did we bend with the wind and change year to year, rewriting as new people came and went? Perhaps we focused on business as usual and forgot the big levers of change?

10 Years After 10 Years After

Strictly speaking, this is a little more than 10 years after the 10 year mark.  In late 2005,  Public Sector Forums asked me to do a review of the first 10 years of e-government; in May 2006, I published that same review on this blog.  It’s now time, I think, to look at what has happened in the 10 years (or more) since that piece, reviewing, particularly, digital government as opposed to e-government.

Here’s a quick recap of the original “10 years of e-government” piece, pulling out the key points from each of the posts that made up the full piece:

Part 1 – Let’s get it all online

At the Labour Party conference in 1997, the Prime Minister had announced his plans for ‘simple government’ with a short paragraph in his first conference speech since taking charge of the country: 
“We will publish a White Paper in the new year for what we call Simple Government, to cut the bureaucracy of Government and improve its service. We are setting a target that within five years, one quarter of dealings with Government can be done by a member of the public electronically through their television, telephone or computer.”
Some time later he went further:
“I am determined that Government should play its part, so I am bringing forward our target for getting all Government services online, from 2008 to 2005”

It’s easy to pick holes with a strategy (or perhaps the absence of one) that’s resulted in more than 4,000 individual websites, dozens of inconsistent and incompatible services and a level of take-up that, for the most popular services, is perhaps 25% at best.
After all, in a world where most people have 10-12 sites they visit regularly, it’s unlikely even one of those would be a government site – most interactions with government are, at best, annual and so there’s little incentive to store a list of government sites you might visit. As the count of government websites rose inexorably – from 1,600 in mid-2002 to 2,500 a year later and nearly 4,000 by mid-2005 – citizen interest in all but a few moved in the opposite direction.
Over 80% of the cost of any given website was spent on technology – content management tools, web server software, servers themselves – as technology buyers and their business unit partners became easy pickings for salesmen with 2 car families to support. Too often, design meant flashy graphics, complicated pages, too much information on a page and confusing navigation. 
Accessibility meant, simply, the site wasn’t.
In short, services were supply-led by the government, not demand-led by the consumer. But where was the demand? Was the demand even there? Should it be up to the citizen to scream for the services they want and, if they did, would they – as Henry Ford claimed before producing the Model T – just want ‘faster horses’, or more of the same they’d always had performed a little quicker? 
We have government for government, not government for the citizen. With so many services available, you’d perhaps think that usage should be higher. Early on, the argument was often made (I believe I made it too) that it wasn’t worth going online just to do one service – the overhead was too high – and that we needed to have a full range of services on offer – ones that could be used weekly and monthly as well as annually. That way, people would get used to dealing online with government and we’d have a shot at passing the ‘neighbour test’ (i.e. no service will get truly high usage until people are willing to tell their neighbour that they used, say, ‘that new tax credits service online’ and got their money in 4 days flat, encouraging their friends to do likewise).
A new plan
 • Rationalise massively the number of government websites. In a 2002 April Fool email sent widely around government, I announced the e-Envoy’s department had seized control of government’s domain name registry and routed all website URLs to UKonline.gov.uk and was in the process of moving all content to that same site. Many people reading the mail a few days later applauded the initiative. Something similar is needed. The only reason to have a website is if someone else isn’t already doing it. Even if someone isn’t, there’s rarely a need for a new site and a new brand for every new idea.
• Engage forcefully with the private sector. The banks, building societies, pension and insurance companies need to tie their services into those offered by government. Want a pension forecast? Why go to government – what you really want to know is how much will you need to live on when you’re 65 (67?) and how you’ll put that much money away in time. Government can’t and won’t tell you that. Similarly, authentication services need to be provided that can be used across both public and private sectors – speeding the registration process in either direction. With Tesco more trusted than government, why shouldn’t it work this way? The Government Gateway, with over 7 million registered users, has much to offer the private sector – and they, in turn, could accelerate the usage of hardware tokens for authentication (to rid us of the problems of phishing) and so on.
• Open up every service. The folks at mySociety, Public Whip and theyworkforyou.com have shown what can be done by a small, dedicated (in the sense of passionate) team. No-one should ever need to visit the absurdly difficult to use Hansard site when it’s much easier through the services these folks have created. Incentives for small third parties to offer services should be created.
• Build services based on what people need to do. We know every year there are some 38 million tax discs issued for cars and that nearly everyone shows up at a post office with a tax disc, insurance form and MOT. For years, people in government have been talking about insurance companies issuing discs – but it still hasn’t happened. Bring together disparate services that have the same basic data requirements – tax credits and child benefit, housing benefit and council tax benefit etc.
• Increase the use of intermediaries. For the 45% of people who aren’t using the Internet and aren’t likely to any time soon, web-enabled services are so much hocus pocus. There needs to be a drive to take services to where people use them. Andrew Pinder, the former e-Envoy, used to talk about kiosks in pubs. He may have been speaking half in jest, but he probably wasn’t wrong. If that’s where people in a small village in Shropshire are to be found (and with Post Offices diminishing, it’s probably the only place to get access to the locals), that’s where the services need to be available. Government needs to be in the wholesale market if it’s to be efficient – there are far smarter, more fleet of foot retail providers that can deliver the individual transactions.
• Clean up the data. One of the reasons why government is probably afraid to join up services is that they know the data held on any given citizen is wildly out of date or just plain wrong. Joining up services would expose this. When I first took the business plan for the Government Gateway to a minister outside the Cabinet Office, this problem was quickly identified and seen as a huge impediment to progress

More to come.

The Billion Pound G-Cloud

Sometime in the next few weeks, spend through the G-Cloud framework will cross £1 billion. Yep, a cool billion. A billion here and a billion there and pretty soon you’re talking real money.

Does that mean G-Cloud has been successful? Has it achieved what it was set up for? Has it broken the mould? I guess we could say this is a story in four lots.

Well, that depends:

1) The Trend

Let’s start with this chart showing the monthly spend since inception.

It shows 400 fold growth since day one, but spend looks pretty flat over the last year or so, despite that peak 3 months ago. Given that this framework had a standing start, for both customers and suppliers, it looks pretty good. It took time for potential customers (and suppliers) to get their heads round it. Some still haven’t. And perhaps that’s why things seem to have stalled?

Total spend to date is a little over £903m. At roughly £40m a month (based on the November figures), £1bn should be reached before the end of February, maybe sooner. And then the bollard budget might swing into action and we’ll see a year end boost (contrary to the principles of pay as you go cloud services though that would be).
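
The arithmetic behind that date, using only the two figures above:

```python
# When does cumulative G-Cloud spend cross £1bn at the current run rate?
spend_to_date = 903     # £m, "a little over", per the November figures
monthly_rate = 40       # £m per month, roughly

months_to_go = (1000 - spend_to_date) / monthly_rate
print(f"About {months_to_go:.1f} months from November")   # ~2.4 months, i.e. before the end of February
```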

Government no longer publishes total IT spend figures but, in the past, it’s been estimated to be somewhere between £10bn and £16bn per year. G-Cloud’s annual spend, then, is a tiny part of that overall spend. G-Cloud fans have, though, suggested that £1 spent on G-Cloud is equivalent to £10 or even £50 spent the old way – that may be the case for hosting costs, it certainly isn’t the case for Lot 4 costs (though I am quite sure there has been some reduction in rates simply from the real innovation that G-Cloud brought – transparency on prices).

2) The Overall Composition

Up until 18 months ago, I used to publish regular analysis showing where G-Cloud spend was going. The headline observation then was that some 80% was being spent in Lot 4 – Specialist Cloud Services, or perhaps Specialist Consultancy Services. To date, of our £903m, some £715m, or 79%, has been spent through Lot 4 (the red bars on the chart above). That’s a lot of cloud consultancy.

 
(post updated 19th Jan 2016 with the above graph to show more clearly the percentage that is spent on Lot 4).

With all that spent on cloud consultancy, surely we would see an increase in spend in the other lots? Lot 4 was created to give customers a vehicle to buy expertise that would explain to them how to migrate from their stale, high capital, high cost legacy services to sleek, shiny, pay as you go cloud services.

Well, maybe. Spend on IaaS (the blue bars), or Lot 1, is hovering around £4m-£5m a month, though has increased substantially from the early days. Let’s call it £60m/year at the current run rate (we’re at £47m now) – if it hits that number it will be double the spend last year, good growth for sure, and that IaaS spend has helped create some new businesses from scratch. But they probably aren’t coining it just yet.

Perhaps the Crown Hosting Service has, ummm, stolen the crown and taken all of the easy business. Government apparently spends £1.6bn per year on hosting, with £700m of that on facilities and infrastructure, and the CHS was predicted to save some £530m of that once it was running (that looks to be a save through the end of 2017/18 rather than an annual save). But CHS is not designed for cloud hosting, it’s designed for legacy systems – call it the Marie Celeste, or the Ship of the Doomed. You send your legacy apps there and never have to move them again – though, ideally, you migrate them to cloud at some point. We had a similar idea to CHS back in 2002, called True North, it ended badly.

A more positive way to look at this is that Government’s hosting costs would have increased if G-Cloud wasn’t there – so the £47m spent this year would actually have been £470m or £2.5bn if the money had been spent the old way. There is no way of knowing of course – it could be that much of this money is being spent on servers that are idling because people spin them up but don’t spin them down, it could be that more projects are underway at the same time than previously possible because the cost of hosting is so much lower.

But really, G-Cloud is all about Lot 4. A persistent and consistent 80% of the monthly spend is going on people, not on servers, software or platforms. PaaS may well be People As A Service as far as Lot 4 is concerned.

3) Lot 4 Specifically

Let’s narrow Lot 4 down to this year only, so that we are not looking at old data. We have £356m of spend to look at, 80% of which is made by central government. There’s a roughly 50/50 split between small and large companies – though I suspect one or two previously small companies have now become very much larger since G-Cloud arrived (though on these revenues, they have not yet become “large”).

If we knew which projects that spend had been committed to, we would soon know what kind of cloud work government was doing, right?

Sadly, £160m is recorded against “Project Null”. Let’s hope it’s successful; there’s a lot of cash riding on it not becoming void too.

Here are the Top 10 Lot 4 spenders (for this calendar year to date only):

 
 And the Top 10 suppliers:


Cloud companies? Well, possibly. Or perhaps, more likely, companies with available (and, obviously, agile) resource for development projects that might, or might not, be deployed to the cloud. It’s also possible that all of these companies are breaking down the legacy systems into components that can be deployed into the cloud starting as soon as this new financial year; we will soon see if that’s the case.

To help understand what is most likely, here’s another way of looking at the same data. This plots the length of an engagement (along the X-axis) against the total spend (Y-axis) and shows a dot with the customer and supplier name.

A cloud-related contract under G-Cloud might be expected to be short and sharp – a few months, perhaps, to understand the need, develop the strategy and then ready it for implementation. With G-Cloud contracts lasting a maximum of two years, you might expect to see no relationship last longer than twenty four months.

But there are some big contracts here that appear to have been running for far longer than twenty four months. And, whilst it’s very clear that G-Cloud has enabled far greater access to SME capability than any previous framework, there are some old familiar names here.

4) Conclusions

G-Cloud without Lot 4 would look far less impressive, even if the spend it is replacing was 10x higher. It’s clear that we need:

– Transparency. What is the Lot 4 spend going to?

– Telegraphing of need.  What will government entities come to market for over the next 6-12 months?

– Targets. The old target was that 50% of new IT spend would be on cloud. Little has been said about that in a long time. Little has, in fact, been said about plans. What are the new targets?

Most of those points are not new – I’ve said them before, for instance in a previous post about G-Cloud as a Hobby and also here about how to take G-Cloud Further Forward.

In short, Lot 4 needs to be looked at hard – and government needs to get serious about the opportunity that this framework (which broke new ground at inception but has been allowed to fester somewhat) presents for restructuring how IT is delivered.

Acknowledgements

I’m indebted, as ever, to Dan Harrison for taking the raw G-Cloud data and producing these far simpler to follow graphs and tables. I maintain that GDS should long ago have hired him to do their data analysis. I’m all for open data, but without presentation, the consequences of the data go unremarked.

Mind The Gaps – Nothing New Under The Sun

As we start 2015, a year when several big contracts are approaching their end dates and replacement solutions will need to be in place, here’s a presentation I gave a couple of times last year looking at the challenges of breaking up traditional, single prime IT contracts into potentially lots of smaller, shorter contracts:

G-Cloud By The Numbers (To End June 2014)

With Dan’s Tableau version of the G-Cloud spend data, interested folks need never download the csv file provided by Cabinet Office ever again.  Cabinet Office should subcontract all of their open data publication work to him.

The headlines for G-Cloud spend to the end of June 2014 are:

– No news on the split between lots.  80% of spend continues to be in Lot 4, Specialist Cloud Services

– 50% of the spend is with 10 customers, 80% is with 38 customers

– Spend in June was the lowest since February 2014.  I suspect that is still an artefact of a boost because of year end budget clearouts (and perhaps some effort to move spend out of Lot 4 onto other frameworks)

– 24 suppliers have 50% of the spend, 72 have 80%. A relative concentration of customer spend is being spread across a wider group of suppliers. That can only be a good thing

– 5 suppliers have invoiced less than £1,000. 34 less than £10,000

– 10 customers have spent less than £1,000. 122 less than £10,000. How that squares with the bullet immediately above, I’m not sure

– 524 customers (up from 489 last month) have now used the framework, commissioning 342 suppliers.  80% of the spend is from central government (unsurprising, perhaps, given the top 3 customers – HO, MoJ, CO – account for 31% of the spend)

– 36 customers have spent more than £1m.  56 suppliers have billed more than £1m (up from 51).  This time next year, Rodney, we’ll be millionaires.

– Top spending customers stay the same but there’s a change in the top 3 suppliers (BJSS, Methods stay the same and Equal Experts squeaks in above IBM to claim the 3rd spot)

One point I will venture, though not terribly well researched, is that once a customer starts spending money with G-Cloud, they are more likely to continue than not. And once a supplier starts seeing revenue, they are more likely to continue to see it than not. So effort on the first sale is likely to be rewarded with continued business.
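
For anyone wanting to reproduce that sort of concentration figure from the published data, a minimal sketch (again, the column names are assumptions about the CSV, not its actual headers):

```python
import pandas as pd

# Illustrative only - "SupplierName" and "EvidencedSpend" are assumed column names.
spend = pd.read_csv("gcloud_spend.csv")

by_supplier = spend.groupby("SupplierName")["EvidencedSpend"].sum().sort_values(ascending=False)
cumulative_share = by_supplier.cumsum() / by_supplier.sum()

# How many suppliers account for half, and then 80%, of all spend to date?
print((cumulative_share < 0.5).sum() + 1, "suppliers take 50% of spend")
print((cumulative_share < 0.8).sum() + 1, "suppliers take 80% of spend")
```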

Taking G-Cloud Further Forward

A recent blog post from the G-Cloud team talks about how they plan to take the framework forward. I don’t think it goes quite far enough, so here are my thoughts on taking it even further forward.

Starting with that G-Cloud post:

It’s noted that “research carried out by the 6 Degree Group suggests that nearly 90 percent of local authorities have not heard of G-Cloud”.  This statement is made in the context of the potential buyer count being 30,000 strong.  Some, like David Moss, have confused this and concluded that 27,000 buyers don’t know about G-Cloud.  I don’t read it that way – but it’s hard to say what it does mean.  A hunt for the “6 Degree Group”, presumably twice as good as the 3 Degrees, finds one obvious candidate (actually the 6 Degrees Group), but they make no mention of any research on their blog or their news page (and I can’t find them in the list of suppliers who have won business via G-Cloud).  Still, 90% of local authorities not knowing about G-Cloud is, if the question was asked properly and to the right people (and therein lies the problem with such research), not good.  It might mean that 450 or 900 or 1,350 buyers (depending on whether there are 1, 2 or 3 potential buyers of cloud services in each local authority) don’t know about the framework.  How we get to 30,000 potential buyers I don’t know – but if there is such a number, perhaps it’s a good place to look at potential efficiencies in purchasing.

[Update: I’ve been provided with the 30,000 – find them here: http://gps.cabinetoffice.gov.uk/sites/default/files/attachments/2013-04-15%20Customer%20URN%20List.xlsx. It includes every army regiment (SASaaS?), every school and thousands of local organisations.  So a theoretical buyer list but not a practical buyer list. I think it better to focus on the likely buyers. G-Cloud is a business – GPS gets 1% on every deal.  That needs to be spent on promoting to those most likely to use it]

[Second update: I’ve been passed a further insight into the research: http://www.itproportal.com/2013/12/20/g-cloud-uptake-low-among-uk-councils-and-local-authorities/?utm_term=&utm_medium=twitter&utm_campaign=testitppcampaign&utm_source=rss&utm_content=  – the summary from this is that 87% of councils are not currently buying through G-Cloud and 76% did not know what the G-Cloud [framework] could be used for]

Later, we read “But one of the most effective ways of spreading the word about G-Cloud is not by us talking about it, but for others to hear from their peers who have successfully used G-Cloud. There are many positive stories to tell, and we will be publishing some of the experiences of buyers across the public sector in the coming months” – True, of course. Except if people haven’t heard of G-Cloud they won’t be looking on the G-Cloud blog for stories about how great the framework is. Perhaps another route to further efficiencies is to look at the vast number of frameworks that exist today (particularly in local government and the NHS) and start killing them off so that purchases are concentrated in the few that really have the potential to drive cost saves allied with better service delivery.

And then “We are working with various trade bodies and organisations to continue to ensure we attract the best and most innovative suppliers from across the UK.” G-Cloud’s problem today isn’t, as far as we can tell, a lack of innovative suppliers – it’s a lack of purchasing through it. In other words, a lack of demand. True, novel services may attract buyers but most government entities are still in the “toe in the water” stage of cloud, experimenting with a little IaaS, some PaaS and, based on the G-Cloud numbers, quite a lot of SaaS (some £15m in the latest figures, or about 16% of total spend versus only 4% for IaaS and 1% for PaaS).

On the services themselves, we are told that “We are carrying out a systematic review of all services and have, so far, deleted around 100 that do not qualify.”  I can only applaud that.  Though I suspect the real number to delete may be in the 1000s, not the 100s.  It’s a difficult balance – the idea of G-Cloud is to attract more and more suppliers with more and more services, but buyers only want sensible, viable services that exist and are proven to work.  It’s not like iTunes where it only takes one person to download an app and rate it 1* because it doesn’t work/keeps crashing/doesn’t synchronise and so suggest to other potential buyers that they steer clear – the vast number of G-Cloud services have had no takers at all and even those that have lack any feedback on how it went (I know that this was one of the top goals of the original team but that they were hampered by “the rules”).

There’s danger ahead too: “Security accreditation is required for all services that will hold information assessed at Business Impact Level profiles 11x/22x, 33x and above. But of course, with the new security protection markings that are being introduced on 1 April, that will change. We will be publishing clear guidance on how this will affect accreditation of G-Cloud suppliers and services soon.” It’s mid-February and the new guidelines are just 7 weeks away. That doesn’t give suppliers long to plan for, or make, any changes that are needed (the good news here being that government will likely take even longer to plan for, and make, such changes at their end). This is, as CESG people have said to me, a generational change – it’s going to take a while, but that doesn’t mean that we should let it.

Worryingly: “we’re excited to be looking at how a new and improved CloudStore, can act as a single space for public sector buyers to find what they need on all digital frameworks.” I don’t know that a new store is needed; I believe that we’re already on the third reworking, would a fourth help? As far as I can tell, the current store is based on Magento which, from all accounts and reviews online, is a very powerful tool that, in the right hands, can do pretty much whatever you want from a buying and selling standpoint. I believe a large part of the problem is in the data in the store – searching for relatively straightforward keywords often returns a surprising answer – try it yourself, type in some popular supplier names or some services that you might want to buy. Adding in more frameworks (especially where they can overlap as PSN and G-Cloud do in several areas) will more than likely confuse the story – I know that Amazon manages it effortlessly across a zillion products but it seems unlikely that government can implement it any time soon (wait – they could just use Amazon). I would rather see the time, and money, spent getting a set of products that were accurately described and that could be found using a series of canned searches based on what buyers were interested in.

So, let’s ramp up the PR and education (for buyers), upgrade the assurance process that ensures that suppliers are presenting products that are truly relevant, massively clean up the data in the existing store, get rid of duplicate and no longer competitive buying routes (so that government can aggregate for best value), make sure that buyers know more about what services are real and what they can do, don’t rebuild the damn cloud store again …

… What else?

Well, the Skyscape+14 letter is not a terrible place to start, though I don’t agree with everything suggested.  G-Cloud could and should:

– Provide a mechanism for services to work together.  In the single prime contract era, which is coming to an end, this didn’t matter – one of the oligopoly would be tasked to buy something for its departmental customer and would make sure all of the bits fitted together and that it was supported in the existing contract (or an adjunct).  In a multiple supplier world where the customer will, more often than not, act as the integrator both customer and supplier are going to need ways to make this all work together.   The knee bone may be connected to the thigh bone, but that doesn’t mean that your email service in the cloud is going to connect via your PSN network to your active directory so that you can do everything on your iPad.

– Publish what customers across government are looking at both in advance and as it occurs, not as data but as information.  Show what proof of concept work is underway (as this will give a sense of what production services might be wanted), highlight what components are going to be in demand when big contracts come to an end, illustrate what customers are exploring in their detailed strategies (not the vague ones that are published online).  SMEs building for the public sector will not be able to build speculatively – so either the government customer has to buy exactly what the private sector customer is buying (which means that there can be no special requirements, no security rules that are different from what is already there and no assurance regime that is above and beyond what a major retailer or utility might want), or there needs to be a clear pipeline of what is wanted.  Whilst Chris Chant used to say that M&S didn’t need to ask people walking down the street how many shirts they would buy if they were to open a store in the area, government isn’t yet buying shirts as a service – they are buying services that are designed and secured to government rules (with the coming of Official, that may all be about to change – but we don’t know yet because, see above, the guidance isn’t available).

– Look at real cases of what customers want to do – let’s say that a customer wants to put a very high performing Oracle RAC instance in the cloud – and ensure that there is a way for that to be bought.  It will likely require changes to business models and to terms and conditions, but despite the valiant efforts of GDS there is not yet a switch away from such heavyweight software as Oracle databases.  The challenge (one of many) that government has, in this case, is that it has massive amounts of legacy capability that is not portable, is not horizontally scalable and that cannot be easily moved – Crown Hosting may be a solution to this, if it can be made to work in a reasonable timeframe and if the cost of migration can be minimised.

– I struggle with the suggestion to make contracts three years instead of two.  This is a smokescreen, it’s not what is making buyers nervous really, it’s just that they haven’t tried transition.  So let’s try some – let’s fire up e-mail in the cloud for a major department and move it 6 months from now.  Until it’s practiced, no one will know how easy (or incredibly difficult) it is.  The key is not to copy and paste virtual machines, but to move the gigabytes of data that goes with it.  This will prove where PSN is really working (I suspect that there are more problems than anyone has yet admitted to), demonstrate how new capabilities have been designed (and prove whether the pointy things have been set up properly as we used to say – that is, does the design rely on fixed IP address ranges or DNS routing that is hardcoded or whatever).  This won’t work for legacy – that should be moved once and once only to the Crown Hosting Service or some other capability (though recognise that lots of new systems will still need to talk to services there).  There’s a lot riding on CHS happening – it will be an interesting year for that programme.
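
On “moving the gigabytes of data”: a minimal sketch of why that, rather than copying the virtual machines, is the part worth rehearsing. The volume and link speed are hypothetical:

```python
# How long does the data alone take to move, before cutover, checks and rollback plans?
data_tb = 20          # assumed mailbox and document data for a mid-sized department, TB
link_gbps = 1.0       # assumed network link, Gbit/s
utilisation = 0.7     # assume 70% effective throughput on that link

seconds = (data_tb * 8 * 1e12) / (link_gbps * 1e9 * utilisation)
print(f"About {seconds / 86400:.1f} days of continuous transfer")
# ~2.6 days flat out for 20 TB on a 1 Gbit/s link - and that assumes nothing else goes wrong.
```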

The ICT contracts for a dozen major departments/government entities are up in the next couple of years – contract values in the tens of billions (old money) will be re-procured.   Cloud services, via G-Cloud, will form an essential pillar of that re-procurement process, because they are the most likely way to extract the cost savings that are needed.  In some cases cloud will be bought because the purchasing decision will be left too late to do it any other way than via a framework (unless the “compelling reason” for extension clause kicks in) but in most cases because the G-Cloud framework absolutely provides the best route to an educated, passionate supplier community who want to disrupt how ICT is done in Government today.  We owe them an opportunity to make that happen.  The G-Cloud team needs more resources to make it so – they are, in my view, the poor relation of other initiatives in GDS today.  That, too, needs to change.

Am I Being Official? Or Just Too Sensitive? Changes in Protective Marking.

From April 2nd – no fools these folks – government’s approach to security classifications will change.  For what seems like decades, the cognoscenti have bandied around acronyms like IL2 and IL3, with real insiders going as far as to talk about IL2-2-4 and IL3-3-4. There are at least seven levels of classification (IL0 through IL6 and some might argue that there are even eight levels, with “nuclear” trumping all else; there could be more if you accept that each of the three numbers in something like IL2-2-4 could, in theory, be changed separately). No more.  We venture into the next financial year with a streamlined, simplified structure of only three classifications. THREE!  

Or do we?

The aim was to make things easier – strip away the bureaucracy and process that had grown up around protective marking, stop people over-classifying data making it harder to share (both inside and outside of government) and introduce a set of controls that as well as technical security controls actually ask something of the user – that is, that ask them to take care of data entrusted to them.

In the new approach, some 96% of data falls into a new category, called “OFFICIAL” – I’m not shouting, they are. A further 2% would be labelled as “SECRET” and the remainder “TOP SECRET”.  Those familiar with the old approach will quickly see that OFFICIAL seems to encompass everything from IL0 to IL4 – from open Internet to Confidential (I’m not going to keep shouting, promise), though CESG and the Government Security Secretariat have naturally resisted mapping old to new.

That really is a quite stunning change.  Or it could be.

Such a radical change isn’t easy to pull off – the fact that there has been at least two years of work behind the scenes to get it this far suggests that.  Inevitably, there have been some fudges along the way.  Official isn’t really a single broad classification.  It also includes “Official Sensitive” which is data that only those who “need to know” should be able to access.   There are no additional technical controls placed on that data – that is, you don’t have to put it behind yet another firewall – there are only procedural controls (which might range – I’m guessing – from checking distribution lists to filters on outgoing email perhaps).

There is, though, another classification in Official which doesn’t yet, to my knowledge, have a name.   Some data that used to be Confidential will probably fall into this section.  So perhaps we can call it Official Confidential? Ok, just kidding.

So what was going to be a streamlining to three simple tiers, where almost everyone you’ve ever met in government would spend most of their working lives creating and reading only Official data, is now looking like five tiers.  Still an improvement, but not quite as sweeping as hoped for.

The more interesting challenges are probably yet to come – and will be seen in the wild only after April.  They include:

– Can Central Government now buy an off-the-shelf device (phone, laptop, tablet etc) and turn on all of the “security widgets” that are in the baseline operating system and meet the requirements of Official?

– Can Central Government adopt a cloud service more easily? The Cloud Security Principles would suggest not.

– If you need to be cleared to “SC” to access a departmental e-mail system which operated at Restricted (IL3) in the past and if “SC” allows you occasional access to Secret information, what is the new clearance level?

– If emails that were marked Restricted could never be forwarded outside of the government’s own network (the GSI), what odds would you place on very large amounts of data being classified as “Official Sensitive” and a procedural restriction being applied that prevents that data traversing the Internet?

– If, as anecdotal evidence suggests, an IL3 solution costs roughly 25% more than an IL2 solution, will IT costs automatically fall or will inertia mean costs stay the same as solutions continue to be specified exactly as before?

– Will the use of networks within government quickly fall to lowest common denominator – the Internet with some add-ons – on the basis that there needs to be some security but not as much as had been required before?

– If the entry to an accreditation process was a comprehensive and well thought through “RMADS” (Risk Management and Accreditation Document Set), which was largely the domain of experts who handed their secrets down through mysterious writings and hidden symbols, what does accreditation look like under the new markings?

It seems most likely that the changes to protective marking will result in little change over the next year, or even two years.  Changes to existing contracts will take too long to process for too little return. New contracts will be framed in the new terms but the biggest contracts, with the potential for the largest effects, are still some way from expiry.  And the Cloud Security Principles will need much rework to encourage departments to take advantage of what is already routine for corporations. 

If the market is going to rise to the challenge of meeting demand – if we are to see commodity products made available at low cost that still meet government requirements – then the requirements need to be spelled out.  The new markings launch in just over two months.  What is the market supposed to provide come 2nd April?

None of this is aimed at taking away what has been achieved with the thinking and the policy work to date – it’s aimed at calling out just how hard it is going to be to change an approach that is as much part of daily life in HM Government as waking up, getting dressed and coming to work.