In May 2008, on this blog, I wrote about Chateau Palmer (a fine Bordeaux wine) and, specifically, about how making wine forces a long-term strategy – vines take years before they produce a yield that is worth bottling (my friends in the business say that the way to make a small fortune in wine is to start with a large one), more years can go by before the wine in the bottle is drunk by most consumers, and yet every year the process repeats (with some variation, much of it caused by the weather). It’s definitely a long game.
- Many departments could now recover VAT on “managed services” but not on hardware purchases. Departments are good at exploiting such opportunities, and so the outsource vendor would buy the hardware on behalf of the department, sell it back to the department as part of a managed service, and the department would then reclaim the VAT, getting 20% back on the deal. Those who were around in the early days of G-Cloud will remember the endless loops about whether VAT could be reclaimed – it was some years after G-Cloud started that this was successfully resolved.
- Departments now had a route to buying more IT services, or capability, without needing to go through a new procurement, provided the scope of the original procurement was wide enough. That meant that existing contracts could be used to buy new services. And, as everyone knows, IT doesn’t stay still, so there were a lot of new services, and nearly all of them went through the original contract. Those contracts swelled in size, with annual spend often double or triple the original expectation within the first few years. When e-government, now digital, came along and when departments merged, those numbers often exploded.
- Whilst all of the original staff involved transferred, via TUPE, on the package they had in government – salary plus index-linked pensions etc – any new staff brought on, e.g. to replace those who had left (or retired) or for new projects, would come on with a deal that was standard for the private sector. That usually meant that instead of pension contributions being 27-33%, they were more likely 5-7%. Instantly, that created an easy save for government – it was 20% or more cheaper, even before we talk about VAT, to use the existing provider.
- Whilst departments have long had an obligation to award business to smaller players, the ease of using the big players with whom they already had contracts made that difficult (in the sense that there was an easy step, “write a contract change to award this work to X”, versus “write the spec, go to market, evaluate, negotiate, award, work with a new supplier who doesn’t understand us”). Small players were, unfairly, shut out.
- When a department wanted to know what something cost, it was very hard to figure out. Email for instance – a few servers for Outlook, some admin people to add and delete users etc – how hard can it be to cost? That’s a bit like Heisenberg’s Uncertainty Principle – the more you study where something is, the less you know about where it’s going. In other words, if you looked closely at one thing, the money moved around. If something needed to be cheap to get through, the costs were loaded elsewhere. If something needed to be expensive to justify continued investment (avoiding the sunk cost fallacy), costs were loaded on to it. Then, of course, there was the ubiquity of “shared services” – as in “Well, Alan, if you want me to figure out how much email costs, we need to consider some of Bob’s time as he answers the phone for all kinds of problems, a share of the network costs for all that traffic, some of Heidi’s time because email is linked to the directory and without the work she does on the directory, it wouldn’t work” and so on. Benchmarking was the supposed solution for that – but if you couldn’t break out the costs, how did you know it was value for money? Or not? Did suppliers consciously hinder efforts to find true cost? I suspect it was a mix of the structure they’d built for themselves – they didn’t, themselves, know how it broke down – and a lack of disciplined chasing by departments … because the side effects and the flaws self-reinforced.
What Do We Mean By Disaggregation?
That reinforcement – the side effects and the flaws – plus the inertia of 20+ years of living in a monolithic outsource model meant that change was hard. Really hard.
We also need to consider that modern services are, for the most part, disaggregated from day one – new cloud services are often procured from an IaaS provider, several development companies, a management company and so on. What we are talking about here, for the most part, is the legacy applications that have been around a decade or more, the network that connects the dozens or hundreds of offices around the country (or the world), the data centres that are full of hardware and the devices that support the workload of thousands, or tens of thousands, of users. These services are the backbone of government IT, pending the long-promised (and delayed even longer than disaggregation) digital transformation. They may not be (and indeed, are not) user-led, and they’re certainly not agile – but they handle our tax, pensions, benefits, grants to farmers and so on.
Writing papers for Ministers many years ago, we would often start with two options, stark choices. The preamble we used to describe these was “Minister, we have two main choices. The first one will result in nuclear war and everyone will die. The second will result in all out ground war and nearly everyone will die. We think we have a third way ahead, it’s a little risky, and there will be some casualties, but nearly everyone will survive.” Faced with that intro, what choice do you think the Minister will make?
In this context, the story would be something like: “Minister, we have two options. The first is to largely stay as we are. We will come under heavy scrutiny, save no money, progress our IT not a jot and deliver none of the benefits you have promised in your various policies. The second is to disaggregate our services massively, throwing our control over IT into chaos, increasing our costs as we transition and sucking up so many resources that we won’t be able to do any of the other work that you have added to our list since you took office. Alternatively … we have a third choice.”
Disaggregate a little. Take some baby steps. Build capability in house, manage more suppliers than we’re used to, but not so many that our integration capability would be exhausted before it had a chance.
- TUPE nearly always applies – except when it doesn’t. If you take your email service and move it to Office 365, the staff aren’t moving to Microsoft or to that ubiquitous company known as “the cloud.” But when it does apply, it’s not a trivial process. Handling which staff transition to which companies (and ensuring that the companies taking on the staff have the capability to do it) is tricky. Big outsource providers have been doing this for years and have teams of people that understand how the process works. Smaller companies won’t have that experience and, indeed, may not have the capability to bring in staff on different sets of Ts & Cs.
- Market inability to provide a mature offer; coach the market in what will be wanted so that they have time to prove it
- Too great an uncertainty or risk for the business to take; prove concepts through alpha and beta so risks are truly understood
- Lack of clear return for the investment required; demonstrate delivery and credibility in the delivery approach so that costs are managed and benefits are delivered as promised
- Delays in delivery of key shared services; close management with regular delivery cycles that show progress and allow slips to be visible and dealt with
- Challenges in creating an organisation that can respond to the stimulus of agile, iterative delivery led by user need; start early and prove it, adjust course as lessons are learned, partner closely with the business
We need to poke and prod and encourage further experimentation. Suppliers need to make it easy to buy and integrate their services (recognising that even the cheapest commodity needs to be run and operated by someone). And when someone seems to take a short cut and extend a contract, or award to an existing supplier, we need to understand why, and where they are on their journey. Departments need to be far more transparent about their roadmap and plans to help with that.
That means that departments need to:
- Be more open about what their service provision landscape looks like two, three, four and five years out (with decreasing precision over time, not unreasonably). Coach the market so that the market can help, don’t just come to it when you think you are ready.
- Lay out the roadmap for legacy technology, which is what is holding back the increasing use of smaller suppliers, shorter contracts and more disaggregation. There are three roadmap paths – everything goes exactly as planned and you meet all your deadlines (some would say this is the least likely), a few things go wrong and you fall a little behind, or it all goes horribly wrong and you need a lot more time to migrate away from legacy. Departments generally consider only the first, though one or two have moved to the second. There’s an odd side effect of the spend control process – HMT requires optimism bias and so on to be included in any business case, spend controls normally strip that out, and then departmental controls move any remaining contingency to the centre and hold it there, meaning projects are hamstrung by having no money (subject to approvals anyway) to deal with the inevitable challenges.
- Share what you are doing with modern projects – just what does your supplier landscape look like today?
It’s been nearly two years since I last looked at G-Cloud expenditure – when the total spend crossed £1bn at the end of 2015. Well, as of July 2017, spend reached a little under £2.5bn, so I figured it was time to look again. I am, as always, indebted to Dan Harrison for his data analysis – his Tableau work is second to none and it, really, should be taken up by GDS and used as their default reporting tool (obviously they should hire Dan to do this for them).
As an aside, the raw data has been persistently poor and is not improving. Date formats are mixed up, fields are missing, the recent change to combine lots means that there are some mixed up numbers and, interestingly, the project field has been removed – I’d looked at this before and queried whether many projects were actually cloud related (along with the fact that something like 20% of projects were listed as “null” – I can understand that it’s embarrassing having empty data, but removing the field doesn’t make the data qualitatively better, it just makes me think something is being hidden).
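For what it’s worth, the kind of sanity checks I’m describing are simple enough to sketch. Here’s a minimal pandas example – the file name and the SpendDate/Project column names are my own assumptions for illustration, not the actual schema of the published data:

```python
import pandas as pd

# A minimal data-quality pass over the published spend file. The file name and
# the SpendDate / Project column names are assumptions for illustration only.
df = pd.read_csv("gcloud_spend.csv", dtype=str)

# Mixed date formats: parse leniently and count the rows that fail to parse.
dates = pd.to_datetime(df["SpendDate"], errors="coerce", dayfirst=True)
print("Unparseable dates:", dates.isna().sum())

# Missing fields: how many blanks per column?
print(df.isna().sum())

# "Null" projects: rows where the project is blank or literally "null".
projects = df.get("Project", pd.Series(dtype=str))
null_projects = projects.fillna("null").str.strip().str.lower().eq("null")
print("Rows with no usable project:", null_projects.sum())
```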
Recall this, from June 2014, for instance:
Scanning through the sales by line item, there are far too many descriptions that say simply “project manager”, “tester”, “IT project manager” etc. There are even line items (not in Lot 4) that say “expenses – 4gb memory stick” – a whole new meaning to the phrase “cloud storage” perhaps.
Here’s the graph of spend over the 5 1/2 years that G-Cloud has been around:
The main conclusions I reach are much the same as before:
– 77% of spend continues to be in “Cloud Support” (previously known as “Specialist Services”). It’s actually a little higher than that – now that PaaS and SaaS have been merged (to create a category of “Cloud Software”), Lot 4 has become Lot 3, but both categories are reported in the data. It’s early days for Cloud Software – it would be good if GDS cleaned up the data so that historic lots reflected current lots.
– 2017 spend looks like it will be slightly higher than 2016, but not by much. If the idea was to move work from “People As a Service”, i.e. Cloud Support, to other frameworks, it’s not obvious that it’s happened in a meaningful way, but it may be damping spend a little.
– IaaS spend, now known as Cloud Hosting, has reached £205m. I seem to remember from the early days of the Crown Hosting Service business case that there were estimates that government spent some £400m annually on addressable hosting charges (i.e. systems that could be moved to the cloud). At the moment Cloud Hosting is a reasonably flat £6m/month, or £70m/year. It’s very possible that there’s a 1:10 saving in cloud versus legacy, but everything in me says that much of this cloud hosting is new spend, not reduced spend following migration to the cloud. That’s good in that it avoids a much higher old-style asset rich infrastructure, but I don’t think it shows much of a true migration to the cloud.
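As a rough sanity check on those numbers, the arithmetic is simple enough to write down. The figures below are the ones quoted above, and the 1:10 ratio is treated purely as the speculative guess it is:

```python
# Back-of-the-envelope arithmetic for the hosting comparison above. All figures
# come from the post; the 1:10 ratio is the speculative saving, not a known fact.
monthly_cloud_hosting = 6_000_000              # roughly flat spend per month
annual_run_rate = monthly_cloud_hosting * 12   # ~£72m/year, quoted as ~£70m

addressable_legacy = 400_000_000               # early Crown Hosting estimate, per year
speculative_ratio = 10                         # the "1:10" cloud vs legacy guess

# If every pound of cloud hosting had displaced legacy spend at 1:10, the legacy
# spend avoided would exceed the £400m estimated as addressable - one hint that
# a fair chunk of this cloud spend is new workload rather than migration.
implied_legacy_displaced = annual_run_rate * speculative_ratio
print(f"Cloud hosting run rate:        £{annual_run_rate / 1e6:.0f}m/year")
print(f"Implied legacy displaced 1:10: £{implied_legacy_displaced / 1e6:.0f}m/year")
print(f"Estimated addressable legacy:  £{addressable_legacy / 1e6:.0f}m/year")
```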
28% of spend by the top 5 customers.
In the past I’ve looked at the top spending customers and top earning suppliers, specifically in Lot 4 (now a combination of Lot 4 and the new Lot 3). There are a couple of changes here:
– Back then, for customers … Home Office, MoJ, DVLA, DSA and HMRC were the highest spending departments with around £150m between them. Today … Home Office, MoJ, HMRC, Cabinet Office and DSA (DVLA dropped to 7th place) have spent nearly £800m (total spend across all lots by the top 5 customers is only £100m higher at £925m which shows the true dominance of support services at the top end). £925m out of £2.5bn in just 5 customers. £1.25bn (51%) is from the top 10 customers.
– And for suppliers, Mastek, Deloitte, Cap Gemini, ValTech and Methods were the top 5 with a combined revenue (again in Lot 4) of £67m. Today it’s Equal Experts, Deloitte, Cap Gemini, BJSS and PA Consulting with revenue of £335m (total spend across all lots for the top 5 suppliers is £348m – that makes sense given few of the top suppliers are active across multiple lots – maybe Cap Gemini is the odd one out, getting some revenue for hosting or SaaS). It takes the top 10 suppliers to account for 25% of the spend. I don’t think that was the intention of G-Cloud – that it would be dominated by a small number of suppliers – though, at the same time, some of those companies – UKCloud (£64m) for instance – are still small companies and, without G-Cloud, might not exist, or might not have reached such revenues if they did.
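For anyone wanting to reproduce those concentration figures from the raw data, a minimal sketch looks something like this (the Customer, Supplier and Total column names are assumptions, not the published schema):

```python
import pandas as pd

# Assumed schema for illustration: one row per invoice line, with Customer,
# Supplier and Total (spend in pounds) columns.
df = pd.read_csv("gcloud_spend.csv")
df["Total"] = pd.to_numeric(df["Total"], errors="coerce").fillna(0)
total = df["Total"].sum()

def top_share(group_col: str, n: int) -> float:
    """Share of all spend accounted for by the top n values of group_col."""
    by_group = df.groupby(group_col)["Total"].sum().sort_values(ascending=False)
    return by_group.head(n).sum() / total

print(f"Top 5 customers:  {top_share('Customer', 5):.0%} of spend")
print(f"Top 10 customers: {top_share('Customer', 10):.0%} of spend")
print(f"Top 5 suppliers:  {top_share('Supplier', 5):.0%} of spend")
print(f"Top 10 suppliers: {top_share('Supplier', 10):.0%} of spend")
```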
A couple of years ago I offered the observation that
“once a customer starts spending money with G-Cloud, they are more likely to continue than not. And once a supplier starts seeing revenue, they are more likely to continue to see it than not.”
That seems to be exactly the case; here’s a picture showing the departments who have contracts that have run for more than 24 months (and up to 50 months – nearly as long as G-Cloud has been around):
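The measure behind that picture is simply how long each customer–supplier relationship has been generating spend. A rough way to derive it from the monthly rows (column names again assumed) would be:

```python
import pandas as pd

# Assumed schema for illustration: Customer, Supplier and SpendDate columns.
df = pd.read_csv("gcloud_spend.csv")
df["SpendDate"] = pd.to_datetime(df["SpendDate"], errors="coerce", dayfirst=True)

# Months between the first and last invoice for each customer-supplier pair.
span = (
    df.groupby(["Customer", "Supplier"])["SpendDate"]
      .agg(["min", "max"])
      .assign(months=lambda d: (d["max"] - d["min"]).dt.days / 30.4)
)

# Relationships that have run for more than 24 months, and which departments hold them.
long_running = span[span["months"] > 24]
print(long_running.sort_values("months", ascending=False).head(20))
print(long_running.groupby(level="Customer").size().sort_values(ascending=False))
```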
- Should there be a spend control review of “Cloud Support” contracts to determine what they’re aiming to achieve, and then to assess whether there really has been a reduction in costs, a migration to the cloud, or a change in the contracting model for the service? If we were to do a show of hands across departmental CIOs now and ask how many were running their email in the cloud (the true cloud, not one they’ve made up and badged as cloud that morning), what would the response be? If we were to make it harder and ask about directory services (such as Active Directory), what would the answer be? If we were to look at historic Lot 4 and test how much had been spent in pursuit of such migrations, what would the answer be?
- What incentives could we put in place to encourage departments to make the move to cloud? Departments have control over their budgets, of course, and lots of other things to spend the money on, but could we create a true central capability (key people drawn from departments and suppliers with a brief to build a cloud transition plan) that was architecture-agnostic and delivery-focused, that would support departments in the transition – and that would be accountable (and quite literally held to account) for delivering on the promise of cloud transition? If that was in place, could departments focus on their legacy systems and how to move those to more flexible platforms, in readiness for future cloud moves (or future enhancements to cope with Brexit)?
- What more could we do to encourage UK based cloud companies (as opposed to overseas companies with UK bases) to excel? Plainly they have to compete in a global market – and if I were a UK hosting company, I would be watching Amazon very closely and wondering whether I will have a business in a few months – but that doesn’t mean we shouldn’t want to encourage a local capability across all lots. What would they need to know to encourage them to invest in the services that will be needed in the future? How could that information be made available so that a level playing field was maintained? Do we want to encourage such a capability in the UK, or should we publish the overall plans and transition maps and let the chips fall where they may?
- Are there changes that need to be made to the procurement model so that every supplier can see what every department is looking for, rather than the somewhat peculiar approach now where suppliers may not even know a department is looking to make a purchase? What would that add to the timeline? Would it result in better competition? Would customers benefit as well as suppliers? Could we try it and see – you know, that whole alpha, beta, A/B testing thing?
- I’ve been on the record for a long time as saying government should recognise that it doesn’t collaborate with itself – having collaboration services inside the department’s own firewall isn’t collaboration, it’s talking to yourself. I believe that I even once suggested using a clone of Facebook for such collaboration. Government doesn’t need lots of collaboration tools – it needs one or two that everyone, including suppliers and even customers, can get to, with appropriate segregation and reviews to make sure people can only see what they’re supposed to see. Whatever happened to Civil Pages, I wonder?
Strictly speaking, this is a little more than 10 years after the 10 year mark. In late 2005, Public Sector Forums asked me to do a review of the first 10 years of e-government; in May 2006, I published that same review on this blog. It’s now time, I think, to look at what has happened in the 10 years (or more) since that piece, reviewing, particularly, digital government as opposed to e-government.
Here’s a quick recap of the original “10 years of e-government” piece, pulling out the key points from each of the posts that made up the full piece:
“We will publish a White Paper in the new year for what we call Simple Government, to cut the bureaucracy of Government and improve its service. We are setting a target that within five years, one quarter of dealings with Government can be done by a member of the public electronically through their television, telephone or computer.”
“I am determined that Government should play its part, so I am bringing forward our target for getting all Government services online, from 2008 to 2005”
Accessibility meant, simply, the site wasn’t.
More to come.
Does that mean G-Cloud has been successful? Has it achieved what it was set up for? Has it broken the mould? I guess we could say this is a story in four lots.
Well, that depends:
1) The Trend
Let’s start with this chart showing the monthly spend since inception.
It shows 400-fold growth since day one, but spend looks pretty flat over the last year or so, despite that peak 3 months ago. Given that this framework had a standing start, for both customers and suppliers, it looks pretty good. It took time for potential customers (and suppliers) to get their heads round it. Some still haven’t. And perhaps that’s why things seem to have stalled?
Total spend to date is a little over £903m. At roughly £40m a month (based on the November figures), £1bn should be reached before the end of February, maybe sooner. And then the bollard budget might swing into action and we’ll see a year end boost (contrary to the principles of pay as you go cloud services though that would be).
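The £1bn projection is nothing more than run-rate arithmetic, using the figures quoted above:

```python
# Simple run-rate projection, using the figures above.
total_to_date = 903_000_000      # a little over £903m
monthly_run_rate = 40_000_000    # roughly £40m a month, per the November figures

months_to_1bn = (1_000_000_000 - total_to_date) / monthly_run_rate
print(f"About {months_to_1bn:.1f} months to £1bn at the current run rate")
# Roughly 2.4 months from the November data, i.e. before the end of February.
```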
Government no longer publishes total IT spend figures but, in the past, it’s been estimated to be somewhere between £10bn and £16bn per year. G-Cloud’s annual spend, then, is a tiny part of that overall spend. G-Cloud fans have, though, suggested that £1 spent on G-Cloud is equivalent to £10 or even £50 spent the old way – that may be the case for hosting costs, but it certainly isn’t the case for Lot 4 costs (though I am quite sure there has been some reduction in rates simply from the real innovation that G-Cloud brought – transparency on prices).
2) The Overall Composition
Up until 18 months ago, I used to publish regular analysis showing where G-Cloud spend was going. The headline observation then was that some 80% was being spent in Lot 4 – Specialist Cloud Services, or perhaps Specialist Consultancy Services. To date, of our £903m, some £715m, or 79%, has been spent through Lot 4 (the red bars on the chart above). That’s a lot of cloud consultancy.
With all that spent on cloud consultancy, surely we would see an increase in spend in the other lots? Lot 4 was created to give customers a vehicle to buy expertise that would explain to them how to migrate from their stale, high capital, high cost legacy services to sleek, shiny, pay as you go cloud services.
Well, maybe. Spend on IaaS (the blue bars), or Lot 1, is hovering around £4m-£5m a month, though it has increased substantially from the early days. Let’s call it £60m/year at the current run rate (we’re at £47m now) – if it hits that number it will be double the spend last year, good growth for sure, and that IaaS spend has helped create some new businesses from scratch. But they probably aren’t coining it just yet.
Perhaps the Crown Hosting Service has, ummm, stolen the crown and taken all of the easy business. Government apparently spends £1.6bn per year on hosting, with £700m of that on facilities and infrastructure, and the CHS was predicted to save some £530m of that once it was running (that looks to be a save through the end of 2017/18 rather than an annual save). But CHS is not designed for cloud hosting, it’s designed for legacy systems – call it the Marie Celeste, or the Ship of the Doomed. You send your legacy apps there and never have to move them again – though, ideally, you migrate them to cloud at some point. We had a similar idea to CHS back in 2002, called True North; it ended badly.
A more positive way to look at this is that Government’s hosting costs would have increased if G-Cloud wasn’t there – so the £47m spent this year would actually have been £470m or £2.5bn if the money had been spent the old way. There is no way of knowing, of course – it could be that much of this money is being spent on servers that are idling because people spin them up but don’t spin them down, or it could be that more projects are underway at the same time than was previously possible because the cost of hosting is so much lower.
But really, G-Cloud is all about Lot 4. A persistent and consistent 80% of the monthly spend is going on people, not on servers, software or platforms. PaaS may well be People As A Service as far as Lot 4 is concerned.
3) Lot 4 Specifically
Let’s narrow Lot 4 down to this year only, so that we are not looking at old data. We have £356m of spend to look at, 80% of which is made by central government. There’s a roughly 50/50 split between small and large companies – though I suspect one or two previously small companies have now become very much larger since G-Cloud arrived (though on these revenues, they have not yet become “large”).
It would be good if we knew which projects that spend had been committed to – we would soon know what kind of cloud work government was doing if we could see that. Sadly, £160m is recorded against “Project Null”. Let’s hope it’s successful, there’s a lot of cash riding on it not becoming void too.
Here are the Top 10 Lot 4 spenders (for this calendar year to date only):
Is this spend going to cloud companies? Well, possibly. Or perhaps, more likely, to companies with available (and, obviously, agile) resource for development projects that might, or might not, be deployed to the cloud. It’s also possible that all of these companies are breaking down the legacy systems into components that can be deployed into the cloud starting as soon as this new financial year; we will soon see if that’s the case.
To help understand what is most likely, here’s another way of looking at the same data. This plots the length of an engagement (along the X-axis) against the total spend (Y-axis) and shows a dot with the customer and supplier name.
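A rough sketch of how such a plot can be produced from the spend data (file and column names are assumed for illustration):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed schema for illustration: Customer, Supplier, SpendDate, Total columns.
df = pd.read_csv("gcloud_lot4_spend.csv")
df["SpendDate"] = pd.to_datetime(df["SpendDate"], errors="coerce", dayfirst=True)
df["Total"] = pd.to_numeric(df["Total"], errors="coerce").fillna(0)

# One point per customer-supplier pair: engagement length against total spend.
pairs = df.groupby(["Customer", "Supplier"]).agg(
    months=("SpendDate", lambda s: (s.max() - s.min()).days / 30.4),
    spend=("Total", "sum"),
).reset_index()

plt.scatter(pairs["months"], pairs["spend"])
for _, row in pairs.nlargest(10, "spend").iterrows():
    plt.annotate(f"{row.Customer} / {row.Supplier}", (row.months, row.spend))
plt.xlabel("Engagement length (months)")
plt.ylabel("Total spend (£)")
plt.show()
```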
A cloud-related contract under G-Cloud might be expected to be short and sharp – a few months, perhaps, to understand the need, develop the strategy and then ready it for implementation. With G-Cloud contracts lasting a maximum of two years, you might expect to see no relationship last longer than twenty-four months.
But there are some big contracts here that appear to have been running for far longer than twenty-four months. And, whilst it’s very clear that G-Cloud has enabled far greater access to SME capability than any previous framework, there are some old familiar names here.
G-Cloud without Lot 4 would look far less impressive, even if the spend it is replacing was 10x higher. It’s clear that we need:
– Transparency. What is the Lot 4 spend going to?
– Telegraphing of need. What will government entities come to market for over the next 6-12 months?
– Targets. The old target was that 50% of new IT spend would be on cloud. Little has been said about that in a long time. Little has, in fact, been said about plans. What are the new targets?
In short, Lot 4 needs to be looked at hard – and government needs to get serious about the opportunity that this framework (which broke new ground at inception but has been allowed to fester somewhat) presents for restructuring how IT is delivered.
I’m indebted, as ever, to Dan Harrison for taking the raw G-Cloud data and producing these far simpler to follow graphs and tables. I maintain that GDS should long ago have hired him to do their data analysis. I’m all for open data, but without presentation, the consequences of the data go unremarked.
With gov.uk’s Verify appearing on the Performance Dashboard for the first time, I was taken all the way back to the early 2000s when we published our own dashboards for the Government Gateway, Direct.gov.uk and our other services. Here’s one from July 2003 – there must have been earlier ones but I don’t have them to hand:
This is the graph that particularly resonated:
With the equivalent from back then being:
After 4 years of effort on the Identity programme (now called Verify), the figures make for pretty dismal reading – low usage, a low rate of first-time authentication, a low number of services using it – but, you know what, the data is right there for everyone to see, and it’s plain that no one is going to give up on this, so gradually the issues will be sorted, people will authenticate more easily and more services will be added. It’s a very steep hill to climb though.
We started the Gateway with just the Inland Revenue, HM Customs and MAFF (all department names that have long since fallen away) – and adding more was a long and painful process. So I feel for the Verify team – I wouldn’t have approached things the way they have, but it’s for each iteration to pick its path. There were, though, plenty of lessons to learn that would have made things easier.
There is, though, a big hill to climb for Verify. It will be interesting to watch.
As we start 2015, a year when several big contracts are approaching their end dates and replacement solutions will need to be in place, here’s a presentation I gave a couple of times last year looking at the challenges of breaking up traditional, single prime IT contracts into potentially lots of smaller, shorter contracts: