Watching, and playing a very small part in, G-Cloud – the UK government framework for purchasing cloud products and services – over the last two and a half years has been a fascinating experience. It has grown from something that no one understood and, once they did, something no one thought would work, into the first, and probably only, framework in government with greater representation from small companies than from large ones – and one that refreshes faster than any other procurement vehicle.
What G-Cloud doesn’t yet have is significant money flowing through it – the total at the end of May was some £22m. With its transition from “programme” to business as usual under the aegis of GDS, it should now get access to the resource it has been starved of since birth – an absence that would, but for the tireless passion and commitment of its small team, have seen it killed off long before now. GDS should also bring it the political cover it needs to find a role in the agenda for real change, but there are challenges to overcome.
In 1999, Jack Welch told every division in GE, the company he was then CEO of, that e-business would be every division’s “priority one, two, three and four.”
In 2013, UK government went a little further and mandated that, for central government, cloud would be first in every IT purchasing decision. Local government and the wider public sector would be strongly encouraged to follow suit.
It’s a laudable, if unclear, goal.
The previous incarnation of this goal held that “50% of new spend” would be in the public cloud – perhaps a little sharper than a “cloud first” goal: if, as I’ve written before, we could be clear about what new spend was and could track it, then achieving 50% would be a binary test. Testing whether we are “cloud first” will be as nebulous as knowing whether the UK is a good place to live.
Moving G-Cloud from a rounding error (generously, let’s say 0.05% of total IT spend; it’s probably a tenth of even that) to something more fundamental – something that reflects the energy its small team has put in over the last three years or so – requires many challenges to be overcome. Two of those challenges are:
1) Real Disaggregation
Public sector buyers have historically procured their IT in large chunks. It’s simpler that way – one big supplier, one throat to choke, a one-stop shop and so on. Even new applications are often bought with development, hosting and maintenance from a single supplier – leading to a vast spread of IT assets across different suppliers (not many suppliers, just different ones). Some departments – HMRC and DWP perhaps – buy their new applications (tax credits, Universal Credit) from their existing suppliers precisely to stop that proliferation.
Even in today’s in-vogue tower model, with the SIAM at the top (albeit not as its prime), there is little disaggregation. The MoJ, shortly to announce the winner of its SIAM tender, will move all of its hosted assets from several suppliers to one (perhaps – there is little to no business benefit in moving hardware around data centres, and common sense may prevail before that happens). MoJ had, indeed, planned to move all of its desktop assets from several suppliers to one, but recently withdrew that procurement (at the BAFO stage) and returned to the drawing board – the new plan is not yet clear. In consolidating, it will hopefully save money, though some of that will likely be added back once the friction of multiple suppliers interacting across the towers is included. The job of the SIAM will be to manage that friction and deliver real change, whilst working across the silos of delivery – desktop, hosting, apps, security, network and so on.
But disaggregating along the functional lines of IT brings nothing new for the business. Costs may go down – suppliers, under competitive pressure for the first time in years, will polish their rocks repeatedly, trying to make them look shinier than those of the others in the race. Yet the year, or even two years, after the procurement could easily be a period of stasis as staff are transferred from supplier to supplier (or customer to supplier, and even supplier to customer) and new plans are drawn up. During that time, the unknown and the unexpected will emerge, and changes will be drawn up that bring the cost back to where it was.
In a zero-based corporate cloud model, you would also have your IT assets spread across multiple providers – and you wouldn’t care. Your email would be with Google, your collaboration with Huddle or Podio, your desktops might be owned by the staff, your network would be the Internet, your finance and HR app would be Workday, your website would be WordPress, your reporting would be with Tableau, and so on.
In contrast, the public sector cloud model isn’t yet clear. Does the typical CIO, CTO or Chief Digital Officer want relationships with twenty or thirty suppliers? Does she want to integrate all of those herself, or have someone else do it? Does she want to reconcile the bills from all of them, or have someone else do it?
But if “cloud first” is to become a reality – and if G-Cloud spending is to reach 50% of new IT spend (assuming the test of “cloud first” is whether it forces spend in a new direction) – then services need to be bought in units, as services. That is, disaggregation at a much lower level than the simple tower.
Such disaggregation requires client organisations that look very different from those in place today, where the onus is on man-to-man marking and “assurance” rather than on delivery. Too many departments are IT-led in their systems thinking; GDS’ relentless focus on the user is a much-needed shift from that traditional approach, albeit one that will be relentlessly challenged in the legacy world.
As Lord Deighton said in an interview earlier this month, the “public sector is slightly long on policy skills [and] … slightly short on delivery skills.” I agree, except I think the word “slightly” is redundant.
2) Real Re-Integration
As services disaggregate and are sourced from multiple providers, probably spread around the UK and perhaps the world, the need to bring them all together looms large. We do this at home all the time – we move our data between Twitter, Facebook, e-mail and Instagram without a second thought. But public sector instances of such self-integration are rare – connecting applications costs serious money: single sign-on, XML standards, secure connections, constant checking against service levels and so on.
Indeed, a typical applications set for even a small department might look something like this:
Integrating applications like these is challenging, expensive and fraught with risk every time the need to make a change comes up. Some of these applications are left behind on versions of operating systems or application packages that are no longer supported or that cannot be changed (the skills having long since left the building and, in some cases, this Earth).
New thinking, new designs, new capabilities and significant doses of courage will be required both to bring about the change needed to disaggregate at a service level and to ensure that each step is taken with an understanding of how a coherent whole will be presented to the user.
The change in security classifications (from BILs to tiers) will be instrumental in this new approach – but it, too, will require courage to deliver. Fear of change, and of high costs from incumbents, will drive many departments to wait until the next procurement cycle before starting down the path. They, too, will then enter their own period of stasis, delaying business benefits until early in the next decade.
To be continued …