In the 80s, the theme in Government IT was in-house development of big Line of Business applications. In the 90s, it shifted to outsourcing both development and maintenance of all aspects of IT. By the 00s we had two themes – the move online and the creation of multi-billion pound programmes (most of which resulted in abject failure, for one reason or another).
In the 10s we are, again, re-inventing Government IT. With re-invention comes risk, sometimes significant risk. The models in place today are well understood and well practised, yet have also come with significant and continuing risk; there has been no avoiding it. Few would be able to point at more than a handful of successful deliveries in government IT. And yet in the past, changes have occurred relatively slowly. Today we are running several themes at once – vast reinvention on a pan-government scale – which inevitably brings with it more risk. And with that risk will come, again, the risk of failure – sometimes with individual point solutions adopted, other times with whole threads of activity. Spotting those failures before they occur and addressing them will be a time sink for the elite SWAT teams that will doubtless be needed. But some will escape the risks and deliver brilliantly, of course; if only we knew which ones up front. Learning the lessons and applying them to everything else going on – as they occur and almost in real time – will be the activity that differentiates this reinvention from those that have gone before.
1. Real Innovation – gCloud
This week saw the launch of the gCloud framework and its associated CloudStore. For the first time, government buyers can see the price of services upfront and compare across lots of different providers (ok, so it’s not easy to do that yet but it will get easier I’m sure). Companies that had little or no access to government customers before can now chase business and, if they’re priced well, win against the big players. Government can, in turn, try services out with little tail risk. Want to try collaboration for a project? Sign up to a new service for 2 or 3 months and see how your team adapts before committing your entire organisation to it. Fed up running an old version of Exchange that doesn’t support modern mobile phones well? Move to another.
We already know how this plays out in the consumer scene. We sign up to services, use them for a while, abandon them in favour of something newer and shinier and then repeat the process, often with a dozen different services at once. Of course, we rarely pay for these services and there is little tie-in beyond whatever data we commit to them (and so services, naturally, try and take as much off us as possible – hence the recent storm about address books being uploaded into the cloud).
We don’t, though, know how this will play out in government. Big, slow, old-fashioned government is used to buying from big, slow, old-fashioned suppliers and living with them for a decade or longer. Will departments break that mould quickly and buy IT for their staff the way that they buy IT at home? Will they have the courage to bring together 5, 10 or 15 services from different suppliers and manage them as a whole, without the shield of a huge prime contractor? Will they overcome their innate fear of “security” and adopt innovative ideas from new suppliers inexperienced in the government world?
gCloud’s main risk is not that the services fail, but that the whole idea behind it fails – that departments hunker down and ignore it, that they don’t switch enough of their existing spend to it to make a difference, or that they use their existing suppliers to do what gCloud wants to do and so undermine it. The incredible energy behind gCloud – small team that it is who manage it – goes a long way to holding that risk in check, but departments are the buyers and they need to show their hands. A move to these kinds of services is inevitable; it’s only a question of when. Departments publishing what they plan to buy (e.g. email, collaboration, storage as a service, etc) and when would allow suppliers to focus their efforts on core products whilst still looking to provide innovative services that departments didn’t even know they needed until they were shown them (or, better still, recommended them by the gEnius tool that doubtless will come with gCloud v2 – “services you need that you didn’t know you needed”™ or perhaps “service discovery as a service”™).
For me, gCloud has to work and will work, although there will be bumps, because it’s actually the foundation of the second area of reinvention:
2. Rejection of the old SI model – Service Integration
If the 90s outsourcing model was single, behemoth suppliers providing all of a department’s IT (though not meeting all of its needs, often despite best efforts all round), the model in the 10s is quite different.
The model adopted for the 10s, by at least a few departments, with MoD and MoJ leading the way, is to buy services from as many as a dozen providers – and to have a single Service (as opposed to System) Integrator bring all of those together. The twist is that the SI no longer owns all of the other contracts – instead, they’re all owned by the department. This has the advantages of eliminating the margin on margin common in traditional prime contracts as well as allowing the customer to pick what might have been called “best of breed” (back to the future!) suppliers (either through competition or from a framework) for each strand. On the downside, the procurement process becomes significantly more complicated and operating the end to end service becomes even trickier – liabilities, ownership and incentives will be murkier than government is used to.
To pull this off requires far greater client side expertise than either currently exists or has been thought about. In a world of reducing budgets and prohibitions against consultants and contractors, sourcing enough people within the public sector (or transferring them in) will be a huge challenge. Those people certainly exist internally but, with going on for £40bn of contracts (at original signing value) coming up for renewal in the next 5 years, I very much doubt that there are enough to handle the forward workload. In the 00s, one consequence of the multi-billion pound contract era was that government directly drove inflation through its own buying process. That could happen again and so needs to be carefully planned for – by managing the timetable for procurements, by being clear with suppliers about where frameworks will be used and where procurements will take place, and by being clear about the baseline that will be in place at the point of transition (departments are not standing still, after all – they are busy virtualising their servers, hopefully looking at buying gCloud services and bringing their costs down in line with overall targets). Any procurement underway in the next 2-4 years will be against a very dynamic departmental baseline.
This, then, is a riskier area of reinvention than gCloud. With gCloud, departments are buying in to a service for a few months and maybe for only a small part of their organisation. They are specifically able to try things out – one project team or one function – before committing. And even if they commit, it might be that it doesn’t work and they have to pull out (see the point above about reinvention bringing the risk of failure).
But the Service Integration model is trying several new things at once, and for longer periods. And, of course, wrapped in these contracts are all of the legacy applications and services (many of which are much the same as they were in the 80s). Failure with these is certainly bigger than with gCloud but of a smaller likelihood than with the old model – if a single provider from within the group of 6, 9 or 12 struggles, then they can be replaced (not, perhaps, by one of the others – that model hasn’t exactly worked out well in the NPfIT/CfH world). The risk may, in fact, be all on the buying process – are the resources there to package all of these services up intelligently and effectively, to establish a great competition amongst suppliers (when many other competitions are likely to be going on at the same time, forcing suppliers to pick where they field their best teams) and to ensure that all of the liabilities, incentives, controls, processes etc work across multiple providers of services so that the user sees things just “working”?
These models will also end up working but I think there will be significant bumps along the way – during procurement, transition and operations alike. They should also work better than the current model, but not immediately – I suspect it will take 2-3 years to bed the new model in properly.
3. Relentless Commoditisation
Typically, government believed it was special and so developed everything in a “for government” way – it had one of everything and everything at least once. Huge bespoke estates resulted, with everything the department needed bought by the department (or the department’s supplier). That resulted in huge build costs and even larger maintenance costs. In the last few years that has changed and, in the last 18 months, that change has dramatically accelerated.
Wherever possible, frameworks are being set up for networks, document archives, IT equipment, cloud services and all sorts of other things. Looking at the GPS list of frameworks just now, I counted 24 for IT (including IT consultancy) and a further 16 for software, out of over 600 frameworks in total. I can only see that number going up as government tries to raise its “managed spend” figure from its current level, which I believe is somewhere over £1bn, to far more than that.
Not all frameworks are created equal of course, nor are they used equally. Within the Service Integration model above, departments, such as MoD, have already made clear that they will use frameworks where it makes sense to. Others, like MoJ, have implied that they will create frameworks within their new model so that other departments can use their services (e.g. perhaps for hosting). It’s possible, likely even, that frameworks will be the route by which all equipment is bought even within, say, a hosting service – in which case, suppliers with high degrees of vertical integration (like HP & Fujitsu perhaps) may not, having won, say, a hosting contract, be able to fill that data centre with their own hardware unless it is proven to be best value in a competition on a framework.
4. The One That Isn’t Any Of The Above
Perhaps the biggest reinvention is going on within the Government Digital Service. There, they aren’t using frameworks (at least so far), aren’t looking for a service integrator or for a dozen suppliers to bring together a service under various contracts, and they aren’t doing some of the other things that the Government’s own ICT strategy says should be done, such as using COTS or making the most of small businesses. They are, though, rebuilding – from scratch and with in-house, largely permanent, staff – government’s most visited website (direct.gov.uk to be renamed, or re-renamed, http://www.gov.uk). If the rebuild succeeds then it’s likely both visitor counts and visit durations will go up – if it’s easier to find everything you need, you may stick around longer to see what else is there and, if transactions are hosted on the site, then completing transactions will add to the stay time. It could be very, very big.
Building websites and services from scratch is, of course, relatively common. After all, there was no COTS for Facebook or Twitter. It isn’t, though, common in the public sector (not since a couple of bright people in CITU did it in the mid-90s anyway). It also doesn’t fit well with the public sector’s current business model, whether IT or operations. I’d say that’s a pretty serious reinvention.
These are early days. The “beta” of http://www.gov.uk is, at best, a proof of concept. It’s got some impressive capability already and the rapid iteration and incrementing of features is exciting to watch. But it’s also got a huge to-do list ahead of it. One that, in less than a year, must allow direct.gov to be turned off. The lessons to be learned by the .gov.uk team are doubtless very interesting (and they’re learning most of them in the full glare of publicity); I hope that they are not learning every lesson from scratch, although given the amount of from-scratch building, I suspect they are encountering new problems every day that many others have already encountered, inside and outside of government.
Having your team in-house is a fascinating experiment though – your only wasted cost is that of opportunity, i.e. would it have been better to have the team work on A instead of B, or did you not even notice C and what it could have done for you? It’s still real money of course – and based on the team size, it might be quite a lot of money. You can achieve speeds and quality of delivery that I don’t think can be achieved by all except the most integrated of supplier and customer teams. But you can also spin wheels because the cost is essentially sunk. Every working day you burn £x, and measuring whether you have achieved the same in value is hard – especially when your scope is shifting and evolving within the agile methodology. Also, big companies manage turnover on an everyday basis – they draft in new people, have a constant stream of more junior employees who can work for lower rates and who can progress up the ladder before moving to other clients, and can bring in additional bodies for particularly tough deadlines – all of that is hard with an in-house team (especially a small one).
Predicting the future for GDS and .gov.uk is hard. I suspect a transition to an alternative model in the future – perhaps to a series of suppliers with GDS holding the integration role or, more radically, the creation of a mutual where GDS gets some investment and becomes a supplier to government itself, so adopting a mix of commercial disciplines with at least partial employee ownership. That would free them up to compete for other business and would put all of the incentives in the right place, as well as provide a more structured career path for those in the team. It would almost be a return to the days of CITU.
This is potentially the riskiest reinvention of them all. I hope that the risks don’t come home and that this programme succeeds enormously, but I can’t help but feel nervous for its prospects. Had Facebook disappeared during development, or had it suffered a few outages in its first few months, few would have noticed or cared. But when you take on such a big and visible project with an entirely new approach, it could hardly be more challenging.
5. The Underpinning Reinvention
All of these changes are underpinned by an openness and transparency that is incredibly refreshing. Seeing new starters in GDS blog about what it’s like to work there, and very senior people across government blog, tweet and respond to comments, has opened up the workings of government – my guess is that the regular audience consists of a relatively small number of geeks, but the occasional story bursts into the mainstream press with the message unchanged. We have done betas and pilots and test versions in UK government before, but never quite in this way.
As I said at the beginning, with reinvention comes risk. With risk comes the potential for failure. With failure comes interrogation and criticism. The good news is, I think, that all of the interrogation and criticism will have been done on the inside and posted on blogs long before that point.