Is Agile Going To Be Beta?

The blog post announcing the formal launch of the Government Digital Service closed with this paragraph:

Next year we look forward to a faster pace for delivery. While our roadmap is not finalised, and indeed will never be given the agility to which we aspire, we can look forward to some major releases.  

Since reading it I’ve been wrestling with what I think of the approach that GDS is taking. Several times (at least) I’ve decided that I agree with it; several other times I’ve thought it wrong. The issue I keep debating stays the same: whether it’s right or wrong for government to build a team in-house to design, develop, deliver and operate a system.

The early deliveries from GDS are interesting markers but not definitive about capability. E-petitions was fast and did what it was supposed to (but there was already a perfectly adequate solution in place; replacing it doesn’t seem to have achieved anything new). The single domain site looked nice and stirred up some interesting debates (accessibility, IE6 and so on) which needed to be had (I’m looking forward to the debate on whether Welsh will be supported and at what cost) but, to the casual viewer, it was just a website (to the techs I spoke to, it was just a custom-built website, and many wondered why anyone had bothered to build from scratch again). And the identity programme has been relaunched (early days yet but my fingers are firmly crossed). It’s good and right, of course, that if you’re going to try a new approach you don’t take on something big for your first delivery.

The thing that I debate is whether, in an environment where the strategy is reuse, commercial products, British SMEs and frameworks like gCloud (plus PSN and others), part of government should be building a team to deliver a website that will be almost the entire population’s view of government online.

An argument for doing it this way might be that the single domain game plan is so different from what the market already has that custom building is the only option. That would, indeed, be the argument that I used in 2002 when we built a new platform the first time round. We took a different approach then – we built a team to architect the strategy and then we partnered with commercial companies to design, deliver and support the site. That didn’t mean that I didn’t get called at 3am if there was a problem, but it did mean that I wasn’t the first person to be called (the supplier’s own escalation chain made sure I was called when it was necessary). We ended up having to build a substantial service management wrapper around our suppliers though – some cared more than others about customers, some cared more than others about paying away service credits and some didn’t seem to care about either.

Ultimately, we had top designers/architects on the team and a world-class service management team – and our suppliers fielded top class people in the design and architecture space too. But we didn’t have a team coding, building user interfaces, handling style sheets, deploying code on servers, thinking about performance optimisation, worrying about how much RAM was needed on a server or whether we had enough disk space allocated and so on. We did, for quite a while, have a great team building trial versions of applications that we thought might be useful in the future, but everything that went into production was built and managed by suppliers under contract.

Our contracts were not of the traditional “come up with all the requirements, fix the price and wait for it all to go wrong” variety – we managed our projects in tight phases with agreed deliverables (and agreed assumptions) and we built in flexibility to change direction if an assumption proved wrong or if we needed to trade features coming up to a deadline.  In an early 2000s sort of way, that was probably agile or something like it.

GDS, of course, are huge proponents of the Agile process.  Having your own team isn’t a pre-requisite of being agile but it probably makes it easier – if you can switch direction and take your team with you without thinking about deliverables in a contract you can probably move faster (of course, if you have to switch direction too often, you probably should spend more time up front thinking about what it is that you want to get done first).  Brent Hoberman at Lastminute told me once, whilst he was still running the show there, that he revelled in his ability to walk out onto the floor and have the developers change something – if the sales numbers had dropped, for instance, and he needed to raise the profile of a particular promotion.   You can, of course, do the same with a team provided by a supplier – a lot depends on how you structure the contract and how much resource you have available for “pool” work or support work.

What happens, I wonder, when the site is live (I was going to write “done” but I realise that the paragraph I highlighted from the GDS blog suggests that “done” is going to be all too subjective)? Does GDS scale down its team and maintain a small dev/fix team and keep service management in-house? Does it move the operation to a supplier (transitioning live code developed by one team to another team can be very challenging – especially if the documentation and comments are less than complete)? Does it keep the team large because, actually, it’s true that a site as vast as this can’t ever actually be done and there will always be new features to add, things to tweak or things to replace because they don’t quite work how it was all imagined?

In the 1990s, government started outsourcing its IT. I’d like to think, had I been there at the time, that I would have argued for keeping it in-house and for outsourcing the handling of all of the paper processing (a set of processes with a low rate of change that cried out for being made more efficient – and that could have been made so with better control of the IT). Since then, almost every central government department, and something like 40% of local authorities, have moved their IT to an outside supplier. They made that choice for a variety of reasons – some would have looked at their own organisation and decided it was too small to operate at an efficient price, some would have found suppliers able to stay up to date with new technology, others would have looked at what everyone else was doing and copied them, some would have wondered how to build a career structure for a role that increasingly looked like a dead end within their own organisation and so on.

So is the GDS strategy a reversal of 20 years of outsourcing? Is it a one-off aberration that will stand or fall based on the management team in place today? Is it an experiment that could lead to new approaches right across government in certain niche areas? Is it an attempt to do something new, where the only way to do that was to build a team and, when it’s no longer new, the team will eventually disband? Is it a spot of empire-building? I simply don’t know.

The degree of transparency that GDS have opened up about their development process (and the debates therein) is very much to be applauded.  No government team has said quite so much about what they are up to as this one, I’m sure.

I’d like, then, to see the same transparency about what the roadmap (incomplete as it may be) looks like – when does alpha move to beta, when does beta move to production (indeed, is beta going to be production itself), when will the service move to a UK host and to which one, what happens to service management?  What will the team look like in six months, a year, two years? Sometimes the broadcasts need to step out of the weeds and show us the whole landscape so that we can appreciate the true majesty.

gCloud – Why the long face?

A horse walks into a bar … no, wait … why the short contract term?

The comment below was posted yesterday by someone called Michael; it’s a question I’ve heard a lot – usually from the larger suppliers – so I wanted to raise the issue in a post rather than hide it in the comments:

Do you have any comments on viability of contracts for less than one year? In particular, am I right to think that part of the intention is to avoid the burdens of full EU tendering by keeping contract terms shorter, so reducing cost so contracts do not fall into scope of full tendering rules (before you even get to cost competition through the increased transparency of G-cloud etc)?

The context for this is interesting – at least one major department is grappling with contract duration right now. The one I’m thinking of found it was unable to extend an existing contract and, with the proposed three year replacement contract, only the incumbent was prepared to bid. The others couldn’t, they say, figure out a way to take over the existing service, deliver the necessary improvements and cost savings and still make money.

So is gCloud wrong in its approach of 12 month contracts?

Let me deal with the EU point first. There is no “avoiding the burdens” of EU tendering with the gCloud procurement. By setting this up as a framework, the team have created an entirely legal and comprehensive vehicle for all public sector authorities to buy cloud services. Sure, they’ve done it on an accelerated timetable (many are hoping that this marks the start of a major trend), but the point of a framework is to allow the purchase of services up to the value of the framework (£60m in this case) without substantial modification of the terms and conditions (that is, you buy what’s for sale on the terms and conditions posted and don’t negotiate variations). I doubt they’ll even get close to £60m in this round, but there’s no limit on the value of the service that can be bought other than that.

Secondly, the aim with gCloud is to show the public sector that there is a better way to buy its IT. In this first iteration it’s hoped that lots of government bodies will give cloud services a try – maybe they will buy development and test environments, switch their e-mail, try out collaboration tools, seek consultancy to build a roadmap to cloud, host their website (assuming it isn’t captured by the single domain work) and so on. This is not, yet anyway, about transitioning major legacy applications into the cloud, but it is about testing, robustly, a new model.

Thirdly, this is definitely a first of its kind – perhaps not an alpha but a beta. I think the team want to try it out fast, see what does and doesn’t work and then complete the next iteration quickly. If it didn’t make me feel ill, I’d probably want to call it agile procurement. Hopefully those who worked hard to get on this version of the framework won’t have to do much to stay on the next version – and those who didn’t make it (because they didn’t qualify, didn’t notice it or didn’t think it was real – and there were quite a few in the latter category) will work to qualify then. Wrapped up in this is the fast moving nature of cloud services – as Chris Chant has said, the iPad didn’t exist 2 years ago and iCloud didn’t exist 6 months ago; why build a framework that is fixed for a long period when within months there will be not only new services and new companies but likely entirely new categories of things that government could buy?

There are certainly downsides to such a short contract period – any department that moves its entire email service (assuming they move all the historic mail too) will not look forward to doing it again only 12 months later. But during that period they will have been able to take advantage of a new service delivered at a much lower cost than their incumbent can deliver it for, they will have learned a lot, and the framework contains provisions for reducing the burden of migration.

This isn’t an easy process – right now I expect the gCloud team are buried in evaluating thousands of services, and that’s before they get into the accreditation process – but, as I’ve said before, the transparency and competition it brings will absolutely mark a significant change in the way government buys IT.

gCloud – the logical extension

Imagine you’re a CIO in a major department and you are being asked by your business to set up a new delivery programme. The first thing you’ll need for the delivery team is probably development and test environments (ok, I’m skipping a few stages … let’s assume that requirements, governance and so on are ready). The incumbent IT provider, in the traditional model, will go out and buy some hardware and software – probably hundreds of thousands of pounds’ worth. In the pre-gCloud model they might look at their own cloud capability and propose a pay as you go service – which might be £100k. With gCloud, the CIO now has options – they can look at what the market price is for that service, and it might be £10k. The CIO might reasonably achieve an order of magnitude saving for the sake of a couple of hours looking at what’s available. Without gCloud that isn’t easy, or even possible in some cases.

Once gCloud is in place, I expect that many government CIOs will benchmark everything that they do against the market that gCloud establishes. Incumbents will be under pressure to reduce their costs significantly and, when they can’t, the CIO will buy from a supplier on the gCloud framework. That brings with it all kinds of new problems – integrating and managing multiple suppliers, handling single sign on, dealing with bandwidth, upgrading browsers and handling the relationship with the incumbent, to name a few. But all those issues were coming anyway. This way they get confronted faster.

There – I hope that’s answered the question, Michael.

gCloud 2012 … What Now?

In the run up to Xmas several hundred suppliers were finalising their submissions for the gCloud framework. Many, perhaps most, had almost certainly never dealt with government before and had certainly not tried to qualify for a framework.  gCloud’s first achievement, then, is to break the mould and really open the door to government IT contracts (in November, the Government Procurement Service announced that 44% of contracts were now being awarded to small and medium enterprises, aka SMEs.  That’s amazing, really, but I suspect few of those contracts were IT related). 
Twitter was alive with just how many expressions of interest had been made in gCloud. Some 580 last I saw, perhaps more by the end. Maybe 50-60% of those will actually submit a bid. gCloud’s second achievement will be such a high conversion rate. (I’m guessing.)
Around the end of this month, UK government could have access to 250-300 companies offering cloud services. Submissions that I saw often included ten or more separate offers across the lots. So within a few weeks, there could be 2,000 or even 3,000 separate cloud services available (some of those will doubtless be the same service offered at IL0, IL2 and IL3). 
gCloud’s real achievement is this: completely open visibility of every service offered by those 300 suppliers down to individual prices.  Not just open to customers, but open to suppliers.  
For IaaS it will be possible to compare the price of a virtual machine from, probably, dozens of suppliers. For Lot 4, day rates for staff in every company will be visible. Radical transparency for sure. In my time in government it was impossible to find out what any other department actually paid for its IT, let alone to see what the market pricing as a whole was. Just as eBay led to lower prices for commodity items, gCloud prices will be revised downwards as suppliers adjust to this new kind of competition – and new, differentiated services (whether through functionality, ease of transition, better integration or whatever) will appear that can command higher prices. 
Hours will be spent, by customers and competitors alike, looking at what is there. Forget Facebook, the new time sink will be gCloud’s pricing catalogue. 
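The kind of browsing described above is, in essence, a filter-and-sort exercise over a published price list. A minimal sketch of what that comparison might look like – with entirely invented supplier names and prices, since the real catalogue wasn’t yet published – could be:

```python
# Hypothetical sketch of comparing offers in a published pricing catalogue.
# All supplier names, services and prices below are invented for illustration.

catalogue = [
    {"supplier": "Supplier A", "service": "virtual machine", "price_per_day": 3.20},
    {"supplier": "Supplier B", "service": "virtual machine", "price_per_day": 2.75},
    {"supplier": "Supplier C", "service": "virtual machine", "price_per_day": 4.10},
    {"supplier": "Supplier B", "service": "email (per mailbox)", "price_per_day": 0.15},
]

def cheapest(catalogue, service):
    """Return the lowest-priced offer for a given service, or None if absent."""
    offers = [o for o in catalogue if o["service"] == service]
    return min(offers, key=lambda o: o["price_per_day"]) if offers else None

best = cheapest(catalogue, "virtual machine")
print(best["supplier"], best["price_per_day"])  # Supplier B 2.75
```

The point isn’t the code – it’s that once every supplier’s price sits in one open catalogue, a comparison that used to take a procurement exercise takes a one-line query, for buyers and competitors alike.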
What needs to happen next?
– The Government Digital Service were, last I looked, hosting with, presumably, Amazon Web Services. Within a month, they should move it to a company on the gCloud framework. If Amazon are on there, they should skip the easy decision and move it to a UK company hosting in the UK. 
– A dozen departments should engage a dozen different companies to produce cloud strategies that, having inventoried what the department has in place, provide a route map to the far cheaper IT that will result from this radical transparency. The departments should publish those studies to help everyone else see what is possible. 
– One small, one medium and one large department should buy its email service from a gCloud company before the end of June 2012. 
– Five local authorities and one central government department should move 80% (by spend) of their IT to be provisioned by the gCloud framework before the end of this iteration.
Achieving just those four targets would be an incredible success for this first iteration of the framework. I suspect that whilst success won’t look quite like this, by the end of 2012 enough will have been done for outright victory to be claimed and for the second iteration of the gCloud framework (due sometime in the second half, I imagine) to receive even greater support both from customers and new suppliers – especially if the process for retaining existing suppliers is simplified. 
Underpinning all of this is the need for government departments to detail how they will buy their IT over the next few years.  When Francis Maude announced that some £25.8bn of IT contracts would be re-procured over the 5 years from 2012, he didn’t say what he wanted the new price of those contracts to be. Certainly not £25bn. Perhaps only £12bn? Maybe less.  gCloud is definitely part of that, but isn’t all of it – at least not in this iteration.  
The gCloud team have started something here. Predicting how it will look in 10 years, or even 5, is difficult of course. But one thing is for sure: markets rarely turn back from transparency once it has been achieved. 
One question that will be increasingly asked is, instead of “why gCloud?” which has been the topic for much of the last 12 months, “why not gCloud?”.  Why not indeed.