GDS Isn’t Working – Part 4 (Verify)

The conclusion to Part 3 (The Reboot) was:

  • Verify – It’s time to be brave and ignore sunk costs (investment to date and contractual exit costs if any) and let this one go.  It hasn’t achieved any of the plans that were set out for it and it isn’t magically going to get to 20m users in the next couple of years, least of all if HMRC are going their own way.  The real reason for letting it go, though, is that it doesn’t solve the real problem – identity is multi-faceted. I’m me, but I do my mother’s tax return, but appoint my accountant to do mine, but I work for a company and I do their payroll, and I counter-sign the VAT return that is prepared by someone else, and I act under a power of attorney for my blind father.  Taking a slice of that isn’t helping.  Having many systems that each do a piece of that is as far from handling user needs as you can get.  Driving take up by having a lower burden of proof isn’t useful either – ask the Tax Credits folks.  HMRC are, by far, the biggest user of the Gateway.  They need citizen and business (big business, sole trader, small company) capability.  Let them take the lead – they did on the Gateway and that worked out well – and put support around them to help ensure it meets the wider needs.

Instead, GDS appear to be doubling down, based on this article in Computer Weekly:

  • GDS speakers at the event encouraged suppliers to use the GaaP tools in their own products, in the hope of widening their use. However, according to guests at the event that Computer Weekly talked to – who wished to remain anonymous due to their ongoing relationships with GDS – GDS was unable to give any guarantees around support or service levels.
  • GDS has now developed a new feature for Verify that allows “level of assurance 1” (LOA1) – a reduced level of verification that is effectively a straightforward user login and password system, which offers “minimal confidence in the asserted identity” of users for low-risk transactions. In effect, LOA1 means the government service trusts the user to verify their own identity.
  • The government has committed to having 25 million users of Verify by 2020, and offering LOA1 is seen as a key step in widening the adoption of the service to meet this target.
This is, though, to miss the point of “What is Verify for?”:

  • The goal isn’t to have 25 million users.  That’s a metric from 1999, when eyeballs were all that mattered.  25 million users who don’t access services, or who sign up for one service and never use another, aren’t a measure of relevance.
  • A government authentication platform is instead for:
    • Giving its users a secure, trusted way of accessing information that government holds about them and allowing them to update it, provide new items and interact with government processes
    • Allowing users to act as themselves as well as representatives of others (corporate and personal) with the assurance that there is proper authorisation in place from all necessary parties
    • Putting sufficient protection in the way so as to ensure that my data and interactions cannot be accessed or carried out by people who aren’t me.  In other words, “I am who I say I am” and, by definition, no one else is
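The delegation problem described above – acting as yourself and as an authorised representative of others – can be sketched in miniature. This is a purely illustrative model, not how the Gateway or Verify actually store relationships: all names, the relationship records and the `can_act` check are hypothetical.

```python
# Hypothetical relationship records: (principal, agent, service).
# In a real platform each record would require authorisation from
# both parties before it took effect.
RELATIONSHIPS = {
    ("alice", "alice", "self-assessment"),   # Alice acting as herself
    ("mother", "alice", "self-assessment"),  # Alice files her mother's return
    ("acme-ltd", "alice", "paye"),           # Alice runs Acme Ltd's payroll
}

def can_act(agent: str, principal: str, service: str) -> bool:
    """True if `agent` may use `service` on behalf of `principal`."""
    return (principal, agent, service) in RELATIONSHIPS

# Alice may file her own and her mother's Self Assessment,
# but not Acme Ltd's (she is only authorised for its PAYE).
assert can_act("alice", "alice", "self-assessment")
assert can_act("alice", "mother", "self-assessment")
assert not can_act("alice", "acme-ltd", "self-assessment")
```

The point of the sketch is that identity and authorisation are separate questions: a single login can stand behind many distinct, individually authorised relationships, which is exactly the slice of the problem a login-only system leaves unsolved.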
What then, if we took away the numbers and the arbitrary measures and said, instead, that the real purpose is to:
  • Create an environment where a first time user, someone who has had no meaningful interaction with government before, is able to transact online and need never use offline processes from that moment on
  • Sixteen year olds would begin their online interaction with government by getting their National Insurance numbers online
  • They would go on to apply for their student loan a couple of years later
  • With their first job they would receive their PAYE information and perhaps claim some benefits
  • Perhaps they would be handling PAYE, or VAT, or CT for their own employer
  • Health information and records would be available to the right people and would move with them as they moved jobs and locations
  • Perhaps they would be looking at health information and records for others
  • They would see the impact of pension contributions and understand the impact of changes in taxation
  • Perhaps they would be helping other people figure out their pension contributions and entitlements
  • They might decide whether they can afford an ISA this year
  • In time some would pay their Self Assessment this way
  • Or maybe they would be completing Self Assessments for others
A 2002 Slide

Instead of spot-creating transactions that are nearby or easy, we would seek to change the entire experience for someone who has never dealt with government – they would never know that it had been broken for years, that paper forms were the norm for many, or that in 2010 people had to go from department to department to get what they needed.  They would take to this the way a baby learns to swipe an iPad screen – it would never occur to them that a magazine doesn’t work the same way.

Along the way, those who were at later stages of life would be encouraged to make the move online, joining at whatever stage of the journey made sense for them.

This wouldn’t be about transformation – the bulk of the users wouldn’t know what it was like before.  This would just be “the way government is”, the way it’s supposed to be.  Yes, in the background there would have been re-engineering (not, please, transformation), but all the user would see is the way it worked, fluidly, consistently and clearly, in their language, the language of the user.

Progress would no longer be about made up numbers, but about the richness of the interaction, the degree to which we were able to steer people away from paper and offline channels, and the success with which we met user needs.  The measure would be simply that they had no need, ever, to go offline.

Verify isn’t the way into this journey.  Verify started out trying to solve a different problem.  It isn’t seen, and wasn’t conceived, as part of a cohesive whole where the real aim is to shift interaction from offline to online.  In its current form, it’s on life support, being kept alive only because there’s a reluctance to deal with the sunk costs – the undoubtedly huge effort (money and time from good people) it’s taken to get here.  But it’s a “you can’t get there from here” problem. And when that’s the case … you have to be brave and stop digging.

If my original take on “What is GDS for” was:

GDS is for facilitating the re-engineering of the way government does business – changing from the traditional, departmentally-led silos and individual forms to joined-up, proactive, thought-through interactions that range widely across government.  It is not, in my view, about controlling, stopping, writing code or religious/philosophical debates about what’s right.  Its job is to remove the obstacles that stop government from championing the user cause.

Then what if GDS took the vanguard in moving government to cater for the whole user journey, from a user’s first interaction to their last.  A focused programme of making an online government available to everyone.  A way of assessing that “I am who I say I am” is an essential part of that – and starting with a 16 year old with minimal footprint is going to be challenging, but it is surely an essential part of making this work.  This would be a visionary challenge – something that could be laid out step by step, month to month, in partnership with the key departments.

It can be dull to look backwards, but sometimes we have to, so that we move forward sensibly.  The picture above shows the approach we planned at the Inland Revenue a long time ago.  We would take on three parallel streams of work – (1) move forms online, (2) join up with some other departments to create something new and (3) put together a full vertical slice that was entirely online and extend that – we were going to start with a company because our thinking was that they would move online first (this was in 2000): register the company, apply for VAT and tax status, send in returns, add employees, create pensions etc.

It feels like we’ve lost that vision and, instead, are creating ad hoc transactions based on departmental readiness, budget and willingness to play.  That’s about as far away from user needs as I can imagine being.

As a post-script, I was intrigued by this line in the Computer Weekly report:

GDS was unable to give any guarantees around support or service levels.

On the face of it, it’s true.  GDS is part of the Cabinet Office and so can’t issue contracts to third parties where it might incur penalties for non-delivery.  But if others are to invest and put their own customer relationships on the line, this is hardly a user needs led conversation.  Back in 2004 we spent some time looking at legal vehicles – trading funds, agencies, JVs, spin-offs – and there are lots of options, some that can be reached quite quickly.

My fundamental point, though, is that GDS should be facilitating the re-engineering of government, helping departments and holding them to account for their promises, not trying to replace the private sector, or step fully into the service delivery chain – least of all if the next step in the delivery promise is “you will have to take our word for it.”

GDS Isn’t Working – Part 3 (The Reboot)

What is GDS for?  It’s a question that should be asked at a fundamental level at least every year in an organisation that set out to be agile, iterative and user led.  It’s easy to be superficial when asking such a seemingly simple question.  People inside the organisation are afraid to ask it – doubtless they’re busy being busy at what they’re doing.  They’re afraid of the consequences.  They don’t want to touch the question in case it bites – the electric fence that prevents introspection and, perhaps more importantly, outrospection.

There are several reasons why this question should be asked, but one that I would take as important, right now, is because GDS don’t know themselves, as the NAO highlighted recently.

“GDS has found it difficult to redefine its role as it has grown … initially, GDS supported exemplars of digital transformation … major transformations have had only mixed success … GDS has not sustained its framework of standards and guidance … roles and responsibilities are evolving … it is not yet clear what role GDS will play [in relation to transformation]”

If there was ever a time to ask “What is GDS for?”, it’s now … to help understand these numbers:

The budget is £150m in 16/17 and 17/18 (though it falls over coming years, to £77m in 19/20) and GDS has around 850 staff today (again, falling to 780 by 19/20).

Let me ask again, what is GDS for?

When those 850 staff bounce into work every morning, what is it that they are looking forward to doing?  What user needs are they going to address?  How will they know that they have been successful?  How will the rest of us know?

Given a budget, Parkinson’s Law of Government says the department will expand to absorb that budget.

GDS has demonstrated this law in action:

  • The exemplars have finished, with varying degrees of success.  There are no further exemplars planned.  The organisation has only grown.
  • Major digital projects have stumbled badly and, in some cases, failed entirely, for instance:
    • The RPA Common Agricultural Policy (CAP) programme, specifically re-engineered by GDS early in its life and then directly overseen by senior staff, failed to deliver.  The lessons from the previous RPA project, 7 years earlier, were not learned and the result was the same – a system that was late, with high disallowance costs and a poor experience for the real users, the farmers.
    • Digital Borders is progressing slowly at best, even allowing for the tuned and optimistic language in the IPA report.  Seven years after the last programme was terminated in difficult circumstances, the first rollout of new capability – less aggressive than planned – is starting now.
  • Nearly 5 years after DWP were ready to complete their identity procurement, and around three years since its replacement, Verify, designed to save millions, was about to enter public Beta, the Government Gateway is still there, 16 years old and looking not a day older than it did in 2006 when the UI was last refreshed.  Verify has garnered around 1.4m users, a very small fraction of even Self Assessment users, let alone overall Gateway users.
    • The Government Gateway is slated for replacement soon, but Verify is clearly not going to replace it – it doesn’t handle transaction throughput and validation, it doesn’t handle nomination (e.g. please let my accountant handle my Self Assessment) and, most obviously, it doesn’t handle business identity.  Given the vision that we laid down for the Gateway and all of the work that was done to lay the foundations for a long term programme that would support all aspects of identity management, Verify is nothing short of a fiasco, as demonstrated by the increasingly vocal war about its future, with HMRC seemingly building its own identity platform.  Others far more able than me, including Jerry Fishenden and David Moss have exposed its flaws, muddled thinking and the triumph of hope over ability.
    • Even now, instead of bringing departmental transactions on board, addressing true user needs and massively improving completion rate from its current low of less than 50%, the Verify team are talking up their prospects of getting 20m users by lowering identity standards and getting the private sector on board.  They blame lack of take up to date on slow delivery of digital services by departments, according to the IPA report.
  • GOV.UK, whilst a triumphal demonstration of political will to drive consolidation and a far greater achievement in presenting a joined up view of government to the citizen than anything before it, is still a patchy consolidation: formats and styles change as you move from level to level, departmental websites still have their own separate space (compromising, as soon as you arrive in a departmental domain, the sense of consolidation), PDFs abound and, of course, it lacks major transactions (and those that are available often have a very disjointed journey – follow the route to filing a VAT return, for instance).  The enormous early progress seems to have lapsed into iterative tinkering.
  • Alongside all of that we have the latest in a long series of transformation strategies. For many months the strapline on this blog read “transforming government is like trying to relocate a cemetery, you can’t expect the residents to help”.  Since then I’ve revised my view and now believe, firmly, that in any effort to achieve transformation, government will remain the catalyst, in the true chemical sense of the word.  This strategy says that by 2020 “we will”
    • design and deliver joined-up, end-to-end services
    • deliver the major transformation programmes
    • establish a whole-government approach to transformation, laying the ground for broader transformation across the public sector
  • We all want to believe those words.  We know that these have been the goals for years, decades even.  We know that little has really been achieved.  And yet here we are, after 7 years of GDS, being asked to believe that transformation can be achieved in the next 3.  There is a Jerry Maguire feeling to this, not so much “show me the money” as “show me the plan”
  • And, lastly, we have Government as a Platform.  No one was ever quite sure what it was.  It might include the Notifications and Payments service – oddly, two services that were available on the Gateway in 2002/3, but that were turned off for some reason.
So why not ask “What is GDS for?” and use the thinking generated by that question to restructure and reboot GDS.  Any reboot requires a shutdown, of course, and some elements of GDS’s current work will, as a result of the introspection, close down.

If I were asked to answer the question, I would suggest:

GDS is for facilitating the re-engineering of the way government does business – changing from the traditional, departmentally-led silos and individual forms to joined-up, proactive, thought-through interactions that range widely across government.  It is not, in my view, about controlling, stopping, writing code or religious/philosophical debates about what’s right.  Its job is to remove the obstacles that stop government from championing the user cause.

Within that the main jobs are:
  • Standards and guidelines for IT across government.  This could get dangerously out of hand but, as the NAO noted, GDS has not kept its standards up to date.  Some key areas:
    • Data formats – messaging standards to allow full interoperability between government services and out to third parties through APIs.  In 2000, we called this govtalk and it worked well
    • Architecture – eventually, government IT will want to converge on a common architecture.  We are likely decades away from that on the basis it’s hardly started and replacing some of the existing systems will take more money than is available, let alone increased capacity across the user and technology community at a time when they have plenty going on.  New projects, though, should be set on a path to convergence wherever possible – that doesn’t mean getting religious about open source, but it does mean being clear about what products work and what doesn’t, how interactions should be managed and how we streamline the IT estate, improve resilience and reliability and reduce overall cost.  This team will show what the art of the possible is with small proofs of concept that can be developed by departments
    • Common component planning – all the way back in 2003 I published a first take on what that could look like.  It’s not the answer, but it’s a start.  I’m a strong believer in the underlying principles of Government as a Platform – there are some components that government doesn’t need more than one of and some that it needs just a few of.  They need to be in place before anyone can integrate with them – promising to deliver and then having a queue of projects held up by their non-availability won’t work.  And they don’t have to be delivered centrally, but they do have to take into account wider requirements than just those of whoever built them
  • GOV.UK publishing team – joined up content will best come from the centre.  This team will control what to publish and how to publish it, and will ensure consistency across GOV.UK.  They will rationalise the content that is there, doing what Martha originally set out – kill or cure – to make sure that the user is getting what they need
  • Agile and user needs – perhaps the single largest achievement of GDS so far,  far beyond consolidating websites for me, is getting government to recognise that there are many ways to deliver IT and that taking a user-led approach is an essential part of any of them.  I’m not wedded to agile or any other methodology, but there’s a strong argument for a central team who can coach departments through this and checkpoint with them to see how they are doing, refresh knowledge and transfer skills so that everyone isn’t learning the same lessons over and over again
  • Spending controls – a team of elite people who know how to get inside the biggest projects, not waste time on the small ones, and understand what’s being built and why and who can help design the solution at a lower cost than proposed, who can help create the hooks for current and/or future common components and who can help negotiate better deals.  These folks should be the best that can be found – a SWAT team sent to work on mission critical projects.  Their job will be to help drive delivery, not slow it down through interminable bureaucracy and arguments about the philosophy of open source.
  • Transactions team – people who go beyond the pure publishing role into understanding how to hook users into a transaction and drive completion through smart design, innate user understanding and the ability to partner with departments, not preach to them from some remote ivory tower.  These folks won’t make promises they can’t keep, they will work closely with departments to move transactions that are offline today to the online world, designing them to foster high take up rates and better service for users.  This team is the future of government – they will be a mix of people who can help rethink policy and legislation, service designers, UI folks who know how to put something slick together and technologists who can understand how to manage load and resilience and integrate with third parties inside and outside of government.
  • Project managers – a mixed team who know how to deliver small and large projects, who are comfortable managing all aspects of delivery, can work with users as well as departments and suppliers and who understand the tension that is always there between waiting and shipping.
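The data formats point above is easiest to see in miniature: interoperability comes from every department agreeing the envelope of a message before anyone builds. The sketch below is purely illustrative – the field names, the schema and the `validate` function are hypothetical, a stand-in for the kind of agreed GovTalk-style message definition, not an actual government schema:

```python
import json

# Hypothetical shared message envelope that any two departments agree on.
# Each field maps to the type it must carry.
SCHEMA = {
    "sender": str,        # originating department, e.g. "HMRC"
    "recipient": str,     # receiving department, e.g. "DWP"
    "transaction": str,   # agreed transaction name, e.g. "vat-return"
    "payload": dict,      # transaction-specific body
}

def validate(message: dict) -> list:
    """Return a list of problems; an empty list means the message conforms."""
    problems = []
    for field, expected in SCHEMA.items():
        if field not in message:
            problems.append(f"missing field: {field}")
        elif not isinstance(message[field], expected):
            problems.append(f"{field}: expected {expected.__name__}")
    return problems

msg = json.loads(
    '{"sender": "HMRC", "recipient": "DWP",'
    ' "transaction": "vat-return", "payload": {"period": "2017-Q1"}}'
)
assert validate(msg) == []                     # conforming message passes
assert "missing field: payload" in validate({"sender": "HMRC"})
```

The design point is that the centre owns only the envelope and the validation rules; what goes in the payload, and the systems on either end, remain departmental concerns – which is what makes convergence possible without a big-bang rebuild.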
Lastly, two areas that I think are contentious; there may be others:
  • Development – Personally, I’m in favour of using companies to do build work.  They can maintain a bench and keep their teams up to date with evolving technologies.  They can locate wherever it makes sense and call on disparate teams, around the globe if necessary.  They can call on experience from other clients and use relationships with partners and the big vendors to do the heavy lifting.  The in-house project managers will keep the suppliers in check and will manage scope, cost and time to bring projects home.  This is contentious, I know – there’s an increasing appetite for government to bring development in-house; some departments, such as HMRC, have had to locate far from the usual places to ensure that they can recruit and retain staff and I think, if you’re going to do it, that’s more sensible than trying to recruit in Holborn or Shoreditch.  But, me, I would give it to an up and coming UK company that was passionate about growth, entirely aligned with the user led approach and looking to make a splash.  I’d then work closely with them to make an effective transition, assuming that the code stands up to such a transition.
  • Verify – It’s time to be brave and ignore sunk costs (investment to date and contractual exit costs if any) and let this one go.  It hasn’t achieved any of the plans that were set out for it and it isn’t magically going to get to 20m users in the next couple of years, least of all if HMRC are going their own way.  The real reason for letting it go, though, is that it doesn’t solve the real problem – identity is multi-faceted. I’m me, but I do my mother’s tax return, but appoint my accountant to do mine, but I work for a company and I do their payroll, and I counter-sign the VAT return that is prepared by someone else, and I act under a power of attorney for my blind father.  Taking a slice of that isn’t helping.  Having many systems that each do a piece of that is as far from handling user needs as you can get.  Driving take up by having a lower burden of proof isn’t useful either – ask the Tax Credits folks.  HMRC are, by far, the biggest user of the Gateway.  They need citizen and business (big business, sole trader, small company) capability.  Let them take the lead – they did on the Gateway and that worked out well – and put support around them to help ensure it meets the wider needs.
How many people does that make? I’m very interested in views, disagreements, counter-points and omissions.

    Taking G-Cloud Further Forward

    A recent blog post from the G-Cloud team talks about how they plan to take the framework forward. I don’t think it goes quite far enough, so here are my thoughts on taking it even further forward.

    Starting with that G-Cloud post:

    It’s noted that “research carried out by the 6 Degree Group suggests that nearly 90 percent of local authorities have not heard of G-Cloud”.  This statement is made in the context of the potential buyer count being 30,000 strong.  Some, like David Moss, have confused this and concluded that 27,000 buyers don’t know about G-Cloud.  I don’t read it that way – but it’s hard to say what it does mean.  A hunt for the “6 Degree Group”, presumably twice as good as the 3 Degrees, finds one obvious candidate (actually the 6 Degrees Group), but they make no mention of any research on their blog or their news page (and I can’t find them in the list of suppliers who have won business via G-Cloud).  Still, 90% of local authorities not knowing about G-Cloud is, if the question was asked properly and to the right people (and therein lies the problem with such research), not good.  It might mean that 450 or 900 or 1,350 buyers (depending on whether there are 1, 2 or 3 potential buyers of cloud services in each local authority) don’t know about the framework.  How we get to 30,000 potential buyers I don’t know – but if there is such a number, perhaps it’s a good place to look at potential efficiencies in purchasing.

    [Update: I’ve been provided with the 30,000 – find them here: It includes every army regiment (SASaaS?), every school and thousands of local organisations.  So a theoretical buyer list but not a practical buyer list. I think it better to focus on the likely buyers. G-Cloud is a business – GPS gets 1% on every deal.  That needs to be spent on promoting to those most likely to use it]

    [Second update: I’ve been passed a further insight into the research:  – the summary from this is that 87% of councils are not currently buying through G-Cloud and 76% did not know what the G-Cloud [framework] could be used for]

    Later, we read: “But one of the most effective ways of spreading the word about G-Cloud is not by us talking about it, but for others to hear from their peers who have successfully used G-Cloud. There are many positive stories to tell, and we will be publishing some of the experiences of buyers across the public sector in the coming months.”  True, of course.  Except if people haven’t heard of G-Cloud they won’t be looking on the G-Cloud blog for stories about how great the framework is.  Perhaps another route to further efficiencies is to look at the vast number of frameworks that exist today (particularly in local government and the NHS) and start killing them off so that purchases are concentrated in the few that really have the potential to drive cost savings allied with better service delivery.

    And then: “We are working with various trade bodies and organisations to continue to ensure we attract the best and most innovative suppliers from across the UK.”  G-Cloud’s problem today isn’t, as far as we can tell, a lack of innovative suppliers – it’s a lack of purchasing through it.  In other words, a lack of demand.  True, novel services may attract buyers, but most government entities are still in the “toe in the water” stage of cloud, experimenting with a little IaaS, some PaaS and, based on the G-Cloud numbers, quite a lot of SaaS (some £15m in the latest figures, or about 16% of total spend, versus only 4% for IaaS and 1% for PaaS).

    On the services themselves, we are told that “We are carrying out a systematic review of all services and have, so far, deleted around 100 that do not qualify.”  I can only applaud that.  Though I suspect the real number to delete may be in the 1000s, not the 100s.  It’s a difficult balance – the idea of G-Cloud is to attract more and more suppliers with more and more services, but buyers only want sensible, viable services that exist and are proven to work.  It’s not like iTunes where it only takes one person to download an app and rate it 1* because it doesn’t work/keeps crashing/doesn’t synchronise and so suggest to other potential buyers that they steer clear – the vast number of G-Cloud services have had no takers at all and even those that have lack any feedback on how it went (I know that this was one of the top goals of the original team but that they were hampered by “the rules”).

    There’s danger ahead too: “Security accreditation is required for all services that will hold information assessed at Business Impact Level profiles 11x/22x, 33x and above. But of course, with the new security protection markings that are being introduced on 1 April, that will change. We will be publishing clear guidance on how this will affect accreditation of G-Cloud suppliers and services soon.”  It’s mid-February and the new guidelines are just 7 weeks away.  That doesn’t give suppliers long to plan for, or make, any changes that are needed (the good news here being that government will likely take even longer to plan for, and make, such changes at their end).  This is, as CESG people have said to me, a generational change – it’s going to take a while, but that doesn’t mean that we should let it.

    Worryingly: “we’re excited to be looking at how a new and improved CloudStore can act as a single space for public sector buyers to find what they need on all digital frameworks.”  I don’t know that a new store is needed; I believe that we’re already on the third reworking – would a fourth help?  As far as I can tell, the current store is based on Magento which, from all accounts and reviews online, is a very powerful tool that, in the right hands, can do pretty much whatever you want from a buying and selling standpoint.  I believe a large part of the problem is in the data in the store – searching for relatively straightforward keywords often returns a surprising answer – try it yourself: type in some popular supplier names or some services that you might want to buy.  Adding in more frameworks (especially where they overlap, as PSN and G-Cloud do in several areas) will more than likely confuse the story – I know that Amazon manages it effortlessly across a zillion products but it seems unlikely that government can implement it any time soon (wait – they could just use Amazon).  I would rather see the time, and money, spent getting a set of products that were accurately described and that could be found using a series of canned searches based on what buyers were interested in.

    So, let’s ramp up the PR and education (for buyers), upgrade the assurance process that ensures that suppliers are presenting products that are truly relevant, massively clean up the data in the existing store, get rid of duplicate and no longer competitive buying routes (so that government can aggregate for best value), make sure that buyers know more about what services are real and what they can do, don’t rebuild the damn cloud store again …

    … What else?

    Well, the Skyscape+14 letter is not a terrible place to start, though I don’t agree with everything suggested.  G-Cloud could and should:

    – Provide a mechanism for services to work together.  In the single prime contract era, which is coming to an end, this didn’t matter – one of the oligopoly would be tasked to buy something for its departmental customer and would make sure all of the bits fitted together and that it was supported in the existing contract (or an adjunct).  In a multiple supplier world where the customer will, more often than not, act as the integrator, both customer and supplier are going to need ways to make this all work together.  The knee bone may be connected to the thigh bone, but that doesn’t mean that your email service in the cloud is going to connect via your PSN network to your active directory so that you can do everything on your iPad.

    – Publish what customers across government are looking at both in advance and as it occurs, not as data but as information.  Show what proof of concept work is underway (as this will give a sense of what production services might be wanted), highlight what components are going to be in demand when big contracts come to an end, illustrate what customers are exploring in their detailed strategies (not the vague ones that are published online).  SMEs building for the public sector will not be able to build speculatively – so either the government customer has to buy exactly what the private sector customer is buying (which means that there can be no special requirements, no security rules that are different from what is already there and no assurance regime that is above and beyond what a major retailer or utility might want), or there needs to be a clear pipeline of what is wanted.  Whilst Chris Chant used to say that M&S didn’t need to ask people walking down the street how many shirts they would buy if they were to open a store in the area, government isn’t yet buying shirts as a service – they are buying services that are designed and secured to government rules (with the coming of Official, that may all be about to change – but we don’t know yet because, see above, the guidance isn’t available).

    – Look at real cases of what customers want to do – let’s say that a customer wants to put a very high performing Oracle RAC instance in the cloud – and ensure that there is a way for that to be bought.  It will likely require changes to business models and to terms and conditions, but despite the valiant efforts of GDS there is not yet a switch away from such heavyweight software as Oracle databases.  The challenge (one of many) that government has, in this case, is that it has massive amounts of legacy capability that is not portable, is not horizontally scalable and that cannot be easily moved – Crown Hosting may be a solution to this, if it can be made to work in a reasonable timeframe and if the cost of migration can be minimised.

    – I struggle with the suggestion to make contracts three years instead of two.  This is a smokescreen; it’s not what is really making buyers nervous, it’s that they haven’t tried transition.  So let’s try some: let’s fire up e-mail in the cloud for a major department and move it six months from now.  Until it’s practised, no one will know how easy (or incredibly difficult) it is.  The key is not to copy and paste virtual machines, but to move the gigabytes of data that go with them.  This will prove where PSN is really working (I suspect that there are more problems than anyone has yet admitted to) and demonstrate how new capabilities have been designed (and prove whether the pointy things have been set up properly, as we used to say – that is, does the design rely on fixed IP address ranges, hardcoded DNS routing or whatever).  This won’t work for legacy – that should be moved once and once only, to the Crown Hosting Service or some other capability (though recognise that lots of new systems will still need to talk to services there).  There’s a lot riding on CHS happening – it will be an interesting year for that programme.
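    The “pointy things” check above can even be partly automated.  As a minimal sketch (the function name and sample config are hypothetical, and a real audit would cover far more than IPv4 literals), a few lines of Python can flag configuration that hardcodes fixed addresses – the kind of design choice that makes a six-month migration painful:

```python
import re

# Hypothetical portability check: before attempting a migration, scan a
# service's configuration for hardcoded IPv4 literals -- fixed addresses
# tie the design to one hosting environment, while name-based references
# generally survive a move.
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def find_hardcoded_ips(config_text):
    """Return the IPv4 literals found in a configuration fragment."""
    return IPV4.findall(config_text)

sample = """
mail.relay.host = 10.1.2.3          # fixed address: a migration red flag
ldap.server     = ad.example.gov.uk # name-based: should survive a move
"""
print(find_hardcoded_ips(sample))
```

A scan like this is crude – it cannot see addresses buried in compiled code or supplier-managed appliances – but running it across a department’s config estate would at least size the problem before anyone commits to a move date.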

    The ICT contracts for a dozen major departments/government entities are up in the next couple of years – contract values in the tens of billions (old money) will be re-procured.   Cloud services, via G-Cloud, will form an essential pillar of that re-procurement process, because they are the most likely way to extract the cost savings that are needed.  In some cases cloud will be bought because the purchasing decision will be left too late to do it any other way than via a framework (unless the “compelling reason” for extension clause kicks in) but in most cases because the G-Cloud framework absolutely provides the best route to an educated, passionate supplier community who want to disrupt how ICT is done in Government today.  We owe them an opportunity to make that happen.  The G-Cloud team needs more resources to make it so – they are, in my view, the poor relation of other initiatives in GDS today.  That, too, needs to change.

    Fully Costed Oracle

    Who knew that’s what FCO actually stood for?  All that time we’ve been thinking it was simply about diplomats in far flung locations enjoying “unimaginable luxury” and getting up to who knows what.

    Late in January, the FCO announced, to predictably widespread criticism, that it was intending to launch a new framework:

    …  supporting the Cabinet Office Shared Service strategy … for the provision of Oracle Enterprise Resource Planning (ERP) development, delivery and support services … 

    The framework intends to have a limited number of vendors, for instance a number of Lots might be awarded to the same vendor. 

    The scope intends to cover existing Oracle platforms in UK government departments … and to include upgrades and implementations of new Oracle versions for these existing platforms. It also intends to cover any move of a Department from a non-Oracle platform to an Oracle platform

    The value of the framework is suggested to be £250m to £750m.  The notice is silent on framework duration but others have suggested a minimum of three years with an extension of one year.

    Time clearly being of the essence, the first meeting for suppliers is planned for the 11th February.  Attendance is expected to be restricted, such will be the crush of entrants. Book now to avoid disappointment.

    Parsing government procurement announcements, particularly those for frameworks, is challenging.  But here are a few points:

    – Framework values are always made up.  When a framework is launched, there’s never any idea of what the take up will be (and it’s rarely mandatory that frameworks be used – and, even if it were, there are many overlapping frameworks that would mean you could use a different one). But what’s important is that the number is set as large as possible because that (a) ensures that the limit will never be breached, which would be terrible and (b) ensures that suppliers take notice and seek to bid.

    – Frameworks offer you a chance to bid for future work, not a right to it.  So you compete, as a supplier, against generic requirements, providing detailed pricing (that you can be held to), and get on the framework; then you have to wait for business to arrive, or chase business which you will also have to compete for (against specific requirements). What’s missing in this notice is the statement “and here are the departments who have already committed to using this framework, and this is why we came up with the range £250m – £750m”.  I’m not feeling the love.

    – This framework, unusually, says it will seek to limit the number of vendors.  It’s also unusual in that it says one supplier might win multiple lots.  Yah boo to the small business agenda one might say.  Other departments – the MoJ and FCO for instance – have sought to ensure diversity of supply by making it difficult (even impossible) for one supplier to win multiple lots in their ongoing IT procurements.  This framework seems to lessen competition and certainly takes an opposite view from G-Cloud’s hugely successful “Come one, come all” approach.

    – Existing departmental Oracle systems (or any other ERP system for that matter) are almost always wrapped up in their wider outsourcing agreement.  So IBM run Defra’s services, Cap run those for HMRC and Logica runs the MoJ’s (though the MoJ is more complicated than that with its multiple divisions). So this framework only ‘works’ when an existing contract comes up for renewal and a department wants to separate its ERP from its other IT. I don’t see why a department would do that as its first choice – they’re struggling already with managing the splits into a dozen towers, brought together by a SIAM.  Only direction (read force) will change that – the equivalent of in ERP. 
    – Separately, there is the Cabinet Office Shared Service Strategy, which targets savings of £400m-£600m/year with full delivery expected by 2014.  This document was only published in December 2012. It includes this paragraph:
    1. Single Oracle ERP Platform. A number of customers included in ISSC 2 require
      an upgrade of their Oracle Release 11 ERP solutions. It is felt that, rather than allow
      departments to upgrade separately, this situation provides a unique opportunity to
      consolidate platforms and provide standard processes across the major Oracle-based

      A feasibility study will be commissioned to test whether the aspiration of government is
      realistic and the design will be based on a ‘prove why it cannot work for you’ approach
      rather than a ‘what would you like’ approach. This study will also look at Oracle
      departments who are not immediately in scope for ISSC 2 such as the Foreign and
      Commonwealth Office (FCO). This project will be managed as part of ISSC 2 until the
      completion of the feasibility study. 

    I haven’t seen the results of a feasibility study that says such a consolidation is possible, but one assumes the issuance of the FCO’s framework means that it’s already been proven. Otherwise, why go to market?  The project plan in the strategy shows the feasibility study completing in about mid-February 2013 and implementation completing in December 2013 (that would suggest to me that the solution is already known – I don’t see anyone buying one, let alone building one, by then otherwise).
    – The scope includes “upgrades … implementations … moves from non-Oracle to an Oracle platform”?  Surely it should read “migrations to THE Oracle platform”.  The strategy also doesn’t say that the FCO will lead the delivery of a single Oracle platform (only that they will participate in the feasibility study) – though the notice does say that the FCO are supporting the Cabinet Office.
    – Elsewhere the strategy says that the current cost per head of Oracle services is £160, though the DWP achieve £89 (I have no insight as to whether these comparisons are truly like for like – the strategy notes that comparing these things is challenging).  It goes on to say that one solution should save 40% and avoid £32m in upgrade costs to Oracle 12 (because there would only be one Oracle 12 in government).  It notes also that DWP have already completed their upgrade to Oracle 12.  So if DWP is the cheapest, and they have Oracle 12 already, are they not the obvious place to consolidate to?
    – Cabinet Office recently conducted a review of existing and inflight frameworks. Some frameworks were kept (G-Cloud, PSN), some were stopped in their tracks (SIAM, G-Host).  Bill Crothers was quoted as saying: “This is a new approach to frameworks to procure ICT for central government. This approach will support goal of making it easier for all ICT suppliers, particularly with eye to SMEs, to do business with us.”
    This new framework is, then, confusing, inconsistent with the recent framework review, the overall ICT strategy, the Shared Services Strategy and common sense.  
    But it does potentially provide a route to consolidate away from multiple Oracle solutions (that exist today) to a single Oracle solution (that hopefully exists today – because building another one to satisfy everyone is never going to happen, not for £750m nor in 750 years).  And they certainly built Versailles in less time than that – though probably not for less money (estimates for the palace vary wildly, from £1.5bn to £200bn).
    It is, though, very hard to see how a true cost saving is achieved any time soon if there is a long line of departments waiting for their turn to migrate to the new single Oracle system, each one wanting a tweak or a change every five lines of code – let alone once you factor in the cost of data transition, the re-working of departmental accounting and the retraining of staff (and possibly redundancies, on the assumption that one system needs fewer people to operate it).
    That said, why wouldn’t you go to one system if you could?  Large banks rarely run their books on a country by country, business by business basis.  Cisco doesn’t have to send couriers to every corner of the world to get its financial results.  National Grid doesn’t have to ask each division to send a spreadsheet once a month with how much they’ve spent.  
    The first question then, is which system?  The second is why consolidate to what you have, at a high and rising cost, rather than to something else at a lower and more stable cost?

    – If the target per head cost of an Oracle-based system is the DWP’s £89, then the best way to get that price is to configure a system that is identical to the DWP’s and able to support other departments.  We could call it something like, oh, “the DWP Oracle ERP system”.  Let’s have a competition amongst suppliers to see who can look after the existing system (including all of the people and surrounding processes) and see if the cost can be brought down further.

    – Getting from other, higher cost, Oracle solutions to DWP’s will not be free of charge and will certainly not be pain free.  Every department with Oracle will have configured theirs to be “just so” and will happily die in several ditches (over and over again) to protect their unique and absolutely required configuration.  So let’s be sure to add that cost in.

    – And then let’s see what the ground up cost per head of an alternative solution is given that the migration costs are going to be there in either case.  I’d be surprised, I think, if it turned out to be as high as £89.
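    The comparison the bullets above are driving at can be made concrete.  As a rough sketch using the strategy’s own per-head figures (£160 average, £89 at DWP) – the headcount and one-off migration cost below are purely illustrative round numbers, not from any published source:

```python
# Illustrative ERP consolidation arithmetic.  Per-head prices come from the
# strategy quoted in the post; the headcount (100,000) and migration cost
# (hypothetical £20m) are made-up round numbers for the sketch.
def annual_saving(heads, current_per_head, target_per_head):
    """Annual saving from moving `heads` users between per-head price points."""
    return heads * (current_per_head - target_per_head)

def payback_years(migration_cost, saving_per_year):
    """Years of savings needed to recover a one-off migration cost."""
    return migration_cost / saving_per_year

heads = 100_000  # hypothetical consolidated headcount
saving = annual_saving(heads, 160, 89)
print(f"saving per year: £{saving:,}")                            # £7,100,000
print(f"payback: {payback_years(20_000_000, saving):.1f} years")  # 2.8 years
```

The point of the sketch is the shape of the sum, not the numbers: the per-head gap drives the annual saving, and the migration cost – which exists whichever target system is chosen – determines how many years pass before any “saving” is real.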

    And then you need a line of departments (whatever the solution) ready to adopt the new system so that the cost per head target can be achieved and further economies of scale sought.  The last thing you need is lots of suppliers competing to supply a similar thing as they can never achieve the same economies of scale.

    It would, of course, not surprise me to see this framework be marked for, ummm, ‘review’ and for it to disappear from view before too long.

    The Emperor’s New Clothes

    They hung around in post offices and job centres videoing people interacting with government services, they carried out surveys on the street asking people about the different forms they needed to fill in, they watched people use both paper services and online ones so as to understand what did and didn’t work, there was much angst over why the tax credits forms said on the last page (and in very small print) “also available in large font”, they built tables of what services were used and by who and figured out which services were the most complicated and needed to be joined up, they counted transactions across the whole of central and local government looking for services to take online that would have the most impact…

    … and they stitched together technology in an attempt to deliver on a promise that government should be online and joined up, they wrote an entire web site delivery platform from scratch integrating existing search engines and databases, they made sure the engine rendered on mobile devices and, yes, tablets as well as every possible browser the world had ever seen, their deliveries were rapid and iterative and user testing was prevalent throughout with videoed sessions with users (pulled from the street) working with the site (leading to yet more iterative deliveries), they released beta test versions for the public and watched what happened…
    … they were a mixed team of civil servants (many borrowed from right across government), contractors and supplier staff, they walked the floor of government in casual gear using macs alongside a restricted network and with all types of the latest smartphones in use, they owned the government’s approach to online identity and worked with all of government to deliver authenticated transactions, and they supported the rest of government in their efforts to get services online as well as to recover things that hadn’t gone so well …
    … and they relentlessly published facts and figures of what they were doing whilst secreting themselves in a building far away from the madding crowds of the rest of Whitehall … and created a single website for all of government, reachable from a single URL.
    Who am I talking about? 
    GDS 2012?  
    No, e-Delivery team 2001-2005.
    June 2001
    We embarked on a similar journey then to the one GDS is on now, though we were under the watchful eye of the e-Envoy (and then the head of the e-Government Unit) rather than the Executive Director, Digital. We too inherited an existing site that didn’t quite do what the vision, outlined some years before in an earlier paper, had proposed.  We set ourselves the grand aim of transforming the users’ experience of government – yes, a huge focus on the citizen – into something that truly represented 100% online and joined up.  A dozen years ago this was all going on before the words agile, digital by default, user experience and “easier done than said” were coined.  After all, we put men on the moon before any of those words were used, so I can’t say that we were breaking even a little bit of new ground.
    Watching GDS from afar, it is hard not to see the similarities, but much harder to see the differences. Perhaps that’s because I am at a distance.   We achieved a lot in a short space of time, whether measured in government cycles or geological ages (which are often much the same).  GDS too, appear to be achieving a lot, though separating smoke and mirrors from reality is difficult for an outsider.
    We got a lot right – and much of what was done then is still running and is still referred to in government documents published during the Coalition’s term as the best examples of delivery – but we also got a lot wrong.  I’d like to think it balanced out to the positive, but others will be better judges of that than I am.  
    I was never sure, back then, whether I was the Emperor and everyone else was really unable to see the wonder that lay before them or whether I really didn’t have any clothes on.
    I am fascinated by what I see in GDS now – the diverse people, the agile approach, the focus on delivery, the excitement, the enthusiasm, the arrogance (well, the hubris really) and also the sense (wrapped up in that arrogance) that this is all new and that those who went before are not worth listening to.
    Government is crying out for change.  Change needs new ideas, new people and new ways to execute. This kind of change is very hard to get rolling and many times harder than that to sustain.   I watch, then, with fascination wondering if this is change that will stick and, especially, if it is change that will pervade across government.  Or whether its half-life is actually quite short – that when the difficult stuff comes along (as well as the routine, mind-numbing stuff), things will stall.  Perhaps the departments will rebel, or the sponsors will move on, or delivery will be undermined by some cockups, or the team will tire of bureaucracy once they move into the transaction domain.
    If GDS now is much like eDt then, and with the launch of the new website only hours away, I wanted to think through some of the issues that need to be addressed.
    Are You Just Too Different?
    Different is good in some ways. It creates a shared identity amongst those who are in the new team – they consciously step away from the constraints and limitations of the old ways of doing things. They cast aside contracts, process, bureaucracy, legacy IT, dress codes and whatever else they need to in order to get things done. Meanwhile, those who aren’t part of the new club look in: some jealously, very much wanting to be part of it, and some expectantly, waiting for the seemingly inevitable failure – the egg on the face, the fall from the ivory tower, the crash and the prolonged burn. I suspect the camps are pretty evenly split right now, with everything to play for.
    July 2000
    The question is really how to turn what GDS do into the way everyone else does it.  In parallel with GDS’ agile implementations, departments are out procuring their next “generation” of IT services – and when you consider that most are still running desktop operating systems released in 2000 and that many are working with big suppliers wrapped up in old contracts supporting applications that often saw the light of day in the 80s or, at best, the 90s, “generation” takes on a new meaning.  To those people, agile, iterative, user experience focused services are things they see when they go home and check Facebook, use Twitter or Dropbox or have their files automagically backed up into the cloud.  Splitting procurements into towers, bringing in new kinds of integrators, promising not to reward “bad” suppliers and landing new frameworks by the dozen is also different of course, but not enough to bridge the gap between legacy and no legacy. 
    I am on the record elsewhere as noting that, today, GDS is an aberration, not the new normal.  Becoming the new normal is a massive, sustained job – and one that needs a path laid out so that everyone gets it.  Some will take what I say below as an attack on GDS; that’s far from what it is, it’s an attempt to look ahead and see what is coming that will trip it up and so allow action to be taken to avoid the trouble.
    The Absence of Roadmap
    One of the strengths of the approach that GDS is adopting is that the roadmap is weeks or maybe months long.  That means that as new things come along they can be embraced and adopted – think what would have happened if a contract for a new site had been let three months before the iPhone came out? Or a month before the iPad came out? 
    It is, though, also a significant weakness.  Departments plan their spending at least a year out and often further; they let contracts that run for longer than that.  If there is – as GDS are suggesting – to be a consolidation of central government websites by April 2013 and then all websites (including those belonging to Arm’s Length Bodies) by April 2014 then there needs to be a very clear plan for how that will be achieved so that everyone can line up the resource.  Likewise, if transactions are to be put online in new, re-engineered ways (from policy through to user interaction), that too will take extensive planning.
    Having a roadmap that shows, even roughly, what is planned and when is one way to bring departments towards you rather than have them wait to be told.  The digital strategies that are due out around the end of the year look, so far, too vague to count as a roadmap.  They contain aspirations rather than commitments and look a lot like what we saw in 2001.
    Beware The Hockey Stick
    In 2001, we looked at departmental plans for achieving the Prime Minister’s stated aim of 100% of government services online by 2005.  What we saw, perhaps obviously in hindsight, was a very high proportion of services magically appearing online in the last quarter of 2005 – a hockey stick shaped graph.  It feels like we are heading that way again.  It’s not clear how things will be done in the new way (who will pay, what will need to be done, how will it be contracted, what’s the sequence etc) so departments are hedging and putting things out, quite conveniently I imagine, to around the time of the next election.
    Can You Do It Yourself?
    GDS have taken what is, in my view, a brave decision to do the bulk (if not all) of the work in-house – it is, in many ways, an approach that is entirely inconsistent with everything that the government preaches elsewhere, in IT and business.  As a result not only am I unclear what problem they are solving, but I’m also wondering whether they are solving the wrong problem the wrong way.
    It is, though, an interesting bet. In five years, is it likely that the same model will be in place? 
    In 2001 we formed a small team of folks skilled in business and technical architecture, project delivery and commercial/finance/procurement.  We wrote no code ourselves (not for production at least – we had a team that worked on proof of concept ideas that tested out what we might get others to do).  We believed that code writing was one of the many things that government outsourced.
    We contracted with various suppliers to do the work – the supply chain for the government gateway (often described as built by Microsoft) involved, for instance, over 40 UK-owned  small businesses.  We consciously did this because government – and especially the Cabinet Office – had little desire to maintain a substantial delivery team in house after it had spent the last decade outsourcing it. We created an intelligent customer that represented the whole and not just the single parts of the government. 
    We chose that model because we believed that building a team for the long term is very difficult, especially within the constraints of the civil service. We also believed that suppliers, over the long term, would outperform us because they would bring in new talent, train staff and keep them focused on the task. If one person, or one supplier, didn’t work out, there’d be another one behind that one, and another one and another one.  Quite different from the civil service model that makes hiring difficult (especially in this fiscal environment) and exiting staff near impossible.
    Sustained Sponsorship
    There is no doubt that Francis Maude is a key driver, perhaps even the key driver, of the change agenda across government, particularly in ICT.  I’m told, frequently, that when issues with departments arise, Mr Maude is briefed and he handles the issue in a bi-lateral with the relevant departmental minister and progress is then unlocked.   That is certainly a big help – though I suspect some departments are readying their rebellious faces whether or not Mr Maude moves to be Government Chief Whip.
    Being closely associated with a political sponsor is, to my mind, quite new for those involved at the sharp end of technology delivery.  I expect Ministers to champion policies – where would Universal Credit be without the sustained sponsorship of Iain Duncan Smith (and, conversely, where will NHS reform go now that Andrew Lansley has moved on)?  But to see such close involvement from Ministers (ok, from one Minister) in website reform and the technology choices that underpin it is fascinating – and potentially dangerous for GDS.
    During the time of the e-Envoy we had four Ministers and, if you add in eGU, nine.  I suspect that my experience of the Cabinet Office is more common than the current experience where there has been stability for the last 2 ½ years.  GDS will need a plan B if Mr Maude does move on to something new.  There will also need to be a 2015 plan B if power changes hands.  Of course, if your roadmap goes out only weeks or months, then no one is looking at 2015.  That’s a mistake.
    What’s The Model, Really?
    Any delivery model can be made to work and, of course, any delivery model can be done badly.  Picking a model is necessary but it’s not the only part of success.  How that model gets optimal impact needs to be understood along with how it will evolve.
    We in eDt felt that we were in the wrong place, notwithstanding outstanding support from our sponsors. We were in a policy department with no reputation for, or desire to own, delivery.  Indeed, the Cabinet Office had acquired this very team by accident (after bidding for, and winning, some money from HMT, which was then supplemented by further funds from the Inland Revenue).  Over a couple of years, we explored all of the options available then – trading funds, agency status, spin-off, joint ventures with the private sector – but, in the end, the team was folded into a big department, the DWP, and there ended government’s flirtation with a very different approach to delivering services across the whole of government.  Until, of course, somewhat unexpectedly, some of it returned to the Cabinet Office.  It truly is a funny world.
    Cabinet Office has, then, acquired GDS by accident. History repeats.  Chris Chant landed in the somewhat foundering G-Cloud programme, arranged for a lot of Macs to replace some ageing and expensive PCs and, somewhere along the way, fired up a programme to replace the existing web estate and achieve massive cost savings – and so the new single website was born.  Not a lot of people know that, I think.
    It would be a shame for history to continue to repeat. If the new website, and everything that underpins it as a delivery approach, is to survive 5 years, let alone 10, there needs to be thinking about how this will work.  I’ve said on this blog before that I believe the right answer may be a spin-off of GDS, or a mutual, so that it can get access to capital, bid for work and fully reflect its costs.  There are other choices; what’s important is to look at them and lay the groundwork for making a choice and achieving it.
    Transparency Of Everything
    The GDS approach looks similar to a startup backed by a venture capitalist prepared to lose everything if the bet doesn’t work out (and who was anyway backing multiple other horses running the same and similar races). The VC in this case is UK government.
    GDS have succeeded in being wildly transparent about their technology choices and thinking.  They are not, though, transparent about their finances.  That should change.  The close association with politicians seems to mean that GDS must champion everything that they do as a cost saving – witness recent stories on identity procurement costs, comparative website costs and so on.
    Comparative costs need to be properly comparative, not presented only in the best possible light. Use fully loaded costs (that is, costs including items such as accommodation, pensions, employer NI contributions and so on, all of which would be included were the numbers like for like with a supplier cost).  Let’s see the numbers.
    Given the in-house staffing model that GDS is operating, changes really show up only as opportunity cost.  That makes comparing options and, particularly, benefits difficult.  In a beta world, you make more changes than you do in a production world – once you’re in production, you’re more likely to make incremental changes than major ones (because, as Marc Andreessen said long ago, interfaces freeze early – people get used to them and are confused by too big a change).
    It is important to know what “done” is – and not to claim that done is never done because there are always new things to do. The budget for “done” needs to be known, so that variances from it are clear and so that opportunities that are embraced are understood in the context of the scope and cost already delivered.
    In this agile world, done is never done; there is always another iteration to deliver. In government IT as a whole, done is never done either – requirements change, new transactions appear, new devices come into play and others fade away.
    The important thing is to be clear what is going to be delivered in return for X million pounds, so that the consequences of that can be measured – a gambler (that is, the government when acting as a VC) only backs a horse that keeps running races and that wins more than it loses.
    It’s Transactions That Are Important
    GDS’ most public delivery is “just another website” – those who know (and care) about these things think that it might be one of the sexiest and best websites ever developed, certainly in the government world.  But it isn’t Facebook, it isn’t iTunes, it isn’t Pirate Bay.  It’s a government website; perhaps “the” government website. Once you’ve packaged a lot of content, made wonderful navigation, transformed search, you end up with the place where government spends the real money – transactions (and I don’t just mean in IT terms).  
    Back when I published graphs on how many websites government had, I guessed that there was an easy £250m spent on front ends each year.  The figure spent on transactions is many times that – probably ten or even a hundred times, especially if you add in the cost of fraud, error, debt, call centres, support and so on.  That’s also where the legacy applications are – and all of the legacy processes that are tied up in complex outsourcing agreements that were written a few years ago and certainly don’t mention agile, iterative or quick.  Worse, many of those very same contracts are being replaced this year and next – and the signs so far are that the new contracts will look much the same, though they will be shorter in duration and smaller in value (because of the split into towers).  They’re not being replaced with the thought that transactions will be fundamentally different and that the user experience will be at the forefront.
    September 2001
    In building the Government Gateway, we came up against the back end legacy systems.  Once you are integrating with those, the complex dance between interlocking systems governs your speed: you can change this one here, but that one needs to change at the same time; or you can change this end, but not that end.  Change control, version control, security, data protection and all kinds of other constraints become the norm.  There’s a reason that, 10 years on, the Gateway is still in place, operating much as it did on day one – it has integrated very well into the engines that drive government transactions, as well as the dozens of third party products that talk to it when they talk to government.  When those need to change, it needs to be for a good reason that benefits the customer as well as the supplier of the third party product; most are not in it for charity.
    Soon GDS will tell departments that their top transactions need to be re-engineered from policy through to service provision with a clear focus on the user.  At that point we move away from the technologists who are attracted to shiny new things and we hit the policy makers who are operating in a different world – they worry about local and EU legislation, about balancing the needs of vastly differing communities of stakeholders and, of course, they like to write long and complicated documents to explain their position having evaluated the range of possible options.
    Tackling transactions is both fundamentally necessary and incredibly hard, though most of that isn’t about the shiny front end – it’s about the policy, the process and the integration with existing back end systems (which absorb some 65% of the £12-16bn spent per year on IT in government).  There is a sense of “Abandon Hope All Ye Who Enter Here.”
    It’s more than ten years since a single website for government was proposed (I know, I was the one who proposed it and wrote it up); it was an idea that was successively endorsed in various reports and strategies.  In a couple of years it may even be a reality.  There isn’t, though, a vision, let alone an action plan, for how transactions will be delivered – where will they be hosted, how will they integrate with identity providers (and how will the government gateway be retired), how will personal data be managed, how will pre-population take place, what will be done with the transactions that are already out there and working (some with take up of 80% or more). 
    There is also no proposal for how local government will be integrated into this offering, though many of the transactions undertaken by the average citizen are at a local level (and still with “government” rather than “central government”).
Beyond that, there isn’t a vision for how the need for some transactions will be removed entirely – why should I apply for a tax disc for my car, why isn’t personal tax handled automatically, and so on.  That would be truly transformational – until we do that, we are perpetuating processes that are, in some cases at least, two centuries old or more.
    July 2002
    All of that needs to be laid out – I’ll take bite-sized chunks for now but it needs to be thought through to avoid dead ends.
    Reliability, Resilience, Testing, Process, Bureaucracy
When GOV.UK turns on – and Directgov turns off after 8 years of operation – it will be different, better, faster, smoother, have nicer fonts, easier search and a thousand more things.
    When a site needs to cater for 30m visitors a month and not just a few thousand beta testers who are interested in the technology, the presentation and what’s new on the web, then a new kind of discipline appears.  Breaking such a site is a bad idea – it will make news, cause disruption and make life harder.
    From tomorrow, a new kind of operational rigour is inevitable.  The live site can’t break.  It can’t be taken down for a few hours for an upgrade or a database refresh. As transactions are made available, that pressure only increases. Suddenly there are complex windows when services absolutely must be available; freeze dates take over from the previous free-wheeling approaches and lots of people need to be involved to ensure that the end to end process – from the shiny new front end all the way to the ugly, old, legacy back end – works.
It will be interesting to see how the worlds of agile and operational rigour collide.  Things can slow down quite dramatically as regression tests are run and re-run and fixes are made and then tested again (and not just at the front end, but across the entire delivery chain from new to old).  It’s all part of the evolution process but I suspect it will come as a shock to some on the team.
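That freeze-window discipline can be sketched as a simple release gate (the dates and service names below are invented for illustration):

```python
# Illustrative release gate (invented dates and service names):
# deployments are refused inside a change-freeze window, when a service
# absolutely must stay available.

from datetime import date

# Hypothetical freeze windows: (start, end), inclusive.
FREEZES = {
    "self_assessment": (date(2013, 1, 15), date(2013, 2, 5)),  # filing peak
}

def can_deploy(service: str, day: date) -> bool:
    """True unless the date falls inside the service's freeze window."""
    window = FREEZES.get(service)
    if window is None:
        return True
    start, end = window
    return not (start <= day <= end)

assert can_deploy("self_assessment", date(2013, 1, 31)) is False  # frozen
assert can_deploy("self_assessment", date(2013, 2, 6)) is True    # open again
```

Trivial as the gate is, it is the opposite of continuous deployment: the calendar, not the developer, decides when change is allowed.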
    The Vision and the Roadmap
    In their seminal article, “the importance of being agile”, GDS quote Louis Gerstner (of IBM) who said “the last thing [we] need right now is a vision”.  I’m making that up but it feels like it could be true.  
The first thing the rest of government needs – and is looking for – is a sense of how this will all work in the future.  What will it look and feel like when government has only one (or a few) website(s); how will transactions work when identity is provided from a market of potentially many suppliers; when will sites and services close down; how will service levels work across government; who will pay for what when transactions are put online; how will transition from one model to the next work, and so on.  A million questions, some of which could do with being answered now so that plans can be made.
    eDt tried very hard to paint a picture of how we thought it would play out.  We got it wrong on almost every count.  Progress was neither as rapid nor as far reaching as we expected.  Services that we thought would do very well – secure email to support exchange of personal information, payments to government, or SMS services for notifications – didn’t do anything like the volume that we expected.  
    eDt was around in an environment where there were almost no fiscal constraints.  Bidding for money was certainly a lengthy process but if you put together a compelling case, you had a good chance of being allocated funding.  The Treasury soon got fed up with being hoodwinked by departments who promised huge savings yet didn’t deliver on them and tightened the controls.  Today, though, it’s a different world.  There
    isn’t any money, there aren’t many people (and there are progressively fewer)
    and so making a case for investing to save further money will see scrutiny
    unlike any time before now.  Does the lack of money and the lack of capacity mean, though, that much of this won’t be, or can’t be done? And if it does, how will that be resolved?
    Wrap Up
    What is happening now has the air of a great science experiment – ironically, that’s what GDS call some of the work that they do internally as they test out concepts.  Such experiments can go bang of course.  Sometimes, or at least once if scientists are right, there is a big bang.  That’s largely inconsistent with a government approach where requirements are mapped out and delivered over a period of years, fulfilling policy objectives as they are ticked off.
    Of course, the historic approach has not worked out so well – we only need look at NHS IT, ID cards, Fire Control and so on to see that a new model is needed.
    The question is whether the GDS model is the one that achieves scale transformation right across government, or whether it is another iteration in a series of waves of change that, in the end, only create local change, rather than truly structural change.
    It seems unlikely that GDS can scale to take on even a reasonable chunk of government service delivery.  It also seems unlikely that enough people in
    departments can be trained in the new approaches to the point where they can
    shoulder enough of the burden so as to allow GDS to only steer the ship. If we add in the commercial controls, the supply chain and the complexity of policy (and the lack of join up of those policies), the challenges look insurmountable.
None of that is an argument for not trying.  Directgov is old and tired and needed a massive refresh; transactions are where the real potential can be unlocked and they need to be tackled in a new way.  Much of this has been tried before, painful lessons have been learned and it would be more than a shame if the latest effort didn’t achieve its aims too.  The trick, then, is to pick the battles to fight and create the change in the right areas with the aim of infecting others.  Taking on too much at once will likely lead to failure.
    Perhaps GDS is the new emperor and I am the little boy, or perhaps it is the other way round.
Fingers crossed for tomorrow’s launch of GOV.UK, then.  A successful launch will be a
    massive boost.  Great feedback from consumers would help create a rolling wave of change that would be sustained by successive iterations of high quality transaction delivery.  That would be a very good place to be.  It would, though, only be the start.

    Where Did All The Big Shots Go?

Is it me or did all the big players in UK Government IT take up roles at companies you’ve never heard of?

Joe Harley … joins Amor Group as an advisor … revenue £34m.

Steve Lamey … joined Kelway as COO … revenue £350m.

    John Collington … joined Alexander Mann Solutions as COO … revenue unknown (hideous website too).

    Is this a sign that the “usual suspects” are #unacceptable employers for former government leaders? Or that opportunities are better in smaller companies? Or that the government A team isn’t quite so A? Three is perhaps the start of a trend but not the confirmation of a trend.

    Meanwhile, in other news, government has now lost two female permanent secretaries in a matter of weeks – Helen Ghosh (Home Office) leaves to be CEO at the National Trust, Moira Wallace (DECC) is leaving with no new role yet announced; Ursula Brennan has moved on from MoD to MoJ (leaving MoD, ummm, rudderless?) and Melanie Dawes (who stood in for Ian Watmore after his departure) has not been confirmed in the role.

    Delivery by Default

    A dozen years ago in my first presentation to an audience of senior civil servants, drawn from across the whole of what was then the Inland Revenue, I put up this slide:

The quote at the bottom was drawn from a memo that had crossed my desk reporting on progress on a major programme that the department had underway.  I was struck, dumbstruck even, by the lack of certainty both in being “into its stride” and “the autumn”.  The slide – with its animation – became widely known in the department as the “falling leaves” slide.

    So I certainly chuckled when I saw in the action summary for the Civil Service Reform initiative:

    By autumn we will have a cross-Civil Service
    capabilities plan that identifies what skills are missing and how gaps
    will be filled.

    By autumn the Cabinet Office will have completed a review with
    departments to see what further examples of change in delivery models
can be implemented this Parliament.

    I then read @pubstrat’s thinking on bowler hats and was drawn to remember another slide deck from around the same time:

    The road to reform is long, winding and very challenging.  Countless companies – with access to the very best talent – have failed at it (whether that be Nokia, Kodak, Comet or any other company that has gone to the wall or is about to).  Government’s very security is that it is around forever without competition.
    The road from plan to execution – from talking to delivering – is also long, winding and challenging.  And execution allows you to measure what has been done; talking doesn’t.
    I’ve uploaded both of the source decks to my profile page on slideshare. [Testing that link, it looks like slideshare has a problem right now.  They say it will be fixed shortly]

    Just 18 Months

Now that it’s out that Chris Chant is retiring, a common phrase in articles is that he’s going after “just 18 months” running G-Cloud. It’s Chris’ style to leave quietly and, as I’ve known him, worked with him, and for him, for about a dozen years, I thought I might recap some of the things he’s done, both in just 18 months as well as over the period I’ve known him.
G-Cloud has proved to be a hot topic in the world of government IT – and a little beyond – with attention increasing dramatically after Chris’ #unacceptable speech in October 2011. In this climate of openness, transparency, blogging and tweeting, I don’t think anyone has (ever) managed to be quite as open, provocative and engaging – nor respond to as many comments, articles, opinions and analyst reports – as Chris has. He will be a hard act to follow.
    Some have already claimed, or at least thought, that he was only as open as he was because he knew he was going – and those same people usually refer to his twitter handle (@cantwaitogo) as evidence of that. I know they’re wrong – Chris has always been as open as he is now, it’s just that he has been able to take advantage of new channels recently to ensure the message gets out more widely. His twitter handle shows only that his spelling is as questionable as ever and refers to some volunteer work that he did at Ambue Ari (hence his leopard profile picture – apparently the Mario pictures were all copyrighted).  If anything, his post-retirement twitter handle, rather than being @gone, will be @cantwaitogoback.
    In the just 18 months Chris ran G-Cloud he managed to design, develop and launch an entirely new approach within government procurement, aided only by a tiny team from departments and the Government Procurement Service, most of whom had day jobs as well. G-Cloud plainly leads the world in its thinking and its action. The second iteration, due in the next couple of weeks, will further extend that lead and provide a more flexible procurement platform whilst making it simple for those on the existing framework to transition.  In a while, we will look back and see 2012 as a pivotal year in the evolution of government IT, even though the changes orchestrated now will take some time to bed in.
    But before there was even a procurement vehicle, Chris had to galvanise the inevitably disparate parts of government to want such a thing – a truly open, transparent, everything published and visible to all procurement catalogue. He had to bring people together to think about how it might work, get funding, get support from permanent secretaries and ministers, convince people to become foundation delivery partners, work with suppliers to shape it, persuade lawyers and commercial people to accept a radically slimmed down approach, convince SMEs to play a major part (70% of those on the framework are SMEs), figure out how to publish all supplier information (including prices – a first time ever), persuade people that rating suppliers not only made sense but was absolutely necessary and present endlessly to audiences inside and outside of government to help get the message over. On top of that he had to fight to convince those who said it couldn’t be done, that suppliers wouldn’t sign up for it and that customers wouldn’t buy from it. Creating change in government has never been easy and G-Cloud shows that, whilst that is still true, it can be done but that it takes enormous effort, huge commitment from a small number of individuals and relentless focus.
Alongside that Chris was breaking the mould in other areas – introducing public cloud email into the Cabinet Office (their first taste of their own dog food I am sure) and switching people away from expensive government standard devices to far cheaper off the shelf devices (showing an 80% cost saving). He also laid the groundwork for an entirely new approach to government web delivery by kicking off what became known as Alphagov, which will result in Directgov being replaced a year or less from now.  Somewhere on this path he managed to become the 17th most influential person in UK IT (more to his own surprise than that of anyone else I’m sure).
When I first met Chris he was a tax man – not a career IT guy as some have said (actually Chris knows as much about IT as the average dormouse – something that has certainly counted in his favour as he sought simpler and simpler solutions). He went from there to delivering online transactions at the Inland Revenue (now HMRC), including PAYE for the first time, upgrades to Self Assessment, Corporation Tax and so on. Since 2001, HMRC has had by far the largest take up of online transactions across government – they were the first to try out incentives (few will remember, perhaps, the £10 rebate for filing your Self Assessment online), the first to move to mandation (after the Carter report) and will likely be the first to get 100% take up, if not already then very soon. Chris also worked on or ran a dozen other government websites, as well as the Government Gateway.
    In other roles, Chris has switched departments from 100% desktop to 100% laptop so allowing remote working and a reduction in carbon footprint, rolled out collaboration tools to support joint working, assured technology for the Olympics and many other things, not to mention loitered outside fish and chip shops counting the sacks of potatoes being delivered so that he could estimate sales and so, in turn, figure out how much tax the owner really should have been paying – them’s proper metrics them is.
    And, along the way, he has routinely regaled his friends, colleagues, customers and suppliers with endless tall tales, mostly crap jokes, comments about the poor quality of football at Arsenal (and the significantly better quality at Tottenham), infectious enthusiasm for the latest gadgets (from ‘phones to cameras to televisions and beyond) and, until he switched to eating only carrots, was one of the finer dinner companions in the UK – certainly higher than the 17th most influential dinner companion in the UK I believe.  Not every day was a blast with Chris and he didn’t get everything right, but I’m hard pressed to remember the bad ones amongst the torrent of good ones.
    If only others could accomplish as much in such a period as Chris has managed to, in just 18 months.  As wiser folks than me have said, success has 1000 fathers (and failure is an utter b*stard).   Chris will be missed by many though doubtless some, particularly those who were on the receiving end of some of his more explosive blasts, will be pleased that he’s gone, hoping that the cloud will become what they always thought it was, vapour.  It won’t.
    I’m sure the world of Government IT will be a slightly quieter place come the beginning of May, but I am confident that few of the changes Chris has kicked off will be unwound. And some will be reinforced even more strongly by the tiny team with the day jobs – they’re still there and will be working just as hard to support Chris’ successor, Denise McDonagh.

    ICT Futures for non-ICT deals?

    Liam Maxwell’s ICT Futures team are, as far as can be seen, getting properly stuck into IT deals.  But who is going to look at the other deals going on in government?

    1) Frameworks are being generated across every part of government.  Central government has its PSN, G-Cloud, commodity (known, I think, as Achilles) and according to the GPS site there are more to come and, whilst these deals are set up for the entire public sector to use, there are still other frameworks being generated including regional and local PSNs, NHS frameworks and also frameworks within departments themselves.  And then there’s the overlap between frameworks, such as PSN and G-Cloud both offering, for instance, e-mail. Pretty soon suppliers will have a choice of three dozen routes to market for any given deal yet will have little in the way of business to show for their hard efforts.  After all, getting on to a framework isn’t free – far from it in many cases.  There needs to be a charge to rationalise all of the frameworks, eliminate overlap and ensure that everyone knows where to go to buy what they need, rather than have them think that what they need is yet another framework.

    2) BPO deals are shortly to become common place, I suspect, as departments move their attention away from the apparently “easy to count” (the PASC might disagree) world of IT to the more complicated world of business process.  Whilst IT costs anywhere from £13bn-£25bn a year depending on who you listen to, government’s spend on its core business may soon be up for grabs – and that annual spend could easily reach £100bn-200bn (again, depending on who you listen to).  Figuring out what is and isn’t a good deal for these will be a complicated process.  We need a BPO futures team to look across the public sector, starting in the centre perhaps, to see how these deals are being structured and what needs to be in place to help prevent poor deals being done, ensure lessons are learned and that service improves as a result.

    So John Collington is doubtless the Liam Maxwell of “Framework Futures” though he’ll want to get moving and put the “hairdryer treatment” on all those setting up their own frameworks in competition with his own.

    But who is going to be the BPO Futures lead? The person who reviews business outsource deals to make sure that they make sense and are in line with the overall strategy?

    Reinventing Government IT

    In the 80s the theme in Government IT was inhouse development of big Line of Business applications. In the 90s, it shifted to outsourcing both development and maintenance of all aspects of IT. By the 00s we had two themes – the move online and the creation of multi-billion pound programmes (most of which have resulted in abject failure for whatever reason).

    In the 10s we are, again, re-inventing Government IT. With re-invention comes risk, sometimes significant risk.  The models in place today are well understood and well practiced yet have also come with significant and continuing risk; there has been no avoiding it.  Few would be able to point at more than a handful of successful deliveries in government IT.  And yet in the past, changes have occurred relatively slowly.  Today we are running several themes at once – vast reinvention on a pan-government scale – which inevitably brings with it more risk.  And with that risk will come, again, the risk of failure – sometimes with individual point solutions adopted, other times with whole threads of activity.  Spotting those failures before they occur and addressing them will be a time sink for the elite SWAT teams that will doubtless be needed. But some will escape the risks and deliver brilliantly of course; if only we knew which ones up front.  Learning the lessons and applying them to everything else going on – as they occur and almost in real time – will be the activity that differentiates this reinvention from those that have gone before.

1. Real Innovation – gCloud

    This week saw the launch of the gCloud framework and its associated CloudStore.  For the first time, government buyers can see the price of services upfront and compare across lots of different providers (ok, so it’s not easy to do that yet but it will get easier I’m sure).  Companies that had little or no access to government customers before can now chase business and, if they’re priced well, win against the big players.  Government can, in turn, try services out with little tail risk. Want to try collaboration for a project? Sign up to a new service for 2 or 3 months and see how your team adapts before committing your entire organisation to it. Fed up running an old version of Exchange that doesn’t support modern mobile phones well? Move to another.
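The kind of upfront comparison this enables can be sketched in a few lines (the suppliers and prices below are made up, and the real CloudStore is a catalogue, not an API):

```python
# Toy catalogue comparison (made-up suppliers and prices) illustrating
# the upfront, like-for-like price visibility the CloudStore introduces.

catalogue = [
    {"supplier": "BigCo",   "service": "email", "price_per_user_month": 8.50},
    {"supplier": "SmallCo", "service": "email", "price_per_user_month": 3.25},
    {"supplier": "MidCo",   "service": "email", "price_per_user_month": 5.00},
]

def cheapest(service: str) -> dict:
    """Lowest-priced listing for a given service type."""
    offers = [c for c in catalogue if c["service"] == service]
    return min(offers, key=lambda c: c["price_per_user_month"])

print(cheapest("email")["supplier"])  # SmallCo
```

Publishing prices makes exactly this sort of comparison possible for the first time; before gCloud, a buyer would have had to run a procurement to discover any of these numbers.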

We already know how this plays out in the consumer scene.  We sign up to services, use them for a while, abandon them in favour of something newer and shinier and then repeat the process, often with a dozen different services at once.  Of course, we rarely pay for these services and there is little tie-in beyond whatever data we commit to them (and so services, naturally, try and take as much off us as possible – hence the recent storm about address books being uploaded into the cloud).

    We don’t, though, know how this will play out in government.  Big, slow, old-fashioned government is used to buying from big, slow, old-fashioned suppliers and living with them for a decade or longer. Will departments break that mould quickly and buy IT for their staff the way that they buy IT at home? Will they have the courage to bring together 5, 10 or 15 services from different suppliers and manage them as a whole, without the shield of a huge prime contractor?  Will they overcome their innate fear of “security” and adopt innovative ideas from new suppliers inexperienced in the government world?

gCloud’s main risk is not that the services fail, but that the whole idea behind it fails – that departments hunker down and ignore it, that they don’t switch enough of their existing spend to it to make a difference, or that they use their existing suppliers to do what gCloud wants to do and so undermine it.  The incredible energy behind gCloud – small team that it is who manage it – goes a long way to holding that risk in check, but departments are the buyers and they need to show their hands.  A move to these kinds of services is inevitable; it’s only a question of when.  Departments publishing what they plan to buy (e.g. email, collaboration, storage as a service, etc.) and when would allow suppliers to focus their efforts on core products whilst still looking to provide innovative services that departments didn’t even know they needed until they were shown them (or, better still, had them recommended by the gEnius tool that doubtless will come with gCloud v2 – “services you need that you didn’t know you needed”™ – or perhaps “service discovery as a service”™).

    For me, gCloud has to work and will work, although there will be bumps, because it’s actually the foundation of the second area of reinvention:

    2. Rejection of the old SI model – Service Integration

    If the 90s outsourcing model was single, behemoth suppliers providing all of a department’s IT (though not meeting all of its needs, often despite best efforts all round), the model in the 10s is quite different.

    The model adopted for the 10s, by at least a few departments, with MoD and MoJ leading the way, is to buy services from as many as a dozen providers – and to have a single Service (as opposed to System) Integrator bring all of those together.  The twist is that the SI no longer owns all of the other contracts – instead, they’re all owned by the department.  This has the advantages of eliminating the margin on margin common in traditional prime contracts as well as allowing the customer to pick what might have been called “best of breed” (back to the future!) suppliers (either through competition or from a framework) for each strand. On the downside, the procurement process becomes significantly more complicated and operating the end to end service becomes even trickier – liabilities, ownership and incentives will be murkier than government is used to.

    To pull this off requires far greater client side expertise to be in place than either currently exists or than has been thought about.  In a world of reducing budgets and prohibitions against consultants and contractors, sourcing enough people within the public sector (or transferring them in) will be a huge challenge.   Those people certainly exist internally, but given going on for £40bn of contracts at original signing value coming up for renewal in the next 5 years, I very much doubt that there are enough to handle the forward workload.  In the 00s, one of the consequences of the multi-billion pound contract era was that government directly drove inflation through its own buying process; that could happen again and so needs to be carefully planned for – by managing the timetable for procurements, by being clear with suppliers about where frameworks will be used and where procurements will take place, by being clear about the baseline that will be in place at the point of transition (departments are not standing still after all, they are busy virtualising their servers, hopefully looking at buying gCloud services and bringing their costs down in line with overall targets) – any procurement underway in the next 2-4 years will be against a very dynamic departmental baseline.

    This, then, is a riskier area of reinvention than gCloud.  With gCloud, departments are buying in to a service for a few months and maybe for only a small part of their organisation.  They are specifically able to try things out – one project team or one function – before committing.  And even if they commit, it might be that it doesn’t work and they have to pull out (see the point above about reinvention brings risk of failure).

    But the Service Integration model is trying several new things at once, and for longer periods.  And, of course, wrapped in these contracts are all of the legacy applications and services (many of which are much the same as they were in the 80s).  Failure with these is certainly bigger than with gCloud but of a smaller likelihood than with the old model – if a single provider from within the group of 6, 9 or 12 struggles, then they can be replaced (not, perhaps, by one of the others – that model hasn’t exactly worked out well in the NPfIT/CfH world).  The risk may, in fact, be all on the buying process – are the resources there to package all of these services up intelligently and effectively, to establish a great competition amongst suppliers (when many other competitions are likely to be going on at the same time, forcing suppliers to pick where they field their best teams) and to ensure that all of the liabilities, incentives, controls, processes etc work across multiple providers of services so that the user sees things just “working”?

These models will also end up working but I think there will be significant bumps along the way – during procurement, transition and operations.  They should also work better than the current model, but not immediately – I suspect it will take 2-3 years to bed the new model in properly.

    3. Relentless Commoditisation

    Typically government believed it was special and so developed everything in a “for government” way – it had one of everything and everything at least once.  Huge bespoke estates resulted with everything the department needed bought by the department (or the department’s supplier).  That resulted in huge build costs and even larger maintenance costs.  In the last few years that has changed and, in the last 18 months, that change has dramatically accelerated.

    Wherever possible, frameworks are being set up for networks, document archives, IT equipment, cloud services and all sorts of other things.  Looking at the GPS list of frameworks just now, I counted 24 for IT (including IT consultancy) and a further 16 for software out of over 600 frameworks in total.  I can only see that number going up as government tries to get its “managed spend” figure from its current level which I believe is somewhere over £1bn to far more than that.

    Not all frameworks are created equal of course, nor are they used equally.  Within the Service Integration model above, departments, such as MoD, have already made clear that they will use frameworks where it makes sense to.  Others, like MoJ, have implied that they will create frameworks within their new model so that other departments can use their services (e.g. perhaps for hosting).  It’s possible, likely even, that frameworks will be the route by which all equipment is bought even within, say, a hosting service – in which case, suppliers with high degrees of vertical integration (like HP & Fujitsu perhaps) may not, having won, say, a hosting contract be able to fill that data centre with their own hardware unless it is proven to be best value in a competition on a framework.

    4. The One That Isn’t Any Of The Above

Perhaps the biggest reinvention is going on within Government Digital Services.  There, they aren’t using frameworks (at least so far), aren’t looking for a service integrator or for a dozen suppliers to bring together a service under various contracts, and they aren’t doing some of the other things that the Government’s own ICT strategy says should be done, such as using COTS or making the most of small businesses.  They are, though, rebuilding – from scratch and with in-house, largely permanent, staff – government’s most visited website (Directgov, to be renamed, or re-renamed, GOV.UK).  If the rebuild succeeds then it’s likely both visitor counts and visitor durations will go up – if it’s easier to find everything you need, you may stick around longer to see what else is there and, if transactions are hosted on the site, then completing transactions will add to the stay time. It could be very, very big.

    Building websites and services from scratch is, of course, relatively common. After all, there was no COTS for Facebook or Twitter.  It isn’t, though, common in the public sector (not since a couple of bright people in CITU did it in the mid-90s anyway).  It also doesn’t fit well with the public sector’s current business model, whether IT or operations. I’d say that’s a pretty serious reinvention.

    This is early days. The “beta” of gov.uk is, at best, a proof of concept. It’s got some impressive capability already and the rapid iteration and incrementing of features is exciting to watch.  But it’s also got a huge to-do list ahead of it. One that, in less than a year, must allow Directgov to be turned off.  The lessons to be learned by the team are doubtless very interesting (and they’re learning most of them in the full glare of publicity); I hope that they are not learning every lesson from scratch, although given the amount of from-scratch building, I suspect they are encountering new problems every day that many others have already encountered, inside and outside of government.

    Having your team in-house is a fascinating experiment though – your only wasted cost is that of opportunity, i.e. would it have been better to have the team work on a instead of b, or did you not even notice c and what that could have done for you. It’s still real money of course – and based on the team size, it might be a lot of real money.  You can achieve speeds and quality of delivery that I don’t think can be achieved by all except the most integrated of supplier and customer teams.  But you can also spin wheels, because the cost is essentially sunk. Every working day you burn £x, and measuring whether you have achieved the same in value is hard – especially when your scope is shifting and evolving within the agile methodology.  Also, big companies manage turnover on an everyday basis – they draft in new people, have a constant stream of more junior employees who can work for lower rates and who can progress up the ladder before moving to other clients, and can bring in additional bodies for particularly tough deadlines – all of that is hard with an in-house team (especially a small one).
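    The “every working day you burn £x” arithmetic is worth making concrete.  A minimal sketch, using entirely invented numbers (the team size, blended day rate and working-day count below are illustrative assumptions, not actual GDS figures):

    ```python
    def annual_burn(team_size: int, day_rate_gbp: float, working_days: int = 220) -> float:
        """Rough annual cost of an in-house team at a blended day rate.

        Assumes every member costs the same blended rate and works the
        same number of days per year - a deliberate simplification.
        """
        return team_size * day_rate_gbp * working_days

    # Hypothetical example: a 50-person team at a blended £400/day
    cost = annual_burn(50, 400)
    print(f"£{cost:,.0f} per year")  # £4,400,000 per year
    ```

    Even at modest assumed rates the sunk cost runs to millions a year, which is why measuring delivered value against the daily burn matters so much when scope is fluid.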

    Predicting the future for GDS and gov.uk is hard.  I suspect a transition to an alternative model in the future – perhaps to a series of suppliers with GDS holding the integration role or, more radically, the creation of a mutual where GDS gets some investment and becomes a supplier to government itself, so adopting a mix of commercial disciplines with at least partial employee ownership.  That would free them up to compete for other business and would put all of the incentives in the right place, as well as provide a more structured career path for those in the team.  It would almost be a return to the days of CITU.

    This is potentially the riskiest reinvention of them all.  I hope that the risks don’t come home and that this programme succeeds enormously, but I can’t help but feel nervous for its prospects.  Had Facebook disappeared during development or suffered a few outages in its first few months, few would have noticed or cared. But when you take on such a big and visible project with an entirely new approach, it could hardly be more challenging.

    5. The Underpinning Reinvention

    All of these changes are underpinned by an openness and transparency that is incredibly refreshing.  Seeing new starters in GDS blog about what it’s like to work there, and very senior people across government blog, tweet and respond to comments, has opened up the workings of government – my guess is that the regular audience is a relatively small number of geeks, but there are occasional bursts into the mainstream press carrying the same message.  We have done betas and pilots and test versions in UK government before, but never quite in this way.

    As I said at the beginning, with reinvention comes risk. With risk comes the potential for failure. With failure comes interrogation and criticism.  The good news is, I think, that all of the interrogation and criticism will have been done on the inside and posted on blogs long before that point.