
The Trouble With … Transition

In my post two weeks ago (Taking G-Cloud Further Forward), I made this point:

I struggle with the suggestion to make contracts three years instead
of two.  This is a smokescreen; it’s not really what is making buyers
nervous, it’s just that they haven’t tried transition.  So let’s try some
– let’s fire up e-mail in the cloud for a major department and move it six
months from now.  

Until it’s practiced, no one will know how easy (or
incredibly difficult) it is.  The key is not to copy and paste virtual
machines, but to move the gigabytes of data that go with them.  This
will prove where PSN is really working (I suspect that there are more
problems than anyone has yet admitted to) and demonstrate how new
capabilities have been designed (and prove whether the pointy things
have been set up properly, as we used to say – that is, does the design
rely on fixed IP address ranges or hardcoded DNS routing, or
whatever).  

This won’t work for legacy – that should be moved once and
once only to the Crown Hosting Service or some other capability (though
recognise that lots of new systems will still need to talk to services
there).  There’s a lot riding on CHS happening – it will be an
interesting year for that programme.

Eoin Jennings of Easynet responded, via Twitter, with the view that buyers see significant procurement overhead in having to run a procurement every two years (or perhaps more frequently, given there is an option within G-Cloud to terminate for convenience and move to a new provider). Eoin is seemingly already trying to convince customers – and struggling.

Georgina O’Toole (of Richard Holway’s Tech Market View) shared her view that two years could be too short, though for a different reason:

An example might be where a Government organisation opts for a ‘private
cloud’ solution requiring tailoring to their specifications. In these
cases, a supplier would struggle to recoup the level of investment
required in order to make a profit on the deal.  The intention is to
reduce the need for private cloud delivery over time, as the cloud
market “innovates and matures” but in the meantime, the 24-month rule
may still deter G-Cloud use.

Both views make sense, and I understand them entirely, in the “current” world of government IT where systems are complex, bespoke and have been maintained under existing contracts for a decade or more. 

But G-Cloud isn’t meant for such systems.  It’s meant for systems designed under modern rules, where portability is part of the objective from the get-go.  There shouldn’t be private, departmentally focused clouds being set up – the public sector is easily big enough to have its own private cloud capability, supplied by a mixture of vendors who can all reassure government that it is not sharing servers or storage with whoever it is afraid of sharing them with.

And if suppliers build something and need long contracts to get their return on investment, then they either aren’t building the right things, aren’t pricing them right or aren’t managing them right – though I accept that there is plenty of risk in building anything focused on public sector cloud until there is more take-up, and I applaud the suppliers who have already taken the punt (let’s hope that the new protective marking scheme helps there).

Plainly government IT isn’t going to turn on a sixpence, with new systems transporting in from far-off galaxies right away, but it is very much the direction of travel, as evidenced by the various projects set up using a more agile design approach right across government – in HMRC, RPA, Student Loans, DVLA and so on.

What really needs to happen is some thinking through of how it will work and some practice:

– How easy is it to move systems, even those designed for cloud, when IP ranges are fixed and owned by data centre providers? (There’s a short sketch of the alternative – configuration by name rather than address – after this list.)

– How will network stacks (firewalls, routers, load balancers, intrusion detection tools and so on) be moved on a like-for-like basis?

– If a system comes with terabytes or petabytes of data, how will they be transferred so that there is no loss of service (or data)?

– In a world where there is no capex, how does government get its head around not looking at everything as though it owned it?

– If a system is supported by minimal staff (as in 0.05 heads per server or whatever), TUPE doesn’t apply to later moves (though it may well apply to the first transfer) – how do government (and the supplier base) get their heads around that?

– How can the commercial team sharpen their processes so that what is still taking them many (many) weeks (despite the reality of a far quicker process with G-Cloud) can be done in a shorter period?
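
To make that first question concrete, here is a minimal sketch – my own illustration, with entirely hypothetical service names and environment variables, not anything from a real department – of the design habit that makes a move possible: the application asks for its dependencies by name from configuration, and DNS resolves them at runtime, so a change of data centre becomes a configuration change rather than a code change.

```python
# Minimal sketch: look up a downstream service by configured name, never by a
# hardcoded IP address. Names and environment variables here are hypothetical.
import os
import socket


def service_endpoint() -> tuple[str, int]:
    """Return (host, port) for the downstream service from configuration.

    If the system moves to a new hosting provider, only the DNS record or the
    deployment configuration changes - the code and the application design do
    not care which IP range the new data centre happens to own.
    """
    host = os.environ.get("CASEWORK_SERVICE_HOST", "localhost")
    port = int(os.environ.get("CASEWORK_SERVICE_PORT", "8443"))
    return host, port


if __name__ == "__main__":
    host, port = service_endpoint()
    # Resolution happens at call time, so a post-migration DNS change is picked
    # up on the next lookup without redeploying anything.
    print(f"{host}:{port} resolves to {socket.gethostbyname(host)}")
```

The particular mechanism doesn’t matter (environment variables, a configuration service, whatever) – the point is that nothing in the code assumes the fixed IP ranges or hardcoded routing mentioned above.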

Designing this capability in from the start needs to have begun already and, for those still grappling with it, there should be speedy reviews of what has already been tried elsewhere in government (I hesitate to say “lessons must be learned”, on the basis that those four words may be the most over-used and under-practiced in the public sector).

With the cost of hardware roughly halving every 18-24 months, and therefore the cost of cloud hosting falling on a similar basis (perhaps even faster, given increasing rates of automation), government can benefit from continuously falling costs (in at least part of its stack) – and by designing its new systems to avoid lock-in from scratch, government should, in theory, never be dependent on a small number of suppliers and high embedded costs.
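
As a rough worked example of that compounding effect (my numbers, and the simplifying assumption that hosting prices track hardware costs directly), here is what a unit of hosting costing 100 today might cost at the end of successive two-year contract terms, if prices halve every 21 months (the midpoint of 18-24):

```python
# Rough illustration only: assumes hosting unit cost halves every 21 months.
HALVING_MONTHS = 21     # assumed halving period (midpoint of 18-24 months)
START_COST = 100.0      # arbitrary unit cost today
TERM_MONTHS = 24        # one two-year G-Cloud contract term


def cost_after(months: float, start: float = START_COST) -> float:
    """Unit cost after `months`, if it halves every HALVING_MONTHS."""
    return start * 0.5 ** (months / HALVING_MONTHS)


for term in (1, 2, 3):
    print(f"after {term} two-year term(s): {cost_after(term * TERM_MONTHS):.0f}")
# Prints roughly 45, 21 and 9 - the benefit only materialises if contracts (and
# designs) let government re-procure or re-price at those points.
```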

Now, that all makes sense for hosting.  How government does the same for application support and for service integration and management – and how it gets onto a path where it actually redesigns its applications (across the board) so that they could be moved whenever new pricing makes sense – is the real problem to grapple with.
