I’ve had to turn on comment “word verification”. If you leave a comment you’ll be asked to enter a word that appears on screen as an image. I’ve resorted to this only because someone is leaving multiple triplets of comments that link to whoknowswhere (I daren’t look). Don’t let it put you off.
I lived in Paris for about 4 years, from 1997 to 2001, with a few gaps here and there. Driving there was always an experience – the first time I ventured onto the roundabout at Place de l’Étoile forever sticks in my memory; it’s one of the roundabouts where traffic coming on has right of way.
This video, all 8 mins and 39 seconds of it, puts my driving experience in Paris to shame. The soundtrack is that of a Ferrari or maybe a Lamborghini; the visuals are a bumper-level view of Paris – the Arc de Triomphe, the Louvre, Sacre Coeur, a few pigeons scattering, one or two shocked pedestrians and a couple of near misses. My favourite moment? The driver turns left into the Louvre (the pyramid on the right) and the engine noise reverberates off the archway. That and shooting more than a dozen sets of red lights – the only time I did that in Paris was when I ran the marathon and I wasn’t going quite so fast.
Watch it with the sound up loud.
All tours of Paris should be this much fun.
The Government Gateway won an award this week (“another one?” I hear you cry). It’s an IDDY award, or perhaps an IDDYIOT award. Apparently it’s only for deployments of Liberty technology. Here’s what was done to win it:
– Deployment — The Government Gateway Authentication Service has been designed as the authentication server for all e-government services in the UK. Nearly eight million citizens in the UK are registered to use the gateway service.
– Circle of Trust — The Gateway provides authentication services on behalf of multiple other public-sector bodies, based on trust principles established in UK e-government legislation. The Gateway also supports a “tiered” authentication scheme according to the level of assurance provided by the user enrolment process and the type of service being accessed.
– User-Centric Capabilities — The project has been developed to provide citizens and businesses with ease-of-use capabilities for accessing a variety of UK government services; not only does the Gateway provide a single authentication and entry point for online government services, it now supports the predominant open standards on the market, making it easier for public sector bodies to integrate its authentication capability with their own service provision systems.
– Highlights — The deployment supports all federation standards, allowing complete interoperability between government agencies nationwide; there is less need for each local authority to develop or implement its own secure authentication mechanism. The Gateway provides local authorities with a single, consistent and robust security mechanism at minimal cost and effort on their part.
– Interoperable Federation Technologies — A principal aim of this project was to reduce the cost and complexity experienced by government departments and other public sector bodies (such as local authorities) in making use of the centralized authentication service. To that end, the Gateway was enhanced to support both the WS-Federation and Liberty Alliance Identity Federation Framework standards. This delivers a level of interoperability and protocol independence which greatly simplifies the task of integrating service-provision systems with the Gateway’s authentication functions. It also means the Gateway can deliver consistent authentication to its users without requiring them all to adopt a single standard, which could potentially alienate a substantial segment of the user base.
All of those apart from the last bullet have been true for a while. I didn’t know the Gateway did (used? incorporated?) Liberty, so I asked the guys back at the Cabinet Office what was with that. Jim (it’s life but not as we know it) replies:
This year we built a single sign-on portal as part of the Gateway UI. The business objective was to deliver a white-labelled common authentication page that would manage the authentication calls with the Gateway. In order to do this we had to implement single sign-on to manage the user’s authenticated session between the Gateway domain and the participating portal domain. We did this by implementing an interoperable SSO protocol handler that allows the portal to select whether they want to use one of the Liberty, WS-Federation or SAML protocols. The security token that they receive is a SAML 1.1 token but each one can be customised per portal.
So that’s all clear then. Congratulations. Nearly six years on, it’s great to see the Gateway still being recognised as leading the way.
I must be 5 years behind everyone else on this but I discovered Goggles today, the Google Earth flight sim. Setting it to start in London, I was delighted to recognise Hyde Park “beneath” me, and then to fly down Knightsbridge and on to my own house. Wonderful stuff. Next up, please, an online version where you can shoot your friends. What’s a bit of flying where you can’t shoot other planes?
Monty Hall was the host (and later the producer) of a TV quiz show called “Let’s Make a Deal”. The show started in the early 1960s and finished as late as 1990. The show, and the host particularly, gave rise to a paradox known as the Monty Hall Problem.
The essence of the problem is captured in this paragraph:
Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?
Dan has written about this problem before and that, coupled with a recent comment from Ian where he said, “I’d rather pay 1000 quid for something that I liked than 500 for something I didn’t” got me thinking. Ian went on to propose that government needed some kind of proof of concept lab, not run by the big vendors, to make sure everything would work out before too much money was spent.
It turns out that in the Monty Hall problem, it’s better for you to switch doors than to stay with your original choice. You won’t believe me, so check the links, or play for yourself at this site, where you’ll be presented with the stats from past winners and losers.
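If you’d rather not take my word or the linked site’s, the 2/3 result is easy to check with a quick simulation (a minimal sketch; the door count and number of trials are just conventional choices):

```python
import random

def play(switch, trials=100_000):
    """Simulate the Monty Hall game; return the fraction of games won."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)  # contestant's first choice
        # Monty opens a door that is neither the pick nor the car
        # (if the pick is the car, either goat door works for the stats)
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # take the one remaining unopened, unpicked door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")  # close to 1/3
print(f"switch: {play(switch=True):.3f}")   # close to 2/3
```

Run it a few times and the switching strategy settles around 0.667, the staying strategy around 0.333 – exactly the counter-intuitive answer.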
So let’s play out this problem in the government space. You’re starting a project and there are three outcomes – two are the same, you spend a fortune and it doesn’t work out (back to the two standard problems of government IT projects, (1) it costs too much and doesn’t work and (2) it’s all big brother); the third door is the route to success.
You don’t get to find out which outcome is going to be your gift upon completion right away. But let’s suppose half way through the project, Monty pops up and offers you the choice of doors. Behind one is successful completion of the project for only £1,000; behind the other two is failure, having spent £500 (using Ian’s numbers – and I know he said “something I liked”, but bear with me as I slightly change that to “something that works”).
You pick one door and Monty opens one of the others. It shows the project failing at a cost of £500. Do you switch doors, i.e. strategies, or do you carry on? If the odds were the same as the Monty Hall problem, you’d make the switch, because the odds of success after switching would be 2/3, for the investment of only another £500. The key in all this is that Monty knows what is behind the doors.
What if the numbers were bigger? What if it was £5,000 more? Or £50,000? Or £500,000? Or £500,000,000? Would you still change? If you’d already spent £500,000,000 and thought that you might pull it off for another £500,000,000 (or that there was a 2/3rd probability you would), would you still go for it?
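The arithmetic, of course, doesn’t care about the size of the stake – only our nerves do. If the 2/3 odds carried over, the expected benefit of switching would simply scale with the numbers (a hypothetical back-of-envelope sketch using the figures above, not a model of any real project):

```python
def expected_gain_from_switching(stake):
    """Extra expected value of switching, assuming Monty Hall odds hold:
    2/3 chance of success if you switch, 1/3 if you stay."""
    p_stay, p_switch = 1 / 3, 2 / 3
    return (p_switch - p_stay) * stake

for stake in (500, 500_000, 500_000_000):
    gain = expected_gain_from_switching(stake)
    print(f"stake £{stake:,}: expected gain from switching £{gain:,.0f}")
```

The ratio is identical at every scale; what changes at £500,000,000 is only how the decision feels.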
In the world of government IT, the numbers are far more likely to be in the tens or hundreds of millions (and, sadly, occasionally in the billions) and, worse, there aren’t just 3 doors and, still worse, no one helps out by opening a few of the doors to narrow your choices – and, we don’t know what’s behind the doors (other than some inevitable press headlines either way perhaps).
So, back to Ian’s proposal. Would a proof of concept lab materially improve the odds of delivery?
Back in the original procurement for the NHS IT programme (known now as NPfIT or Connecting for Health), each of the vendors was required to construct complex proofs of concept of all of their technologies, and integrate them to other vendor results (mostly the spine) to show that their proposal would work. It was during this stage that many pulled out.
Before the Tax Credits programme, several vendors were asked to come in and carry out a proof of concept, over eight weeks if memory serves, using Java, .NET and COOL:Gen so that a decision could be made on which technology to use.
Intellect, the supplier organisation, did a deal with OGC to offer up a service called “Concept Viability” that would let government test out ideas, in a multi-vendor environment (although not necessarily building them), before making a commitment to any one route.
I appreciate that vendors have been involved in these and what Ian is suggesting is some kind of in-house lab, but the idea has been well-trialled. And, I assume, the tests carried out gave some degree of reassurance that the idea was sound.
My experience with government IT leads me to believe that it is not the fundamentals of the technology that fall apart in most projects – the servers talk to each other, the applications do what they’re supposed to do, the network connects to offices and so on. No, there are some other problems:
– Often the relationship between the requirement and the solution is not understood. A proof of concept can operate with only part of the requirement clear. As the requirements evolve and time passes, decisions made early on are found to hamper later delivery – complex set ups need to be unwound and rebuilt.
– The subtleties of legislation are not usually clear at the outset, the type of client base that will be supported is not known (tax credits for instance). This leads to assumptions being made that, when tested much later in the process, often turn out to be flawed.
– Proofs of concept (or, perhaps, proof of concepts) rarely have to scale and operate nationally. A system designed to work in a lab doesn’t have to deal with dozens or hundreds of offices scattered around the nation, and it doesn’t have to deal with millions of claims each year. Simple decisions taken early on for expedience play out poorly when scaled up massively.
A good design up front coupled with some really good scenario thinking can resolve these points. A system can be designed and built that is flexible enough to allow late stage changes, is scalable enough to deal with national usage and capable enough to deal with the quirks of changing legislation.
I’m not sure, though, why government would be any better at doing this than the vendors it employs – government long ago gave away much of its IT expertise so that it could be reinvoiced by the big players. What route in is there for new talent – and how would government compete for that talent with people who really know how to design such systems, perhaps the designers of eBay, Amazon, Google or whoever?
There is, though, a need to do more up front to reduce the later risk; but there is also a need to get the later scenarios right and to test those both technically and contractually. As the range of possible scenarios goes up, the risk premium the deal attracts will, of course, go up and, in turn, the cost to government (the insurance risk) will go up. Some stress testing of contracts would make a lot of sense, notwithstanding the fact that there is always more than one party in any contract and predicting outcomes is certainly not a science.
Finally, if we knew which doors were in front of us and someone could open a door and say “down this path lies certain ruin”, I’m not sure that people would make so brave a call as to halt at £500 (or £500,000,000); the vendors, likewise, would not likely take the loss at that point. The bigger the number, the harder the call and the further to fall there is.
It’s a big question to resolve and one that needs more than a simple blog article. It can’t be solved with one idea or another, but will need much effort from many angles – vendors, government departments, interested third parties, universities (who are turning out the next generations of outsourcer employees) and interested customers of government who have a point of view.
So, this is not to belittle the idea that Ian offers – I think there’s merit there – it’s just to lay out some of the complexities. Now, what I think could be interesting, is a place to prove legislative thinking – to understand better the impact of various policies on government, its customers and its workers.
There is perhaps no finer example than Tax Credits. The Inland Revenue assumed that the beneficiaries of tax credits would be people who paid tax, i.e. their usual customer base. These people were, for the most part, nice, controlled people who paid the IR monthly through PAYE or annually through Self Assessment (with some variations to deal with, e.g., self-employed people). In reality, they got mothers with kids who would superglue their hands to the tables at tax offices, mothers who would leave their kids in the tax office and insist that they would stay there until the IR paid them money so that they could eat and, to make it all so much worse, people who had an income that changed month to month, let alone year to year. Understanding the difference there could have paved the way for a far more sensible discussion about the best way to calculate and pay the credits. There were many in the IR who thought this way at the time and who argued endlessly that there would be a better way to do it, but their protests fell on deaf ears. The system problems that occurred at launch only compounded a problem that was already going to be terrible at best, disastrous at worst. Subsequent problems with ID fraud and fraudulent claims only make the original thinking yet more dangerous.
In truth, there are some fundamental flaws that make the risks big for government projects. These are borrowed from a colleague who I suspect would choose to remain nameless:
– A willingness to buy the aspiration rather than the reality
– A failure to confront the gap between policy and operation (i.e. having a policy is not the same as delivering it)
– A bizarre reliance on IT to solve problems which ought to be addressed in business terms (and IT’s willingness to accept this challenge). This is where Mike Cross chips in with “If I had a pound for every time someone says ‘it’s the business, not the technology’.”
For as long as these remain the case, all major cross-cutting (such a lovely government word that: many people are cross in government and not nearly enough is cut, but I digress) programmes will present an extremely high level of risk.
Borrowing some further words from a learned colleague, who realised all this was tough far longer ago than I did and who has spent most of his time since trying to correct it, here are the “we will nots” that every IT department should stick to:
We will not:
Accept the challenge of integrating, at the back end, what Policy wonks and Ministers have designed, at the front end, to be separate and different.
Entertain discussion of ‘whole customer experience’ for so long as policies and products continue to be defined in terms of non-entities (i.e. things/people/organisations that government doesn’t actually transact with)
Put sophisticated management information tools in the hands of managers who have no aptitude for managing by reference to information.
Record management information at any frequency which is shorter than the business’s decision-making cycle. For most this means annually.
Accept Business Cases containing reference to benefits of greater flexibility, improved working environment for staff or better decision making. Too many broken promises, too many bruised hearts…
Tolerate delay while anguished discussions take place about FoI or DPA. It’s a business issue, so sort it out before you ask us to build anything.
Try to persuade you that the latest upgrade will solve all your problems. It won’t. You are on the treadmill of ever-increasing expenditure for ever-decreasing additional benefit. Get used to it.
On those points, I’ll close.
It’s probably easy to get too anal about all the running data I get from my Garmin Forerunner 305, but after a tough run on Sunday I was looking for some answers. As part of the prep for the New York Marathon on November 5th (sponsor me!), I ran the Robin Hood Nottingham Half Marathon. There was no sign of Robin, and definitely no one handing out money to the poor (or even jelly babies to the runners), but there was plenty of sunshine – despite a foggy start to the day, it was 24°C at the off and somewhere near 27°C by the time I got round. It was easily the toughest run I’ve been through – partly because I wasn’t feeling great, partly because of the heat (although I’ve run in hotter conditions) but mostly because there were lots of hills and they all felt steep. And so I must not be in the shape that I thought I was in.
The Garmin records distance, elevation and speed. My SportTracks software graphs that better than the Garmin in-house software does (and, I think, better than MotionBased). So here are three graphs in a row, one from the Great North Run (last September, 1h 59m), one from the Liverpool Half (March this year, 1h 45m) and one from Nottingham (yesterday, 1h 46m 50s).
Nottingham felt steep, but I wanted to know if it was as steep as I thought – and how did it affect my running. This first graph shows the elevation and pace for the Great North Run. I started off too fast – that’s what comes of being on the white start line when they sound the klaxon. From there, I get slower throughout the race, independent of the hills (and everyone tells me that the GNR is one of the steeper races) – there’s a brief blip at 14km where I caught someone who nearly fell over as I was running alongside them – and then I get a little quicker near the end as I see the 2 hr time get a little too close on my watch and pick up the pace a bit. I finished in 1h 59 something.
Next up is the Liverpool Half. Here I started somewhere in the pack and so the first few hundred metres are spent dodging people. From then on I ran pretty much consistently, mostly between 4:50/km and 5:10/km. There’s a big hill in the middle – the course takes you down the hill, along a little and then round and back up the same hill. That certainly took something out of me, but the hill was short and steep and it was over with quickly.
Lastly, Sunday’s race at Nottingham. The spread of pace here is from 4:20 to 5:20, with a high degree of volatility. The hills are more frequent, steeper and generally last longer than either Liverpool or the GNR. Half way through I was ready to drop – volatility increases, largely independent of the hills, yet I consciously try and speed up in the last 4 km, trying to get round inside my target time. I was very focused on 1h 40m at the start, and held that time for, at most, a mile. Things got worse from there and I finished in 1h 46m.
In 2 weeks I have the Windsor Half Marathon which is supposed to be moderately bumpy. Between then and now I have another 30km+ run to get done, then a week off after that followed by the Nike 10k. And then it will be a month to go.
So far I’ve run 445km in training. I’m a little ahead versus the run up to London earlier in the year but my times are looking a little slower. Of course, I ran Liverpool and Reading 4 weeks and 2 weeks before the London Marathon, whereas I’m running these 8 weeks and 6 weeks before, respectively.
Lots of variables in every run. Perhaps more on this when I’m through Windsor. Hopefully the Queen will stop by to wish us all well.
Awesome-looking car … apparently it has over 6,000 laptop batteries powering it … I only hope they’re not made by Sony, otherwise there’s going to be a mighty bang.