Monty Hall was the host (and later the producer) of a TV quiz show called “Let’s Make a Deal.” The show started in the early 1960s and ran, in various forms, until as late as 1990. The show, and the host in particular, gave rise to a paradox known as the Monty Hall Problem.
The essence of the problem is given in this paragraph:
Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?
Dan has written about this problem before and that, coupled with a recent comment from Ian where he said, “I’d rather pay 1000 quid for something that I liked than 500 for something I didn’t”, got me thinking. Ian went on to propose that government needed some kind of proof of concept lab, not run by the big vendors, to make sure everything would work out before too much money was spent.
It turns out that in the Monty Hall problem, it’s better for you to switch doors than to stay with your original choice. You won’t believe me, so check the links, or play for yourself at this site, where you’ll be presented with the stats from past winners and losers.
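If you’d rather convince yourself in code than take the site’s stats on trust, a small simulation makes the point. This is only an illustrative sketch (in Python; it has nothing to do with the site above): Monty always opens a goat door you didn’t pick, and switching wins roughly two times out of three.

import random

def play(switch):
    """One round of Monty Hall; returns True if you end up with the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty knows where the car is and always opens a goat door you didn't pick.
    monty = random.choice([d for d in doors if d not in (pick, car)])
    if switch:
        pick = next(d for d in doors if d not in (pick, monty))
    return pick == car

trials = 100_000
for strategy, switch in (("stay", False), ("switch", True)):
    wins = sum(play(switch) for _ in range(trials))
    print(f"{strategy}: won the car {wins / trials:.1%} of the time")
# Typically prints roughly 33% for "stay" and 67% for "switch".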
So let’s play out this problem in the government space. You’re starting a project and there are three doors. Behind two of them is the same outcome: you spend a fortune and it doesn’t work out (back to the two standard problems of government IT projects, (1) it costs too much and doesn’t work and (2) it’s all big brother). Behind the third door is the route to success.
You don’t get to find out which outcome is going to be your gift upon completion right away. But let’s suppose half way through the project, Monty pops up and offers you the choice of doors. Behind one is successful completion of the project for only £1000, behind the other two is failure having spent £500 (using Ian’s numbers – and I know he said “something I liked”, but bear with me as I slightly change that to “something that works”).
You pick one door and Monty opens one of the others. It shows the project failing at a cost of £500. Do you switch doors, i.e. strategies, or do you carry on? If the odds were the same as in the Monty Hall problem, you’d make the switch, because the odds of success would be two in three for the investment of only another £500. The key in all this is that Monty knows what is behind the doors.
What if the numbers were bigger? What if it was £5,000 more? Or £50,000? Or £500,000? Or £500,000,000? Would you still change? If you’d already spent £500,000,000 and thought that you might pull it off for another £500,000,000 (or that there was a 2/3rd probability you would), would you still go for it?
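Purely as an illustration of why the sunk money shouldn’t drive that call (the figures below are invented, not taken from any real programme), the arithmetic looks something like this: whatever has already been spent is gone either way, so the decision turns only on the probability of success, the value of success and the money still to be spent.

def ev_of_continuing(p_success, value_of_success, further_spend):
    # Expected value of carrying on. Deliberately absent: the money already
    # spent, which is sunk whichever door you end up behind.
    return p_success * value_of_success - further_spend

# Invented figures: success worth £1.5bn in benefits, another £500m to finish.
print(ev_of_continuing(1/3, 1_500e6, 500e6))  # stay:   expected value £0
print(ev_of_continuing(2/3, 1_500e6, 500e6))  # switch: expected value £500m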
In the world of government IT, the numbers are far more likely to be in the tens or hundreds of millions (and, sadly, occasionally in the billions) and, worse, there aren’t just 3 doors and, still worse, no one helps out by opening a few of the doors to narrow your choices – and we don’t know what’s behind the doors (other than some inevitable press headlines either way, perhaps).
So, back to Ian’s proposal. Would a proof of concept lab materially improve the odds of delivery?
Back in the original procurement for the NHS IT programme (known now as NPfIT or Connecting for Health), each of the vendors was required to construct complex proofs of concept of all of their technologies, and integrate them to other vendor results (mostly the spine) to show that their proposal would work. It was during this stage that many pulled out.
Before the Tax Credits programme, several vendors were asked to come in and carry out a proof of concept, over 8 weeks if memory serves, using Java, .net and coolgen so that a decision could be made on which technology to use.
Intellect, the supplier organisation, did a deal with OGC to offer up a service called “Concept Viability” that would let government test out ideas, in a multi-vendor environment (although not necessarily building them), before making a commitment to any one route.
I appreciate that vendors have been involved in these and what Ian is suggesting is some kind of in-house lab, but the idea has been well-trialled. And, I assume, the tests carried out gave some degree of reassurance that the idea was sound.
My experience with government IT leads me to believe that it is not the fundamentals of the technology that fall apart in most projects – the servers talk to each other, the applications do what they’re supposed to do, the network connects to offices and so on. No, there are some other problems:
– Often the relationship between the requirement and the solution is not understood. A proof of concept can operate with only part of the requirement clear. As the requirements evolve and time passes, decisions made early on are found to hamper later delivery – complex set ups need to be unwound and rebuilt.
– The subtleties of legislation are not usually clear at the outset, the type of client base that will be supported is not known (tax credits for instance). This leads to assumptions being made that, when tested much later in the process, often turn out to be flawed.
– Proofs of concept (or, perhaps, proof of concepts) rarely have to scale and operate nationally. A system designed to work in a lab doesn’t have to deal with dozens or hundreds of offices scattered around the nation, and it doesn’t have to deal with millions of claims each year. Simple decisions taken early on for expedience play out poorly when scaled up massively.
A good design up front coupled with some really good scenario thinking can resolve these points. A system can be designed and built that is flexible enough to allow late stage changes, is scalable enough to deal with national usage and capable enough to deal with the quirks of changing legislation.
I’m not sure, though, why government would be any better at doing this than the vendors it employs – government long ago gave away much of its IT expertise so that it could be reinvoiced by the big players. What route in is there for new talent – and how would government compete for that talent with people who really know how to design such systems, perhaps the designers of eBay, Amazon, Google or whoever?
There is, though, a need to do more up front to reduce the later risk; but there is also a need to get the later scenarios right and to test those both technically and contractually. As the range of possible scenarios goes up, the risk premium the deal attracts will, of course, go up and, in turn, the cost to government (the insurance risk) will go up. Some stress testing of contracts would make a lot of sense, notwithstanding the fact that there is always more than one party in any contract and predicting outcomes is certainly not a science.
Finally, if we knew which doors were in front of us and someone could open a door and say “down this path lies certain ruin”, I’m not sure that people would make so brave a call as to halt at £500 (or £500,000,000); the vendors, likewise, would not likely take the loss at that point. The bigger the number, the harder the call and the further to fall there is.
It’s a big question to resolve and one that needs more than a simple blog article. It can’t be solved with one idea or another, but will need much effort from many angles – vendors, government departments, interested third parties, universities (who are turning out the next generations of outsourcer employees) and interested customers of government who have a point of view.
So, this is not to belittle the idea that Ian offers – I think there’s merit there – it’s just to lay out some of the complexities. Now, what I think could be interesting, is a place to prove legislative thinking – to understand better the impact of various policies on government, its customers and its workers.
There is perhaps no finer example than Tax Credits. The Inland Revenue assumed that the beneficiaries of tax credits would be people who paid tax, i.e. their usual customer base. These people were, for the most part, nice, controlled people who paid the IR monthly through PAYE or annually through Self Assessment (with some variations to deal with, for example, self-employed people). In reality, they got mothers with kids who would superglue their hands to the tables at tax offices, mothers who would leave their kids in the tax office and insist that they would stay there until the IR paid them money so that they could eat and, to make it all so much worse, people who had an income that changed month to month, let alone year to year. Understanding the difference there could have paved the way for a far more sensible discussion about the best way to calculate and pay the credits. There were many in the IR who thought this way at the time and who argued endlessly that there would be a better way to do it, but their protests fell on deaf ears. The system problems that occurred at launch only compounded a problem that was already going to be terrible at best, disastrous at worst. Subsequent problems with ID fraud and fraudulent claims only make the original thinking yet more dangerous.
In truth, there are some fundamental flaws that make the risks big for government projects. These are borrowed from a colleague who I suspect would choose to remain nameless:
– A willingness to buy the aspiration rather than the reality
– A failure to confront the gap between policy and operation (i.e. having a policy is not the same as delivering it)
– A bizarre reliance on IT to solve problems which ought to be addressed in business terms (and IT’s willingness to accept this challenge). This is where Mike Cross chips in with “If I had a pound for every time someone says ‘it’s the business, not the technology’.”
For as long as these remain the case, all major cross-cutting (such a lovely government word that: many people are cross in government and not nearly enough is cut, but I digress) programmes will present an extremely high level of risk.
Borrowing some further words from a learned colleague, who realised all this was tough far longer ago than I did and who has spent most of his time since trying to correct it, here are the “we will nots” that every IT department should stick to:
We will not:
– Accept the challenge of integrating, at the back end, what Policy wonks and Ministers have designed, at the front end, to be separate and different.
– Entertain discussion of ‘whole customer experience’ for so long as policies and products continue to be defined in terms of non-entities (i.e. things/people/organisations that government doesn’t actually transact with).
– Put sophisticated management information tools in the hands of managers who have no aptitude for managing by reference to information.
– Record management information at any frequency which is shorter than the business’s decision-making cycle. For most this means annually.
– Accept Business Cases containing reference to benefits of greater flexibility, improved working environment for staff or better decision making. Too many broken promises, too many bruised hearts…
– Tolerate delay while anguished discussions take place about FoI or DPA. It’s a business issue, so sort it out before you ask us to build anything.
– Try to persuade you that the latest upgrade will solve all your problems. It won’t. You are on the treadmill of ever-increasing expenditure for ever-decreasing additional benefit. Get used to it.
On those points, I’ll close.
This is a fair piece of analysis you’ve just made. I’d hate to have to take you on in an argument over a coffee table. I’m therefore not going to try to be a twit with you, as I don’t want to lose. That said, I offer a couple of points.

Firstly, I used the phrase “for something I liked” rather than “something that worked” because, as Microsoft’s Jerry Fishenden (recently and correctly) put it, a lot of IT projects aren’t just automation of tasks, they’re actually business process re-engineering projects. Something I like implicitly means it works, but it also means I’m explicitly bought in (if it’s actually any good). If you can’t get buy-in from the staff on these jobs, they’re just going to say “I’ve always done it this way!” (just look at the CSA), i.e. be demotivated and predisposed to not using it properly. I therefore view the prototype concept as more than just a “proof that it works”, but also as a kind of “design document” that the shop floor can understand and critique and, just as importantly, feel ownership of. For instance, I recently redesigned a website for a pretty large multinational, and I spent large amounts of time talking to the women at the counters (rather secretly, as the ICT, er, inexperienced head of operations wanted to design it himself and thrust it upon them).

Providing a prototype, with questions and answers built in, also has one added advantage. In my humble opinion, there’s almost always some very clever person in a department who, for reason of being personally disliked by his seniors, not being aggressive enough, looking ugly, or just being socially inept, has never made it. I actively seek out this person as soon as I arrive on site; he’s the guy (often a timid librarian type female battered by testosterone) who, if they’d been asked “any questions?”, would have said before the system went live, “How do I do [whatever]?” They didn’t ask, however, and because of human error somewhere among the requirements gatherers, the gap analysts or the BAs, it consequently becomes a requirement change, or an enhancement. If she’s got a forum in which to question the prototype, without fear of being ridiculed, you get to hear her.

This brings me to a reinforcement of that point. The complaint I hear the most from doctors is “they’ve never asked me”; they’re really p*ssed off about this. These chaps are not stupid, they’ve got degrees that took 5 years to obtain. Now you and I both know that given 100,000 doctors, you could probably get most of the requirements you want from just 50 (if you picked the right 50), but this disenfranchises the other 99,950. If there was a prototype that simulated, say, the booking system, the ability to view a consultant’s notes, or to see an X-Ray, then everyone would see how it worked (as well as proving the logic) and no one would be frightened of it. It helps overcome the fear factor. Even the educated like to stick to what they know (as can be seen by the contrasting VB syntax and C/C++/Java syntax viewpoints).

On the subject of scalability, I should also point out that, having designed and/or troubleshot many systems with a very large scalability requirement, I find that today (right now), with modern toolsets, most good prototype developers (who I define as being “very clever but conservative in their view of new technology”) can implement prototypes that are quite scalable by default; in fact with ADO.net 2.0 it’s pretty hard not to implement a non-scalable system.
For instance, I recently, for a laugh, wrote a simulator for the Riposte middleware used by all the counters in the Post Office. I used just .net remoting and an RDBMS. Instead of it taking man years, as it did for a team of MIT graduates, it took only the thick end of two hours, due to the improvement of toolsets and class libraries. I recognise this hasn’t always been the case. This wasn’t the case even just two years ago, but it is now. The mechanics of the design of software toolsets has interdicted this issue at source.

On a different note, I think your comment “A willingness to buy the aspiration rather than the reality” is outstanding. It’s the motivation behind women who marry violent men, as well as a hole load of current affairs issues we have today, and is, I believe, the fundamental difference between the artist and the scientist. (It’s also the only way I can beat my wife in an argument: if I can get her to the point where the only paths she can continue in the argument are Sophie’s Choice, i.e. for her to win she’d have to hold an unfair or nasty position, she fails to hold this position, with all the silent treatment that encompasses :-)

Ian
Wholy cow! Did I say hole?

I
Actually, just to go one step further on the vanguard development argument. Microsoft are bringing in the nail in the coffin of unscalable prototyping with the arrival of a new in-memory querying language called LINQ (slated for .net V3.5). To achieve scalable prototypes, they’ve got to be written pretty much with the business logic in the middle tier, which ADO.net 2.0 lets you do very nicely (just as you would in a full system), but in order to do things such as, say, a full e-commerce order system in just a week or two, you’ve got to do this in a powerful language, like APL (hahahah!) or SQL. This is still not scalable, because SQL runs on the DB server, introducing contention issues, and APL runs like a pig. However LINQ sees this problem dead (assuming you can do updates too), because it’s a powerful query language, but it runs in memory and on the business servers. Game over. We won’t see this until 2009 though, methinks.

Kind regards,
I.
This \”In my humble opinion, there\’s almost always some very clever person in a department who, for reason of being personally disliked by his seniors, not being aggressive enough, looking ugly, or just being socially inept, has never made it. I actively seek out this person as soon as I arrive on site; he\’s the guy, (often a timid librarian type female battered by testosterone) who if they\’d been asked \”any questions?\” would have said before the system went live, \”How do I do [whatever]?\”\”I agree with completely. They\’re the folks who realize the inanity of the policy dialogue and the impracticality of the implementation.But then, with\”Actually, just to go one step further on the vanguard development argument. Microsoft are bringing in the nail in the coffin to unscaleable prototyping with the arrival of a new in-memory querying language called linq. (slated for .net V3.5)\”you lost me completely. although microsoft putting nails in a coffin sounds like a great application.