Ego Sum Ostium

[Photo: Westminster Cathedral]

I know that I often lead into posts here by the strangest routes. I think this hook will qualify as the most unlikely one so far – stranger than wine, Whole Foods Market, poker, Parisian bicycles and so on.

I used to work in Victoria and, most days, walked past Westminster Cathedral. I had never been in. A few weeks ago, passing it on the way between offices, I thought, “I really should go in.” I had fifteen minutes before my next meeting and so went through the door. As you can see from the photo, it’s quite unlike your “average” cathedral – this is no St Paul’s. St Paul’s is one of my favourite buildings in London; one of the things that makes it particularly special is that it was the only cathedral of its era designed and built through to completion by its original architect. Strangely, the architect of Westminster Cathedral died the year before the first service was held – so the rarity of an architect seeing his cathedral through to completion continued.

There are three reasons why Westminster Cathedral was built in this Byzantine style (as opposed to the more usual Gothic style):

1. To be completely different from the Gothic style of Protestant cathedrals and, particularly, to contrast with Westminster Abbey which is at the top of the road

2. The structure is based on domes, not arches, and so allows for relatively open and spacious areas within the church (the nave is 34m high by 18m wide, the largest in the country) – up to 2,000 people, seated, have unobstructed views of the sanctuary

3. It could be built more quickly. In effect, the frame goes up fast and the decoration is left to those who follow.

There’s a fourth interesting point for me, which may or may not be related to the building style – its running cost is £1,000,000 a year. That covers all operational costs (but not the occasional capital costs for major structural repairs). The church is just over 100 years old and it’s going through a small capital repair project now – new electrics, roof replacement and so on – and they’re after about £3,000,000 to do that work.

Putting aside the fact that the UK’s Catholic Church is run from this cathedral – a whole religion for a million quid a year! – what got me was twofold: that the operational costs are so low, and that they had the foresight, 100 years ago, to say “We’ll build it and let other people add, modify and decorate it later, incrementally”. Without major modification, it’s stood the test of 100 years. Show me an IT project that you could say that about even for 5 years.

So I’ll pause at this point and ask that anyone reading puts the religious intro to one side – it really was only a lead-in, not a point for debate about the merits of any particular religion (or the absence of one) – and concentrates on the IT and e-government thread that I continue with:

£1 million doesn’t sound like a lot. Is it just that once you’re in government for a while you start thinking in multiples of £5 million or £10 million? Is it only the true believers – those, say, at MySociety – who can conceive of, deliver and operate a service for less than £50,000? Is it that a government doesn’t take something seriously if it isn’t priced in the tens of millions? Or is there some weird risk factor that gets added to cater for inevitable delays, requirement adjustments and re-thinking of specifications?

The reason I’m on this point is that over the last few years I’ve been brought into several projects – not just in UK government but in other governments around the world and in private sector organisations – or seen projects from a moderate distance, that shared a few characteristics:

  1. Capital spend was largely complete versus the original budget (and, in a few cases, spend was in excess of budget)
  2. Actual scope delivered was some way (often quite some way) from the original expectation – meaning that more money would have to be found to deliver the full scope, or a commercial dispute with the supplier(s) would be needed
  3. Benefits case was starting to look decidedly flaky (and the business units were suffering because of the shortfall in scope, either needing more people or doing less for their customers than they expected)
  4. Ongoing operational costs were being calculated as the live date loomed and they were looking very much higher than had been forecast (putting pressure on future budgets). Sometimes this was because the builder was not the same as the operator – times had changed, contracts had been let separately and so on.
  5. Cost of future upgrades had not been factored in, usually on the assumption that such upgrades would each have their own business case, even where the upgrade was necessary just to stay within support for the various packages

I have no statistics to bring here but it would seem, based on my experience, that projects too often match those characteristics. So, to provoke a debate:

Knowing the cost of change

What if you developed a system / application / solution with a known cost to operate? This would be a set of calculations covering a range of things, such as: cost to add a new customer, cost to add a new user, cost to add 100 product pages, cost to connect to a 3rd party system, cost to add a new tax credit / benefit, cost to add a new taxation profile, cost to delete 100 pages etc. You’d have to come up with the list at the beginning, but the idea would be to cover two bases: the first would give you a known operational cost, assuming you knew roughly what your business was going to do; the second would let you forecast the cost of future change based on those same numbers (note, I’m not saying here that you would set some modelled combination of these as your actual operating base – I’m saying that you would be able to price change before committing to it).
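To make that concrete, here’s a minimal sketch of what such a model might look like in code – every unit cost and volume below is a made-up illustration, not a figure from any real project:

```python
# A minimal sketch of a "known cost of change" model. Every name and
# figure below is a hypothetical illustration, not real project data.

UNIT_COSTS = {                           # £ per unit of change
    "add_customer": 0.50,
    "add_user": 25.00,
    "add_100_product_pages": 4_000.00,
    "connect_3rd_party_system": 60_000.00,
    "add_tax_credit": 120_000.00,
    "delete_100_pages": 1_500.00,
}

def forecast_change_cost(planned_changes: dict) -> float:
    """Price a period's planned changes against the agreed unit costs."""
    return sum(UNIT_COSTS[change] * volume
               for change, volume in planned_changes.items())

# Example: a rough plan for year one of live running.
year_one = {
    "add_customer": 50_000,
    "add_100_product_pages": 12,
    "connect_3rd_party_system": 2,
    "add_tax_credit": 1,
}
print(f"Forecast change budget: £{forecast_change_cost(year_one):,.0f}")
```

The arithmetic is trivial, of course; the hard part is agreeing the unit costs at the start and holding builder and operator to them as the system evolves.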

Is that even possible?

Some people solve this by allocating a fixed cost for post-live enhancement – a pot of £1 million or £10 million into which all changes go until some future point when a major business case is prepared for a big upgrade. The pot pays for a fixed set of developers who work their way through a hopper of proposed code changes, getting as many done as possible. This approach is used as often in the private sector as in the public sector. You need more changes? Add more people and the hopper gets [somewhat] bigger – Fred Brooks’ rules still apply.
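As a back-of-envelope illustration of why the hopper only gets [somewhat] bigger, here’s a sketch that assumes each developer delivers a fixed number of changes a year, less a communication tax that grows with the number of pairwise channels – roughly Brooks’ point. The figures are assumptions for illustration, not measurements:

```python
# Why "add more people" only makes the hopper somewhat bigger: team
# output grows with headcount, but a communication tax grows with the
# number of pairwise channels. All figures are illustrative assumptions.

def changes_per_year(team_size: int,
                     per_dev: float = 40.0,          # changes/dev/year
                     tax_per_channel: float = 0.4):  # changes lost/channel
    channels = team_size * (team_size - 1) / 2       # pairwise comms paths
    return max(team_size * per_dev - channels * tax_per_channel, 0.0)

for n in (2, 5, 10, 20, 40):
    print(f"{n:>2} developers -> ~{changes_per_year(n):6.0f} changes/year")
```

In this toy model, doubling the team from 20 to 40 gets you well short of double the throughput – which is the “somewhat” in “somewhat bigger”.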

What do other people do? What are the approaches?

3 thoughts on “Ego Sum Ostium”

  1. Those projects are generally fire and forget, so there is just no feedback loop on TCO thinking. The planning, design and construction phases are oblivious to full-life costs without that loopback data to act as a TCO-converging influence on financial models. Modern technical architectures welcome inefficiencies, so everyone is happy: designers and builders get paid, and the suppliers can sell obscenely expensive maintenance upgrades. TCO per transaction hides in a black box that the taxpayer will never know about and the NAO have in their blind spot. Now, if the Catholic Church worked out the number of Catholics or prospects going through the doors per year, they could calculate the cost per visit per year. How many people were in the place when you visited, given the Pope spent nearly £3,000 to support your visit? Did that spend achieve its purpose and bring you a little closer to the Catholic god and church? Or is it another legacy project begun with god intent that should be razed to the ground and replaced by another London shopping mall?

  2. What a holiday I’ve had. I’d make some comment about David Frost, but like everyone else I’ve run out of funny comments, so I’ll just ridicule religion instead. “A whole religion for a million quid a year?” I think that’s operating costs. Turnover will be much higher. Profits very high. I think the reason churches are done so well is because, well, it’s the place people come to pay the priest to brainwash their children. It therefore makes sense that they be built as long-term as possible. As for the last question – what do other people do – I design my systems with a tiny core (i.e. one-user capacity) but with the ability to scale linearly and infinitely, just by adding more hardware. It’s not rocket science, anyone can do it, but what happens is that there are loads of “technical architects” out there, most of whom have never delivered anything ever, who read books and design a system to handle a hundred thousand customers, but with slightly non-linear scalability and hugely non-trivial means of optimisation. Secondly, I design systems based on what I think they should be able to do. I think most people believe business analysts to be the people for specifying requirements, but I believe designers are much better at it. I.

  3. I think your matrix and observation is a valid test – and then an ongoing measure – of a quite sensible way to think. My question, though, is what happens when the sum is £1bn or thereabouts. My observation is that at that level calculations and plans seem to take on a dewy-eyed lack of reality – giving rise to wide margins, lashings of “who really knows” built-in error intention, poor allowances for critical uncertainty (this stuff simply flies out of the window – were they even in the minds of the calculators in the first place?)… ho hum…
