Football Fireworks

The last time I published a shot looking in this direction it was of snow-covered grass. Roll forward only 6 weeks, combine with a big football match and this is what you get. The fireworks went off at full time – I guess whoever sponsored them couldn’t wait until the match was decided, and I have no idea which team they supported. Still, it’s not every day you get a 20-minute fireworks extravaganza outside your window, so I am grateful to them, whoever they may be.

[Photos: fireworks 1–6]

Strategic Thinking Part 2

Continuing my earlier post on strategic investment choices over the next 10 years, here’s another:

3. Corporate Memory (knowledge management in another context)

The longer you stay in a role, the more you know. Trite but true. The corollary is that the longer you stay in a role, the more different people you will see around you. The further corollary is that if you are on the customer side for a long time, you will see significantly more turnover on the supplier side than on your own, and each new arrival will know less about what’s gone before. People come, people go. They work hard, for the most part, contribute to the overall project, programme or business operation, and then they leave with most of what they know safely and securely stored, at least for the short term, in their heads. And then it’s all gone.

For example, commercial deals are done by whole teams of people who work for months, and sometimes years, on the intricacies of the deal. They write clauses with specific intent, sometimes misplaced but always designed to achieve something particular. As the writers of the clauses roll off and move on to the next thing, the next company or into retirement, those conversations about intent and desire are forgotten. No one quite remembers what was meant. Worse, the supplier personnel have stayed consistent and they’re sure they remember, but the customer has no corporate memory; or vice versa. It’s just gone. Disputes happen, work is redone, tasks fall between the cracks, delays occur, people fall out and projects suffer.

The high-tech solution to losing corporate memory was always the shared folder. It didn’t work back at the beginning and it doesn’t work now, yet it’s still in widespread use. It’s just a dumping ground. Huge numbers of files are dumped in extensive hierarchies of folders that few beyond those who set them up originally understand. Primitive search capability is put on top of the folders in the hope that people will find what they need. People don’t. New documents replicating old ones are created. New procedures are developed. New controls are put on top of old controls. The supposedly shared folder is written to and rarely read.

Worse, some companies rely entirely on email – the most siloed form of storage yet created. Once the two people holding the conversation have moved on, the knowledge is lost; or, more likely, bizarre limits on the ability to store email on the central server mean that regular archiving becomes deleting vast chunks of email just so you can send and receive new email. Some companies I’ve worked in have had policies that meant I was killing email on a weekly basis, trying to stay within my quota.

Corporate memory is poor, even nonexistent in places – it varies from job to job, project to project. But overall, there is little harnessing of that memory.

Many departments, companies and agencies I meet and work with are deploying Microsoft SharePoint as a solution to this; some have used earlier versions for years, some have used Lotus Notes instead (and many stick with the shared folder – the fabled “network drive”). There are some stand-out SharePoint implementations that have made a real impact on collaboration and cross-working on projects; there are some poor ones that are just souped-up shared folders. The trick is to figure out now how to be the former, and how to embed it in all the projects you have underway or are about to start (thinking again about the intercept strategy listed in point 1 of the previous post).

So the strategic investment choice is to decide whether what you do now is right, is enough and will take you through the next ten years. I’d bet, for most, it isn’t and it won’t. So you should choose to invest in a way of working and a set of tools that deliver an enhanced corporate memory. Modafinil for your operation.

With a significant portion of staff in most government organisations retiring over the next 10 years (as much as 40 or 50%, I believe), the rate of loss of memory will accelerate. Likewise, with a wave of big projects starting in many countries – whether preparation for World Cups, Olympics, electronic border control, ID cards or new phases of e-government – the need to set up projects correctly, with inbuilt memory, is all the more essential. I spoke to a guy a few weeks ago who said there was little point in corporate memory – as soon as something was written down, the market had moved on and it was no longer useful to anyone (I believe a related quote is that such things “have the shelf life of a banana”).

So if you’re in the project business, which I suppose everyone is in some way, why wouldn’t you set up an initiative to preserve corporate memory – to retain the details of projects, programmes and operations so that everything that’s needed is stored for later retrieval, analysis, dispute resolution or lessons learned? At the same time, the very best such initiatives will ensure that the day-to-day operation of the programme is more efficient, because duplication will be reduced.

It doesn’t matter whether it’s SharePoint, Groove, a variety of Google-enhanced tools or some other bit of software that underpins your corporate memory. But it does matter that you have a consistent way of setting up the memory for a project, within and across your organisation – one that:

– contains all the standards and disciplines that you want a project to follow;

– is easy to index, and makes that indexing as simple as possible for the participants;

– has a search engine that means people can find what they need;

– encourages emails to be stored and made available to everyone on an agreed list (and provides all the space you’ll ever need – if Google/Yahoo/Hotmail can do it, I’m sure the big outsourcers can do it for their clients);

– prompts and facilitates discussion and interaction.

And it does matter that you have a plan for how you’re going to keep hold of this data and manage it over the next 10 or 20 years. Worry about this in the same way that you might worry about data retention periods for tax records or investigations or whatever.
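
To make the indexing and search requirements concrete, here’s a minimal sketch of the kind of retrieval a project memory has to support – a toy inverted index in Python, with invented document names. A real implementation would sit inside SharePoint or a proper search engine; the point is only that every document gets indexed on the way in, so it can be found years later:

```python
import re
from collections import defaultdict

def tokenize(text):
    # Lower-case and split on anything that isn't a letter or digit
    return [t for t in re.split(r"[^a-z0-9]+", text.lower()) if t]

class ProjectMemory:
    """A toy inverted index: maps each word to the set of documents containing it."""
    def __init__(self):
        self.index = defaultdict(set)
        self.docs = {}

    def add(self, doc_id, text):
        # Index every document as it is stored - no dumping ground
        self.docs[doc_id] = text
        for token in tokenize(text):
            self.index[token].add(doc_id)

    def search(self, query):
        # Return documents containing every query term (AND semantics)
        terms = tokenize(query)
        if not terms:
            return set()
        results = self.index[terms[0]].copy()
        for term in terms[1:]:
            results &= self.index[term]
        return results

memory = ProjectMemory()
memory.add("clause-14-notes", "Intent of clause 14: supplier bears transition risk")
memory.add("kickoff-minutes", "Project kickoff minutes, risk register agreed")
print(memory.search("clause intent"))  # → {'clause-14-notes'}
```

Years later, when the dispute about clause 14 erupts, the note recording its intent is one query away instead of buried in a folder hierarchy no one understands.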

Why is this important?

Because you’re signing outsourcing deals with 10-year or longer durations; you’re putting in place systems that will still be there 10 years, even 20 years, from now; you’re losing people faster than you’ve ever lost them before; you’re bringing in consultants to do work for you who, one day, are going to leave quicker than you can transition the knowledge; and you’re firing up new projects as fast as you ever have and you need them to get off to a flying start with the base information they need at their fingertips.

Are there any downsides?

Sure. You’ll be prescribing a systematic process that your teams will have to follow – that will take time to adopt and will even be a burden at the beginning. You’ll have another tool to worry about and manage – one that eats disk space at an insatiable rate (show me an application that doesn’t!). You’ll need an intercept strategy (see the earlier post, item number 1) that lets you figure out whether you apply this to new projects only, or try to retrofit it to ones underway.

But all worth living with for the bigger prize.

Strategic Drinking. Errr Thinking.


What if you could only decide your overall strategy every 10 years and then had to just get on and execute it? Reading an article in this month’s Decanter magazine, I came across a quote from the head winemaker at Chateau Palmer. He was being chased over rumours that the property is about to be acquired, probably by one of the big insurance or luxury goods companies that have made astute investments in various wineries around the world, particularly in Bordeaux. He dismissed the idea by noting that “the shareholders meet only once every 10 years and they had just met, allocated investments” and that, presumably, would be that. They’ve been making wine at Chateau Palmer since about 1816, so that would be around 19 strategy documents in total if they’d been pursuing this approach since the beginning. Contrast that with the number produced by your own organisation in just the last 10 years (or even less).

It’s worth pausing to consider what such a strategy might entail. After all, wine-making is not a simple business, and whilst there is much that can be managed, there remains plenty – not least the weather – that is far outside anyone’s control. You can make some big decisions at a 10-year level – whether to buy more land, replace the equipment, extend the cellars, invest in automation (there are several vineyards in Burgundy that continue to use a horse to plough the fields), adopt a new approach to vinification, explore foreign territories, maybe even create a second wine (many of the big chateaux have only relatively recently offered a second or even third wine under their own labels). But whether to prune early, when to pick, how to pick, whether to bring in extra labour, whether to do something to prevent frost damage, how to handle mould or what price to sell at are all decisions taken day to day or even hour to hour.

The evidence that both these macro and micro decisions have an impact on quality is found in the price of fine wines over the last few years – those from 2005 far outstrip those of 2001, 2002 and 2004. The 2000s and 2003s fetch a premium over those years, but not over 2005. The 2006s and 2007s are, mostly, lower still.

Feedback on the decisions you made, whether in the fields or in the winery, takes time: the critics arrive, they pronounce their verdicts, you announce your prices; in a year or more, consumers take delivery and the supply/demand constraint starts to be fully exercised as supply is drunk or squirreled away (laid down as they say). More critics pronounce or re-pronounce. Demand goes up, prices move (up usually). Suppliers note the supply is diminishing and urge you to buy more before it’s all gone and so things continue. And it’s been that way since Madeira was traded in the London market in the 1700s (back then, Madeira was more valuable if it had spent some time in America – apparently the heat of the journey each way and of the climate there was good for the wine).

So what if a government IT strategy was set once every 10 years?

You’d have to make some pretty big and bold directional investment choices whilst hedging to make sure that you weren’t risking too much with each investment and that no single choice going awry could bring your entire strategy to a grinding halt. So what strategic investment choices would you make? Here are the kinds of things that I’d assume were going to happen and would have plans in place to harness as they arose.

1. Cloud computing

This is going to be increasingly talked about until you can’t remember when people didn’t talk about it and then, finally, people are going to do it. Even governments. If you don’t have your own, there are very good odds that you will use someone else’s. This idea has been around for a while – indeed, many will say that it goes back to the beginning, when the only way to use computers was through something called “time sharing.” It’s been called grid or utility computing too. In essence, instead of what you have now, which is most likely a fragmented mess of “one of everything ever released” servers, you will have a single (logically single), consistent architecture on which all of your applications run. You may own this or you may use someone else’s. Right now you could use resource offered by Google, Amazon, Sun, Joyent or others. The bet, then, is that governments will use cloud/grid/utility applications and that they will both use grids provided by others and have one built for their own exclusive use.

If you’re a government department, there is no reason, I suspect, why you couldn’t start to develop your infrastructure towards a cloud architecture now. You might not know when it’s coming, or exactly how it would look – indeed, you might not yet know whether your wider government is going to get a few ministries together and build one for the whole of government to use. What you could take a good punt at, though, is the standards, architecture and framework that might be involved in such a grid. You could work with the main vendors, see where they’re heading and take some bold decisions that moved your existing and future application development plans into line with the necessary architectures.


When we were working on central infrastructure (now known by the sexier-sounding “shared services”) in the UK, we spent more than a bit of time working on what we called an “intercept strategy.” We knew that departments would be all over the map in their readiness to adopt the e-government services we could offer from the centre of government. Some would have raced ahead and have more capability in their own tools than we had; some would be far behind and not even in need of what we offered. By looking at who was ahead of us and figuring out a release strategy that embraced what they did (and took it forward), and by looking at those behind us and figuring out what they would need to make adoption easier, the idea was that we could plot out, over a period of 2 or 3 years, what functionality we would need and when. At the same time, we could build adoption plans with those departments, agencies and local authorities – whether they were ahead or behind – that would allow intersection with the central infrastructure platform. We didn’t get too scientific about this – there were plenty of departments who lacked plans detailed enough to allow interception, plenty of local authorities who had requirements beyond what we thought we could offer (at a cost effective enough that others would want to adopt it) and still more agencies who we’d have loved to work with if we could, but either didn’t know much about them or didn’t have the capacity to find out (let alone develop what they needed). You definitely can’t be all things to all people when you are developing shared services.

My investment would relate to a belief that in 1 or 2 years, some governments will start building their own cloud for internal use, in the same way that some of them have secure networks, centralised email servers, central virus scanners and so on. They’ll do this because they want to control the architecture, the standards and the security. This is akin to governments self-insuring instead of using insurance companies – they do this because they’re big enough to be able to; why put your business into the market if you can manage the risk internally? It’s not quite the same with IT, given the risks that come with government IT projects. So I’m not necessarily betting that a government will insource its own IT and create a grid, but that it will place contracts for creating such a grid. At the same time, that grid will be entirely government owned and housed in government premises, so that if the contracts expire or need to be changed, transitioning to a new provider is a little simpler.

Making this investment choice has a significant upside:

– It starts or continues your consolidation process, taking it to a new end point. If the stories are true that the average server is only ever 10% busy (I’m sure there’s a horrible black-swan-driven curve here which means the actual rule is that it’s 10% busy until it’s consistently 100% busy because you’ve got a problem), then having everything in one place, with a consistent architecture and servers shared between applications as load changes, must make sense. It’s green, it lowers your power bills, it reduces your capital costs and it reduces your licence costs.

– Architecting for a common platform should drive down development costs, improve productivity of existing developers and infrastructure teams and improve thinking about availability, security and so on (of course, if the security is wrong to start with, then it’s wrong for everything – but the sense here is you only have to get it right once for it to be right for everything).
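
The back-of-the-envelope arithmetic behind that first point is easy to sketch. The numbers here are invented for illustration (100 servers, the oft-quoted 10% utilisation, and an assumed 60% target on the shared platform to leave headroom for peaks):

```python
import math

servers = 100               # standalone servers today (hypothetical estate)
avg_utilisation = 0.10      # the oft-quoted "10% busy" figure
target_utilisation = 0.60   # assumed headroom target on the shared platform

real_load = servers * avg_utilisation             # 10 server-equivalents of actual work
needed = math.ceil(real_load / target_utilisation)
print(needed)  # → 17 servers on the consolidated platform, versus 100 today
```

Even with generous headroom, the consolidated estate is a fraction of the original – which is where the power, capital and licence savings come from.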

As a complete aside, I really think there is an opportunity here once a government has defined its cloud architecture standard – the way that it will be developed, deployed and operated, the coding standards, interface calls, security and reliability standards and so on. If the government of any given country created, in consultation with the IT vendor world, such a document, it could then build that into all of its tenders, driving down the cost of supply for vendors. At the same time, it could encourage the education system to provide early training in the necessary skills to support those standards in all computing and IT courses, so that future generations would already be familiar with the approach, whether they were leaving school after O levels, taking their first degree or returning to college for further education. This would increase the pool of talent available to develop the architecture in the future. After all, I learned Fortran and Pascal at university and have never written a line in either since I left – wouldn’t it be more useful to learn about a real architecture deployed across a government?

On the downside:

– If you don’t manage it well, your capital costs could balloon early as you set off the migration (so there’s some good planning to be done about when to refresh, when to stop investing and what changes you can make to your portfolio now to make this easier to pull off later, avoiding a hockey-stick profile of effort)

– I remain unconvinced that the management tools are in place to safely operate a large, consolidated system set in this kind of model. Plainly some people do it incredibly well, but I’m not sure yet that the average government entity is quite ready, and nor perhaps are their suppliers. But this is about a directional investment choice – there is time for this to come good

– Perhaps the last thing we want is lots of clouds within one government operation (meant in the widest sense to include all of the departments, agencies and/or ministries). Then we have lots of servers operating efficiently at a local level, but a high degree of inefficiency at a top level. I think this is balanced out by the proviso that most government systems are used most heavily by staff in a given department. They use only systems within that department, so if the departmental architecture is sufficiently optimised, there may be little to gain from consolidating an already efficient architecture with another efficient architecture from a different department. If this assumption changes to something like “in the future, most operations will be by citizens or business via the Internet”, then I believe the case for consolidation shifts.
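
That last proviso can be put into numbers. A hedged sketch, with all figures invented: if two departments’ loads peak at the same time (staff working the same 9-to-5 day), merging two already-optimised estates saves nothing, because coincident peaks simply add. The case only shifts when the peaks stop coinciding, e.g. with Internet-driven citizen load:

```python
import math

def servers_needed(peak_load, target_utilisation=0.5):
    # Hypothetical sizing rule: provision so servers run at half capacity at peak
    return math.ceil(peak_load / target_utilisation)

# Two departments whose staff-driven peaks coincide (the 9-to-5 working day):
dept_a_peak, dept_b_peak = 12.0, 8.0           # peak load in server-equivalents
separate = servers_needed(dept_a_peak) + servers_needed(dept_b_peak)
combined = servers_needed(dept_a_peak + dept_b_peak)  # coincident peaks add
print(separate, combined)  # → 40 40: nothing gained by merging

# If citizen-facing Internet load peaked at different times, the combined peak
# would be lower than the sum of the peaks and consolidation would pay.
# Assume (arbitrarily) only half of the smaller department's peak overlaps:
combined_staggered = servers_needed(dept_a_peak + 0.5 * dept_b_peak)
print(combined_staggered)  # → 32, versus 40 run separately
```

So the consolidation case really does hinge on the usage assumption: same-time departmental staff load argues for leaving efficient estates alone; round-the-clock citizen load argues for one cloud.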

There’s no question that this will be hard and that some of the standards that you adopt and decisions that you make will be wrong, which is why you have to make as much as you can of the second investment choice.

2. Application Rationalisation


This is an old saw by now. We’ve already seen several rounds: many banks consolidated their application base ahead of the euro’s introduction (1999) and almost every company adopted an elimination strategy ahead of the Year 2000, all to reduce the cost and/or risk of transition. Yet still, most government organisations will have dozens, hundreds or even thousands of apparently important, even business-critical, applications that support what they do. These will be spread around various sites; they will sit under people’s desks on servers that few know about; they will be in the main data centre but supported by Jo, who’s been around since year dot; and there will be at least 4 and maybe 10 of what most people would define as “common applications” – GIS/mapping systems, statistical analysis systems, workflow tools, asset recording systems, timesheet analysis/time-recording tools and so on.

Taken across a single country’s government as a whole, the total number of applications will be a frightening number, as will the total cost to support them all. There are several layers of consolidation, ranging from declaring “end of life” for small systems and cutting their budgets to zero (then waiting for them to wither and die – this might take eons) to the more strategic “let’s use only one platform (SAP, Oracle etc.) from here on in and migrate everything to it” (this too could take eons).

The likely approach lies, as ever, somewhere in the middle. There’s no doubt that you will increasingly get rid of low end applications (you’re doing this already and have been for ages). But the bet is that across government, you only want to have a few serious, high end platforms. You might want only 3 or 4 GIS systems, 5 or 6 HR systems, 10 time recording tools etc. I don’t think you’ll ever want one of these unless you’re very small or you have a really efficient way of executing a centralisation plan and keeping that central provider nimble and exciting in their efforts to keep their customers happy (I’ll happily debate this one topic for hours – alternately taking the “centralise” and the “partially centralise” as well as, occasionally, the “fully decentralise” positions).

So part (a) of this investment choice is that governments will define a pool of “acceptable” platforms or providers for mainstream services. There will be more than 1 and fewer than 10 such platform providers in each category of service. Blanket licences covering pan-government use will be negotiated. Three years from now, they will evaluate all of the providers in the pool and eliminate one – there will be no new development on that platform, and anyone doing a refresh must cost out moving away to one of the remaining providers. Licence fees will be renegotiated to bring costs down (best to do this before you eliminate one – you’ll get a much better deal while the threat of obsolescence is hanging over someone’s head). Three years after that, another will be knocked out. After 10 years, there will be at least 3 fewer providers in the mix. More aggressive governments might go with a 2-year cycle. Or you might vary the cycle depending on the service category or the total investment you have in a given category.
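
The elimination cycle is mechanical enough to sketch. A toy model – the pool size and category are invented for illustration:

```python
def providers_remaining(initial, cycle_years, horizon):
    """Return {year: providers left in the pool}, knocking one out
    every `cycle_years` years (never dropping below a single provider)."""
    remaining = initial
    timeline = {}
    for year in range(horizon + 1):
        if year > 0 and year % cycle_years == 0 and remaining > 1:
            remaining -= 1  # eliminate one provider at each review point
        timeline[year] = remaining
    return timeline

# A hypothetical pool of 6 GIS platforms, one eliminated every 3 years
pool = providers_remaining(6, 3, 10)
print(pool[10])  # → 3 remain: at least 3 fewer providers after a decade
```

Eliminations land at years 3, 6 and 9, so a decade is exactly enough for three rounds – shorten the cycle to 2 years and you get five.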

Part (b) of the investment choice is that the individual solutions will be provided by competing organisations within government. Competition is rare within government itself but having 4 delivery organisations that sit inside government but not within a single ministry (perhaps at arm’s length) will lead to healthy competition in the provision of IT services. That competition will drive lower prices, better innovation, greater product turnover, easier integration, stronger solutions and better customer service. If I were running an IT function, I’d perhaps want to drive the capability to deliver this within my own organisation so that I could do the same for others in the future.

This post is now final (17/5/08) and additional investment choices are being added as separate posts.

In The Cloud


All the talk now is about cloud computing, software as a service and combinations of the two. Utility computing, the buzzword of perhaps 2002, has been reborn and perhaps even turned into a product that will make money – and not just from advertising.

With that in mind, I wonder who will be the first government department, in the world, to:

1. Move their desktop application suite to a hosted software-as-a-service model with Exchange, Word, PowerPoint, Excel and SharePoint (or Google or open source equivalents). I’m not expecting too many people to do the same for their mobile users just yet, but it is, in theory, viable.

2. Deploy an entire web-based application – perhaps a website (either content only or content plus transactions) – into Google’s cloud such that the government department no longer owns any of the infrastructure or software that supports that web application.

Has anyone done this already? I’d be impressed and surprised. If not, when will the first such government department, anywhere in the world, announce that they are now a virtual IT department (at least in part)?