Strategic Drinking. Errr Thinking.


What if you could only decide your overall strategy every 10 years and then had to just get on and execute it? Reading an article in this month’s Decanter magazine, I came across a quote from the head winemaker at Chateau Palmer. He was being chased over rumours that the property is about to be acquired, probably by one of the big insurance or luxury goods companies that have made astute investments in various wineries around the world, but particularly in Bordeaux. He dismissed the idea by noting that “the shareholders meet only once every 10 years and they had just met, allocated investments” and that, presumably, would be that. They’ve been making wine at Chateau Palmer since about 1816, so that would be around 19 strategy documents in total if they’d been pursuing this approach since the beginning. Contrast that with the number produced by your own organisation in just the last 10 years (or even less).

It’s worth pausing to consider what such a strategy might entail. After all, wine-making is not a simple business and, whilst there is much that can be managed, there remains plenty – not least the weather – that is far beyond control. You can make some big decisions at a 10 year level – whether to buy more land, replace the equipment, extend the cellars, invest in automation (there are several vineyards in Burgundy that continue to use a horse to plough the fields), adopt a new approach to vinification, explore foreign territories, maybe even create a second wine (many of the big chateaux have only relatively recently offered a second or even third wine under their own labels). But whether to prune early, when to pick, how to pick, whether to bring in extra labour, whether to do something to prevent frost damage, how to handle mould or what price to sell at are all decisions taken day to day or even hour to hour.

The evidence that both these macro and micro decisions have an impact on quality is found in the prices of fine wines over the last few years – those from 2005 far outstrip those of 2001, 2002 and 2004. 2000 and 2003 fetch a premium over those, but not over 2005. 2006 and 2007 are, mostly, lower still.

Feedback on the decisions you made, whether in the fields or in the winery, takes time: the critics arrive, they pronounce their verdicts, you announce your prices; in a year or more, consumers take delivery and the supply/demand constraint starts to be fully exercised as supply is drunk or squirreled away (laid down as they say). More critics pronounce or re-pronounce. Demand goes up, prices move (up usually). Suppliers note the supply is diminishing and urge you to buy more before it’s all gone and so things continue. And it’s been that way since Madeira was traded in the London market in the 1700s (back then, Madeira was more valuable if it had spent some time in America – apparently the heat of the journey each way and of the climate there was good for the wine).

So what if a government IT strategy was set once every 10 years?

You’d have to make some pretty big and bold directional investment choices whilst hedging to make sure that you weren’t risking too much with each investment and that no single choice going awry could bring your entire strategy to a grinding halt. So what strategic investment choices would you make? Here are the kinds of things that I’d assume were going to happen, and for which I’d have plans in place to harness them as they arose.

1. Cloud computing

This is going to be increasingly talked about until you can’t remember when people didn’t talk about it and then, finally, people are going to do it. Even governments. If you don’t have your own, there are very good odds that you will use someone else’s. This idea has been around for a while – indeed, many will say that it goes back to the beginning, when the only way to use computers was through something called “time sharing.” It’s been called grid or utility computing too. In essence, instead of what you have now, which is most likely a fragmented mess of “one of everything ever released” servers, you will have a single (or at least logically single), consistent architecture on which all of your applications run. You may own this or you may use someone else’s. Right now you could use resources offered by Google, Amazon, Sun, Joyent or others. The bet, then, is that governments will use cloud/grid/utility applications and that they will both use grids provided by others and one built for their own exclusive use.
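
To make the idea a little more concrete, here’s a minimal sketch in Python. Every name in it is invented for illustration (it isn’t any real provider’s API): an application stops owning a named server and instead draws capacity from a shared pool, handing it back when its peak passes.

    # A toy illustration of the cloud/grid/utility idea: applications draw
    # capacity from one shared, logically single pool instead of each owning
    # a dedicated, mostly idle server. All names here are hypothetical.

    class ComputePool:
        """A logically single pool of capacity shared by many applications."""

        def __init__(self, total_units):
            self.total_units = total_units   # e.g. CPU cores or VM slots
            self.allocations = {}            # application name -> units held

        def request(self, app, units):
            """Grant capacity if the pool can cover it, otherwise refuse."""
            in_use = sum(self.allocations.values())
            if in_use + units > self.total_units:
                return False
            self.allocations[app] = self.allocations.get(app, 0) + units
            return True

        def release(self, app, units):
            """Hand capacity back so other applications can use it."""
            held = self.allocations.get(app, 0)
            self.allocations[app] = max(0, held - units)

    # A tax system and a grants system share the same pool, scaling with
    # their own peaks rather than each sizing a private server for its
    # worst day of the year.
    pool = ComputePool(total_units=100)
    pool.request("tax-self-assessment", 60)   # January peak
    pool.request("grant-applications", 20)
    pool.release("tax-self-assessment", 50)   # peak passes, capacity freed
    print(pool.allocations)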

If you’re a government department, there is no reason, I suspect, why you couldn’t start to develop your infrastructure towards a cloud architecture now. You might not know when it’s coming, or exactly how it would look – indeed, you might not yet know whether your wider government organisation is going to get a few ministries together and build one for the whole of your government to use. What you could take a good punt at, though, is the standards, architecture and framework that might be involved in such a grid. You could work with the main vendors, see where they’re heading and take some bold decisions that moved your existing and future application development plans so that they aligned with the necessary architectures.


When we were working on central infrastructure (now known by the sexier-sounding shared services) in the UK, we spent more than a bit of time working on what we called an “intercept strategy.” We knew that departments would be all over the map with their readiness to adopt the e-government services we could offer from the centre of government. Some would have raced ahead and have more capability in their own tools than we had; some would be far behind and not yet in need of what we offered. By looking at who was ahead of us and figuring out a release strategy that embraced what they did (and took it forward), and by looking at those behind us and figuring out what they would need to make adoption easier, the idea was that we could plot out over a period of 2 or 3 years what functionality we would need and when. At the same time, we could build adoption plans with those departments, agencies and local authorities – whether they were ahead or behind – that would allow interception with the central infrastructure platform. We didn’t get too scientific about this – there were plenty of departments who lacked plans that were detailed enough to allow interception, plenty of local authorities who had requirements that were beyond what we thought we could offer (at a cost that was effective enough that others would want to adopt it) and still more agencies who we’d have loved to work with if we could, but either didn’t know much about them or didn’t have the capacity to find out (let alone develop what they needed). You definitely can’t be all things to all people when you are developing shared services.
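
If I were doing that exercise again I might start with something as crude as the sketch below. It’s Python, and every department, capability and date in it is invented purely for illustration: line up when each organisation expects to be ready for a piece of central functionality against when the centre expects to release it, and the intercept point is simply the later of the two.

    # A toy version of the intercept exercise: for each department, compare
    # when it will be ready to adopt a piece of central functionality with
    # when the centre plans to release it. The intercept is the later of
    # the two dates. All departments and dates below are invented.

    central_release = {"authentication": 2003, "payments": 2004, "forms": 2005}

    department_ready = {
        "DeptA": {"authentication": 2002, "payments": 2005},  # ahead on authentication
        "DeptB": {"authentication": 2005, "forms": 2004},     # behind on authentication
    }

    for dept, needs in department_ready.items():
        for capability, ready_year in needs.items():
            offered = central_release.get(capability)
            if offered is None:
                continue                # the centre has no plans for it yet
            intercept = max(ready_year, offered)
            print(f"{dept} adopts central {capability} around {intercept}")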

My investment would relate to a belief that in 1 or 2 years, some governments will start building their own cloud for internal use, in the same way that some of them have secure networks, centralised email servers, central virus scanners and so on. They’ll do this because they want to control the architecture, the standards and the security. This is akin to governments self-insuring instead of using insurance companies – they do this because they’re big enough to be able to; why put your business into the market if you can manage the risk internally? It’s not quite the same with IT, given the risks that come with government IT projects. So I’m not necessarily betting that a government will insource its own IT and create a grid, but rather that it will place contracts for creating such a grid. At the same time, that grid will be entirely government-owned and housed on government premises so that if the contracts expire or need to be changed, transitioning to a new provider is a little simpler.

Making this investment choice has a significant upside:

– It starts or continues your consolidation process, taking it to a new end point. If the stories are true that the average server is only ever 10% busy (I’m sure there’s a horrible black-swan-driven curve here which means the actual rule is that it’s 10% busy until it’s consistently 100% busy because you’ve got a problem), then having everything in one place with a consistent architecture and sharing servers between applications as load changes must make sense. It’s green, it lowers your power bills, it reduces your capital costs and it reduces your licence costs. There’s a back-of-envelope sketch of this arithmetic just after this list.

– Architecting for a common platform should drive down development costs, improve productivity of existing developers and infrastructure teams and improve thinking about availability, security and so on (of course, if the security is wrong to start with, then it’s wrong for everything – but the sense here is you only have to get it right once for it to be right for everything).
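
Here is that back-of-envelope sketch. Every figure in it is an assumption made up for illustration; the point is the shape of the saving, not the numbers themselves.

    # Back-of-envelope consolidation arithmetic. Every figure is a made-up
    # assumption, purely to show the shape of the saving.

    import math

    servers = 1000                # departmental server estate today (assumed)
    avg_utilisation = 0.10        # the oft-quoted "10% busy" figure
    target_utilisation = 0.60     # leave headroom for peaks on the shared pool
    power_cost_per_server = 800   # GBP per server per year (assumed)

    work = servers * avg_utilisation               # total useful load
    needed = math.ceil(work / target_utilisation)  # servers after consolidation

    print(f"Servers needed after consolidation: {needed}")
    print(f"Annual power saving: £{(servers - needed) * power_cost_per_server:,}")

With those made-up numbers the box count falls by a factor of five or six, and power and licence bills tend to follow the box count.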

As a complete aside here, I really think there is an opportunity once a government has defined its cloud architecture standard – the way that it will be developed, deployed and operated, the coding standards, interface calls, security and reliability standards and so on. If the government of any given country created, in consultation with the IT vendor world, such a document, it could then build that into all of its tenders, driving down the cost of supply for vendors. At the same time, it could encourage the education system to provide early training in the necessary skills to support those standards in all computing and IT courses, so that future generations would already be familiar with the approach, whether they were leaving school after O levels, taking their first degree or returning to college for further education. This would increase the pool of talent available to develop the architecture in the future. After all, I learned Fortran and Pascal at university and have never written a line in either since I left – wouldn’t it be more useful to learn about a real architecture deployed across a government?

On the downside:

– If you don’t manage it well, your capital costs could balloon early as you set off the migration (so there’s some good planning to be done about when to refresh, when to stop investing and what changes you can make to your portfolio now to make this easier to pull off later, avoiding a hockey-stick profile of effort)

– I remain unconvinced that the management tools are in place to safely operate a large, consolidated system set in this kind of model. Plainly some people do it incredibly well, but I’m not sure yet that the average government entity is quite ready, and nor perhaps are their suppliers. But this is about a directional investment choice – there is time for this to come good

– Perhaps the last thing we want is lots of clouds within one government operation (meant in the widest sense to include all of the departments, agencies and/or ministries). Then we have lots of servers operating efficiently at a local level, but a high degree of inefficiency at the top level. I think this is balanced out by the proviso that most government systems are used most heavily by staff in a given department. They use only systems within that department, so if the departmental architecture is sufficiently optimised, there may be little to gain from consolidating an already efficient architecture with another efficient architecture from a different department. If this assumption changes to something like “in the future, most operations will be carried out by citizens or businesses via the Internet”, then I believe the case for consolidation shifts.

There’s no question that this will be hard and that some of the standards that you adopt and decisions that you make will be wrong, which is why you have to make as much as you can of the second investment choice.

2. Application Rationalisation


This is an old saw by now. We’ve already seen several rounds. Many banks consolidated their application base ahead of the euro’s introduction (1999) and almost every company adopted an elimination strategy ahead of the Year 2000, all to reduce the cost and/or risk of transition. Yet still, most government organisations will have dozens, hundreds or even thousands of apparently important, even business critical, applications that support what they do. These will be spread around various sites, they will sit under people’s desks on servers that few know about, they will be in the main data centre but supported by Jo who’s been around since year dot, there will be at least 4 and maybe 10 of what most people would define as “common applications” – GIS/mapping systems, statistical analysis systems, workflow tools, asset recording systems, timesheet analysis/time-recording tools and so on.

Taken across a single country’s government as a whole, the total number of applications will be a frightening number, as will the total cost to support them all. There are several layers of consolidation, ranging from declaring “end of life” for small systems and cutting their budgets to zero (and waiting for them to wither and die – this might take eons) to the more strategic “let’s use only one platform (SAP, Oracle etc.) from here on in and migrate everything to it” (this too could take eons).

The likely approach lies, as ever, somewhere in the middle. There’s no doubt that you will increasingly get rid of low-end applications (you’re doing this already and have been for ages). But the bet is that across government, you only want to have a few serious, high-end platforms. You might want only 3 or 4 GIS systems, 5 or 6 HR systems, 10 time-recording tools etc. I don’t think you’ll ever want just one of these unless you’re very small or you have a really efficient way of executing a centralisation plan and keeping that central provider nimble and exciting in their efforts to keep their customers happy (I’ll happily debate this one topic for hours – alternately taking the “centralise” and the “partially centralise” as well as, occasionally, the “fully decentralise” positions).

So part (a) of this investment choice is that governments will define a pool of “acceptable” platforms or providers for mainstream services. There will be more than 1 and fewer than 10 such platform providers in each category of service. Blanket licences covering pan-government use will be negotiated. Three years from now, they will evaluate all of the providers in the pool and eliminate one from it – there will be no new development on that platform and anyone doing a refresh must cost out moving away to one of the remaining providers. Licence fees will be renegotiated to bring costs down (best to do this before you eliminate one – you’ll get a much better deal if the threat of obsolescence is hanging over someone’s head). Three years after that, another will be knocked out. After 10 years, there will be at least 3 fewer providers in the mix. More aggressive governments might go with a 2 year cycle. Or you might vary the cycle depending on the service category or the total investment you have in a given category. There’s a toy sketch of how that cull plays out below.
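
Here is that toy sketch of the cull. The starting pool size, cycle length and horizon are all assumptions plugged in for illustration, not a recommendation.

    # A toy model of the platform cull: start with a pool of acceptable
    # providers in one service category and remove one at each review,
    # never culling the last provider. All the numbers are assumptions.

    def cull_schedule(initial_pool, cycle_years, horizon_years):
        """Return (year, providers remaining) at the start and at each review."""
        pool = initial_pool
        schedule = [(0, pool)]
        for year in range(cycle_years, horizon_years + 1, cycle_years):
            if pool > 1:
                pool -= 1
            schedule.append((year, pool))
        return schedule

    # A 3 year cycle over a 10 year horizon: reviews land at years 3, 6 and 9.
    for year, remaining in cull_schedule(initial_pool=6, cycle_years=3, horizon_years=10):
        print(f"Year {year}: {remaining} providers in the pool")

With a 3 year cycle the reviews land at years 3, 6 and 9, which is where the “at least 3 fewer providers after 10 years” figure comes from; a 2 year cycle would obviously cull faster.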

Part (b) of the investment choice is that the individual solutions will be provided by competing organisations within government. Competition is rare within government itself but having 4 delivery organisations that sit inside government but not within a single ministry (perhaps at arm’s length) will lead to healthy competition in the provision of IT services. That competition will drive lower prices, better innovation, greater product turnover, easier integration, stronger solutions and better customer service. If I were running an IT function, I’d perhaps want to drive the capability to deliver this within my own organisation so that I could do the same for others in the future.

This post is now final (17/5/08) and additional investment choices are being added as separate posts.


5 thoughts on “Strategic Drinking. Errr Thinking.”

  1. Besides the subjects mentioned, my “bet” would be on a unified/common content management for government. Over the next ten years we will be confronted with an explosion of digital content in the form of documents, audio, video etc (as noted in a recent IDC report). Every government needs a long term strategy to manage this content. Governments differ from companies with respect to their content. Usually they have quite strict obligations to keep and manage their records, coupled with the need to be able to actually find something. This means choices also have to be made about metadata, the semantic web, digital signatures, encryption etc. Of course this subject is also linked with the other two.

  2. “Cloud computing” breaks the bonds that tie us to our desks, our PCs and laptops, and gives us the ability to access and work on files any time, anywhere, in the diffuse atmosphere of cyberspace. Instead of having a whirring machine packed with software programs and a hard drive on which data is stored, the web itself turns into your computer. Applications such as word processing, spreadsheets, calendars, contacts and email can be accessed, with data stored securely on remote servers. The concept is not new – Hotmail was one of the first ventures to show the benefits of having information “out there” – and companies including Microsoft, Apple and Yahoo! have flirted with web word processing, but Google has taken a huge step into the future by combining the most used applications in a one-stop shop. What are the advantages? There is no need to spend £200 on a shrink-wrapped box of new software that takes the rest of the day to set up on your PC. The service is free to anyone with broadband internet who opens a Gmail account (Google’s webmail service, similar to Hotmail), which until last week was available only by recommendation from a friend.

  3. Erik – I have a long track record in the content management area and, having learned my lessons, I generally stay away from it now. If you were to trawl back in this blog to 2002/3/4 you’d see quite a lot about common content management. We even built one for the UK but struggled to get the usage that I think it deserved. I agree that it needs revisiting again – perhaps we were ahead of our time, had the wrong idea or just didn’t do it very well – because records retention policies, version control and freedom of information all come to the fore. Search my blog for FoI and you’ll see a post or 2 on how we might address that too. Nice to have you commenting here. Appreciate your insight into what’s going on where you are. Alan

  4. Anon … golly, sounds like a cutting from a puff piece. Hotmail may have exposed it to the masses but ALL-IN-1 (running on VMS) did so perhaps 20 years before. Not scoring points here; recognise there’s a big change coming. I’m not sure it leads to not buying software ever again – I think there are some interesting issues to resolve before we get there.

  5. Everyone is forgetting Jeremy Paxman, who would be a good conference speaker. He would give a practical view of clouds and Google in government as the BBC are a lot like government. I reckon Alan should get him as the next keynote speaker because more people watch Paxman now than Frost, and take him more seriously.
