A few days ago I rambled about the relative maturity of any given country’s e-government attempts, measured by the number of .gov websites it has. I’ve thought this through a bit more and think it might be a fair reflection.
The graph above shows what I think the relationship is. The e-government target is announced – everyone rushes to build websites and, because transactions are hard to build, few get added. Websites grow exponentially, transactions arithmetically at best. At some point, someone realises this is dumb and more effort is put into transactions. But only when the effort on websites is actively countered and sites are shut down is enough effort put into transactions to really make this happen. As the website count gets lower, better and better information is available with each transaction … and e-government success results, i.e. high take-up.
The trick, obviously, is to recognise early that this is going on and take steps to reduce the website count – that’s the bit where good central infrastructure, consistent look and feel, well-researched customer feedback, focused content audits/rationalisation, content tagging (metadata and taxonomy) and RSS feeds come in. Without those things, I predict that it’s all going to be a bust. Of course, a consultant is an expert who will know tomorrow why the things he predicted yesterday didn’t happen today (borrowed from Laurence J. Peter who, googlism tells me, is the one credited with coining the “Peter Principle”).