More on domains, pages and the meaning of it all

I’ve had the last couple of days off. Well sort of. At home at least but working none the less. I’ve used the time to put to bed some stuff that was long outstanding and to pursue a few new things. A while ago I posted some figures on the page count (and here too) in websites and, since then, there’s been a bit more work done which I thought was worth talking about.

The original list of 800 domain names was, it turns out, a subset of the full total. I have two new totals for you: 2,643 and 3,705. Why the difference? The 2,643 figure strips out the duplicates (sites that are pointed to by more than one domain name). So there are 2,643 distinct sites behind 3,705 domain names – which pretty much guarantees a hit no matter what you type as a prefix, I guess. That’s one site for every 23,000 people, give or take a bit. I suspect that the UK is not at all out of whack versus other countries on this kind of count: anecdotal evidence from the countries I have talked to puts them at between one site per 25,000 people and one per 100,000. That’s not a particularly helpful ratio of course – it would be more useful, I imagine, to quote it per government department/entity/body (in which case we’re running at about 1:3.5 – I don’t have enough data to measure that against global norms).

Somewhat frighteningly, the total page count for those 2,643 sites is over 5 million (5,029,855 to be precise, but that was a couple of weeks ago). So that would be one page for every twelve people in the country, near enough.
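For what it’s worth, the per-capita arithmetic above checks out. A quick sketch, assuming a UK population of roughly 60 million (an assumption on my part – the post doesn’t state the figure it used):

```python
# Sanity check on the ratios quoted above. The population figure is an
# assumption (roughly 60 million for the UK at the time); the site and
# page counts come from the post itself.
POPULATION = 60_000_000
SITES = 2_643
PAGES = 5_029_855

people_per_site = POPULATION / SITES   # roughly 23,000, as quoted
people_per_page = POPULATION / PAGES   # roughly 12, as quoted

print(round(people_per_site), round(people_per_page))
```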

As before, we did some Pareto analysis, which shows that about 15% of the sites own 80% of the page count. I am sure that there are some remarkably useful niche sites in the remaining 85%, but it does make me wonder what cost per page, or cost per visit, those sites run to.
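The Pareto check itself is simple enough to sketch: sort the sites by page count and see what fraction of them you need before you’ve covered 80% of all pages. The sample data below is invented purely for illustration – the real analysis ran over the full 2,643-site survey:

```python
# Illustrative Pareto check: what fraction of sites accounts for a given
# share (default 80%) of all pages? Sample figures are made up.

def pareto_share(page_counts, target=0.80):
    """Return the fraction of sites needed to cover `target` of all pages."""
    counts = sorted(page_counts, reverse=True)
    total = sum(counts)
    running = 0
    for i, count in enumerate(counts, start=1):
        running += count
        if running >= target * total:
            return i / len(counts)
    return 1.0

# A skewed, made-up distribution: a few huge sites, many tiny ones.
sample = [50000, 30000, 10000, 5000] + [100] * 40
print(f"{pareto_share(sample):.0%} of sites hold 80% of the pages")
```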

I’m going to try to do some work with other government departments, both here in the UK and elsewhere, to get a meaningful cost/page number, both in technical terms (bits, bytes and operations staff) and also in business terms (editorial staff, approval process time, marketing etc). If I can get that kind of data I think we can have a serious conversation about the value of the content that exists on the web. Once the cost data is there, tying it up with the visitor count ought to give a metric on the value of any given site – a site with a low cost per page and a high visitor count will rank higher than a site with high visitors and high costs. I can see a league table forming.
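The league table could be as simple as ranking sites by visits per pound spent, so a cheap, busy site floats to the top. A minimal sketch, with site names and figures that are entirely invented:

```python
# Hypothetical league table for the value metric suggested above:
# rank sites by visits per unit cost. All names and numbers are
# invented for illustration only.

sites = [
    # (name, annual cost in pounds, annual visits)
    ("site-a", 500_000, 2_000_000),
    ("site-b", 2_000_000, 2_500_000),
    ("site-c", 100_000, 50_000),
]

# Highest visits-per-pound first: cheap and busy beats expensive and busy.
league = sorted(sites, key=lambda s: s[2] / s[1], reverse=True)

for name, cost, visits in league:
    print(f"{name}: {visits / cost:.2f} visits per pound spent")
```

In this made-up data, site-a (4 visits per pound) tops the table ahead of site-b (1.25), even though site-b has more visitors, because its costs are so much higher.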

The goal of all this ought to be to figure out ways to force the cost per page down – both in business process and technology terms – and drive usage by identifying what it is that makes certain sites more valuable than others. I would say this of course, but for me it should also drive consolidation – fewer sites, fewer brands, fewer navigation styles and so on. And, in turn, that will make it all the more useable. And then once we’ve got that cracked we might well have some sites that are indispensable (I’ll come to that later tonight).
