ID, ID, ID …

Jerry the Fish, Microsoft’s UK National Technology Officer, published a good piece in the Scotsman the other day – I’ve linked to Kim Cameron’s quote of it. Lord knows why Jerry chose that newspaper; maybe the majors south of the border wouldn’t touch it? Circulation figures are listed as 67,000 in Sep 2005 (with a readership just over 200,000) – you can do better, Jerry (and yes, I know that’s 199,999 more people than read this blog).

It’s a good, strong piece that says, essentially, caveat emptor to the government, e.g.

The ID card itself also needs to be carefully designed to ensure it doesn’t add to identity fraud problems by carelessly “broadcasting” personal information every time it’s used. Using the same identifiers wherever we present the ID card is a highly risky technical design. Would you be happy if online auction sites, casinos or car rental company employees are given the same identity information that provides you with access to your medical records? It’s unnecessary: we can already design systems that ensure the disclosure of personal information is restricted only to the minimum information required (a pub landlord, for example, needs only to know that you are over 18). Keeping identity information relevant to the context in which it is used is both good privacy and good security practice.

I’ve long worried about the card issuance process – after all, I see only flaws in the one banks use for credit cards (and, whilst they can create a reserve for credit losses, it’s hard to see government doing the same for ID losses) – but few seem to talk about the process for approving who can access data on the card. The checkout girl in the supermarket who checks your age presumably needs a card that says she’s allowed to look at your age (i.e. I’d like to know that someone has checked out who she is and made sure that she can only look at my age when she’s at the till, not when she’s out at a bar); the doctor in the practice needs to look at more data, but again, I’d like to know that there’s a clear process for doing that and that the nurse in the practice surgery can’t randomly look up data. This needs a lot of thought – and there are many on the web who are contributing to that debate. The crucial test is whether they are being listened to.
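The till/doctor/nurse distinction above is, in effect, an attribute-release policy: each verifier role sees only the subset of card data it is entitled to. A minimal sketch of that idea, with entirely hypothetical roles and attribute names (nothing here reflects the actual ID card design):

```python
from dataclasses import dataclass

# Hypothetical release policy: each verifier role may see only a named
# subset of the cardholder's attributes - the "pub landlord only needs
# to know you're over 18" principle.
POLICY = {
    "till_operator": {"over_18"},                # age check only
    "gp": {"over_18", "nhs_number", "name"},     # clinical context
}

@dataclass
class CardRecord:
    name: str
    nhs_number: str
    over_18: bool

def disclose(record: CardRecord, role: str) -> dict:
    """Return only the attributes the requesting role is allowed to see."""
    allowed = POLICY.get(role, set())  # unknown roles get nothing
    return {attr: getattr(record, attr) for attr in allowed}

card = CardRecord(name="A N Other", nhs_number="123 456 7890", over_18=True)
print(disclose(card, "till_operator"))  # {'over_18': True}
```

The point of the sketch is only that "who may see what" is a policy decision that has to be designed and audited, not an emergent property of the card.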

After all, Jerry says:

if someone were proposing to build the most ambitious bridge the world had ever seen and engineers could see that it would fail, and suggest ways in which it could be improved, we would expect their views to be taken into account.

We know that Norman (Lord) Foster, for all his skills, still screwed up the Millennium Bridge across the Thames. It can happen to the very best.

Funnily enough, today saw Ian Watmore talk openly about the potential for problems with the ID card in the Independent. There’s a great photo of him – one you won’t see in the online edition (there you have it – a reason to stay with the dead tree press) – looking skywards. I think he’s after salvation and divine aid, but it may be that he was rolling his eyes at the thought that he might carry the responsibility if the ID card programme doesn’t go right.

There are a couple of odd lines, like this one:

The former managing director of the consultancy firm Accenture made some big changes on taking charge – like deciding to audit how much the public-sector spends on IT.

First, I’m not sure that’s a “big change” and, second, it’s not a change at all – it’s been done before. Perhaps Ian has a scientific way of doing it now, by getting everyone to tell him how much they’re spending on both capital and operational IT across all of government (I can see the paper forms required now). We always struggled in the past to get at “day to day” budget money (i.e. costs that a department or LA could incur without specific outside approval) versus programme money (which was separately itemised to the Treasury). Interestingly, the spend quoted, at £14bn, is around £1bn more than I quoted when I used to spend time trying to add the numbers up. Remember, a billion here and a billion there and pretty soon you’re talking real money. The estimates were made every couple of years though, based on spending review requests, capital budget allocation and review of deals signed through outsourcers. So perhaps not the first attempt, but the latest and, hopefully, the most accurate.

And there’s this too:

“I can’t say anything like I know anything is going to happen.” Will it be delayed in the tradition of all great government IT projects? “I don’t think anyone is naive enough to believe this is an easy project.”

Given there’s not really a start date yet (let alone a contract let) and he’s talking about a pretty broad set of potential pilots, that sounds like a good answer. Let’s hope no one is naive enough. I’m not sure government has built too many bridges recently so maybe we should find some folks who have.

Running With Gadgets

Every run has become a bit of a gadget fest these days. On my wrist is my new Garmin Forerunner 301 GPS tracker and in my pocket is my iPod nano. If I’m running for time, I’ll listen to some up-tempo music; if I’m just running for distance, I’ll do a book. The current book is Bill Bryson’s A Short History of Nearly Everything – a great book that I have two copies of at home, one in hardback and one in paperback, neither of which I’ve ever got round to reading.

Two weeks ago I ran the Nike 10k in London. There were three or four races going on in various places; I ran the Battersea Park loop.

The last time I ran a marked course with the Forerunner was the Great North Run. It had me finishing the race a good half-mile before I actually did. This run was no different. The watch had me down as running 10.4km and finishing after 47 minutes – I actually crossed the line at 48m 34s. Motionbased – the website that hosts the graphic I’ve put just above – had the distance at 10.08km, far closer.

These differences bother me just a little – not a lot; it’s about a 4% difference versus distance on the ground, so it’s not terrible. After all, who’d have thought a GPS device could fit on your wrist, let alone deal with the constant up and down motion of a runner? But I’ve been trying to figure out the discrepancies, partly so that I know if it’s telling me I’m running faster than I am and partly so that I can understand the distance discrepancies and plan that finishing burst with a little more accuracy.

I figure that if it were just GPS errors, they’d cancel out (because I assume they’re +/- errors, so they’d make one km a bit longer and perhaps the next a bit shorter, evening out over a 10km run). But every run seems a little longer and, consistently, 10km runs come out as 10.4km, meaning that I stop running a good 90-120 seconds before I’m supposed to.
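As it happens, the cancelling-out intuition holds for position but not for distance: noise on each fix adds apparent zig-zag to the track, and summing segment lengths can only ever count that zig-zag as extra distance, so the bias is always towards over-reading. A toy simulation of a dead-straight 10km run makes the point (the sample spacing and noise level are my guesses for illustration, not Garmin specs):

```python
import math
import random

random.seed(42)

# A perfectly straight 10 km run, with a GPS fix every ~17 m
# (a few seconds apart at roughly 10 km/h). Illustrative numbers only.
STEP_M = 17.0
N = int(10_000 / STEP_M)
true_points = [(i * STEP_M, 0.0) for i in range(N + 1)]

# Zero-mean position noise on each fix: +/- errors, just as assumed above.
SIGMA_M = 2.5
noisy_points = [(x + random.gauss(0, SIGMA_M), y + random.gauss(0, SIGMA_M))
                for (x, y) in true_points]

def path_length_km(points):
    """Sum of straight-line segment lengths between consecutive fixes."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:])) / 1000

print(f"true:     {path_length_km(true_points):.2f} km")
print(f"measured: {path_length_km(noisy_points):.2f} km")  # always longer
```

Even though every individual error is as likely to be left as right, the measured track comes out longer than the true one on every run – which matches a watch that consistently reads 10.4km on a 10km course.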

After a bit of research, it turns out that there are a few reasons for this:

– Lost signal – where you run under tree cover or between tall buildings
– Elevation – Motionbased looks to map in 3D and measure distance more precisely
– Observations – the watch parses more data as I’m running than it exports to MB
– Algorithms – different, apparently spurious, numbers are thrown away by each

Garmin have just bought Motionbased so perhaps we’ll see some of this sorted out in a new version of the software (the watch plugs in via USB so I’m hoping that I can just download and go). That said, MB works with various devices and I guess they use the same algorithm, so Garmin making it work better doesn’t mean that it would be any better for other devices.

Meanwhile, I need to run the same course a few times and see if the errors are consistent; then I can just add the 1/2km to the distance and adjust for the error in the target speed. Believe me, it’s quite disheartening to think you’re done on a 10km loop 90 seconds before you actually are. On a marathon course, a 4% over-read would have me marked as done around 1.7km before the end.
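If the over-read does prove consistent, the correction is just a scale factor. A quick sketch – the 10.4/10 ratio comes from the Nike 10k numbers above; the function name and course figures are mine, for illustration:

```python
# Observed scale factor: the watch showed 10.4 km on a measured 10 km course.
WATCH_KM_PER_REAL_KM = 10.4 / 10.0

def watch_reading_at_finish(course_km: float) -> float:
    """What the watch should display when the real course is actually done."""
    return course_km * WATCH_KM_PER_REAL_KM

print(f"{watch_reading_at_finish(10.0):.1f}")    # 10.4 - keep going until then
print(f"{watch_reading_at_finish(42.195):.1f}")  # 43.9 for a marathon
```

In other words: don’t stop at 10.0 on the watch, stop at 10.4 – and scale the target pace down by the same 4% so the watch’s speed reading stops flattering me.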