Outlook, Exchange & Entourage – Moving from PC to Mac

File this in the “I can’t believe I haven’t done this before” box. I’ve done it now though. Finally. I’ve moved my email to a hosted Exchange provider. It was simple; far simpler than I’d have imagined.

I set up the service – I use Sherweb – in about 20 minutes across 3 Macs and an iPhone. That didn’t include the time to install Office 2008 on one of the Macs – so add 30 minutes or so for doing that if you haven’t already done it.

Moving my email from the hard drive of my MacBook Air was as simple as selecting a huge set of it, dragging it and dropping it into my server-based inbox. I had my calendar in iCal, so I synced it to the Exchange server and that was done. Contacts, from Address Book, ditto.

So if I move Macs later, or even move back to a PC, I guess I’ll never have to go through the whole conversion process again, unless I manage to find something that doesn’t support Exchange – and I really can’t see myself doing that anytime soon.

What are the flaws? I haven’t found any so far. It just works. That has to be a good sign: if anyone could have found a way to break it, it would have been me.

Ego Sum Ostium

Westminster Cathedral

I know that I often link into posts here by the strangest route. I think this hook will qualify as the most unlikely one so far – stranger than wine, Whole Foods Market, poker, Parisian bicycles and so on.

I used to work in Victoria and, most days, walked past Westminster Cathedral. I had never been in. A few weeks ago, walking between offices, I walked past it and thought, “I really should go in.” I had fifteen minutes before my next meeting and so went through the door. As you can see from the photo, it’s quite unlike your “average” cathedral – this is no St Paul’s. St Paul’s is one of my favourite buildings in London; one of the things that makes it particularly special is that it was the only cathedral of its era that was designed and built through to completion by the original architect. Strangely, the architect and builder of Westminster Cathedral died the year before the first service was held – so the rarity of an architect seeing his cathedral through to completion endures.

There are three reasons why Westminster Cathedral was built in this Byzantine style (as opposed to the more usual Gothic style):

1. To be completely different from the Gothic style of Protestant cathedrals and, particularly, to contrast with Westminster Abbey which is at the top of the road

2. The structure is based on domes not arches and so allows for relatively open and spacious areas (the nave is 34m high by 18m wide, the largest in the country) within the church – up to 2,000 people, seated, have unobstructed views of the sanctuary

3. Because it can be built more quickly. In effect, the frame goes up quickly and the decoration is left to those who follow.

There’s a fourth interesting point for me, which may or may not be related to the building style – its running cost is £1,000,000 a year. That covers all operational costs (not the occasional capital costs for major structural repairs). This church is just over 100 years old and it’s going through a small capital repair project now – rewiring, roofing replacement and so on – and they’re after about £3,000,000 to do that work.

Putting aside the fact that the UK’s Catholic Church is run from this cathedral – a whole religion for a million quid a year! – what got me was twofold: that the operational costs are so low and that they had the foresight, 100 years ago, to say “We’ll build it and let other people add and modify and decorate it later, incrementally”. Without major modification, it’s stood the test of 100 years. Show me an IT project that you could say that about even for 5 years.

So I’ll pause at this point and ask that anyone reading puts the religious intro to one side – it really was only a lead in, not a point for debate about the merits of any particular religion (or absence of one) – and concentrates on the IT and e-government thread that I continue with:

£1 million doesn’t sound a lot. Is it just that once you’re in government for a while you start thinking in multiples of £5 million or £10 million? Is it only the true believers – those, say, in MySociety – who can conceive of, deliver and operate a service for less than £50,000? Is it that a government doesn’t take something seriously if it isn’t priced in the tens of millions? Or is there some weird risk factor that gets added to cater for inevitable delays, requirement adjustments and re-thinking of specifications?

The reason I’m on this point is that, over the last few years, I’ve been brought into several projects – and not just in UK government but in other governments around the world and in private sector organisations – or seen projects from a moderate distance, that shared a few characteristics:

  1. Capital spend was largely complete versus the original budget (and, in a few cases, spend was in excess of budget)
  2. Actual scope delivered was some way (often quite some way) from the original expectation – meaning that more money would have to be found to deliver the full scope, or a commercial dispute with the supplier(s) would be needed
  3. Benefits case was starting to look decidedly flaky (and the business units were suffering because of the shortfall in scope, either needing more people or doing less for their customers than they expected)
  4. Ongoing operational costs were being calculated as the live date loomed and they were looking very much higher than had been forecast (putting pressure on future budgets). Sometimes this was because the builder was not the same as the operator – times had changed, contracts had been let separately and so on.
  5. Cost of future upgrades had not been factored, usually on the assumption that such upgrades would each have their own business case, even where the upgrade was necessary just to stay within the support of the various packages

I have no statistics to bring here but it would seem, based on my experience, that projects too often match those characteristics. So, to provoke a debate:

Knowing the cost of change

What if you developed a system / application / solution with a known cost to operate? This would be a set of calculations covering a range of things, such as: cost to add a new customer, cost to add a new user, cost to add 100 product pages, cost to connect to a 3rd party system, cost to add a new tax credit / benefit, cost to add a new taxation profile, cost to delete 100 pages and so on. You’d have to come up with the list at the beginning, but the idea would be to cover two bases: the first would give you a known operational cost, assuming you knew roughly what your business was going to do; the second would let you forecast the cost of future change from those same unit numbers (note, I’m not saying that you would set some modelled combination of these as your actual operating base – I’m saying that you would be able to forecast the cost of future change based on them).
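As a sketch of what I mean – and every unit cost, category and volume here is invented purely for illustration – the calculation itself is nothing more than unit costs multiplied by forecast volumes:

```python
# Illustrative "known cost of change" model.
# All unit costs and planned volumes below are invented for illustration.

UNIT_COSTS = {                 # cost per unit of change, in pounds
    "add_customer": 2,
    "add_user": 15,
    "add_product_page": 40,
    "connect_third_party": 25_000,
    "add_tax_credit": 150_000,
}

def forecast_cost(planned_changes: dict) -> int:
    """Forecast a change budget from unit costs and expected volumes."""
    return sum(UNIT_COSTS[kind] * volume
               for kind, volume in planned_changes.items())

# A hypothetical year's plan: 50,000 new customers, 200 new product
# pages and two new 3rd party connections.
year_one = {"add_customer": 50_000, "add_product_page": 200,
            "connect_third_party": 2}
print(forecast_cost(year_one))   # 100,000 + 8,000 + 50,000 = £158,000
```

The arithmetic is trivial; the hard part is agreeing the unit costs up front and holding everyone to them as the system evolves.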

Is that even possible?

Some people solve this by allocating a fixed cost for post-live enhancement – a pot of £1 million or £10 million into which all changes go until some future point when a major business case is prepared for a big upgrade. The pot pays for a fixed set of developers who work their way through a hopper of proposed code changes, getting as many done as possible. This approach is as often used in the private sector as the public sector. You need more changes? Add more people and the hopper gets [somewhat] bigger – Fred Brooks’ rules still apply.
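To illustrate why the hopper only gets [somewhat] bigger – using entirely made-up numbers – here is a toy model in which each pair of developers pays a small coordination tax, the Brooks effect in miniature:

```python
# Toy model of the fixed-pot-of-developers approach. Numbers are invented;
# the point is the shape of the curve, not the values.

def changes_per_month(developers: int,
                      per_dev: float = 10.0,
                      comms_tax: float = 0.15) -> float:
    """Changes cleared from the hopper per month by a team.

    Each developer clears `per_dev` changes/month in isolation, but every
    pair of developers loses `comms_tax` changes/month to coordination.
    """
    pairs = developers * (developers - 1) / 2
    return max(0.0, developers * per_dev - comms_tax * pairs)

for n in (5, 10, 20, 40):
    print(n, changes_per_month(n))
```

Doubling the team from 5 to 10 does not double throughput, and by 40 developers most of the extra headcount is going on talking to each other – which is roughly what Brooks predicted.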

What do other people do? What are the approaches?

25 Million Green Bottles Redux

Uh-oh. More data has been lost. When it happened the first time (the first BIG time, anyway) I suggested the following:

1. All of the processes around access to patient, customer, taxpayer, citizen etc data in every department, agency, non-departmental public body and local authority are going to go through a rapid review. New standards will be enforced: senior management sign-off, dual control (keys round the neck and everything), IT supplier held accountable for where data is put and so on. This will take time and still things will be missed and it will happen again – let’s hope that it’s not on this scale, but it will happen again.

* Lock down data exchange now. People come to the data, not the data to the people. Until better processes are in place, this should stop the problem from getting worse.

2. All staff should be taught the “green cross code” of using computers. The very basics need to be re-taught. For that matter, the code should be taught at schools, colleges and libraries.

3. The spooks should lead a review of deploying encryption technology to departments holding individual data so that all correspondence is encrypted automatically in transit using appropriate levels of protection for the job. This will be expensive. The alternative though is to make encryption optional – but because you can choose, sometimes people will choose not to (because it’s too slow or something) and the problem will recur.
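By way of illustration only – the policy helper and host details are my own invention, though the smtplib/ssl calls are standard Python – “encrypted automatically in transit” means refusing to fall back to cleartext rather than leaving it as a choice:

```python
# Sketch of "encrypted by default" for outbound mail: send only over TLS,
# and raise rather than quietly fall back to cleartext. The function is a
# hypothetical policy wrapper; the library calls are Python standard library.
import smtplib
import ssl

def make_strict_context() -> ssl.SSLContext:
    """A TLS context that verifies certificates and hostnames."""
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    return context

def send_encrypted(host: str, port: int,
                   sender: str, recipient: str, body: str) -> None:
    """Send mail only if the server offers STARTTLS; otherwise refuse."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.ehlo()
        if not smtp.has_extn("starttls"):
            raise RuntimeError("server does not offer STARTTLS; refusing to send")
        smtp.starttls(context=make_strict_context())
        smtp.ehlo()
        smtp.sendmail(sender, recipient, body)
```

The point of the design is that the insecure path simply doesn’t exist: nobody gets to choose “not encrypted because it’s too slow”.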

4. Systems being architected now and those to be architected in the future will look at what data they really need to hold and for how long and will, wherever possible, make transient use of data held elsewhere. The mother of all ID databases would be a good place to start.

They still seem like good suggestions, especially the interim one – lock down data exchange now. This isn’t done yet. Not in the UK and not anywhere else. It may be that the UK is getting the news stories now, but that’s because we rarely hear about those events in other countries.

This site, Privacy Rights, chronicles more data losses than any other site I’ve yet seen, including those in the USA, the UK and sometimes other countries. It’s not pretty – over 230,000,000 individual records, in the USA alone, lost, stolen, fraudulently obtained or otherwise maladministered since January 2005.

As if to reinforce the “It will happen again, to governments and companies alike” refrain, today’s newspapers bring the story of Best Western Hotels and their IT systems being hacked – with the loss of 8 million guest records. If you’ve stayed in one of their hotels in the last 12 months, you’re vulnerable. The press are saying “The details, which included home addresses, phone numbers, place of employment and credit card details, were sold on through an underground network controlled by the Russian Mafia.” Intriguingly, most of the press claim that the person at the heart of this heist was an Indian hacker; I can already hear those against off-shoring re-rehearsing their arguments.

Information Week has correspondence from Best Western refuting the more sensational claims in the press. I wouldn’t take these protestations as a sign that you shouldn’t worry.

It’s 1981 All Over Again


Andrew Glaister, Matthew Smith, Malcolm Evans, David Braben, Ian Bell, Chris & Tim Stamper, Jeff Minter, Eugene Jarvis … all names from the early 80s, all famous to varying extents for single/double-handedly writing video games that were the stuff of legend. 1K Space Invaders, Manic Miner, 3D Monster Maze, Elite, Jetpac, Attack of the Mutant Camels, Defender. These were people that I recognised and even hung out with when I had my ZX81 and handcoded, in Z80, my first programmes.

Later, new names came to the fore: Jez San (Starglider), Will Wright (The Sims), Warren Spector (Deus Ex) and, of course, Shigeru Miyamoto (Mario). Try, though, to name the names behind current mega-hit video games – Halo (1, 2 or 3), Gears of War, Killzone, Guitar Hero – and I’m pretty sure, unless you’re really, really into it, that you’ll draw a blank. And even if you can name one person, you probably know that there are dozens or even hundreds of people working alongside them. Game development – or, for that matter, any development programme – has long been a team sport. A big team sport. As I moved from ZX81 to Spectrum to BBC Micro to Atari ST and then to consoles, many of these names stayed relevant and equally famous; but the solo coders gradually disappeared and became far, far rarer. Production values got sharper, costs rose – but software didn’t necessarily get any better. 1981 seemed like a long way away.

When I was writing code back then, it was common to meet up with people who were single-handedly specifying, developing and distributing their own software – be it games, sports applications or business systems. They’d be working with 16KB or perhaps 32KB of memory and shipping their products on tapes – oh, the expectation as you waited for the tape to load, with that peculiar ZX81 interference-like loading screen (when Manic Miner debuted on the Spectrum with a loading screen that didn’t just show seemingly random lines, it was a big deal). Games in 1981 cost a few pounds, perhaps £5 or £10. Games now generally cost £40 or even £50. The consoles, for the most part, shut down the ability for individuals to produce games. The barriers to entry were too high.

And now we have two independent, but maybe highly related, vehicles where individuals can develop, publish and distribute their wares without leaving their armchairs at home: Xbox’s Live Arcade and Apple’s iPhone. Both of these platforms now allow single people [relatively] easy access to huge markets – in the millions or tens of millions. Xbox is a little harder with all of the certification processes, but those barriers look like they’re being lowered as the SDK gets out. The iPhone App Store looks to have very few barriers, as long as you aren’t trying to break the conditions that you accepted when you signed up to buy an iPhone. Certainly the latter is already populated by dozens or even hundreds of games written by solo coders. And these games and software titles cost from nothing to a few pounds. It’s 1981 all over again; except this time, Apple and Microsoft bring the market to you and handle the distribution, the money and everything else. All you’ve got to do is write the code and deliver the quality – no easy task.

Perhaps the first star of Xbox Live Arcade is Jonathan Blow, author of Braid; author doesn’t sound too strong a word – there’s a whole story behind the game and the production values are incredible for a solo effort. It could so easily have been Jeff Minter – he of Mutant Camels, Gridrunner, Hover Bovver and numerous others from the 80s – with his awesome but difficult to follow (for non-hardcore gamers) Space Giraffe. It might even have been Chris Cakebread with Geometry Wars, although he arguably had the backing of a big studio in developing his game, even if he did all the heavy lifting of coding.


Who will the first star of iPhone games be? It could already be Nate True with his Guitar Hero-styled “Tap Tap” – reportedly over a million people have downloaded this game already. Other games are already in the hundreds of thousands – Spinner Prologue, for instance. Perhaps a surprise title is consistently at Number 1 for paid downloads, where “paid” means it costs you 59p or about $1 – Koi Pond (by Brandon Bogle apparently, but who knows). Just as back then, there are any number of poorly designed, poorly written, buggy bits of code – but feedback, on the iTunes Store at least, is merciless. And the “big team” developers have been no better historically – think of any game derived from a movie tie-in!

Just a couple of things for those people writing software (ok, ok, games) – particularly for the iPhone, though these thoughts apply just as much to Xbox Live Arcade – that would make them leagues better in my eyes:

1) I’m mobile, I have little time. I want the software to start quickly and be playable, if it’s a game, very, very quickly. I don’t much care about your logos, your branding, your studio names and whatever. Maybe you can display all that the first time, but please don’t do it every time. Aurora Feint? 30 seconds to start? I’m already on the tube and off again before you’ve even started. If you must do something in the background, show a loading bar so that I know what’s going on. But better still, load only what you need.

2) Autosave whenever I exit. If a call comes in, or I need to switch out to do a text or just need to hop off the tube, I want the last thing you do to be to save status just as I press the home button. I don’t want to have to be at the end of a level, at a particular place or whatever – I just want it to save so that I can carry on when I restart.
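The principle is simple enough to sketch – here in Python standing in for the phone, with an invented file name and state shape: persist whatever the player was doing the moment the app is told to exit, and restore it on launch:

```python
# Sketch of autosave-on-exit. Desktop Python stands in for the iPhone's
# home-button event; the file name and state contents are invented.
import atexit
import json
import os

STATE_FILE = "game_state.json"
state = {"level": 1, "position": [0, 0], "score": 0}

def load_state() -> None:
    """Restore the last session's state, if any, so play resumes mid-level."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            state.update(json.load(f))

def save_state() -> None:
    """Persist exactly where the player is, not just level boundaries."""
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

atexit.register(save_state)   # save on any exit the runtime can intercept
load_state()
```

Because the save is unconditional and cheap, the player never has to think about it – which is the whole point.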

But these are early days and the potential is enormous; new platforms can take years to come into their own – let’s hope that, with the horsepower being applied to the iPhone and now to the Xbox, that period is massively compressed. In the background I’m downloading the 1.2GB Apple SDK. I haven’t the faintest idea what Objective-C is and I probably haven’t got the time to figure it all out, but I wanted to get a sense of how it is, some 25 years on, now that PEEK and POKE are relics of the past. Don’t hold your breath for my first code since probably 1983.

Unlucky Fish?

I got more than a few emails and one or two comments asking what on earth I was talking about with the previous post on “unlucky fish.” So here’s a closer look:


Does that help? Like I said, not photoshopped (apart from the red ring and the “look here” of course) or edited in any way.