Unavailable

Tried to log on to my online bank account today to check the balance, having had my debit card turned down in a store. The message from the bank read:

“I’m sorry. We are having some temporary delays. Please try again later”

With the obligatory “ok” button. No, not ok. It would be nearly ok if you told me when it would be back up or maybe a little more about the issue, or perhaps even that the reason the online service is down is the same reason that my card was turned down. But this message from a major international bank is not ok at all.

Nearly three years ago we took the online self assessment service down for some routine maintenance one Friday evening. We put a note up a week in advance, flagged the outage on the entry screen and took the service down when we said we would. We even brought it back up when we said we would. It made the BBC 6 o’clock news (yes, mainstream television news). Plainly that was absurd.

It seems that everywhere there are double standards around what is acceptable and what is not. Kablenet was catastrophically down the other day – by using the word “catastrophic” I mean DOWN, i.e. no message, no redirect, no nothing. COLT telecom took the blame.

Stuff goes down in this space. Putting things online is still new for lots of companies. Design flaws are hidden until something happens that you didn’t expect (think electricity outages in London last week). Even the smartest people get hit by outages. The trick is to have a good plan for recovery, communicate with your customers whilst you’re down and then get the service back as fast as you can.

And remember, judge others as you would wish to be judged yourself. Next time you’re down, I’ll remember and I’ll mention it when the time is right 😉 Because your service will go down, just like ours will.

MS Blog n.0

Struck today by just how many Microsoft bloggers there are. In a world where many expect Microsoft to be the evil empire, doubtless making all of its staff kill goats on the altar, it’s refreshing to see so much obviously unvalidated and unapproved commentary. Could this be a fascinating model for the MPs’ blogs that the VoxP folks champion so regularly?

Some of them have disclaimers, although not perhaps in the sense you might expect, e.g. (from Tim Ewald)

“LEGAL STUFF BECAUSE I WORK FOR A BIG COMPANY: In case it wasn’t clear to anyone, these posts are provided “as is”. They are not guaranteed to be useful or even correct – though I do the best I can – and they confer no rights”

Some of them even have code there. Not that I follow a line of it anymore. Been far too long since I had to write some code.

And perhaps my favourite, from Becky Dias:

“Who cares about technology? It’s all a matter of business”

I wonder if the policy is laissez-faire at Msft or whether it’s just too hard to track it and crack down. After all, if you were told not to do it, you could just put another one up somewhere else as an anonymous feed and deny all knowledge. Plausible deniability? Probably.

All in, can only be a good thing.

Web services and copy databases

Whilst I’ve been doing this work on Enterprise Architecture, I’ve caught up on a lot of reading on web services. I’ve read around the subject using the usual suspects and their blogs: John Gotze, Jon Udell, Phil Windley and so on. I’ve also looked at the aggregation sites, like Looselycoupled. All of those are available from the links at the left on my blog. There’s a lot of wisdom out there.

I particularly liked a piece (from April 2003) on Looselycoupled noting that many organisations are using web services to deliver short-term, tactical value on a purely point-to-point basis. It also notes that, before long, today’s pilots will be joined by new pilots and yet more pilots until the situation becomes close to impossible to manage.

What seems a long time ago now, maybe 1992 or 1993, Citibank fronted its key legacy systems with a copy database – all changes by call centre staff, front-facing systems and operations folks were made to this database. The reason was that the old system only processed things in batch and, increasingly (even then!), customers wanted to see things applied in real time. We even had an online banking application then, albeit via a proprietary dial-up network, and that increased the need to see things as they were posted so that corporate treasurers could properly manage their funds. The nice thing about building such a database in relatively modern technology is that the issues that might otherwise occur around record-locking, multiple updates and so on can be managed much more simply.

I was wondering how many people are still doing this. Just wrapping up your legacy system as a web service and offering it up may not solve some of the old problems (I wondered, for instance, how people using the legacy application through old-style services would know what was happening to the data around them). So creating this copy database to which all changes are made intra-day, whilst storing up the transactions for posting to the legacy each night and then refreshing the copy, might be something that we should do. That way customers and staff see their accurate data all the time, we don’t have to fit within the confines of the old technology, we can still use web services and we have a 24-hour accessible service. Is that how it should work or have I missed something?
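To make that concrete, here’s a minimal sketch of the pattern in Python – the names, the pence-denominated balances and the trivially simple data model are all mine, purely for illustration, and bear no relation to what Citibank actually built:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Transaction:
    account: str
    amount: int  # pence, to avoid floating-point money errors

class CopyDatabase:
    """Intra-day store fronting a batch-only legacy system.

    Reads and writes hit the copy all day; the writes are also
    journalled and posted to the legacy in one overnight batch,
    after which the copy is refreshed from the legacy's new state.
    """

    def __init__(self, legacy_snapshot: dict[str, int]):
        self.balances = dict(legacy_snapshot)  # today's working copy
        self.pending: list[Transaction] = []   # journal for tonight's batch

    def post(self, txn: Transaction) -> None:
        # Customers and staff see the effect immediately...
        self.balances[txn.account] = self.balances.get(txn.account, 0) + txn.amount
        # ...while the legacy system sees it only at end of day.
        self.pending.append(txn)

    def balance(self, account: str) -> int:
        return self.balances.get(account, 0)

    def end_of_day(self, post_batch: Callable[[list[Transaction]], dict[str, int]]) -> None:
        # Hand the day's journal to the legacy batch run, then rebuild
        # the copy from whatever the legacy now says is true.
        new_snapshot = post_batch(self.pending)
        self.balances = dict(new_snapshot)
        self.pending = []
```

Offer that up behind a web service interface and you get the 24-hour view without ever asking the legacy system to work outside its batch window.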

Online tax forms that are just like paper

Sounds a daft idea … an online form that’s just like paper … but, actually, it probably has a lot of merit. People are familiar with the tax form (or the benefit form or whatever), and transposing it online as-is, while adding validation and checking, is pretty much what everyone has done to date with their online services. That process has, however, involved the creation of vast amounts of technical infrastructure and, in most cases, even requires whoever is completing the form to sit in front of their PC, online, while filling it in. No problem if you’re a broadband user or on an unmetered tariff, but it might be a problem if you aren’t.

Confronting that, Adobe have come up with some changes to PDF that allow anyone with the reader software (version 5.1 or later) to inline edit a modified PDF file. So the tax folks can ship a PDF file to people who can then work on it in their own time, online or offline. When they’re done, the PDF wraps up the answers in an XML file and then sends them back to government. Their version of the US tax form is online here. It turns out that Adobe folks don’t make the best presenters of their own wares as I found out at a session this week, but the product needs to be explored.
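I haven’t seen the wire format Adobe actually uses, so the following is only a guess at the shape of the receiving end – hypothetical field names, a made-up payload, Python purely for illustration. The point is that the client-side checks in the PDF are a convenience; the server still has to validate whatever XML turns up:

```python
import xml.etree.ElementTree as ET

# Hypothetical payload; the real Adobe schema will differ.
SUBMISSION = """<taxReturn>
  <taxpayerRef>1234567890</taxpayerRef>
  <income>28500</income>
  <taxPaid>4100</taxPaid>
</taxReturn>"""

def receive(xml_payload: str) -> dict[str, str]:
    """Parse a submitted form and re-run the validation server-side."""
    root = ET.fromstring(xml_payload)
    fields = {child.tag: (child.text or "").strip() for child in root}
    # Never trust the in-form checks alone; the payload is just XML.
    if not fields.get("taxpayerRef", "").isdigit():
        raise ValueError("taxpayerRef must be numeric")
    return fields

print(receive(SUBMISSION))
```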

Whilst replicating a paper form online isn’t what I call e-government, it’s a way to get some services available that add value over the paper versions (error checking, context-sensitive help, electronic input for government and so on) without needing quite the same infrastructure you would otherwise build – you’ll probably still have to have people write JavaScript or VB, but it will all be embedded in the PDF file rather than needing a complex hosted environment. And it will be secure, because all the data sits in the form that the client holds rather than on a server somewhere in the cloud. Whilst those are online, we can crack on with the hard work of stripping out the redundant processes and making the necessary changes to business logic to support the delivery of transformed government. Surely even the swingometer crowd would count such a plan as progress?

On standards and guidelines

Whilst I was writing the EntArch paper, I was looking for stories that would illustrate the problems. One of the big (real big!) problems is the standards to adopt – not the actual standards themselves, but how to get them agreed, how to get people to adhere to them and how to roll them out fast enough so that they aren’t seen as a delay in the process.

Whilst thinking about how to describe an integration backbone, I came up with the following text:

To try to paint a picture of how an integration backbone works, let’s try this example. Imagine a railway turntable where trains arrive from different directions, but each track has a different gauge. The turntable lifts the body of the train off its existing wheels (leaving them behind) and moves it onto a set of wheels waiting on the next gauge. Every time a train arrives at the turntable, this process is repeated.

To put this in real terms, the track gauge in Ireland today is 1600mm; in the UK it’s 1435mm. Were we to create the equivalent of a Channel Tunnel to link the mainland and Ireland, we would need to lay new track at one end or the other to allow a train to pass through, or build the turntable arrangement I describe above. Both are probably impractical, and although standards on railway gauges were imposed in 1846, it was by then too late to solve the problem because the railways were already built.

In technical terms, we’re in the same place. Some standards do exist, but not enough. Imposing them now is far too late for any system already built that does not conform to the standard (and that means almost every system we have, even ones in the same department). So what is needed is a device to do the heavy lifting and make the necessary “gauge changes” whenever they are required.

Ideally, there is a defined standard for incoming and outgoing messages (see the pages earlier covering standards), so there is only one change needed per query – from the outward standard to one of the various inward standards.
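To make the “one change per query” point concrete, here’s a toy sketch – every system name, field name and message below is invented. Each legacy system registers a single transform from the agreed outward standard into its own inward format, and the backbone applies exactly one “gauge change” per message:

```python
from typing import Callable

# Each inward transform converts a message in the agreed outward
# standard into one legacy system's own format.
Transform = Callable[[dict], dict]
inward_transforms: dict[str, Transform] = {}

def register(system: str, transform: Transform) -> None:
    inward_transforms[system] = transform

def route(system: str, standard_message: dict) -> dict:
    # One lift per train: outward standard -> that system's gauge.
    return inward_transforms[system](standard_message)

# Two legacy systems, each with its own "gauge" (field names invented):
register("payroll", lambda m: {"NI_NO": m["nino"], "NAME": m["name"].upper()})
register("benefits", lambda m: {"claimant": m["name"], "ni": m["nino"]})

msg = {"nino": "QQ123456C", "name": "Jane Doe"}
print(route("payroll", msg))   # {'NI_NO': 'QQ123456C', 'NAME': 'JANE DOE'}
print(route("benefits", msg))  # {'claimant': 'Jane Doe', 'ni': 'QQ123456C'}
```

When an old system retires and adopts the outward standard, its entry becomes the identity transform and eventually disappears – which is the point below about the backbone doing less work over time.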

Sticking with the railway analogy for a little longer: when the Union Pacific and Central Pacific railroads were joined in 1869, the 2,000-mile journey from coast to coast that had previously taken up to four months was reduced to a mere six days. Standards and integration drive significant customer benefit!

There are, occasionally, compromise methods that can be used to move integration forward – Britain’s own railways had competing standards early on, the Brunel gauge of 7’ ¼” and the Stephenson gauge of 4’ 8 ½”. The incompatibility was solved by adding a third rail to hundreds of miles of track so that trains of either gauge could pass – a complicated but workable solution. Far better though, surely, to agree a standard and stick to it.

As the older systems retire, their replacements are built to the new (outward-facing) standard and the integration backbone does less work. But that will take time. Until then, the backbone is a vital part of putting services online, of offering joined-up services that are transparent to the customer and of buying time to replace the legacy architectures.

What worries me is that everyone will build an integration backbone with different standards and then I’ll need another integration backbone to link the integration backbones. Agh!

P.S. Anyone who doubts that there are places out there where they lift carriage bodies up and down, swapping them between gauges, need only do a bit of research on trains between Russia and the rest of Europe in the time of the Tsars (there are still places where it happens now, I believe). There are lots of stories about why the Russians adopted a different gauge – some to do with wanting to stop invading armies using their own rolling stock, and others that are far more bizarre. I owe my Uncle Paul for that bit of research.

EntArch

I’ve just finished draft one of my Enterprise Architecture paper for government. I’m going to circulate it quite widely within government and to some of the supplier community so that I can get some feedback on it before going to draft two. I haven’t even got close to an executive summary yet; I’ll do that when I get to the next draft.

I’ve got it in my head that the document is one of two things. It’s either a paper that serves to stop you getting indigestion from eating an elephant (two rules: know how big the elephant is so that you can pace yourself and have a big knife) or it’s a “you can’t get there from here” story. I’m tending towards the latter.

The essence of my paper is that before technology, the data we had was our asset. We looked after it – there was one master source for the books of a company, one customer list and so on. When technology came along we quickly created many copies of our data and manipulated them in different ways, allowing the marketing people to talk to customers one way, the sales people another and the product people yet another. We added products that needed new systems that didn’t work the same way as the old systems. We kept the programs and the data closely coupled and didn’t share anything.

Before long, the data wasn’t the asset; even the systems weren’t the asset. If we had an asset, it was probably the few people who had been around long enough to understand what had happened, what we had originally and how the systems interacted – every company has a few of those people. We need to go back to the data being the asset.