Am I Being Official? Or Just Too Sensitive? Changes in Protective Marking.

From April 2nd – no fools these folks – government’s approach to security classifications will change.  For what seems like decades, the cognoscenti have bandied around acronyms like IL2 and IL3, with real insiders going as far as to talk about IL2-2-4 and IL3-3-4. There are at least seven levels of classification (IL0 through IL6 and some might argue that there are even eight levels, with “nuclear” trumping all else; there could be more if you accept that each of the three numbers in something like IL2-2-4 could, in theory, be changed separately). No more.  We venture into the next financial year with a streamlined, simplified structure of only three classifications. THREE!  

Or do we?

The aim was to make things easier – strip away the bureaucracy and process that had grown up around protective marking, stop people over-classifying data and so making it harder to share (both inside and outside of government), and introduce a set of controls that, as well as technical security measures, actually ask something of the user – that is, ask them to take care of the data entrusted to them.

In the new approach, some 96% of data falls into a new category, called “OFFICIAL” – I’m not shouting, they are. A further 2% would be labelled as “SECRET” and the remainder “TOP SECRET”.  Those familiar with the old approach will quickly see that OFFICIAL seems to encompass everything from IL0 to IL4 – from open Internet to Confidential (I’m not going to keep shouting, promise), though CESG and the Government Security Secretariat have naturally resisted mapping old to new.

That really is a quite stunning change.  Or it could be.

Such a radical change isn’t easy to pull off – the fact that there have been at least two years of work behind the scenes to get it this far suggests that.  Inevitably, there have been some fudges along the way.  Official isn’t really a single broad classification.  It also includes “Official Sensitive”, which is data that only those who “need to know” should be able to access.   There are no additional technical controls placed on that data – that is, you don’t have to put it behind yet another firewall – there are only procedural controls (which might range – I’m guessing – from checking distribution lists to filters on outgoing email).

There is, though, another classification in Official which doesn’t yet, to my knowledge, have a name.   Some data that used to be Confidential will probably fall into this section.  So perhaps we can call it Official Confidential? Ok, just kidding.

So what was going to be a streamlining to three simple tiers, where almost everyone you’ve ever met in government would spend most of their working lives creating and reading only Official data, is now looking like five tiers.  Still an improvement, but not quite as sweeping as hoped for.

The more interesting challenges are probably yet to come – and will be seen in the wild only after April.  They include:

– Can Central Government now buy an off-the-shelf device (phone, laptop, tablet etc) and turn on all of the “security widgets” that are in the baseline operating system and meet the requirements of Official?

– Can Central Government adopt a cloud service more easily? The Cloud Security Principles would suggest not.

– If you need to be cleared to “SC” to access a departmental e-mail system which operated at Restricted (IL3) in the past and if “SC” allows you occasional access to Secret information, what is the new clearance level?

– If emails that were marked Restricted could never be forwarded outside of the government’s own network (the GSI), what odds would you place on very large amounts of data being classified as “Official Sensitive” and a procedural restriction being applied that prevents that data traversing the Internet?

– If, as anecdotal evidence suggests, an IL3 solution costs roughly 25% more than an IL2 solution, will IT costs automatically fall or will inertia mean costs stay the same as solutions continue to be specified exactly as before?

– Will the use of networks within government quickly fall to the lowest common denominator – the Internet with some add-ons – on the basis that there needs to be some security but not as much as had been required before?

– If the entry ticket to an accreditation process was a comprehensive and well-thought-through “RMADS” (Risk Management and Accreditation Document Set), largely the domain of experts who handed their secrets down through mysterious writings and hidden symbols, what takes its place now – and who will be able to write it?

It seems most likely that the changes to protective marking will result in little change over the next year, or even two years.  Changes to existing contracts will take too long to process for too little return. New contracts will be framed in the new terms but the biggest contracts, with the potential for the largest effects, are still some way from expiry.  And the Cloud Security Principles will need much rework to encourage departments to take advantage of what is already routine for corporations. 

If the market is going to rise to the challenge of meeting demand – if we are to see commodity products made available at low cost that still meet government requirements – then the requirements need to be spelled out.  The new markings launch in just over two months.  What is the market supposed to provide come 2nd April?

None of this is aimed at taking away what has been achieved with the thinking and the policy work to date – it’s aimed at calling out just how hard it is going to be to change an approach that is as much part of daily life in HM Government as waking up, getting dressed and coming to work. 

Adequately Appropriate? Acceptably Appropriate? Thoughts on Cloud Security Principles

It was with some trepidation that, over the Christmas break, I clicked on links to the newly published Government Cloud Security Principles.  Trepidation because my contact with such principles goes back a long way and, in government, principles tend to hide more than they reveal. 

Some three years ago, whilst looking at G-Cloud in its early days, I proposed that, as part of the procurement process, we publish a detailed set of guidelines that explained not only what was meant by IL0, IL2 and IL3 (I skipped IL1 on the basis that, in over a decade, I had never heard anyone use it) but also what would be required if, as a vendor, you were trying to achieve any of those accreditation levels.  My thinking was that if government was truly going to encourage new players to get involved, few would commit to building out infrastructure if there wasn’t specific guidance on what they would need to do.

I produced a short document – some 4 pages – which I thought would act as a starter.  I’ve published it on Scribd so that you can see how far I got (which wasn’t all that far, I admit – I’d say it’s a beta at best).   Some weeks later, after chasing to see if it could be developed further, in partnership with some new suppliers so that we could test what they needed to know, I was told that such a document would not be viable as, and I quote, “it would encourage a tick box attitude to security compliance”.  Something in me tells me that would be no bad thing – definitely better than a no box attitude, no?

So here we are, in early 2014, and someone else – perhaps some brave person in Cabinet Office – has had another go.  Is this just a tick box exercise too?  Or would I find seriously useful principles that would help both client and supplier – the users – achieve what they both need?

Sadly, the answer is that these principles do not help.  Perhaps in a desire to ensure that there was definitely no encouragement of a tick box approach, they say as little as possible, using words that are unqualified and offering no context or examples that would help.  It strikes me as unlikely that any security expert in a department will find a need to refer to them, and any supplier seeking some clues as to the fastest route to an accredited service will linger on them for no more than a moment.

For instance:

– The word “adequate” or “adequately” is used four times.  As in “The confidentiality and integrity of data should be adequately protected whilst in transit”.  Can’t disagree with that.  Though, of course, I don’t know what it means in delivery terms (a sketch of one possible concrete reading follows after this list).

– “Appropriate” crops up three times.  As in “All external or less trusted interfaces of the service should be identified and have appropriate protections to defend against attacks through them”.  Excellent advice, everything should always be appropriately protected.  No more and no less.  But how exactly?

– Or how about this: “The service should be developed in a secure fashion and should evolve to mitigate new threats as they emerge”.  No one would want you to develop an insecure service but what exactly is meant by this?

– Or this one: “The service provider should ensure that its supply chain satisfactorily supports all of the security principles that the service claims to deliver”; so now the service provider needs to decide what is meant by the principles and ensure that anyone it uses also complies with their very vagueness.
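
To show how small a step it would have been to give suppliers something concrete, here is a minimal sketch – Python, purely illustrative, with a made-up endpoint – of one possible reading of “adequately protected whilst in transit”: verify the server’s certificate, check the hostname and refuse anything older than TLS 1.2. Whether that is what “adequate” actually means is exactly the question the principles leave open.

```python
import ssl
import urllib.request

# One possible concrete reading of "adequately protected whilst in transit":
# - the certificate must chain to a trusted CA
# - the hostname must match the certificate
# - nothing older than TLS 1.2 is accepted
context = ssl.create_default_context()            # verifies certificates and hostnames by default
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy TLS/SSL versions

# Hypothetical service endpoint, for illustration only.
url = "https://cloud-service.example.gov.uk/status"

with urllib.request.urlopen(url, context=context) as response:
    print(response.status, response.read(100))
```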

Of course, there’s a rider at the front of the document, which says:

This document describes principles which should be considered when evaluating the security features of cloud services. Some cloud services will provide all of the security principles, while others only a subset. It is for the consumer of the service to decide which of the security principles are important to them in the context of how they expect to use the service. 

So not only do we have to decide what is adequate and appropriate, we have to decide which of the principles we need to adopt adequately and appropriately so that we have adequate and appropriate security for our service, lest it be seen as inadequate and/or inappropriate perhaps.  How appropriate.

This is hardly academic.  If you want commodity services, then you need to provide commodity standards and guidelines.  Leaving vast areas open to interpretation only furthers the challenges for new suppliers (and entrenches the capabilities of those who already supply) and means that customers are unable to evaluate like for like without detailed (and likely continuing) reviews.

To give an example, I recently sat with great people from three different government departments to look at the use of mobile devices.   One was using WiFi freely throughout their building (connected to ADSL lines) to allow staff with department-issued iPads and Windows tablets to access the Internet.  Another had decided that WiFi was inherently untrustworthy and so insisted that staff use the 3G or 4G network, even issuing staff with Windows tablets a dongle that they needed to carry around (and pair via Bluetooth – which is, I assume, more secure than WiFi in their eyes) to access the Internet.

If three departments can’t agree on how to configure an iPad so that they can read their email (this wasn’t about using applications beyond Office apps), what hope is there for a supplier offering such a service?  Where is the commodity aspect that is necessary to allow costs to be driven down? And how would a new supplier, with a product ready to launch, know how it would be judged by the security experts so that it could be sold to the public sector?

Principles such as these encourage – perhaps even direct – departments to come to their own conclusions about what they need and how they want it configured, just as they have done for the last three decades and more. 

With today’s protective markings – IL0, IL2 and IL3 etc – that is one thing.  With tomorrow’s “OFFICIAL”, there is a real need for absolute clarity on what a supplier needs to do and that can only come from the customer being clear about what they will and won’t accept – it cannot be that one department’s OFFICIAL is another department’s UNACCEPTABLE. 

Fingers crossed that this pre-Alpha document is allowed to iterate and evolve into something that is useful.

Are You Integrated

I came across this sign this week – on a hoarding just opposite Parliament.

I was intrigued by the spelling of their name – “intergrated” rather than the expected “integrated”.

And also by the, I imagine, deliberate change of colour for the “i” and the “r” – perhaps some play on the word “infrared” when applied to matters of security?

I looked them up today – if you type “intergrated security” into Google, you get over 15 million hits (as well as the obvious “did you mean ‘integrated security’?” question at the top of the results). Putting the two words in quotes reduces the count to 1.2m.

The first on the list is “Intergrated Security Management”, who, whilst plainly being Intergrated, have managed to have a list of Integrated Partners. Partners Integrated with Intergrated, one assumes.

The company from the sign has only a placeholder website promising an update soon – although like the “back in 10 mins” sign in a shop window, it’s unclear when the page was first put there.

So are we to think that all of these hundreds of thousands of people can’t spell? I couldn’t find a trace of a confession – something like “hey, when we registered the company name, we were drunk and mistyped it and so we’re stuck with it now.” Some would say I have too much time on my hands, I guess.

e-Government Goes Mainstream (again)

Over two years ago the first e-government phishing story broke. It wasn’t a big story and it took me a while – some 4 months – to blog about it.

So imagine my surprise on boarding the tube yesterday to find it strewn, as ever, with copies of the Metro carrying this headline.

e-Government is front page news, of a kind. Plainly, any time the taxman offers you something for nothing you’d imagine it’s a fraud, so I’d be intrigued to know if anyone has actually fallen for the emails that are apparently being sent in their tens of thousands.

The roll out the barrel/Nigerian 419-type scam has been around for decades, moving from letters to faxes to emails and losses have been real and genuine through that time, so doubtless some have indeed fallen for this. And we need to do something about that. But it’s an old story and not an easy fix.

Back in 2000, we quickly learned in the Inland Revenue that there really was no such thing as bad publicity. Whenever an outage of the Self Assessment service – whether planned or unplanned – became mainstream news (I have less than fond memories of front page features in the FT and top of the hour stories on BBC news), traffic went up. Too few people knew of the option to use the web to engage with government, and so this free PR was actually helpful in getting the message out – even though the content of the story was generally negative.

So here we are again – in the run-up to the end of January, the biggest peak in tax return filing – with a headline story about e-government … and maybe a boost in visitors to government websites. That sounds as though I’m suggesting there was an agenda here – I wasn’t. The spammers are certainly not in league with the government – my point is that it is rare for e-government to make the headlines, and rarer still when it’s actually a good news story (i.e. the news here is “take care folks, there are bad people about, here’s what you need to do” as opposed to “another bloody government disaster”).

Meanwhile … what to do about the underlying problem, that it’s all too easy for a fake website, or an email leading to a fake website, to capture what ought to be confidential details? I’ve posted here more than a few times about my suggestions – back in 2004 I talked about AOL and its keyfob plans, in 2003 I referred to a piece by Simon Moores, and I even proposed some answers. We’re still not there. My bank uses credit-card sized keypads that generate numeric codes that need to be tapped in every time; some sites use pictures that, if they don’t appear, tell you that the site isn’t genuine. We need something for government too.

Viral Distribution

News yesterday that several London hospitals had been shut down because of the outbreak of a virus would perhaps make you pause briefly and think of MRSA or some new antibiotic-resistant strain of Staph. So far, so not news – although, thankfully, far less common recently because of, I imagine, Herculean efforts by hospital staff. To hear that it was actually a computer virus makes you pause longer.

The mytob virus, apparently responsible for the shutdown, is more than 3 years old. It’s easy to protect against and well understood – Symantec’s own write-up rates it as a low-level threat.

When was the last time you heard of a computer network being shut down by a virus? Well, not that long ago. Along with the hospitals, we have this news today.

It seems we’re approaching the annual peak for computer virus infection:

Computer users have been warned to take extra special care next Monday as it has been predicted to be the worst day of the year for computer viruses. Security experts PC Tools has forecast the bleak outlook for computer fans on November 24th, as figures from 2007 show that it was the peak for malicious software last year.

But seriously … an entire network shutdown now? In late 2008?

Shortly after I started work in UK government, a series of departments were shut down for 2 or 3 days, some longer, because the Melissa virus infected their email systems. Chaos reigned as all email servers were shut down and nothing could be sent or received. How quickly we had come to be reliant on email. In a hospital, where it isn’t just email but seemingly everything, it must be much worse.

Not long after that, the OGC piloted an anti-virus solution that was hosted “in the cloud” – i.e. it sat not on local PCs but in a service that filtered every incoming (and later outgoing) email for any government email address that was signed up. We took that pilot on, probably mid-2002, and extended it to every single government email address that wanted to use it. It wasn’t cheap – but measure that cost against the cost of an infection, whether in clean-up time, risk to the operation or any other metric you care to use. Since then, as far as I know, there hasn’t been a single virus infection in a government department using the service. MessageLabs, at the time a tiny company, has since gone on to be a world leader in anti-virus (and was bought by Symantec for some $700 million).
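
Incidentally, the way such a hosted service works is simple to see from the outside: a domain’s MX records are pointed at the filtering provider, which scans mail and only then hands it on to the department’s own servers. Here is a minimal sketch – Python with the third-party dnspython package; the domain and provider names are made up – of checking where a domain’s inbound mail is actually routed.

```python
import dns.resolver  # third-party package: dnspython

# Hypothetical department domain and hosted filtering provider, for illustration only.
DOMAIN = "department.example.gov.uk"
FILTER_SUFFIX = "mailfilter.example.net"

def mail_routed_via_filter(domain: str) -> bool:
    """Return True if every MX record for the domain points at the filtering service."""
    answers = dns.resolver.resolve(domain, "MX")
    hosts = [str(rdata.exchange).rstrip(".") for rdata in answers]
    print("MX hosts:", hosts)
    return all(host.endswith(FILTER_SUFFIX) for host in hosts)

if __name__ == "__main__":
    print("Filtered before delivery?", mail_routed_via_filter(DOMAIN))
```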

What’s my point? I guess it’s the frustration that these lessons have been learned already – and the solution is available at a relatively nominal fee. It’s been well tested and well used for 5 or even 6 years. And hundreds of thousands of email accounts across government are already protected.

For a hospital to be exposed to this kind of risk, with everything else that they have to deal with on a day to day basis, is just shocking.

And, as for the Pentagon, they should already know better – but they should also be reading my blog. Ban USB sticks now.

Gateway In The News

It’s been a long time since the Government Gateway was in the news. Today there are 259 related items in Google News. And, of course, it has gone international, with multiple languages evident on even the first page of results. Most of them aren’t good news stories, rather retellings of what is by now a formulaic news story:

“Memory stick containing details of millions of customers/patients/armed forces members/taxpayers/benefit recipients/credit card holders lost. Fears over identity theft/terrorist action/confidentiality breaches reach fever pitch”

Of course it hurts all the more when it’s something that I was intimately involved in, albeit I haven’t been near it organisationally for 4 years, but the relentless and unending series of data loss fiascos is taking a huge toll on public confidence. It isn’t just government organisations that lose data (see my post, 25 million green bottles, from almost exactly a year ago and the follow-up about 3 months ago) but when governments do it (and, again, it isn’t just the UK government) the potential impact and the surrounding noise are orders of magnitude larger.

What someone was doing with a memory stick containing customer login details I have no idea. Why would anyone need such a thing? And why would he or she be in a pub carpark? On second thoughts, don’t answer that last question.

I suspect that there are elements of truth and untruth in the Mail on Sunday’s front page story – oh, the times we used to hope for headline news for e-government, but not this kind of headline – and that the real story is perhaps quite different. But it doesn’t matter; the damage is done. It’s another “incompetence of IT” story to add to the seemingly infinite list.

It seems, to me at least, that the actions I put forward a year ago are just as valid:

1. Lock down data exchange now. People come to the data, not the data to the people. Until better processes are in place, this should stop the problem from getting worse.

2. All staff should be taught the “green cross code” of using computers. The very basics need to be re-taught. For that matter, the code should be taught at schools, colleges and libraries.

3. The spooks should lead a review of deploying encryption technology to departments holding individual data so that all correspondence is encrypted automatically in transit, using appropriate levels of protection for the job. This will be expensive. The alternative, though, is to make encryption optional – but because you can choose, sometimes people will choose not to (because it’s too slow or something) and the problem will recur.

4. Systems being architected now and those to be architected in the future will look at what data they really need to hold and for how long and will, wherever possible, make transient use of data held elsewhere. The mother of all ID databases would be a good place to start.

Where I work, memory sticks don’t work. Plug one in and it just doesn’t work (and we’re using Windows XP rather than anything fancier). So perhaps the next actions are:

5. Any contractor or third party working with or alongside government agencies must deploy a standard desktop and server build that disables memory sticks when they are inserted into a USB slot (a sketch of one common way a build might do this follows after this list). For good measure, they should perhaps ensure that if a memory stick is even inserted, it is securely and irrevocably wiped. Such third parties would have 90 days to implement this capability across their entire organisation or would be banned from working on government contracts – existing and new – until they had completed the task.

6. Any member of such an organisation found to be carrying a memory stick during the period from now until the redeployment of USB countermeasures was complete would be prevented from entering any government building or using any government IT. This would be enforced through random searches, x-raying of bags on entry into buildings and so on.
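
For what it’s worth, the technical part of item 5 is a modest ask. Here is a minimal sketch – Python, Windows only, needs administrator rights, and illustrative of one common approach rather than any particular department’s build – of disabling the USB mass storage driver through the registry, which gives exactly the “plug one in and it just doesn’t work” behaviour described above.

```python
import winreg  # Windows-only module in the Python standard library

# The USB mass storage driver is controlled by this service key.
# Start = 3 means "load on demand" (memory sticks work); Start = 4 means "disabled".
USBSTOR_KEY = r"SYSTEM\CurrentControlSet\Services\USBSTOR"
DISABLED = 4

def disable_usb_mass_storage() -> None:
    """Disable USB mass storage devices (requires administrator rights)."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, DISABLED)

if __name__ == "__main__":
    disable_usb_mass_storage()
    print("USB mass storage driver disabled; newly inserted sticks will not mount.")
```

Enforcing it across an estate is, of course, a group policy and contract management problem rather than a scripting one – which is rather the point of items 5 and 6.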

Extreme? Possibly. But it seems that every measure short of this is failing – and that, short of opening up all of the firewalls and setting server passwords to their defaults, it would be hard for any public or private sector organisation to do a worse job of securing data. And I mean that in the widest sense: whilst we in the UK see our own examples more frequently, everyone else has the same problem too.

Phorget Phishing?

When you see news stories breaking claiming that over 45 million people have had their credit card details stolen, a reasonable first reaction would be to ask why you bother protecting your data on your home PC if some faceless corporate is going to make it available to anyone who checks in. We might as well all change our banking passwords to “slartibartfast” and be done with it.

When that many people find their finances suddenly put at risk, in one go, there’s bound to be news coverage. Google is carrying over 1,000 reports on the problem. Mainstream newspapers all over the world are reporting. It’s not helped when the company spokesperson says “These figures only relate to what we do know. There is a lot more we do not know and may never know. We have identified two [computer] files that were removed from our UK system but we still do not know precisely what was in them” – otherwise known as “we haven’t a clue.” The BBC was told “that 100 files were moved from its UK computer system in 2003, and two files were later stolen.”

Even when the information, whatever it was, was stolen is less than clear: the company confirmed that information had been stolen from 45.6 million cards used in Britain and North America between December 31, 2002, and November 23, 2003. It did not know how many details had been stolen for transactions made between November 24, 2003 and June 28, 2004 (both quotes sourced from the Times Online).

According to the company’s own SEC filing, they’re unable to say “whether there was one continuing intrusion or multiple, separate intrusions.” Maybe the login details were put on warez.ebuy.com and made available to everyone?

Yet, happily, they are able to say, with a degree of certainty that is out of line with their earlier uncertainty, “Of the details stolen in both Britain and America, 30.6m came from cards which had expired at the time of the breach, while 15m were unexpired. Of those still valid, 3.8m had “masked” or encrypted information but 11.2m had clearly accessible data.”

The banking industry will reassure us, of course, by saying that the new Chip and PIN technology prevents this information from being useful any more – which is exactly why increasing amounts of cardholder-not-present fraud and overseas use of stolen credit cards are being seen.

Such news is certainly enough to make you wonder whether the fuss over home PC security is worth it.

The Anti-Phishing Working Group reports 280,179 known phishing attacks in the 12 months to January 2007, with average monthly growth of about 6%. These are, of course, “reported” attacks. Who knows how many go unreported? Perhaps a better piece of data is the number of actual phishing sites (i.e. illegitimate, say, banking sites masquerading as the real thing), which ran to over 27,000 in January, down from a peak of 37,000 in October, but still up threefold on the total a year ago.

December saw the first government branded phishing attack with an email, supposedly from HM Revenue and Customs, suggesting that you were due a tax refund (of either £70 or £170, reports vary). Indeed, there may be another circulating today (although given the date I’m wary of anything published today) that offers a refund of “J140“, however much that may be.

When I was first shown a phishing demo by SimonF, sometime around mid-2001, I was stunned by both the brazenness and simplicity of the process. A spoof Government Gateway website, cloned from the HTML of our very own, type in your userid and password, see a failure message (your details have been captured somewhere in the background) and you’re bounced back to the main Gateway site – where you enter your details again, this time on the real site. With government userid and password details being necessarily complicated (long story), mis-typing them is incredibly common – it probably happens 1 time in 3 even now (the stats are tracked but I can’t remember the exact ratio). At the time it wasn’t that important – government didn’t pay money out via the web and we figured you were unlikely to want to file my tax return (there was some concern about the potential for de-stabilising e-government by harvesting lots of account details and then sending random tax returns, either to cause a denial of service attack or just to cause extra work behind the scenes, but it seemed unlikely). Since that early example, attacks have become far more sophisticated, notwithstanding that many still don’t manage the basics of grammar.

Digital certificates were one answer to this problem, although browser incompatibilities, issuance difficulties and stability problems prevented them being part of the solution then. Physical tokens – USB devices – were another, but many of the problems that afflicted digital certificates applied there too: did the user have USB (at the time it wasn’t as widespread as now), was the port accessible (the idea of ferreting behind a desktop PC in a library or internet cafe wasn’t seen as part of the e-government experience), issuance and so on. Both are now viable solutions but rarely used, at least at the consumer level.

Instead, many financial services companies have gone for simpler solutions – pull-down menus asking for, say, letters 2 and 5 of your secret word, or multiple challenge questions (what is your dog’s name, what is your favourite film and so on) with any 2 of 6 picked to allow logon. Bigger banks with richer customers have opted for DES Gold-style one-time password (OTP) devices. The hackers will work through these and will find ways to get the information they need. It’s easier to see how they work the latter than the former – if they can capture the OTP and then use it right away, perhaps they can make a transaction happen before the customer knows; challenge questions that only give part of the password would seem harder, perhaps requiring multiple passes (although doubtless some people, if asked to enter the entire word, will still do so).
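
The OTP devices themselves are less mysterious than the gold-plated branding suggests: a shared secret, a counter or the current time, and a keyed hash. Here is a minimal sketch of the time-based variant – Python, using the standard RFC 6238 construction with a made-up secret, rather than whatever any particular bank’s token actually does.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password (RFC 6238 style) from a shared secret."""
    counter = int(time.time()) // period                   # 30-second time step
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()  # keyed hash of the counter
    offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    # Hypothetical shared secret: in a real token it is provisioned at manufacture.
    print("Current code:", totp(b"not-a-real-secret"))
```

The bank simply runs the same calculation and compares – which is also why a captured code used immediately is still worth something to an attacker, while a code that has aged past its 30-second window is not.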

But can we Phorget Phishing? It seems unlikely. A Google search for the single word “phishing” gives over 23,000,000 results. Wikipedia says that losses are large:

It is estimated that between May 2004 and May 2005, approximately 1.2 million computer users in the United States suffered losses caused by phishing, totaling approximately $929 million USD. U.S. businesses lose an estimated $2 billion USD a year as their clients become victims.[38] In the United Kingdom losses from web banking fraud — mostly from phishing — almost doubled to £23.2m in 2005, from £12.2m in 2004,[39] while 1 in 20 users claimed to have lost out to phishing in 2005.[40]

Getting accurate, current information is still challenging. But phorgetting phishing still seems out of the question – 6 years of questionable advance in security technology seems to have been matched by 6 years of rather better advance in the world of the hackers, coupled with many, many more inexperienced users added to the internet mix. Technological, legal and educational responses will all have to work together to move this forward.

At Simon Moores’ e-Crime conference a year or so ago, I challenged the security product industry to take a less technology- and marketing-centric view. I put up this slide (it’s an ad from Wired magazine, probably December 2005 or January 2006).

Firewall and security products always seem to be sold like this razor – 5 times more protection than you had before, 1 special new blade that chops out left-handed viruses with impunity, new breakthrough technology to do all sorts of things that you won’t understand so we won’t explain them to you. Product names have gone from version numbers (1.5, 2.0, 3.0) to annual updates (95, 97, 2000) to video game console labels (the new Norton 360 seems to have copied Microsoft’s Xbox 360, though doubtless it means something clever like “all round protection”, like some new kind of deodorant).

What I want is to know that whatever product I have will “kill all known germs dead” – I don’t care what they are, I just want to know that I’m protected. And if I’m going to pay for daily, monthly or yearly updates, I’d like the vendor to take on some of the liability – if I get infected, whether through my own stupidity or because the product hasn’t worked the way it’s supposed to, then I’d like to be repaid for the damage. Insurance companies don’t say, “sorry sir, you should have seen that the slope was steep and that the route was clearly marked as a black run and declined to descend; when you did, and you broke your leg, you failed to comply with our policies”. Why should I pay the fee but get no real coverage? Having seen, only a few weeks ago, a perfectly good (and accredited) up-to-date piece of virus protection software get tricked by a particularly malicious bit of code, causing widespread damage, I know it does happen.

Sadly, even Domestos has had to abandon its 50 year old slogan “kills all known germs dead” to say that it kills only “99 per cent of known germs”. That will be advertising standards for you.

At the rate of technology change, we need “kills all germs dead, known or unknown”. But, in the meantime, with Vista vulnerabilities reportedly being sold on the internet for $50,000 and up, we’re going to have to pay more than a little bit of extra attention.