Agile Meets Waterfall

This morning I’ve been looking at the release schedule for a major project launching next year. Chances are that you will be affected by it somehow. No names disclosed though. It’s a replacement for an existing capability and will use an off the shelf, quasi-cloud product (that is, software that is already in use by others and so can, in theory, be configured but that, in practice, will need some development to make it do what is really needed).

The top level project plan looks like this:

Spring 2019: Discovery
Summer 2019: Design and Build
Spring 2020: Testing
Summer 2020: Go-live
Spring 2021: Transition completed
Thereafter: Continuous improvement

Plainly the team want to be agile. They want to iterate, adding features and capabilities and seeing how they land before improving them – going round the discovery / alpha / beta / live process repeatedly. But, at the same time, they have a major existing system to replace and they can’t replace just part of that.

They also, I suspect, know that it’s going to be harder than everyone expects, which is why they’re sticking to those nebulous seasonal timeframes rather than talking about particular months, let alone actual dates. The rhythmic cadence of Spring/Summer milestones perhaps suggests an entirely made up plan.

Ever since I first joined the Inland Revenue and heard a team saying that they “hoped to be up to speed by the Autumn” I’ve laughed whenever seasons are mentioned in the context of delivery. It’s as if there are special watches issued with 4 zones marked on them – one for each season.

What do you do when meshing the need to replace an existing, widely used system that is no longer doing what it needs to do and that needs radical upgrades, with a new capability that can’t be introduced a bit at a time? This is not a start-up launching a new product into the market. They’re not Monzo, who were able to start with a pre-paid debit card before moving into current accounts, credit cards etc.

These kinds of projects present the real challenge in government, and in any large organisation:

How do you replace your ageing systems with new ones whilst not losing (too much) capability, keeping everyone happy and not making an enormous mistake?

At the same time, how do you build a big system with lots of capability without it going massively over-budget and falling months or years behind plan?

There are some options to consider:

  • Is a NewCo viable? Can a new capability be set up in parallel with the old, and customers, users or businesses migrated to that new capability, recognising that it doesn’t do everything? This is ideal MVP territory – how much do we need to have to satisfy a given group of customers? Ideally that group has less complicated needs than the full set of customers, and we can shore up the NewCo with some manual or semi-automated processes, or even by reaching into the old system.
  • Can we connect old and new together and gradually take components away from the old system, building them in the new world (there’s a sketch of this after the list)? This might particularly work where a new front end can be built that presents data far more clearly and adds in data from other sources. It might require some re-engineering of the old system, which perhaps isn’t used to presenting data via APIs and has never been described as loosely coupled.
  • Is there standalone capability that will make for a better experience, perhaps using data from the old system (which can be extracted in lots of different ways from cumbersome through to smooth), with that new capability gradually expanded?
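
The second option is essentially a “strangler” approach: a front door sends each request to whichever side currently owns the capability, and the old system is hollowed out behind it. Here is a minimal sketch, assuming a hypothetical router; all of the names and handlers are invented for illustration, not taken from any real system.

```python
# Hypothetical "strangler" router: a request goes to the new system once its
# capability has been rebuilt there, and falls back to the old system otherwise.
# All names are illustrative; this is a sketch of the pattern, not a real API.

from dataclasses import dataclass, field
from typing import Callable, Set


@dataclass
class CapabilityRouter:
    old_system: Callable[[str, dict], dict]          # handler for the legacy system
    new_system: Callable[[str, dict], dict]          # handler for the replacement
    migrated: Set[str] = field(default_factory=set)  # capabilities rebuilt so far

    def migrate(self, capability: str) -> None:
        """Mark a capability as now owned by the new system."""
        self.migrated.add(capability)

    def handle(self, capability: str, request: dict) -> dict:
        target = self.new_system if capability in self.migrated else self.old_system
        return target(capability, request)


# Usage: start with everything on the old system, then move capabilities one by one.
router = CapabilityRouter(
    old_system=lambda cap, req: {"handled_by": "legacy", "capability": cap},
    new_system=lambda cap, req: {"handled_by": "new", "capability": cap},
)
router.migrate("view-statement")              # first slice rebuilt in the new world
print(router.handle("view-statement", {}))    # handled by the new system
print(router.handle("submit-return", {}))     # still handled by the legacy system
```

The point of the sketch is that the old system’s interfaces stay in place while capabilities are removed from behind them, which is why the re-engineering mentioned above (APIs, looser coupling) tends to be where the real cost sits.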

None are easy, of course, but they are all likely better than a big bang or the risk of a lengthy project that just gets longer and more expensive – especially as scope increases with time (people who have time to think about what they want will think of more things, and industry will evolve around them giving them more things to think about).

There are, importantly, two fundamental questions underpinning all of these options (as well as any others you might come up with):

1. What do we do today that we don’t need to do tomorrow? Government systems are full of approaches to edge cases. They are sedimentary in nature – built over the years (and decades) with layer after layer of policy change. Today’s system, and all of the capability it has, is not the same as what is needed now. Asking this question allows a jackhammer to be taken to the sedimentary layers so that the new capability isn’t trying to replicate everything that went before.

2. What do we need to do tomorrow that we don’t do today? This gives the policy arm the chance to shape new capabilities, including simpler policies that could have fewer edge cases but still cater for everyone. It also allows new thinking about what the right way to do something is, considering everything that has gone before but also what everyone else is doing (in other industries, other government departments and other countries).

Asking those questions, and working the answers really hard, will help ensure that the solution reflects the true, current need, and will also help get everyone off the page of “we need to replace what we have” when, in reality, that’s the last thing that anyone should be trying to do.

There is, though, one giant caveat to all of this which I will take a look at tomorrow.

Diagnostic Parsimony. Or Not.

Occam’s razor says, roughly anyway, the simplest solution is likely the correct one. Hickam’s dictum says, roughly, and particularly in the medical world, don’t simplify too quickly as there may be multiple causes of your problem.

We have a tendency to be overwhelmed by the latter – so many things to look at, understand and do – and so reach quickly for the former: be agile. Go to cloud. Adopt product X. Do a reorganisation.

The real trick is to apply both, teasing out what the main contributors are, and then applying the solution that makes most sense and that will give the best return for effort expended.

The big problem comes when Occam’s Razor and the oxymoronic phrase “quick wins” are applied together. There are few simple solutions, they’re not quick and there isn’t much to win. If there were, they’d already have been done.

Withered / Weathered Technology

Whilst Shigeru Miyamoto, the public face of Nintendo, is rightly regarded as the leading light of the video game industry, there is another, unsung, hero, also of Nintendo: Gunpei Yokoi.

He pioneered what we loosely translate as “lateral thinking with withered (or possibly weathered) technology” – taking electronic components (be they chips, LCD screens or whatever) that were no longer leading edge and were, in fact, far from that position, and using them to create affordable, mass produced gadgets.

Gunpei Yokoi was behind some extraordinary hits including Game and Watch and then the Game Boy (an 8-bit, black and white, low resolution handheld gaming console released at a time when every other company, including Atari and Sega, was already moving to colour, high resolution displays – if you know the story, you know that the Game Boy dominated the market for years; total unit sales of some 120m, and 26 million copies of Tetris alone).

Arguably that very thinking is behind more recent products – perhaps Nintendo’s Wii and Apple’s iPod shuffle.

In the modern rush to harness new technology capability – be it blockchain, machine learning and artificial/augmented intelligence, new databases, new coding languages, new techniques, voice recognition etc – we sometimes forget that there are proven technologies and capabilities that work well, are widely understood and that could be delivered at lower risk.

Real delivery in government requires large scale systems that are highly reliable – you’re front page news if it goes down after all – and that do what’s needed.

Is there, then, a case for putting the new and shiny to one side, whilst experimenting with it (of course) to assess its potential, but not relying on it to be at the core of your new capability until it’s ready?

The core systems at the heart of government are definitely both withered and weathered; they’ve been there for some decades. They need to be replaced, but what should they be replaced with?

Technology at the very leading edge, where skills are in short supply and risks are high, or something further back from the bleeding edge, where there is a large pool of capability, substantial understanding of performance and security, and many existing implementations to compare notes with?

Dealing With Legacy Systems

Legacy systems, that is, systems that work (and have worked for a couple of decades or longer in many cases), do the lion’s share of the transactional work in government, but they also hold back the realisation of many policy aspirations.

Our legacy systems are so entwined in our overall architecture, with dozens (even hundreds) of interfaces and connections, and complicated code bases that few understand, that changes are carefully handled and shepherded through a rigorous process whenever work needs to be done. We’ve seen what goes wrong when this isn’t handled with the utmost care: the problems at TSB, NatWest, RBS and Tesco Bank, for instance.

The big problem we are facing looks like this:

Our policy teams, and indeed our IT teams, have much bigger aspirations for what could be achieved than the current capability of systems.

We want to replace those systems, but trying to deliver everything that we can do today, as well as even more capability, is a high risk, big bang strategy. We’ve seen what goes wrong when we try to do everything in a single enormous project, whether that be the Emergency Services Network, the e-Borders programme, Universal Credit etc.

But we also know that the agile, iterative approach results in us getting very much less than we have today, with the promise that we will get more over future releases, though the delivery timetable could stretch out for some time and with some uncertainty.

The agile approach is an easy sell if you aren’t replacing anything that exists today. Monzo, the challenger bank, for instance, launched with a pre-paid debit card and then worked to add current accounts and other products. It didn’t try and open a full bank on day one – current accounts, debit cards, credit cards, loans, mortgages etc would have taken years to deliver, absorbed a fortune and delayed any chance to test the product in the market.

It’s not, then, either/or but somehow both/and. How do we deliver more policy capability whilst replacing some of what we have and do so at a risk that is low enough (or manageable enough) to make for a good chance of success?

Here’s a slide that I put up at a conference in 2000 that looked at some ways that we might achieve that. I think there’s something here still – where we build (at the left hand end) some thin horizontal layers that hide the complexity of government … and on the right hand side we build some narrow, top to bottom capabilities and gradually build those out.

It’s certainly not easy. But it’s a way ahead.

Computer Says No

The FT has a front page story today saying that Ulster Bank is absorbing the cost of negative interest rates (on money it has deposited at the ECB) because its systems can’t handle a minus sign. Doubtless whoever wrote the code, maybe in the 80s, never thought rates would fall below zero.

We had a similar problem at a bank in the 90s when our COBOL based general ledger couldn’t handle the number of zeros in the Turkish lira; we wrote to their central bank and PM to see if they wouldn’t mind lopping a couple off so that we could continue to process transactions. History does not record the answer, but I suspect there came none.
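
I can’t verify either system’s internals, but the underlying failure mode is easy to sketch: a fixed-width, unsigned numeric field has nowhere to put a minus sign or an extra digit. A toy illustration, with the field width and decimal scale invented for the example:

```python
# Toy illustration of a fixed-width, unsigned numeric field of the kind found in
# older record layouts. The width and the implied decimal scale are invented here.

FIELD_DIGITS = 5   # assumed: five digits, no sign position
SCALE = 100        # assumed: two implied decimal places

def encode_rate(rate_percent: float) -> str:
    """Pack an interest rate into the field, as an old batch layout might."""
    scaled = round(rate_percent * SCALE)
    if scaled < 0:
        raise ValueError("no sign position: a negative rate cannot be stored")
    if scaled >= 10 ** FIELD_DIGITS:
        raise ValueError("too many digits for the field")
    return str(scaled).zfill(FIELD_DIGITS)

print(encode_rate(4.25))       # '00425'
try:
    encode_rate(-0.5)          # negative rates were never anticipated
except ValueError as err:
    print(err)
```

The same shape of problem applies to the lira story, except there the value overflowed the digit count rather than the missing sign.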

Legacy systems were in the news in government IT this week as it was stated that there was no central register of such systems, that they are blocking data sharing and that there’s no plan to move off them. GDS, says Alison Pritchard, the interim leader, will be looking for money in the next spending review to deal with the problem.

This is, of course, an admirable aim. The trouble is, departments have been trying to deal with these systems for two decades – borders, immigration, farm payments, student loans, benefits, PAYE, customs etc all sit on systems coded in the 70s, 80s and early 90s. Legacy aka stuff that works. Just not the way we need it to work now.

Every department can point at one, and sometimes several, attempts to get off these systems … and yet the success rate is poor. Otherwise why would they still be around?

The agile world does not lend itself well to legacy replacement. Few businesses would accept the idea that their fully functional system would be replaced in a year or two with a less functional MVP. What would make the grade? How would everything else be handled? Could you run both in sync?

In the early 2000s a few of us tried to convince departments to adopt an “Egg” model and build a new business inside the existing business – one that was purely internet facing and that would have less capability than the existing systems but that would grow fast. Once someone (business or person) was inside the system, we would support them in that new system, whatever it took – but it would be a one way ticket. We would gradually migrate everyone into that system, adding functionality and moving ever more complicated customers as the capability grew.
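
A rough sketch of that one-way ticket, assuming a hypothetical eligibility check against what the new system can do so far; the capability names and customer records are invented for illustration.

```python
# Sketch of the "Egg" model's one-way ticket: a customer moves to the new system
# only once it covers everything they need, and they never move back. The
# capability set and customer records here are invented for illustration.

NEW_SYSTEM_CAPABILITIES = {"simple_return", "payment", "address_change"}

def can_migrate(needs: set) -> bool:
    """The new system must cover all of this customer's needs before they move."""
    return needs <= NEW_SYSTEM_CAPABILITIES

def route(customer: dict) -> str:
    if customer.get("migrated"):
        return "new"                        # one-way ticket: never back to legacy
    if can_migrate(customer["needs"]):
        customer["migrated"] = True         # move them in and support them there
        return "new"
    return "legacy"                         # too complicated for the new system, for now

simple = {"needs": {"simple_return", "payment"}}
complex_case = {"needs": {"simple_return", "foreign_income"}}
print(route(simple))         # 'new'
print(route(complex_case))   # 'legacy', until the new system grows that capability
```

As the new system’s capability set grows, more customers pass the check, which is the “gradually migrate everyone” part of the strategy.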

It’s a challenging strategy. It would have been easier in the 2000s. Harder now. Much harder. But possible. With commitment. And a lot of planning.

Venture Capital Project Management Model

Projects fail. Big projects, arguably, fail more often and with bigger consequences. The more projects in the hopper, the more failures you will see, in absolute if not percentage terms. Government, by its very nature, has thousands of projects underway at once. Back in the 2000s when I worked on mission critical projects with the good folks at Number 10 and (Sir) Peter Gershon, I think we had 120 or so in the first list we came up with. I would be surprised if the count is any smaller now.

Venture capital (VC) investments fail. Perhaps big investments fail more often. The important differentiator is that VC investments are usually made little and often – projects receive small amounts early on (usually from angel investors, who come before VCs) and then more money is gradually invested at higher valuations, as the company/idea grows and reaches various proof points. The last few years have seen this model strained as huge investments can be put into late stage companies (think WeWork … or maybe not).

This is quite different from how most projects are run. Projects go through lengthy due diligence phases up front, sometimes lasting a year or more (longer still when the project concerns physical infrastructure – railways, bridges, nuclear power plants etc). The output of that DD is a business case – and then the go button is pressed. Procurement is carried out, suppliers are selected, contracts are signed and government is on the hook, for 5, 10 or even more years.

Agile projects can be different in that contracts are shorter, but the business case generally supposes success (hence “optimism bias” as a key metric – if it goes wrong, then it just needs more money). But they can still carry a momentum with them which means that they carry on long after failure has become inevitable.

VC companies are more ruthless. They know, after years of measurement across the industry, that only one or two of their investments in any given period will count for the vast bulk of their returns. They call this “the hit rate” (Fred Wilson writes brilliantly about this, and many other things). Poor performers are culled early on – they don’t get additional funding. Sometimes the investment is in the team, and they are able to change their business idea (that is, “pivot”) and get to continue, but often the company is shuttered and the team scatter and move on to new ideas.

This brings a tendency to look for huge winners (or the potential for them) – the VC knows that they need to win big, so they look for ideas and teams who will produce those big returns. If they strike out, well, perhaps 8 out of 10 were going to break even or lose money anyway.

  • Data from Correlation Ventures suggests about half of investments made by VCs fail, and about 4% generate a return of 10x or greater
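
To see why the hit rate dominates, here is a toy portfolio calculation using assumed multiples that are broadly consistent with that shape; the exact figures are illustrative, not taken from Correlation Ventures.

```python
# Toy portfolio arithmetic with assumed return multiples, broadly matching the
# shape "about half fail, about 4% return 10x or more". Figures are illustrative.

portfolio = (
    [0.0] * 50 +    # roughly half of the investments are written off
    [1.0] * 30 +    # a chunk more or less return the money
    [3.0] * 16 +    # some do respectably
    [15.0] * 4      # the ~4% of big winners
)

total_return = sum(portfolio)   # each entry is a multiple of one unit invested
winners_share = sum(m for m in portfolio if m >= 10) / total_return

print(f"fund multiple across {len(portfolio)} investments: {total_return / len(portfolio):.2f}x")
print(f"share of all returns produced by the big winners: {winners_share:.0%}")
```

Even with generous assumptions for the middle of the distribution, the handful of big winners produce getting on for half of the entire return, which is why poor performers are culled rather than topped up.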

Is there, then, a case for treating government projects the same way? Perhaps we could back multiple, competing projects in the same space, and fund the ones that were proving the most successful? We would have to change the contracting model and include break clauses (not termination clauses as that, in the current vernacular, implies failure – we know that projects are going to fail, we just don’t know which ones).

Sure, we would “waste” some money doing it this way. But we already do – we think we are wasting it right near the end when the project has consumed all of its budget and has nowhere left to go, but, in reality, we’ve been pouring money into something that wasn’t going to succeed for months or years beforehand.

We could also copy the way some VCs back teams – that is, find teams who have successfully delivered and work to keep them together, moving them on to the next idea, because it may just be that success breeds success. Teams who have proven capable at £10m of project spend should get to play with £50m, then £100m and £200m. We could rotate new team members in to give them exposure to what success looks like, before splitting successful teams and giving them more to run.

In a VC-style mode:

  • Projects would receive initial funding based on their outline thinking – enough to get them through discovery
  • Senior leaders from unrelated projects would be appointed to the board of the project to help navigate early issues and think around corners
  • Additional money would be released stage by stage, with the size of the investment increasing as the project reached predefined proof points (there’s a sketch of this after the list)
  • Pivots – changes in approach – would be embraced as providing recognition that there was a different way to achieve the same, or a related, outcome, even if there was some loss of investment
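
A sketch of the stage-by-stage release in the third bullet, with invented stage names and amounts, purely to show the shape of the idea.

```python
# Sketch of stage-gated funding: money is released tranche by tranche, and the
# next tranche is unlocked only when the previous proof point has been passed.
# Stage names and amounts (in £m) are invented for illustration.

STAGES = [
    ("discovery", 0.25),   # enough to get through discovery
    ("alpha", 1.0),
    ("beta", 4.0),
    ("scale", 15.0),       # the investment grows as the project proves itself
]

def funding_released(proof_points_passed: int) -> float:
    """Total funding unlocked after the given number of proof points have been met."""
    # A project that stalls at alpha never draws down the beta or scale money.
    unlocked = STAGES[: proof_points_passed + 1]
    return sum(amount for _, amount in unlocked)

print(funding_released(0))   # 0.25  (discovery money only)
print(funding_released(2))   # 5.25  (discovery + alpha + beta)
```

Pivots fit naturally into this shape: a project can change approach between tranches without the whole envelope having been committed up front.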

There is an obvious challenge here. Moving away from IT, let’s say we were trying to build a bridge. It’s hard to fund that in stages (once you’re past feasibility and construction planning). It’s even harder to pivot – once you’re halfway across the chasm, you can’t change from suspension to cantilever, or from bridge to tunnel. Suppliers and partners to government like to know how big the funding envelope is and how long the project will last so that they can plan resources, notify the markets, invest in new capabilities etc. Departments like to do the same – they have team costs to cover after all. This will require some negotiations across government and industry, some changes to procurement thinking and the establishment of a portfolio where the funding envelope is the portfolio, but there can be transition (of funds, people, suppliers and scope) between projects within the portfolio.

We shouldn’t be afraid of losing money because, just as in VC portfolios, not every plan is going to succeed; but we should be afraid to keep losing money if the plan isn’t working.

The current model, even with all the agile changes in the last decade, isn’t working as well as it could. There’s a reason that VC companies manage a portfolio – they know that they have to spread their capital quite wisely. Our project management approach feels more like passive investment in an index, rather than active management of a portfolio. We need to make some changes.

Wading Through Treacle

Long ago, a wise and experienced civil servant cautioned me upon my arrival at the Inland Revenue that my job would often feel like “wading through treacle.” He wasn’t wrong, though he possessed the supreme skill of somehow skating over that treacle and so got things done.

I often think back to that quote. It’s not unfair to say that the civil service is heavily driven (some would say constrained) by bureaucratic and cumbersome processes that, if anything, are designed to contain change rather than enable it. “Governance” is the thing – slow down the ability to do things and certainly the ability to spend money and thus keep everything on an even keel. Of course, all of that governance and process designed to slow the spending of money has not prevented eye watering project failures and associated write-offs.

More recently the move to more agile methods has resulted, in some places and in some ways, in those processes being thrown away. More lightweight processes have replaced them, with the aim of giving product owners and developers more freedom to get things done.

The trouble is, the new and old processes usually come together at key points – most obviously at the business case sign off stage. There, it’s more like two strips of velcro coming together and interlocking so perfectly that nothing moves – forwards, backwards or sideways.

I recently wrote a standard “5 case” business case for an iterative and highly experimental project (at a relatively low spend – certainly one of the smallest projects I’ve ever worked on, in public or private sector). It’s tough to write such a thing when you’re not sure what the final outcome will be – on the basis you’re running an experiment and will certainly have to change course during the project, and probably several times. It’s also tough to evaluate competing options in such an environment. And yet such documents are part of the rite of passage for a project.

“We must demonstrate Value for Money” is the mantra. Technically, you can only do that in the past tense – show what value you have delivered, not show what value you will deliver.

Business cases for such projects can’t be done of course, at least not in the expected, supposedly gold standard, 5 case way. Efforts have been made to update the templates, but they are still far from adequate where spend certainty increases with the time spent on the project (subject to good management) but scope uncertainty can remain high throughout.

I fear that, whilst this is a process ripe for, ahem, transformation, it’s not likely to be on the list of processes successfully transformed any time soon.

Projects as Films

Today’s Financial Times notes

2% make it to the cinema and perhaps 1/3rd of those are profitable. Those are long odds.

There are likely 3 rules of cinema:

1) Make something you, or someone else, has already done (hence why we see so many remakes and sequels … Toy Story 4, endless Marvel movies)

2) Produce a film with people you’ve worked with before (which is why proven directors and actors get repeat work)

3) If you’re going to do something completely new, start small and don’t risk a lot (hence low budget independent films)

Projects, which have perhaps the same, or maybe even worse, success rates, have the same rules:

1) Do what someone else has already done and stick as closely to the script as possible (cloud technologies and configurable apps versus custom projects)

2) Keep the same team around you, as long as they have been successful, because you know how they work and they know how you work; trust the team to solve the problems they know how to solve … bring in new people to keep things fresh but don’t go for wholesale swaps

3) If you really want to do something novel and different, start small and don’t spend a lot of money … especially if you’re working with people you’ve never worked with before

Those 3 rules, a version of which I wrote on this blog many years ago, could massively boost your project success rate.

Delivery and Performance Down Under

An interesting read, via @paulshetler, today covers the setting up of a new piece of governance in New South Wales: the “Delivery and Performance Committee” (DaPCo).

The best quote from the release, by Victor Dominello, NSW Customer Minister, is easily:

This reform – is cultural and it is whole-of-government – it is the hard stuff, the messy and complex innards of government that nobody likes to talk about. It’s not shiny but it’s one of the biggest enablers for digital transformation and service delivery – which is why we’re committed to getting it right.

The committee plans to ask the hard questions on delivery, drive adoption of a Netflix-like approach (“test and tweak services in short delivery cycles based on customer feedback”) and improve the “tell us once” functionality.

Importantly, they say that the model will be replicated at the Federal level, with “Services Australia”, previously known as the Department of Human Services.

And then this:

With our counterparts in the federal government, we’re making big advances in designing services around complex life events – we’ve already launched a prototype to help people through the end-to-end journey at pivotal moments in life, like what to do when somebody dies, so you don’t have to go to 10 different government departments

For a moment I thought it was 2001 all over again and UKonline had moved down under.

It all sounds good. Just a few thoughts:

  1. A committee? That sounds like a challenging way to manage an agile, fleet-of-foot, iterative delivery cycle based on customer feedback.
  2. What’s the first project or policy it will start with, and is it policy focused, technology focused, solution focused, delivery methodology focused or all of the above?
  3. What’s the lever by which the work gets done once the committee pronounces? How will they tell everyone what the new guidance is, so that people don’t waste time preparing the wrong solution that the committee then rejects?

Interesting stuff. Would be good to compare before and after, if it can be assessed transparently. Lots of effort has gone into making a similar switch in the UK, of course, but the translation into real improvement for transactional services is hard to see except in a few really strong cases.