Digital transformation

Digital transformation is everywhere.  There are blogs dedicated to it.  Organisations like Forbes, Harvard Business Review, Accenture, Forrester, Deloitte and EY have all published articles, surveys and commentaries about it.  But what actually is it?

Although definitions of Digital Transformation differ between companies, executives, industries and sectors, the common thread seems to be:

“Digital transformation uses technology as a means, not an end.” – Forbes

And, for a working definition:

“Digital transformation is capitalising on the power of technology to revisit business models, acquire customers through new channels and create essential user experiences.” – EY

Of all the definitions I’ve seen, and there are many, this is one of the best.


Because at the core of it lies the real business objective: Acquire customers through new channels.

And that’s where the technology comes in, because “new channels” tends to mean via the cloud, through an app, on your mobile device, at home, while on a bus, etc.

But, and here’s the thing, you can’t just elbow in new technology and boom – announce to the world that you can now make a mid-term adjustment on your Apple Watch.

It doesn’t work like that.

Before you can use new technology to acquire customers through new channels, you need to revisit your business models.

And what does that mean?

It means re-evaluating your goals and objectives, building a Target Operating Model for your new way of life, mapping out and prioritising change initiatives, at a portfolio level, and starting on a journey to deliver organisational change, process change, technology change, cultural change and – hopefully – fortune change.

So the Digital Transformation is not driven by the technology, per se, it’s driven by the company’s desire to acquire customers through new channels.


Because there is a perception that improving the ease with which a company’s customers can do business with them, making doing business more convenient, more joined up with other related products and services (in other words creating essential user experiences) – will be the discriminating factor in the (very) near future.

And, as such, a potentially large, untapped market to boot.

So really, you can’t do a Digital Transformation without doing a normal, run-of-the-mill, standard business transformation.

And a Business Transformation, let’s face it, is Business As Usual (BAU) to a Change team.

It’s what we do.

We just need to remember that “Digital Transformation” is not just about the technology.

Far from it.

The Sunk Cost Fallacy

“Individuals commit the Sunk Cost Fallacy when they continue a behaviour or endeavour as a result of previously invested resources.” (Arkes & Blumer, 1985)

If we’re honest, I suspect that we would all like to think that given a situation and presented with a choice, we would make a rational decision, based on the future value of the endeavour in question.

Consider climbing Mount Everest.

  • Situation: After 6 months of planning, £70,000 of investment, 6 weeks of acclimatisation and 3 weeks of climbing, the summit is only another 2 hours away.   However, we must get back down to the High Camp within 3 hours from now, otherwise we’ll be stuck overnight in the Death Zone, which sounds nasty.
  • Rational decision: Go down.  Do not attempt the summit.  If it’s 2 hours up and 2 hours down, we won’t make camp in time, and we’ll be stranded in the Death Zone – with no hope of rescue.  It’s suicide.
  • Sunk Cost Fallacy:  I’ve invested so much of my life over the past year to get here, I’ve spent £70,000 and I’m within 2 hours of standing on the top of the world – I’m not going to throw all that away when I’m so close!  I’m continuing up.

It seems such an obvious choice to make when you see it in print, and when you’re not 25,000 feet up the tallest mountain in the world!

But the Sunk Cost Fallacy strikes when the mounting emotional investments that you have made over time begin to taint your decision making ability – because the more you emotionally invest in something, the harder it is to abandon it.

In economics, a sunk cost is a cost that has already been incurred and cannot be recovered.  It’s gone.  Spent.  No longer in the equation.  Since it has gone, it should no longer feature in a company’s decision making processes for the future.

In the Everest example, the fact that you’ve spent £70,000 is irrelevant when you are at 25,000 feet deciding on whether to risk your life trying to summit the mountain.   The £70,000 has been spent.  The cost has been incurred, and cannot be recovered.   Since it has gone, it should not feature in your decision making processes for the future – which in this case seems quite short.

There are many examples of companies sticking to their guns, when it would have made more sense to rethink their strategy.  Nokia and Kodak are examples:

  • Nokia.  Smartphones appear on the market at a time when Nokia is riding high in the mobile phone market.   Apple launches the iPhone with iOS, and Google launches the Android OS.   Nokia decides to invest heavily in Symbian – its own proprietary mobile OS.  The result is that Nokia suffers huge financial losses, and sells its mobile division to Microsoft.   Nokia got caught in the Death Zone.
  • Kodak.  Digital cameras launch in the market, replacing film with removable memory.   Kodak decides to invest in film, and to market film as the superior product for producing quality images.   The result is that Kodak files for bankruptcy.  Kodak got caught in the Death Zone.

There have been experiments to test the human propensity to cling to our emotional investments, regardless of the current adverse conditions.   In one experiment, a $5 bill was auctioned with the following rules:

  1. The highest bid secures the $5 bill.
  2. The second highest bidder loses their bid.

The bidding started at 2 cents, with several people placing bids.   Clearly people felt that it was worth trying to purchase the $5 bill for less than its face value.   As the bidding got closer and closer to $5, more and more people dropped out.  Remember, if you are the second highest bidder, you lose your bid.

The bidding continued and increased to beyond $5.  The two bidders left became locked in – neither wanted to come second and lose their bid.  They had invested emotionally.   The winner would now have to spend more than $5 to win the $5 bill, and therefore make a small loss, but the bidder who came second would lose everything.

Eventually, the bidding broke $10, reaching $10.25. The winner made a loss of $5.25, and the bidder who came second made a loss of $10.00.  The experiment was repeated over and over again with different groups, and in every case, the same thing happened.
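The arithmetic of that final round can be sketched in a few lines (the figures come from the experiment above):

```python
# Arithmetic of the final dollar-auction round described above.
bill_value = 5.00     # face value of the auctioned $5 bill
winning_bid = 10.25   # highest bid: wins the bill, but pays the bid
second_bid = 10.00    # second-highest bid: pays the bid, wins nothing

winner_loss = winning_bid - bill_value  # pays 10.25, receives 5.00
runner_up_loss = second_bid             # forfeits the entire bid

print(f"Winner's loss:    ${winner_loss:.2f}")     # $5.25
print(f"Runner-up's loss: ${runner_up_loss:.2f}")  # $10.00
```

Once the bidding passes $5, both remaining bidders are guaranteed a loss – the only question is how big.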

The more people invested emotionally, the more difficult they found it to abandon the bidding – regardless of the nonsensical and irrational bids they were making.

Sunk Cost Fallacy can also be found in the world of Change Management.   Consider a project which has been running for some time.   There have been several change requests along the way, asking for more funding because things have changed: the market, the requirements, the cost, the time required to deliver due to unexpected complexity, etc.

The diligent project manager pulls together a change request, to present to the Change Board.  More funding required…

As I said at the top of this blog, I suspect that we would all like to think that given a situation and presented with a choice, we would make a rational decision, based on the future value of the endeavour in question.

But how many times has a Change Board ended up bidding $10.25 for a $5 bill? Or continuing to invest heavily in film instead of digital?  Or continuing to climb, when the only real rational choice is to cut the losses and head down to camp?

The mythical Lessons Learned Log

How many times have you heard someone say “Stick it on the lessons learned log”?

And how many times have you actually known someone to put something on it? What is the lessons learned log anyway?

Improving the way we do things is important. Critical even.

But often, particularly with a waterfall approach, learning the lessons of the past can be made more difficult because by the time we’re ready to take heed of such insights, we’ve forgotten the original context.

And more to the point, it’s not just about “sticking it on the lessons learned log”. It’s about learning the lessons. It’s about adapting behaviours. Changing the way we do things.

A lesson isn’t learned simply because it’s on the log. A lesson is learned when we take heed, and do things differently.

With agile it’s different, because it’s built into the methodology itself.

Kaizen, the Japanese word for “good change” (Kai = Change, Zen = Good), is used to describe the concept of continuous improvement through the technique of “inspect and adapt” – often run as a Retrospective.

Scrum, for example, has four points where Kaizen is written in to the script: the Sprint Review, the Sprint Retrospective, Sprint Planning and the daily standup.

If you assume that a Sprint is, say, 3 weeks long – that’s a lot of time and effort dedicated to working out how to get better. And then actually getting better.

Continuous improvement is a useful mantra, but “relentless improvement” drives the message harder – which is why I prefer it.


Because when timescales are tight and the pressure is on, it is (ironically) the Retrospective which is often sacrificed.

Better stick that on the lesson learned log.

Risk = Threat x Vulnerability

I once came across an equation that described Risk in terms of Threat and Vulnerability. The equation is:

Risk = Threat x Vulnerability

From this equation, we can see that if there is no Threat (i.e. Threat = 0) then there can be no Risk.   It doesn’t matter how vulnerable you are, if there is no Threat, you are not at risk since Risk = (0 x Vulnerability) = 0.

Similarly, if we are not vulnerable to a threat (i.e. Vulnerability = 0), then there will be no Risk either.  It doesn’t matter how big the threat is, if you are not vulnerable to it, then you are not at risk since Risk = (Threat x 0) = 0.

We can use this equation as a core element of managing project risk.

For example:  If there is a Threat of rain, and I am Vulnerable because I don’t have an umbrella, then there is a Risk of me getting wet.


If there is a Threat of rain, and I have the biggest and best umbrella on the planet, then I am not Vulnerable and hence there is no Risk of me getting wet.


If there is no Threat of rain, then even if I have the biggest umbrella on the planet I am not Vulnerable, and hence there is no Risk of me getting wet.
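The three rain scenarios above can be sketched in a few lines. The 0-to-1 scores are illustrative assumptions, not part of the original equation:

```python
# Minimal sketch of Risk = Threat x Vulnerability, scoring each factor
# between 0 (none) and 1 (certain / fully exposed).

def risk(threat: float, vulnerability: float) -> float:
    """If either factor is zero, the risk is zero."""
    return threat * vulnerability

print(risk(threat=0.8, vulnerability=1.0))  # rain likely, no umbrella: 0.8
print(risk(threat=0.8, vulnerability=0.0))  # rain likely, best umbrella on the planet: 0.0
print(risk(threat=0.0, vulnerability=1.0))  # no threat of rain: 0.0
```

Whatever scale you choose, the multiplication captures the key property: zero out either factor and the risk disappears.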

This gives us the basis for providing a solid structure for describing project risk.

And why do we want to do that?  Because in my experience, the general level and quality of project risk descriptions is lamentable.  I’ve seen plenty of organisations in which risks might have been written as “It may rain” or “We may lose a key resource”.  Neither of these risks is complete, because each leaves us wondering what the impact is.  We are drawn to ask “So what?”.   “The policy wordings may not be completed on time”.  So what?  Who cares?  Why should we worry?  It doesn’t tell us.  We are none the wiser.

So we want to structure risk descriptions in such a way that we don’t need to ask “So what?”, using Threat, Consequence, Vulnerability, Impact and Outcome.

[There is a possibility that it may rain] (Threat).  If so, [I may get wet] (Consequence) because [I have no umbrella] (Vulnerability).  This may [ruin my hair] (Impact), the outcome of which would be [increased cost and delays] (Outcome).

So the formula is:

[Threat].  If so, [Consequence] because [Vulnerability].  This would [Impact], the outcome of which would be [Outcome].
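As a sketch, the formula lends itself to a simple template. The helper function here is hypothetical, just to show that a complete risk statement can be assembled from the five elements:

```python
# Hypothetical helper that assembles a risk description from the five
# elements of the formula above, so no reader is left asking "So what?".

def describe_risk(threat, consequence, vulnerability, impact, outcome):
    return (f"{threat}. If so, {consequence} because {vulnerability}. "
            f"This would {impact}, the outcome of which would be {outcome}.")

statement = describe_risk(
    threat="There is a possibility that it may rain",
    consequence="I may get wet",
    vulnerability="I have no umbrella",
    impact="ruin my hair",
    outcome="increased cost and delays",
)
print(statement)
```

If you can’t fill in all five slots, you don’t yet understand the risk well enough to log it.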

Now we can revisit the policy wordings risk:

“There is a possibility that the policy wordings may not be completed on time. If so, the project will run late because the policy wordings are on the critical path. This would mean we would miss the regulatory deadline, the outcome of which would be financial loss, reputational loss and penalty sanctions imposed by the regulators.”

So, what is the risk?

Is the risk that the policy wordings may be late?  Or is the risk that we might suffer financial loss, reputational loss and penalty sanctions?

I would expect to see the risk short-name written as:

“Penalty sanctions due to policy wordings not completing on time”.

In other words the Risk is Outcome due to Threat.  And, as we saw earlier, if you reduce the vulnerability (i.e. the policy wordings are not on the critical path), then there will be no impact (we won’t miss the regulatory deadline), and hence no adverse outcome (we won’t get fined).

These may be trivial examples, but they illustrate the elements of a risk sufficiently well.

Of course, there may be other attributes we want to capture about a risk, to help us manage it.

For example:

Impact Score (1 to 5) – a range which describes the “impactfulness” or Scale of Impact of the risk. You might also attribute statements which are meaningful to your business against each value. For example: “5 – Has significant compliance or regulatory impact”.

Probability Score (1 to 5) – a range which describes the probability of an impact (due to a risk) occurring: “1 – very low chance”, “5 – Very high chance”, etc.

Scope – an indication of how far-reaching the impact might be, were it to occur. For example, Local (Impact might be absorbed by the project), Project (Impact may affect other projects), Programme (Impact may affect projects in other programmes), etc.

Impact Date – the specific date on which the impact may occur, if known (e.g. “Y2K”).
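These attributes sit naturally in a simple record. The field names and example values below are assumptions for illustration, not a standard:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record for the risk attributes described above.

@dataclass
class RiskAttributes:
    impact_score: int        # 1 to 5, e.g. 5 = significant compliance or regulatory impact
    probability_score: int   # 1 = very low chance ... 5 = very high chance
    scope: str               # "Local", "Project", "Programme", ...
    impact_date: Optional[str] = None  # specific date if known, e.g. "Y2K"

# Illustrative scores for the policy wordings risk.
wordings_risk = RiskAttributes(impact_score=5, probability_score=3, scope="Programme")
print(wordings_risk)
```

A structure like this makes the organising, categorising and filtering straightforward; the hard part remains deciding what to do about the risk.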

These attributes can be used to organise, categorise, filter, but the real question is:

What are we going to do about it?

What can we do about it?

Well, that depends on the risk, of course, but generically speaking we can mitigate the risk by a) reducing our vulnerability; or b) reducing the threat.

Reducing our vulnerability

How do you “reduce vulnerability”?

In the rain example, we were vulnerable because there was a threat of rain and we didn’t have an umbrella. Chances are we’d get wet, ruin our hair, be late and incur unplanned cost.

So to reduce our vulnerability, we can buy an umbrella. That too will incur a cost.

And we need time out to buy it, undergo training, testing maybe.

In other words, this mitigation strategy may need some governance of its own – requirements, a plan, procurement, testing, implementation – in order to reduce our vulnerability to the threat.

Another mitigation strategy may be to “Stay inside”. No procurement, no testing, no plan, no requirements.

So in order to mitigate the risk by reducing our vulnerability to a threat, we may need to launch a change initiative – with all the governance that implies – or it may simply mean reconsidering our plans and “not going out”. The choice is down to the project leadership team.

Reducing the Threat

How do we reduce the Threat? Can we reduce the threat?

Again, in the rain example, “reducing the threat” translates as “reducing the chance of it raining.”

Which is probably a little tricky.


Because it’s an External threat, over which – as mere mortals – we have little or, let’s face it, no control.

So what about Internal threats?

Internal threats are those over which we have some control.

The wordings example states that the “policy wordings may not be completed in time.” Why is that? Is there anything we can do to reduce the likelihood of this situation arising?

Well, we could allocate more staff to the job of writing the wordings. We could reduce the burden of other non-critical work so that the wordings team can concentrate on the critical activities.

And there may be other things we can do too to reduce the internal threat. Why? Because we have some control over them.

Of course, there are many other aspects to managing risk which I have not touched on here.

The point is though, that unless you understand all of the elements of a risk (internal or external threat, vulnerability, impact, outcome, consequence), it’s difficult to justify your strategy to mitigate against it.

And it’s difficult too to really understand what you’re mitigating against.

“It may rain” doesn’t hack it.

RAG is not Black and White

Most of us are familiar with the technique of describing the status of a project by using the three traffic light colours: red, amber and green. The RAG Status.   But although the use of RAG is ubiquitous, the way it’s calculated and the meaning of the colours are not. Nor are the intended impact and likely reaction from senior leadership.

In fact, were you to cut a portfolio of projects from Company A and paste them into Company B and then reassess the RAG for each project using the prevailing method – things could get messy.

At a simplistic level, of course, green indicates that everything is ok, amber indicates that there are concerns and red indicates that something’s gone wrong.

Using a RAG status is a quick and visual way of indicating the current status of things.

So long as everyone interprets the colours the same way.

Some organisations use complex algorithms to calculate a RAG status, based on underlying data from the project in question, often calculated automatically by some sort of Portfolio Management System.

Others use a threshold-based approach: a project which is x% over a pre-stated threshold is Amber, y% over is Red, etc.

I’ve known some companies that refer to a “RAG status”, but use yellow instead of amber.  An RYG status, more accurately.

Some other companies I’ve worked for use blue too.  The blue representing “nearly red”.  An RBAG status.

However a company uses colours to represent the status of a project, the critical thing is that everyone understands what the colours represent, that the method of “calculation” is consistent throughout the organisation (across borders, geographies and timezones), and that the leadership response to each is also consistent.

You wouldn’t want, for example, one director to dismiss a Red status as irrelevant and unimportant and another to go into DEFCON 3.

I know one organisation, for example, where the US parent company has a fixed-threshold approach to calculating an RYG status, and their European counterparts use a different system.

On the US side, the financial threshold for going into a Red status is set at 10% over budget.  In other words, if a change initiative is 0 to 9% over budget it is Yellow, 10% and over it is Red.
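The US side’s fixed-threshold rule can be sketched in a few lines. Note that treating an on-or-under-budget project as anything in particular is partly an assumption – the rule only defines the Yellow (0–9% over) and Red (10%+ over) bands:

```python
# Sketch of the US fixed-threshold RYG rule described above.
# Green for an under-budget project is an assumption.

def ryg_status(percent_over_budget: float) -> str:
    if percent_over_budget >= 10:
        return "Red"
    if percent_over_budget >= 0:
        return "Yellow"
    return "Green"

print(ryg_status(12))   # Red
print(ryg_status(5))    # Yellow
print(ryg_status(-3))   # Green
```

The appeal of this approach is that it is mechanical and consistent; the drawback, as we’ll see, is that it has no room for context.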

On the European side, however, a project goes into a Red status if the project manager thinks it should, and the Sponsor agrees – all based on a rational consideration of the current state of play and underlying metrics of the project.

In this case, the definitions in use were (I paraphrase):

  • Green – Everything is cool and groovy, hunky dory, no issues, going as planned.
  • Amber – Although we’ve gone off piste, we have spotted a way to get back on course. Things are in hand. Leadership is comfortable. Remain alert and vigilant.
  • Red – We’re off piste, and have not yet got a plan to get back on course. All hands on deck, look at options, decide best approach forward, regroup, cancel lunch.

There is nothing wrong per se with either approach. Or to put it another way, both are equally valid.

The problem is one of interpretation. Of reaction. Of intent.

Consider a project in which the year-to-date cost is 10% over budget. Let’s say that according to the automatic threshold approach, it’s Red. And let’s also say that we knew we were going to go over budget at this point due to various cash flow factors and risk mitigation strategies we have had to deploy. Sponsor is comfortable. Senior leaders are comfortable. Project Manager is comfortable. Finance is comfortable.

The point is, we knew we would go over budget by 10% for a short period. You could say – in a sense – don’t all shout at once – we even planned to go over budget for a short period of time.

Are we really Red then?

The system says Red because we crossed a threshold. But what are we going to do about it? What is our intent?

Nothing, because things will right themselves next month?

The danger is that we end up spending lots of time and energy explaining that we’re not really Red, and things will fall back into line, and it’s just a cash flow issue, and we knew it would happen, etc.

So in effect, ignore the Red status for this month.

That sounds all wrong.

Now consider the same scenario, except that the RAG status is agreed manually between the project leadership and the Sponsor.

“Are we Red?” asks the Sponsor.

“No,” says the Project Manager.

“Why not?”

“We’re off-piste for sure, but we knew we would be, and we know it’s just a cash flow issue which will resolve itself next month. There is no other action to take. We have a plan. Let’s monitor things until it rights itself.”

“Amber then?” suggests the Sponsor.


The difference here is that the RAG status has been discussed by the project leadership and the Sponsor, and they have agreed a considered approach to leave it Amber.

And how do we make sure that there is consistency across other teams, projects, programmes, Sponsors? We make sure that the PMO acts as the Conscience of the business. The normaliser. The referee. The quality assurance team.

This approach provides a framework for a project team and sponsor to discuss the current state of play of a project, to consider all of the underlying issues and metrics, and to come to a considered opinion.

RAG is not black and white. There are exceptions, there are nearlys, there are “yes, buts”.

Actively deciding that a project is Red carries much more weight than an automatic calculation.


Because you’ll know we mean it. And if it means preparing for DEFCON 3, then prepare for DEFCON 3.
