Showing posts with label Risk. Show all posts

Sunday, April 27, 2025

When is a risk not?

When someone pretends that an uncertain benefit is a 'risk'.

So, let's get this straight: risk represents the probability of a threat to value combined with the cost of that threat materialising.

A risk is a negative thing. Not a positive, no matter what the Taylorist fantasy world of PMBOK says.

An unexpected beneficial effect may or may not arrive, but the action needed to absorb the benefit might itself be THE risk: change of WBS, change of activity network, change of skills needed, change of procurement approach, change of finance/funding/budget. That's the problem with an unexpected benefit.

Then, what does the ConOps have to say about this new 'benefit'? What functions does it serve or enable?  What are its strategic/value implications? What is its net value?

Saturday, December 3, 2022

Learning from flying

I enjoy watching a TV series "Air Crash Investigations".

Initially, without having seen it, I thought it would be a sort of tabloid sensationalized program to attract eyeballs for advertisers; nothing wrong with that, of course, that's how TV works.

But it is not. It is a seriously crafted documentary series with excellent production values and carefully developed analysis of crash investigations.

A project manager can learn from most episodes, or use them to illustrate lessons about good project management practice.

Various episodes have taught:

Welcome Bad News

Good crew communication is vital, with no member afraid to bring an error, a risk or a threat to the attention of the other crew members: bad news has to be welcomed by everyone.

Use Checklists

Checklists: make them, use them, and keep revising them as they are tested in use.

Communicate reliably

Before landing, the flight crew does a landing briefing for the particular airport. Every time. One crew skipped it, confused their roles on the flight deck, made a wrong decision and crashed into a mountain. All dead.

Disambiguate language

The famous Tenerife disaster seemed to arise in part from ambiguity in terms. The captain of the plane that collided with another aircraft on the same runway and the ATC used the term 'clear for takeoff' at cross purposes. The captain appeared to interpret it as 'off you go'; the ATC meant it as 'stand by for take-off'. The ATC should have said 'hold for departure', as he knew another plane was on that runway. The captain would then have confirmed 'holding for departure' and awaited the clearance 'clear for take-off', which would signal a clear runway as well as clearance for the flight plan post departure.

Use fail-safe markers

A couple of episodes featured problems with pitot tubes or static ports being blocked, feeding false airspeed and altitude readings to the crew and leading to crashes.

One case was blockage by insects during storage; in the other, the openings were taped over during a maintenance session. These sensors are critical to aircraft operation. On both occasions the openings should have been fitted with conspicuous covers tagged 'Remove before flight'.

Do not 'multi-task' because you cannot!

The futility (and fatal consequences) of 'multi-tasking'. In an environment demanding focused attention, one cannot split attention between disconnected streams of 'flow'. In one episode an air crew broke the 'sterile cockpit' rule at takeoff. While they worked down the takeoff checklist, a flight attendant, a personal friend, entered the cockpit and they all chatted about dinner at the destination. Then the checklist was completed...but it wasn't; critical items had been omitted. The plane then proceeded to take off without the flaps being extended. It crashed with almost total loss of life.

Always use units when giving quantities

An aircraft was refuelled at an airport during a transition from imperial to metric units. A flight requiring 1000 kg of fuel was provided with 1000 lbs. No units were used on the documentation, leading to the miscommunication. Plane fell from sky.
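The lesson generalises to any system that handles quantities: carry the unit with the number so a mismatch is caught rather than silently absorbed. A minimal sketch in Python; the `Mass` type and the figures are illustrative, not from any real loadsheet:

```python
from dataclasses import dataclass

LB_PER_KG = 2.20462

@dataclass(frozen=True)
class Mass:
    """A quantity that refuses to lose its unit."""
    value: float
    unit: str  # "kg" or "lb"

    def to_kg(self) -> float:
        if self.unit == "kg":
            return self.value
        if self.unit == "lb":
            return self.value / LB_PER_KG
        raise ValueError(f"unknown unit {self.unit!r}")

required = Mass(1000, "kg")   # what the flight needed
delivered = Mass(1000, "lb")  # what the paperwork-without-units produced

shortfall_kg = required.to_kg() - delivered.to_kg()
print(f"Shortfall: {shortfall_kg:.0f} kg")  # the 'no units' error made visible
```

Tagging the unit turns a silent halving of the fuel load into an explicit, checkable shortfall.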

Never assume/don't break the rules

A crew routinely minimised fuel load on a particular flight. They then defaulted to that risky routine and loaded the same fuel for a flight on the same route in the reverse direction. Because the departure airport was at a lower altitude than usual, more fuel was needed for the increased climb. The plane ran out of fuel and fell from the sky. All dead.

Confirmation bias

Another fuel story. A flight lost its bearings due to instrument failure, but sought to regain them by trying to pick up a particular radio station. The crew caught a broadcast they took to be from the station they assumed, and navigated from that. They were wrong: it was a station hundreds of miles from where they thought they were. They didn't cross-check the assumption and had no 'devil's advocate'. They crashed into unexpected terrain: a mountain. All dead.

It's always the system/drive out fear

The aim of crash investigations is not to apportion blame, but to find causes. The fault needs to be found and eliminated, and the faults fall into one of three classes:

  • technical: equipment or performance failure
  • systems: processes don't connect with each other adequately or are internally deficient
  • management: recruitment, training, coordination, resourcing

Only on the rarest of occasions are people taken to court. The aim of investigations is to learn, to increase safe performance; so while participants might be concerned that they will be held responsible for an error, they seem assured that honesty is essential and that only the most egregious personal negligence might bring consequences.

As Deming says: 'drive out fear'. Fear in an organisation or process leads to stifled communication, error, deceit, concealment, and inevitable failure. See the first lesson: Welcome Bad News. As Deming also says, if there's a problem in an activity or organisation, first examine the system that produces it: the system is probably the cause.

The Flaw of Averages

Never assume that an average multiplied by the number of units will be accurate. Remember the normal curve. Remember standard deviations. It can all go horribly wrong if you apply an average to a small sample or population.

One small aircraft's load was assessed based on the average mass of a passenger and the average mass of luggage per person. That might have worked if there were 200 passengers, but there were only 16. The aircraft was overloaded and crashed on takeoff. Everyone died.

Another factor was that the average was out of date and didn't take into account the gormandizing tendency of too many modern Americans.
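The arithmetic of the trap can be sketched. If the loadsheet uses an outdated standard average while the true passenger-mass distribution has drifted upward, a small passenger count gives the averages almost no chance to 'wash out'. All figures below are invented for illustration, not the actual accident data:

```python
import random

random.seed(1)

ASSUMED_AVG_KG = 77   # outdated standard average per passenger (assumed figure)
ACTUAL_MEAN_KG = 88   # drifted true mean (assumed figure)
ACTUAL_SD_KG = 18     # person-to-person spread (assumed figure)
N_PASSENGERS = 16
N_TRIALS = 10_000

# What the loadsheet claims the passengers weigh in total.
planned_load = ASSUMED_AVG_KG * N_PASSENGERS

# Simulate many flights of 16 passengers drawn from the true distribution
# and count how often the real load exceeds the planned one.
overloads = 0
for _ in range(N_TRIALS):
    actual = sum(random.gauss(ACTUAL_MEAN_KG, ACTUAL_SD_KG)
                 for _ in range(N_PASSENGERS))
    if actual > planned_load:
        overloads += 1

print(f"Planned load: {planned_load} kg")
print(f"Chance actual load exceeds plan: {overloads / N_TRIALS:.0%}")
```

With a stale average and only 16 passengers, the plan is almost guaranteed to understate the load; with 200 passengers the relative spread shrinks, but a biased mean still biases the total.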

Fixation

If you concentrate too hard on one thing (over-focus) you could die. One airline captain concentrated so hard on his altitude that he forgot to pay attention to his airspeed. I think they all died too.

You need a few systematic preventatives. One project rule: anyone can bring bad news at any time to anyone...and be rewarded for doing so (i.e. 'Thanks Kevin, I'm glad you spotted that.'). Anyone who disparages bad news leaves the project. It could be fatal.

Similar to fixation error is continuation bias. Here a person is inclined, sometimes sub-consciously, to continue on a course of action that objective signs indicate is the wrong course. The tip: when things change, check all the antecedent conditions and the possible consequences.

Fatigue, lack of sleep. No one can 'tough out' tiredness. Attention fails, reactions slip, critical evaluation goes out the window. You are more useless than a drunk, because at least a drunk is obvious. In one episode, fixation on the part of the captain, and the co-pilot's failure to notice three separate obvious signs of danger, led to a crash.

Wednesday, April 12, 2017

United we fly, united we fall

In case you missed it, United Airlines is now famous for roughly hauling an elderly passenger off a plane that it overbooked.

For those interested in how business handles risk, this is an interesting case: one 'difficult' passenger, a routine request (I would surmise that the 'bumping' provision is included in the ticket contract), staff with poor PR skills, no ability to increase the value of the alternative offer, a CEO who doesn't know the business and regards passengers as freight...and it all unravels in share price and ramifications in China...they picked the wrong passenger. A small cascade of minor errors potentially wipes huge value off the company. I bet they didn't cover that one in their pretty risk matrix.

One has to chuckle.

Saturday, March 25, 2017

About Risk

I accompanied some junior colleagues to a short course on risk management.

It was, unsurprisingly, the same old same old: it culminated in the presentation of a 5×5 matrix with some lovely colours and labels from low to high.

What was missed were the unavoidable error bands around any point on the map, which make it impossible to differentiate between a medium and a high risk: the error bands overlap.

The result would potentially be misallocation of resources, and the inability to respond to actual risk in the real world appropriately. This is 'the risk in risk matrices'.
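The overlap is easy to demonstrate. Take two risks that a 5×5 matrix would file in different cells, put plausible error bands on the probability and cost estimates, and compare the resulting score intervals. The numbers below are invented for illustration:

```python
def score_interval(p, p_err, cost, cost_err):
    """Range of probability-times-cost scores given estimation error."""
    lo = (p - p_err) * (cost - cost_err)
    hi = (p + p_err) * (cost + cost_err)
    return lo, hi

# A 'medium' cell and a 'high' cell on a notional 5x5 matrix
# (cost in arbitrary $m units; error bands are assumed).
medium = score_interval(p=0.3, p_err=0.15, cost=4.0, cost_err=1.5)
high = score_interval(p=0.5, p_err=0.15, cost=6.0, cost_err=1.5)

overlap = medium[1] > high[0] and high[1] > medium[0]
print(f"medium score range: {medium}")
print(f"high score range:   {high}")
print(f"intervals overlap:  {overlap}")
```

The 'medium' risk's upper bound sits well inside the 'high' risk's range, so the coloured cells promise a precision the estimates cannot deliver.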

An excellent corrective to the reflexive reliance on the rough stepped matrix is Matthew Squair's blog Risk and the matrix. I commend it to you.

Risk is better mapped, in my view, as a series of 'isorisk' lines to guide the analyst in their determination of a risk response. Risk is not cut and dried.

Isorisk Map



Wednesday, January 18, 2017

What do you mean, 'value'?

In risk analysis we sometimes multiply the probability of occurrence by the event loss to obtain an expected event loss for the risk. Thus, if the probability of cladding collapsing is 1%, and the cost of the collapse (clean-up, insurance premium, make-good) is $10m, then the expected event loss is $100k. Not much, and across a portfolio of risks it indicates the overall budget risk...of course, you include that in a Monte Carlo analysis to introduce some objectivity.
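The arithmetic, and the Monte Carlo step, can be sketched in a few lines. Apart from the cladding example, the portfolio below is invented for illustration:

```python
import random

random.seed(7)

# Portfolio of (probability, event loss in $) pairs -- illustrative figures.
risks = [
    (0.01, 10_000_000),  # cladding collapse: 1% x $10m -> $100k expected
    (0.05, 2_000_000),
    (0.10, 500_000),
]

expected = sum(p * loss for p, loss in risks)
print(f"Sum of expected event losses: ${expected:,.0f}")

# Monte Carlo: simulate which risks actually fire in each trial.
N = 20_000
totals = sorted(
    sum(loss for p, loss in risks if random.random() < p)
    for _ in range(N)
)
mean = sum(totals) / N
p90 = totals[int(0.9 * N)]
print(f"Simulated mean loss:  ${mean:,.0f}")
print(f"90th percentile loss: ${p90:,.0f}")
```

Note what the simulation exposes that the single expected-loss number hides: in most trials nothing fires and the loss is zero, but the tail holds whole-event losses far above the $250k expectation, which is what a budget contingency actually has to cover.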

Another page from Matthew's book, Chancing It, bears consideration in this context.


Sunday, October 23, 2016

Risk Criticality

We've all been down the risk ceremony path: where risk management starts with a 'workshop', descends into a matrix, then disappears.

Risk has a number of dimensions, and Shenhar's book is a great start to thinking about risk in an organised manner. After Shenhar, risk should be assessed in terms of the vulnerability of dependencies to failure events (failure modes become important here) on a probabilistic basis. These should then be assessed for effect on schedule, investment and performance, to produce actions that will mitigate if not avoid the risk.

You probably know the near-pointless and potentially misleading 'matrix' whose bubble both Eight to Late and Cox prick.

The outcome of basing project management on a mature understanding of risk should be the criticality of events to completion, budget or technical performance. This then drives mitigating actions: abatement and avoidance, or if minor, ignoring (or buffering in schedule or budget).

Saturday, January 30, 2016

Balustrade height

I've always wondered about balustrade heights on tall buildings. I've been on apartment balconies that have what looks like a 900mm balustrade -- 20 storeys up! This might be the minimum BCA height, and it might even be a reasonable height for a balcony a couple of metres above ground; but the consequence of a fall from height is fatal. Balustrades should be designed for safety, not set at a comfortable hand height as though they were access handrails.

In retail projects I've been involved in, the balustrades to internal voids were about 1200mm. Safer, but imagine an adult carrying a child: even at that height the child is entirely above the balustrade. Not good.

One of the craziest low balustrades I've seen is on top of the "Cheese Grater" in London. Over 200 metres above ground, and the balustrade looks like it's below the centre of gravity of the man on the right. A gust of wind could present a danger, as could a moment of unsteadiness on the part of the man. Here the balustrade should be 1500mm.

Cheese Grater Balustrade

Saturday, August 1, 2015

Old time

Sorting out my library recently I bumped into a little known classic in project management: Modern Project Management by Claude Burrill and Leon Ellsworth.

Flipping through the pages, I was a little surprised to see that the sound principles this pair laid down over 35 years ago have still made little headway in project management.

For example, they wrote about probabilistic risk estimating; most people are naive about this and plan their projects without any realistic incorporation of either schedule risk or budget risk. So, of course, too many projects slide over one or both, to the surprise of the project team and the anguish of the investor.

The corollary is that most people embark on a project with a risk management 'ceremony'. They have a chat about risks (no failure mode analysis, no dependency risk analysis based on the project WBS, if there is one), making a cute matrix of coloured cells that pretends to represent risk appraisal. Of course, it does not.

For your risk management edification, a useful post on an analytic approach that can provide useful case study input to considering risk.

Saturday, June 20, 2015

Guiding Principles 7: Risk management

Enable informed tradeoffs between project and portfolio risks and potential rewards.

It’s right to couple risk and reward. One goes with the other. But let’s not be led astray by the financial markets’ conceptualisation of risk-reward. Across an investment portfolio this works, but the project risk environment is not like an investment portfolio. You can’t sell stock in part of the project and buy a bit of another project to balance the risk being faced; that would just under-resource the first project and head it towards failure.

Risks in a project are at the heart of project management. This extends from risks in the project environment (what we normally call risks) to risks created by the project’s organisation. K has a useful post on internal risks that teases out this concept.

Managing risk is not done by dreaming up a list of risks around the table, ‘weighting’ them -- usually without statistical reference -- and placing them in a matrix. That’s not risk management, that’s a ceremony; but organisations do love their ceremonies, thinking that they achieve something over here in the real world.

The management of risk has to show up on the schedule and budget, be aligned with the dependencies of WBS elements, and have calibrated probabilities attached. This helps structure the project to avoid risk, and also to deal with it when it occurs, even if by insurance; and insurance is the last refuge of the scoundrel project director in many contexts.