Wednesday, February 11, 2009
7 Habits of Highly Effective Risk Managers
It's a post on effective risk management, titled "7 Habits of Highly Effective Risk Managers".
As you might expect from my material, it is a set of rules that risk managers can easily apply to their jobs. It might also help if you are NOT the risk manager in your endeavour but want to help someone who is.
Here's the condensed version:
R1: Be responsible.
R2: Analyse in depth.
R3: Use hard facts.
R4: Do more than one thing.
R5: It's not fire and forget.
R6: Integrate.
R7: Don't change the scales of measurement.
In the blog post itself I explain every rule/habit in detail. Thanks to Dick Karpinski for nitpicking, and thanks to Mr. Covey for the title.
Enjoy!
Tuesday, July 29, 2008
Why and How to State the Credibility of Estimations
Type: Principles
Status: final
Version: 2008-09-08
Sources: T. Gilb's work on Impact Estimation Tables, see http://www.gilb.com/; my own experience
Gist: Have you seen presentations proposing a project or other endeavour, with the purpose of convincing someone that the project is the right thing to do? More often than not these presentations include shiny numbers of relatively low cost and relatively high benefit (of course). But how often does the presenter say anything about the credibility of those numbers?
I believe every decision maker needs some assessment of credibility. If you don't say how believable your evidence is, the decision maker might turn to other sources, like "what's this guy's success record?" or "I don't believe her, so I will resort to a very tight budget in order not to risk too much".
The same problem applies to any estimate you prepare and present. These Principles explain why you should always give credibility information, and the Rules explain how you can do it.
" An estimate that is 100% certain is a commitment." -- Unknown
Credibility information DEFINED AS any evidence or number that expresses how certain we believe a statement is
Principles
P1: Estimates are assumptions about the future, therefore they must be uncertain.
"Unfortunately, tomorrow never gets here." - Leo Babauta
P2: Predictions are a difficult matter. The farther away the state we predict, the more difficult and the riskier the prediction gets.
P3: Credibility information is a measure of how certain we are about a predicted state.
P4: Decision makers need some assessment of the risk involved in the decision. What can happen in the worst case? (It is equally valid to ask for the best case, but seriously, the best case happens far too seldom to take it into account)
"Most people really like to be able to sleep at night." -- Heuser's Rule
P5: The clearer you state uncertainty,
* the easier it is to distinguish between different strategies
* the more likely other people believe you
* the clearer you yourself see the risk involved
* the better you can argue about a strategy
* the easier it is to learn something about the risk involved and to do something about it
* the more clearly you give or take responsibility for an endeavour
Note: While a decision maker who sees uncertainty numbers explicitly accepts the risk involved, some decision makers don't like the idea at all. I guess this is one reason why credibility information isn't requested very often.
P6: If credibility information is missing, we by default assume the number or the source is not credible at all.
Note: Maybe the presenter just forgot to provide the information; then it shouldn't be a problem. We can send him to get it.
Rules
R1: Any number that expresses some future state should be accompanied by credibility information.
R2: If you predict a value on a scale, like "end date", "targeted budget", "performance improvement", or "maintenance cost", give a range of that number.
Notes:
* It's useful to do this in a plus/minus X% fashion. The percentage is a clear and comparable signal to the audience.
* It is not mandatory to give the best/worst case estimates an equal range. How about "we are sure to complete this in 10 weeks, minus 1 plus 2"?
* In Rapid Development, Steve McConnell actually suggests that you communicate estimates in a range that gets smaller over time, with the larger number first. "Six to four months" sounds strange, but if you say the smaller number first, people tend to forget the larger one.
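If you track estimates in a tool, the range can be kept explicit instead of being collapsed into a single number. A minimal Python sketch of R2 (the class name and example values are my own illustration, not from the original post):

```python
from dataclasses import dataclass

@dataclass
class RangeEstimate:
    """An estimate kept as a range, never as a single number (R2)."""
    likely: float  # most likely value, e.g. 10 weeks
    minus: float   # downside of the range ("minus 1")
    plus: float    # upside of the range ("plus 2"); need not equal minus
    unit: str = "weeks"

    @property
    def best(self) -> float:
        return self.likely - self.minus

    @property
    def worst(self) -> float:
        return self.likely + self.plus

    def __str__(self) -> str:
        # Say the larger number first, per McConnell's advice above.
        return f"{self.worst:g} to {self.best:g} {self.unit}"

print(RangeEstimate(likely=10, minus=1, plus=2))  # -> 12 to 9 weeks
```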
R3: If you provide evidence, say where you have it from, or who said so, and what facts (numbers, history, written documentation) lead to this conclusion.
Note: Do this in the backup slides for example. At least you should be able to pull it out of the drawer if requested.
R4: Consider using a safety margin, like factor 2 (bridge builders) or factor 4 (spacecraft engineers).
Notes:
* The margin is an expression of how much risk you are willing to take. Thus, it is a way of controlling risk.
* Use margins whenever you find a number or source is not very credible (like < 0.4), you don't have any historic data, or if there's no economic way of achieving higher credibility.
* Safety margins do not necessarily increase real costs, only planned costs.
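A minimal sketch of R4 in code; the 0.4 threshold mirrors the credibility hint in the notes above, and the function name and numbers are my own illustration:

```python
def planned_cost(estimate: float, credibility: float,
                 safety_factor: float = 2.0) -> float:
    """Widen a poorly backed estimate by a safety margin (R4).

    Apply the factor only when credibility is low (< 0.4) and there is
    no economic way to raise it; otherwise trust the estimate.
    """
    return estimate * safety_factor if credibility < 0.4 else estimate

print(planned_cost(100.0, credibility=0.2))  # -> 200.0 planned (not real) cost
```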
R5: Always adjust your cost/benefit-ratios by a credibility factor.
Notes:
A simple scale would be {0.0 - guess, no real facts available; 0.5 - some facts or past experience available; 1.0 - scientific proof available}
A more sophisticated scale would be
{0.0 Wild guess, no credibility
0.1 We know it has been done somewhere (outside the company)
0.2 We have one measurement somewhere (from outside the company)
0.3 There are several measurements in the estimated range (outside the company)
0.4 The measurements are relevant to our case because <fact1, fact2> with credibility <x, y> (don't get trapped in recursion here)
0.5 The method of measurement is considered reliable by <whom>
0.6 We have used the method in-house
0.7 We have reliable measurements in-house
0.8 Reliable in-house measurements correlate to independent external measurements
0.9 We have used the idea on this project and measured it
1.0 Perfect credibility: we have rock-solid, contract-guaranteed, long-term, credible experience with this idea on this project, and the results are unlikely to disappear}
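R5 becomes mechanical once a scale like the one above is agreed on. A minimal Python sketch with invented example numbers (whether you scale the benefit, the cost, or the whole ratio is a design choice the rule leaves open):

```python
def adjusted_benefit_cost_ratio(benefit: float, cost: float,
                                credibility: float) -> float:
    """R5: scale the claimed benefit by how believable it is (0.0-1.0)."""
    return (benefit * credibility) / cost

# A shiny 4:1 ratio, backed only by one measurement from outside the
# company (0.2 on the sophisticated scale above):
print(adjusted_benefit_cost_ratio(benefit=400, cost=100, credibility=0.2))
# -> 0.8 -- the "obvious winner" no longer looks obvious.
```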
Tuesday, May 13, 2008
Why Managing Too Many Risks is Too Risky
Risk management is a source of unusual human behavior: euphoria or excessive gambling when risk is underestimated, and panic attacks or depression when we predict that things are riskier than they really are. Both are risks in themselves.
In the post I'll address a risk that is also closely related to human behavior: overly extensive risk registers, i.e. lists of several dozen risks for a given project.
Read more
If it feels too troublesome to leave a comment there, why not leave it here?
Friday, December 21, 2007
Decomposing Goals
Type: Principles
Status: draft
Version: 2007-12-21
Source: D. Doerner ('The Logic of Failure'), own thoughts
Gist: to explain why - for a complex, dynamic, interrelated endeavour - it is paramount to decompose global, complex, vague goals into specific goals.
Also see: Specifying Goals, Courses of Action
Definitions:
A global goal is a goal with only a few (or just one) evaluation criteria.
A vague goal is a goal with few (or no) measurable evaluation criteria.
A complex goal is one that can be decomposed into two or more goals that are more specific.
An implicit goal is one you're not conscious of now.
Note: Quite often it is possible to say "problem" for "goal", as a goal is a problem turned upside down.
P1: Using a single word for a goal seems to imply that there is one thing the word stands for. Example: "user-friendliness". Looking closer, we find that there is no such single thing; it is composed of several other things (subgoals), like "perceived performance", "nice GUI", etc.
Note: This decomposition is subjective.
P2: The subgoals may contradict each other, like "perceived performance" and "wealth of functions". This is why we need to decompose the goal: otherwise we have a single goal which is in itself contradictory. This makes people feel uncertain and insecure.
Note: Often people don't know why they feel uneasy when confronted with such a goal.
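Such a decomposition can be written down as a small data structure, which also makes missing measurement criteria and conflicts visible. A sketch of P1/P2; the subgoals and measures are illustrative assumptions, not part of the principles:

```python
# Decompose the vague goal "user-friendliness" (P1) into subgoals with
# measurable evaluation criteria; conflicts between subgoals (P2) now
# become visible instead of hiding inside a single word.
goal = {
    "name": "user-friendliness",
    "subgoals": [
        {"name": "perceived performance",
         "measure": "seconds until first visual feedback", "target": "< 0.5"},
        {"name": "wealth of functions",  # tends to conflict with the above
         "measure": "number of supported use cases", "target": ">= 30"},
        {"name": "nice GUI"},            # still vague: no measure yet
    ],
}

still_vague = [s["name"] for s in goal["subgoals"] if "measure" not in s]
print("subgoals without measurable criteria:", still_vague)  # -> ['nice GUI']
```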
P3: Subgoals usually get handled in one of three ways:
- separating central from peripheral goals, to be able to focus on the central ones
- separating important from urgent goals, also to be able to focus, but dependent on outside constraints like a deadline
- delegating the goal, to get rid of a difficult goal
P4: Conflicting (sub)goals get handled in one of three ways:
- finding a balance in reaching these goals, i.e. a compromise
- ignoring one of the goals completely for the good of the others
- euphemistically rearranging the whole system of goals so that the contradictions disappear ("if we had 'new' citizens...")
P5: Implicit goals can be uncovered by asking "What do I want to keep?" while aiming at some change.
P6: Confronted with a complex, global, vague goal, we usually pick a (wrong) subgoal, either because it seems apparent or because we are able to solve it. After a while this makes us feel even more uncertain, because we don't seem to get closer to the goal.
Note: An apparent goal may be neither important nor urgent, and may be peripheral.
Note: In the IT world, this behavior surfaces as the confusion of means and ends. E.g., people treat gathering data as the end, while it is a means to empower decision making. We usually do so because we feel secure in what we do.
P7: Confronted with a complex, global, vague goal, we proceed "ad hoc" in the sense that we don't care about future problems caused by our problem solving. More uncertainty.
Note: "Ad hoc" has its advantages; it's better than doing nothing. Of course it is also dangerous if we feel pressed to do something.
P8: Confronted with a complex, global, vague goal, we proceed "ad hoc" in the sense that we ignore implicit goals. More uncertainty.
P9: Ultimately, feeling really uncertain and insecure, we tend to dive deep into one of the goals and ignore the rest. This goal becomes the new holy grail.
Tuesday, September 04, 2007
Do the Right Things
Type: Process
Status: final
Version: 2007-09-04
Gist: to provide verification for a feature of a product, for a plan of actions concerning the development of a product, or for the development of real options.
Note: you can apply this process at various levels of problems and/or solutions.
Note: This process optimizes the success in the success equation
stakeholder differentiating: a function or property of the product (or an action of a plan) is stakeholder differentiating if it adds to the overall capability to satisfy the main stakeholders
Note: This means you have to have some notion of who the product's main stakeholders are.
Note: YOU could be the main stakeholder. Then differentiating means options for a wealth of future situations.
mission critical: a function or property of a product (or an action of a plan) is mission critical if the stakeholders will reach one or more of their goals only if the product has it (or the plan caters for it)
Note: This means you have to have some notion of the stakeholders' goals. The stakeholders will accept the product more readily if it helps them reach those goals.
Note: Be careful, you might presume that some way is the only way.
S1: produce a 2-dimensional space using the above dimensions (mission critical / not mission critical, stakeholder differentiating / not stakeholder differentiating). This gives you 4 quadrants.
S2: for each feature or sub-feature of the planned product (or action of your plan, or possible step towards your personal goal), ask and answer the following questions: "is it stakeholder differentiating? y/n" and "is it mission critical? y/n".
S3: place it in the quadrants of the space you created in S1. You may express more continuous answers than y/n along the axes of your space.
S4: The verification rules for the four quadrants are:
stakeholder differentiating AND mission critical:
invest and excel. you should do lots of things. put your main effort here. provide the best inner quality.
Note: this is the place where the rewarding things to do yourself are. It is nice to harvest the options you create here.
stakeholder differentiating AND NOT mission critical:
"good enough" will do. use Pareto. do cheap and easy things.
Note: Will you have the option anyway, whether you do a lot now or not?
NOT stakeholder differentiating AND mission critical:
find a quality partner. use your partner's services. don't do it on your own
Note: Do not expect that your partner gives you many options. However, it is nice to share success (with a business partner or some other partner). Be honest and thankful.
NOT stakeholder differentiating AND NOT mission critical:
do nothing about it. don't waste time and/or money
Note: You can use the options that lie here anyway.
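The whole process S1-S4 fits in a few lines of code. A sketch with invented feature names (the quadrant labels repeat the verification rules above):

```python
# S1/S2: answer the two questions per feature; S3/S4: map the answers
# to a quadrant and its verification rule.
def quadrant(differentiating: bool, mission_critical: bool) -> str:
    if differentiating and mission_critical:
        return "invest and excel"
    if differentiating:
        return "'good enough' will do, use Pareto"
    if mission_critical:
        return "find a quality partner"
    return "do nothing about it"

features = {
    "one-click checkout": (True, True),
    "custom color themes": (True, False),
    "payment processing": (False, True),
    "fax export": (False, False),
}

for name, (diff, critical) in features.items():
    print(f"{name}: {quadrant(diff, critical)}")
```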
Tuesday, July 10, 2007
Risk and Uncertainty
Type: Principles
Status: final
Version: 2007-07-27
Gist: Provide a set of principles on how to tackle risk and uncertainty
Source: Matthew Leitch's online survey on 'Individual differences in risk and uncertainty management' (http://www.internalcontrolsdesign.co.uk/rumaresults/index.html)
P1: Objectivity and rationality are desirable.
P2: We usually face risk and uncertainty and need to acknowledge this and act in accordance with it.
P3: Focusing on single future outcomes usually comes at the expense of an objective view of the future, which requires a recognition of uncertainty.
P4: It is better to be honest.
P5: These considerations are very important and usually outweigh others.
Note: Sven Biedermann offers the notion of 'a project without risk and uncertainty isn't a project'.
Wednesday, April 11, 2007
Principles of Clear Thinking
Status: Draft
Version: 2007-12-20
Source: Principles of Clear Thinking. <- Blog on http://www.gilb.com 2007-03-27 (R1 to R10); rest: own thoughts
R1. You have to have a clear set of objectives and constraints, to evaluate proposed solutions or strategies against.
R2. You have to have a reasonable set of facts about the benefits and costs of any proposed idea, so that you can relate it to your current outstanding requirements.
R3. You have to have some notion of the risks associated with the idea, so that you can understand and take account of the worst possible case.
R4. You have to have some ideas about how to test the ideas gradually, early and on a small scale before committing to full scale implementation.
R5. If there are more than a very few factors involved (2 to 4), then you are going to have to use a written model of the objectives, constraints, costs, benefits, and risks (a sketch of such a model follows after this list).
R6. If you want to check your thinking with anyone else, then you will need a written model to safely and completely share your understanding.
R7. You will need to make a clear distinction between necessities (constraints) and desirables (targets).
R8. You will need to state all assumptions clearly, in writing, and to challenge them, or ask ‘what if they are not true?’
R9. You will want to have a backup plan, contingencies, for the worst case scenarios – failure to fund, failure for benefits to materialize, unexpected risk elements, political problems.
R10. Assume that information from other people is unreliable, slanted, incomplete, risky – and needs checking.
R11. Assume that your models are incomplete and wrong, so check the evidence to support, modify, or destroy your models.
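R5 and R6 ask for a written model. Even a very small one forces objectives, weights, costs, and risks into the open. A minimal sketch; all names and numbers are invented for illustration:

```python
# A minimal written model (R5/R6): objectives with weights, strategies
# scored against them, cost and worst case kept explicit (R3, R9).
objectives = {"reliability": 0.5, "usability": 0.3, "cost saving": 0.2}

strategies = {
    "rewrite module": {"reliability": 8, "usability": 5, "cost saving": 2,
                       "cost": 100, "worst_case": "benefits never materialize"},
    "buy component":  {"reliability": 6, "usability": 6, "cost saving": 6,
                       "cost": 60, "worst_case": "vendor lock-in"},
}

for name, s in strategies.items():
    benefit = sum(w * s[obj] for obj, w in objectives.items())
    print(f"{name}: weighted benefit {benefit:.1f}, "
          f"cost {s['cost']}, worst case: {s['worst_case']}")
```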