Tuesday, July 29, 2008

Why and How to State the Credibility of Estimations

Name: Why and How to State the Credibility of Estimations
Type: Principles
Status: final
Version: 2008-09-08
Sources: T. Gilb's work on Impact Estimation Tables, see http://www.gilb.com/; my own experience

Gist: Have you seen presentations suggesting a project or other endeavour with the purpose of convincing someone that the project would be the right thing to do? More often than not these presentations include shiny numbers of relatively low cost and relatively high benefit (of course). But how often does the presenter tell something about the Credibility of the numbers?
I believe every decision maker needs some assessment of credibility. If you don't state how believable your evidence is, the decision maker might turn to other sources, like "what's this guy's success record?", or "I don't believe her, so I will resort to a very tight budget in order not to risk too much".

The same problem applies to any estimate you prepare and present. These Principles explain why you should always give credibility information, and the Rules explain how you can do it.

" An estimate that is 100% certain is a commitment." -- Unknown

Credibility information DEFINED AS any evidence or number that expresses how certain we believe a statement is

Principles
P1: Estimates are assumptions about the future, therefore they must be uncertain.
"Unfortunately, tomorrow never gets here." - Leo Babauta

P2: Predictions are a difficult matter. The farther away the state we predict, the more difficult, the riskier.

P3: Credibility information is a measure of how certain we are about a predicted state.

P4: Decision makers need some assessment of the risk involved in the decision. What can happen in the worst case? (It is equally valid to ask for the best case, but seriously, the best case happens far too seldom to take it into account)

"Most people really like to be able to sleep at night." -- Heuser's Rule

P5: The clearer you state uncertainty,
* the easier it is to distinguish between different strategies
* the more likely other people are to believe you
* the clearer you yourself see the risk involved
* the better you can argue about a strategy
* the easier it is to learn something about the risk involved and to do something about it
* the clearer you give or take responsibility for an endeavour
Note: While a decision maker who sees uncertainty numbers obviously accepts the risk involved, there are decision makers who don't like the idea at all. I guess this is one reason why credibility information isn't requested very often.

P6: If credibility information is missing, we by default assume the number or the source is not credible at all.
Note: Maybe the presenter just forgot to provide the information, in which case it isn't a problem: we can simply send them to get it.

Rules
R1: Any number that expresses some future state should be accompanied by credibility information.

R2: If you predict a value on a scale, like "end date", "targeted budget", "performance improvement", or "maintenance cost", give a range of that number.
Notes:
* It's useful to do this in a plus/minus X% fashion. The percentage is a clear and comparable signal to the audience.
* It is not mandatory to give the best/worst case estimates an equal range. How about "we are sure to complete this in 10 weeks, minus 1 plus 2"?

* In Rapid Development, Steve McConnell actually suggests communicating estimates as a range that gets smaller over time, with the larger number first. "Six to four months" sounds strange - but if you say the smaller number first, people tend to forget the larger one.
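A minimal sketch of R2's idea in Python (function and parameter names are my own, not from any source): a point estimate with an asymmetric plus/minus becomes an explicit best/worst range, reported worst case first.

```python
def estimate_range(base, minus, plus):
    """Turn a point estimate with asymmetric uncertainty into a
    (best, worst) range.

    base:  the nominal estimate, e.g. 10 weeks
    minus: how far the actual value may fall below the nominal value
    plus:  how far it may exceed the nominal value
    """
    return (base - minus, base + plus)

# "We are sure to complete this in 10 weeks, minus 1 plus 2":
best, worst = estimate_range(10, minus=1, plus=2)
print(f"{worst} to {best} weeks")  # larger number first, per McConnell
```

Running this prints "12 to 9 weeks" - the audience hears the worst case before the best case.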

R3: If you provide evidence, say where you have it from, or who said so, and what facts (numbers, history, written documentation) lead to this conclusion.
Note: Do this in the backup slides for example. At least you should be able to pull it out of the drawer if requested.

R4: Consider using a safety margin, like a factor of 2 (bridge builders) or a factor of 4 (spacecraft engineers).
Notes:
* The margin is an expression of how much risk you are willing to take. Thus, it is a way of controlling risk.
* Use margins whenever you find a number or source is not very credible (like < 0.4), you don't have any historic data, or if there's no economic way of achieving higher credibility.
* Safety margins do not necessarily increase real costs, only planned costs.
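R4 can be sketched as a small decision rule (a sketch under my own naming assumptions; the threshold of 0.4 and the factors 2 and 4 come from the notes above):

```python
def planned_cost(estimated_cost, credibility, margin_factor=2.0,
                 threshold=0.4):
    """Apply a safety margin to an estimate whose credibility is low.

    credibility:   0.0 (wild guess) .. 1.0 (proven) - see the scale in R5
    margin_factor: e.g. 2 for bridge builders, 4 for spacecraft engineers
    threshold:     below this credibility, the margin kicks in
    """
    if credibility < threshold:
        return estimated_cost * margin_factor
    return estimated_cost

# A 100k estimate backed only by one external measurement (0.2):
print(planned_cost(100_000, 0.2))  # 200000 - planned, not real, cost
```

The point of the sketch: the margin inflates the planned number only when the evidence is weak, which is exactly how it controls risk without changing the real cost.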

R5: Always adjust your cost/benefit-ratios by a credibility factor.
Notes:
A simple scale would be {0.0 - guess, no real facts available, 0.5 - some facts or past experience available, 1.0 - scientific proof available}
A more sophisticated scale would be
{0.0 Wild guess, no credibility
0.1 We know it has been done somewhere (outside the company)
0.2 We have one measurement somewhere (from outside the company)
0.3 There are several measurements in the estimated range (outside the company)
0.4 The measurements are relevant to our case because <fact1, fact2> with credibility <x, y> (don't get trapped in recursion here)
0.5 The method of measurement is considered reliable by <whom>
0.6 We have used the method in-house
0.7 We have reliable measurements in-house
0.8 Reliable in-house measurements correlate to independent external measurements
0.9 We have used the idea on this project and measured it
1.0 Perfect credibility, we have rock solid, contract-guaranteed, long-term, credible experience with this idea on this project, and the results are unlikely to disappear}

Thursday, July 24, 2008

10 Critical Requirements Principles for 0-Defects-Systems

Name: 10 Critical Requirements Principles for 0-Defects-Systems
Type: Principles
Status: final
Version: 2008-07-24

Gist: It might sound esoteric to you, but it is actually possible to build non-trivial systems with (close to) zero defects. At the very least, we should try to get close to zero defects as early in the development cycle as possible, because of the relative defect removal costs (factors 1-10-100-1000, remember?).
I was lucky to be part of such a system development: a multi-user database system with about six releases over a ten-year period, with the strictest requirements on data integrity and data availability. We detected 5 (five) defects in total during user acceptance testing. The system still works without interruption and, obviously, perfectly according to the requirements. These are the top ten rules we (un)consciously followed, as we found out in a Lessons Learned earlier this month. I added principles P7-10 because of recent and not so recent experience in other projects. However, following these latter principles (while ignoring P1-6) did not lead to zero-defect systems. I guess all these principles represent necessary, but not sufficient, conditions. After all, this is anecdotal evidence.

P1: Customer and software developer who like each other.
P2: Small and easy scope of your requirements, increasing in small steps.
P3: High-value requirements are the initial focus.
High-value DEFINED AS anything that is more important to a primary stakeholder than other things (relative definition intentional).
P4: Acceptance criteria are paramount.
P5: Templates rule.
P6: Quality control for your requirements (inspections).
Requirements inspection DEFINED AS a method of process control through sampling measurement of specification quality.
P7: Top management's requirements (aka objectives).
Top management DEFINED AS every manager that is higher up in the food chain than the project manager and has impact on project success or failure
P8: Problems first (then the requirements).
P9: Performance requirements first.
Performance requirement DEFINED AS a specification of the stakeholder requirements for 'how well' a system should perform.
P10: Replacing requirement-based estimation with planning.
I.e., use some capacity/throughput metric and promote the concept of variation and fluctuation.

Friday, July 04, 2008

7 Useful Viewpoints for Task Planning Oriented Towards Quick Results

Name: 7 Useful Viewpoints for Task Planning Oriented Towards Quick Results
Type: Rules
Status: final
Version: 2008-07-04

Gist: While planning tasks or activities, and at the same time focusing on quick results, it is a good idea to describe each task by means of different viewpoints. Here's my suggestion, derived from Tom Gilb's work, the BABOK and my own experience.

R1: Each task should be described in terms of
Purpose. Why are you doing this? This is the single most important category, because it may lead you to the understanding that this is not the right task.
Entry. What do you need in order to begin and complete the task? This prevents you from doing things without the proper basis, which would most likely cause unnecessary re-work.
Process. Which steps do you follow? This is a little plan for your task. It may be only one step, but most of the time there is more than one, even for small tasks. Splitting tasks into steps allows you to return to previous steps in case you learn something new.
Exit. What are the results of the task? What will be the benefit? This is what counts, and what can give you the beautiful chance of realizing benefits 'along the way' of some greater set of tasks.
Stakeholders. With whom will you have to speak? Whose requirements will you have to take into account? This prevents re-work caused by ignoring influential people.
Methods/Techniques. How can you do it? Note anything that may help you complete the task.
Time-Limit. How long do you plan to work on it? If you need substantially longer, you will have a clear indicator that something is wrong. Short deadlines help you focus (80/20 principle).
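The seven viewpoints above can be captured as a simple record type, so no viewpoint is silently skipped when planning a task. A minimal sketch in Python; the class, field names, and the sample task are my own illustration, not from any source:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    """A task described by the seven viewpoints of R1."""
    purpose: str                # Why are you doing this?
    entry: List[str]            # What do you need to begin and complete it?
    process: List[str]          # Which steps do you follow?
    exit: List[str]             # What are the results / benefits?
    stakeholders: List[str]     # Whom to talk to, whose requirements count
    methods: List[str]          # Anything that may help complete the task
    time_limit_hours: float     # Short deadlines help you focus (80/20)

# Hypothetical example task:
task = Task(
    purpose="Clarify the top-3 performance requirements",
    entry=["stakeholder list", "draft objectives"],
    process=["interview sponsor", "quantify each requirement", "review"],
    exit=["three quantified performance requirements"],
    stakeholders=["sponsor", "operations lead"],
    methods=["requirement templates"],
    time_limit_hours=4.0,
)
```

Because every field is required, constructing a Task forces you to answer all seven questions - or to notice, via an empty Purpose, that the task may not be worth doing at all.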

Thursday, July 03, 2008

7 Rules for High Effectiveness

Name: 7 Rules for High Effectiveness
Type: Rules
Status: final
Version: 2008-07-03

Gist: Effective problem solvers develop a mindset of effectiveness. Here are the 7 Habits of Highly Effective People, from Stephen R. Covey, Simon & Schuster, 1989.

R1: Be Proactive - take the initiative
R2: Visualize the end from the start - know where you're going; it is better to do this quantitatively
Note: A good technique for doing R2 and R3 is the Kepner-Tregoe method, which is quite well described here. Better still is using Impact Estimation Tables (because of their much better handling of quantitative information).
R3: List priorities - it is better, still, to do this quantitatively
R4: Think WIN/WIN
R5: Understand - listen, listen, listen / learn, learn, learn
R6: Synergize - make the whole more than the sum of the parts
R7: Seek Personal Renewal in various areas:
- Physical: exercise, nutrition, stress management
- Mental: reading, thinking
- Spiritual: value clarification, meditation
- Social/Emotional: empathy, self-esteem