Thursday, December 25, 2008

Software Development Practice in the 21st (22nd?) Century

A couple of months ago I published an article about 10 Critical Requirements Principles for 0-Defects-Systems (on Raven's Brain), anecdotal evidence of how I once achieved zero defects in one of my projects.

If you like the thought of reliable software (sic!) you might well like Charles Fishman's article on the On-Board Shuttle Group, the producer of the space shuttle's software. Yes, they have a lot of money to spend, but many lives and expensive equipment are at stake.
This software never crashes. It never needs to be re-booted. This software is bug-free. It is perfect, as perfect as human beings have achieved. Consider these stats: the last three versions of the program -- each 420,000 lines long -- had just one error each. The last 11 versions of this software had a total of 17 errors. -- Charles Fishman on the software product the Group puts out


BTW, they seem to use three key practices: very detailed requirements, inspections, and continuous improvement.
The way the process works, it not only finds errors in the software. The process finds errors in the process. -- Charles Fishman on the On-Board Shuttle Group's process
Happy New Year! (Note that 2009 is a year of the 21st century. Can't we do any better?)

PS: I'm going to go on vacation from Dec 28 to Jan 20. More interesting topics still to come, please be patient.

Monday, December 22, 2008

Lewin on Theories

Just this cute little quote here, for all of you who need to be reassured about your "sometimes too theoretical work":

There's nothing as practical as a good theory. - Kurt Lewin

Merry Christmas, and sorry for all the clutter around short posts ;-)

News in the Field of Systems Engineering

For a VERY comprehensive monthly newsletter on Systems Engineering go to the Project Performance International website. A quote from the site:
Project Performance International is proud to produce a monthly Systems Engineering newsletter, named "SyEN". The newsletter presents in-depth coverage of the month's news in systems engineering and directly related fields, plus limited
information on PPI's activities and relevant industry events.

The nice thing is, advertising is REALLY limited. I find the newsletter to be both interesting and easy to read. They will send you an e-mail with a table of contents, and you can then go to the website and read it. A PDF version is also provided.
From this month's contents:
  • Featured Article: The ISO Way - Alwyn Smit
  • Featured Society: Association for Configuration and Data Management (ACDM)
  • Systems Engineering Software Tools News
  • Systems Engineering Books, Reports, Articles and Papers
  • Conferences and Meetings
  • Education
  • People
  • Related News
  • Systems Engineering-Relevant Websites
  • Standards and Guides
  • PPI News
  • PPI Events

Sunday, December 21, 2008

Why Re-work isn't a Bad Thing in All Situations

Software development has many characteristics of production. I'm thinking of the various steps that as a whole comprise the development process, in which artifacts are passed from one sub-process to another. Requirements are passed on to design, code is passed on to test (or vice versa, in test-driven development), and so on.

Alistair Cockburn wrote an interesting article on what to do with the excess production capacity of non-bottleneck sub-processes in a complex production process. In the Crosstalk issue of January 2009, he explores new strategies for optimizing a process as a whole. One idea is to consciously use re-work (you know, the bad thing you want to avoid like the plague) to improve quality earlier rather than later.

In terms of what to do with excess capacity at a non-bottleneck station, there is a strategy different from sitting idle, doing the work of the bottleneck, or simplifying the work at the bottleneck; it is to use the excess capacity to rework the ideas to get them more stable so that less rework is needed later at the bottleneck station.


Mark-up is mine. You see the basic idea.
In conclusion, he lists the following four ways of using capacity that otherwise would have been spent idling around, waiting for a bottleneck to finish its work:

* Have the workers who would idle do the work of the bottleneck sub-process.

Note: Although this is sometimes mandated by parts of the Agile community, I have seldom seen workers in the software industry who can produce excellent-quality results in more than one or two disciplines.

* Have them simplify the work at the bottleneck sub-process.

Note: An idea would be to help out the BAs by scanning through documents and giving pointers to relevant information.

* Have them rework material to reduce future rework required at the bottleneck sub-process.

Note: See the introductory quote by A. Cockburn (a small numeric sketch follows after this list). Idea: Improve documents that are going to be built upon "further downstream" until they approach a quality level of less than 1 major defect per page (more than 20 per page is very common in documents that already have a 'QA passed' stamp).

* Have them create multiple alternatives for the bottleneck sub-process to choose from.

Note: An example would be to provide different designs to stakeholders -- say, GUI designs. This can almost take the form of different engineering teams concurrently developing solutions to one problem, so that the best solution can be evaluated and consciously chosen.
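
To make the arithmetic of the third strategy concrete, here is a minimal sketch (all numbers are invented for illustration, not taken from Cockburn's article): if the upstream, non-bottleneck station spends some of its spare capacity stabilizing its own output, the rework hours at the bottleneck drop, and the same amount of work gets through sooner.

  # Minimal, illustrative model of the third strategy. All numbers are hypothetical;
  # the point is only the direction of the effect.
  def bottleneck_hours(work_items, rework_fraction, hours_per_item=4, rework_hours=3):
      """Hours the bottleneck needs: base work plus rework caused by unstable inputs."""
      base = work_items * hours_per_item
      rework = work_items * rework_fraction * rework_hours
      return base + rework

  # Upstream idles and passes on unstable ideas: 40% of items bounce at the bottleneck.
  naive = bottleneck_hours(work_items=50, rework_fraction=0.40)

  # Upstream uses its spare capacity to rework its own output first: only 10% bounce.
  stabilized = bottleneck_hours(work_items=50, rework_fraction=0.10)

  print(naive, stabilized)  # 260.0 vs 215.0 bottleneck hours for the same 50 items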

Saturday, December 13, 2008

Lack of Engineering Practices Makes 'Agile' Teams Fail

James Shore wrote a blog post that is among my TOP 5 this quarter. Go read it! He argues that the industry sees so many agile projects fail because most teams that call themselves 'agile' confuse the real idea with the wrong set of practices. Maybe this quote says best what his article is about:
These teams say they're Agile, but they're just planning (and replanning) frequently. Short cycles and the ability to re-plan are the benefit that Agile gives you. It's the reward, not the method. These pseudo-Agile teams are having dessert every night and skipping their vegetables. By leaving out all the other stuff--the stuff that's really Agile--they're setting themselves up for rotten teeth, an oversized waistline, and ultimate failure. They feel good now, but it won't last.

This speaks from my heart. It's exactly what happened when I tried to turn a project 'agile' in a super waterfall organization (picking up this metaphor I'd say it's an Angel Falls organization). We will have technical debt to pay off for at least a decade. However, we found that out quite quickly ;-)
If you're looking for a method that is strong in both the management and engineering fields, check out Evo by Tom Gilb. I know I have recommended Mr. Gilb's work over and over again in the past 2 years, and it almost seems ridiculous. Be assured, I'm not getting paid by Tom :-)
To get a very clear perspective on iterative planning check out Niels Malotaux' Time Line.

Wednesday, December 10, 2008

Use Case Content Patterns Updated

Martin Langlands incorporated "my" DESTRUCTOR pattern into one article, now version 2 of his Use Case Content Patterns description. Have a look here, the whole thing is easier to handle now.
Hint: Click 'files' at the bottom of the wiki page to see the pattern file. Sorry for the inconvenience!

Monday, December 01, 2008

Patterns for Use Case CONTENT, anyone? Yes, finally!

While modelling use cases, have you ever thought 'I have modelled something very similar before'? Here's a solution to recurring modelling tasks. Martin Langlands has developed a bundle of patterns for the content of use cases. He developed these with an extensive background in banking and insurance systems. After reading his well-written article I found myself in a very different field, but the patterns were still applicable, to say the least. I think they are ready to use in most areas of concern, and the idea should set the use case modelling world on fire!

Visit Planet Project to see the respective Process.

Note: I proudly contributed the idea of another pattern to Martin's, the DESTRUCTOR pattern ;-) Go have a look.

Thursday, November 27, 2008

Failure by delivering more than was specified?

I stumbled upon a puzzle. Please help me solve it.

It strikes me as odd, but there might be situations where a provider fails (i.e. the system will not be accepted by the customer) because he delivered MORE than was specified. I'm not talking about bells and whistles here, which (only) waste resources.

Imagine a hard- or software product that is designed to serve more purposes than required by the single customer. Any COTS product and any product line component should fit this definition.

Which kind of requirements could be exceeded, scalar and/or binary requirements? I think scalar requirements (anything you can measure on some scale) cannot be exceeded if they do not constrain the required target on the scale from two sides. I haven't seen that; it's always "X or better", e.g. 10,000 transactions per second or more.
Even if a requirement were constrained on two sides, exceeding it would simply be a defect.

But there can be a surplus of binary qualities, i.e. functions. A surplus function can affect other functions and/or scalar qualities, I think.
Say, as a quite obvious example, the system sports a sorting function which was not required. A complex set of data can be sorted, and sorting may take some time. A user can trigger the function that was not required.
- This might derail overall system availability (response time), a quality the user required.
- It might open a security hole.
- It might affect data integrity, if some neighboring system does not expect the data to be sorted THAT way.
- It might change the output of another function, that was required, and that does not expect the data to be sorted THAT way.
(First fantasy flush ends here.)

So, if you find a surplus function in a system, what do you do? Call it a defect and refuse to accept the system?

Eager for your comments!

Monday, November 24, 2008

All you want to know about Stakeholder Management

Alec Satin of the "Making Project Management Better" weblog has a brilliant list of links to Stakeholder management resources.

From his explanatory text:

"In this list are articles to get you thinking about some of the less obvious issues in pleasing stakeholders. Primary selection criteria: (a) quick to read, (b) good value for your time, and (c) of interest whatever your level of experience." Markup is mine.


Enjoy!

Saturday, November 22, 2008

Engineers discover Planet Project!

I'm proud to provide you with nearly all the content of this blog in a wiki form. Please check out this blog's sister website Planet Project.
A couple of people asked me to choose a wiki, so people can contribute. Of course! You are very welcome, please populate the Planet Project!
A couple of helpful links to the Planet:

While I will still write this blog regularly, all new processes, principles and rules will be published on the Planet only.

Enjoy!

Friday, October 31, 2008

Templates available in GERMAN

I translated two kinds of templates to German, in case you need them in that language.


Both are Tom Gilb's work, although I took the table from Ryan Shriver.

Monday, October 13, 2008

How to Tell Principles from Paradigms

Name: How to Tell Principles from Paradigms
Type: Principles (or Paradigms :-?)
Status: final
Version: 2008-10-13
Source: Covey, 7 Habits of Highly Effective People

Gist: If you - like me - like to be picky with words sometimes, here's a nice explanation of two words we use over and over again. It started me thinking about what I give the 'principles' tag here on my blog. I will put more work into all the existing principles here, in order to better distinguish principles from other things.

Principles
Principles describe the way things are, like the law of gravity.
Principles represent natural laws, the reality
Principles are sustainable and timeless
Principles are self-evident & self-validating
Principles can't be faked, you're not in control of them
Principles will remain constant and universal
Principles ultimately govern

Paradigms
Paradigms are mental images of the way things are. Hence, a paradigm can be someone's mental image of a principle.
Paradigms represent implicit assumptions of our lives
Paradigms are projections of our background
Paradigms reveal my world view, my autobiography
Paradigms can change with my thinking

Wednesday, October 01, 2008

Impact Estimation Table Template for free download

Courtesy of Ryan Shriver, I translated his Impact Estimation Table template into GERMAN. Go here to download it.

The above link to the German table will also give you a glimpse of the wiki project I'm doing: all the posts, i.e. principles, processes and rules, will be available in a wiki format shortly.

Wednesday, September 10, 2008

How about a wiki?

Dear readers,

today I want to ask you a question. I'm thinking about transforming this blog into a wiki. All the principles, processes, and rules shall be available in a form whose content is modifiable by anybody.

What would you prefer, the blog or the wiki? And if you'd like a wiki, should I give authorship to anybody, without any restrictions (like Wikipedia)?

Please let me know. Use the comment function of this post or email me at rolf(dot)goetz(at)gmx(dot)de. If you have any special requirements, I'd be happy to hear them!

Thanks a million!

Rolf

Monday, September 08, 2008

Update on the Credibility-Info for Estimations

I updated this post with a few more interesting quotes and, most of all, the beautiful idea of giving ranges backwards, like "the project will be finished in 6-4 months", as the first number of a range seems to stick in people's minds.

Basic Structure for Writing Acceptance Criteria / Scenarios

Name: Basic Structure for Writing Acceptance Criteria / Scenarios
Type: Rules
Status: final
Version: 2008-09-08
Sources: Dan North's Behavior Driven Development (RSpec). I was introduced to this by Chris Matts.

Gist: Specify scenarios in a human (and machine) readable, yet compact way.

R1: Use the following sentence structure:
GIVEN <context>
WHEN <event>
THEN <expected behavior>

Example:
GIVEN the user has proper credentials
WHEN he selects an order for deletion
THEN the system accepts the selection


Example:
GIVEN the system is running under stress profile XY
WHEN interface A sends a service request
THEN the system responds to A in 3 seconds or less


R2: If you need to specify more complex conditions, use AND, OR, NOT:
GIVEN <context 1>
OR <context 2>
WHEN <event 1>
AND <event 2>
THEN <behavior 1>
NOT <behavior 2>
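
These GIVEN/WHEN/THEN sentences are also machine-readable. As a hedged illustration (my own sketch, not Dan North's tooling), a few lines of Python are enough to split such a scenario into keyword/clause pairs, which is the first step towards wiring it up as an automated acceptance test:

  SCENARIO = """GIVEN the user has proper credentials
  WHEN he selects an order for deletion
  THEN the system accepts the selection"""

  KEYWORDS = {"GIVEN", "WHEN", "THEN", "AND", "OR", "NOT"}

  def parse_scenario(text):
      """Split a scenario into (keyword, clause) pairs, including AND/OR/NOT from R2."""
      steps = []
      for line in text.strip().splitlines():
          keyword, _, clause = line.strip().partition(" ")
          if keyword.upper() in KEYWORDS:
              steps.append((keyword.upper(), clause))
      return steps

  for keyword, clause in parse_scenario(SCENARIO):
      print(keyword, "->", clause)

Each clause can then be mapped to a step function in your test framework of choice; the sentence structure itself stays readable for stakeholders.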

4 Strategies Towards Good Requirements

Name: 4 Strategies Towards Good Requirements
Type: Rules
Status: final
Version: 2008-09-08

Gist: Armed with the 3 Most Powerful Requirements Principles, what should you do? The following 4 rules can help you adhere to them.

R1: You should get to the point.
- find the priority stakeholders and use their point of view
- don't write requirements that arbitrarily constrain the scope
- separate strategic requirements from basics, and from solutions or designs
- ask 'why' a couple of times (root-cause analysis)

R2: You should quantify requirements. Don't be wary, it's less work than you think once you've tried it.
- every quality requirement can be expressed in numbers (by definition)
- every functional requirement is associated with at least one quality requirement (just ask 'how good?')

R3: You should give your requirements time to evolve; nobody really knows them completely and consistently right now.
- you will find the real requirements through frequent delivery of your product to the stakeholders and the feedback that generates
- don't get trapped by the idea that someone knows all the requirements just because you are specifying an existing system -- the world kept turning in the meantime, yielding 2-10% requirements creep per month.

R4. You should analyse the value of your requirements and then focus on the ones with best ROI.
- the real requirements are the profitable ones
- it is economic to focus on the top ten (max) requirements

The 3 Most Powerful Requirements Principles

Name: The 3 Most Powerful Requirements Principles
Type: Principles
Status: final
Version: 2008-09-08

Gist: A lot has been written about what requirements should look like, about what they are and what they are not, about what you should do to capture them, etc. These 3 principles take you 80% of the way. They are grounded in sound systems engineering.

P1: My requirements are re-useable.
This means
- every requirement is unique. No copying please! I.e. I write requirements so that they have a unique name, and use the name in order to re-use the requirement in a different context, instead of copying the requirement's text to the other context. Dick Karpinsky calls this the DRY rule: Don't Repeat Yourself.
- every requirement (not just the whole document) provides information on version, author, stakeholder, owner, and status
- every requirement refers to something and is referred to by something, e.g. designs, tests, suggestions/solutions, use cases, ...

P2: My requirements are intelligible.
This means
- every requirement has a type, e.g. one of {vision, function requirement, performance requirement, resource requirement, design constraint, condition constraint} (courtesy of T. Gilb's basic requirement types)
- every requirement has a clear structure, i.e. a unique, unchanging number or tag, a scale, and a goal (= core specification attributes)
- complex requirements have a clear hierarchy, not some cloudy abstract
- no requirement in any non-commentary part uses words that are merely fraught with significance, like 'high performance', 'good reliability', 'reduced costs', 'flexible'
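
A minimal sketch of what such a structure can look like in practice. The tag, scale and values below are invented for illustration, and the attribute names only loosely follow Gilb's core specification attributes:

  # Hypothetical example of one uniquely tagged, quantified requirement.
  USABILITY_LEARNING = {
      "tag": "Usability.Learning",   # unique, unchanging name; re-use by reference, not by copying
      "type": "performance requirement",
      "version": "2008-09-08",
      "author": "Rolf",
      "stakeholder": "new call-centre agents",
      "scale": "minutes for a new agent to complete a standard order without help",
      "past": 45,                    # benchmark of the current system
      "goal": 15,                    # required level for the next release
      "source": "call-centre manager, interview 2008-08-20",
  }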

P3: My requirements are relevant.
This means
- every requirement clearly indicates owner, author, stakeholder (in order to be able to judge relevance)
- every requirement clearly indicates its sources (in order to be able to check them)
- every requirement clearly indicates its relative priority (for a specific release, for a specific stakeholder)
- every requirement clearly indicates its constraints and goals
- every requirement clearly indicates what happens if it is not satisfied
- no requirement states a design for some problem; instead, every requirement declares a future state

Suggested Reading:

4 Strategies Towards Good Requirements

Friday, August 01, 2008

5 Steps Towards Simple and Effective Project Estimation

Name: 5 Steps Towards Simple and Effective Project Estimation
Type: Process
Status: final
Version: 2008-07-31
Sources: my practice, Niels Malotaux (see http://www.maloteaux.nl/), www.iit.edu/~it/delphi.html

Gist: Every once in a while we start projects, or are about to start one, and somebody asks "how long will it take?" (The better question would be "how long will it take to have benefit A and B?", but that's another story.) The following simple 5-step process provided sufficient precision while not being too time-consuming. An additional benefit of the process: a group of people did it, most commonly the group that has to work on it anyway.

Entry1: You need to have clear objectives.
Note: If you don't have clear objectives, you will notice in Step 4.

S1: Sit down - alone, or better with one colleague - and break the work down into one or two dozen task chunks. List them.
Notes:
* If it's a well-known problem go find someone who did it before!
* If it's a rather small project you can break it down into single tasks, but you don't have to. Aim for roughly 10-25 list items.
* Best effort. Don't worry too much about forgetting things or about the uncertainty because you don't know every constraint.

S2: Distribute the list to the people who will work on the project or know enough about the work to be done. Ask for their best estimates for each chunk. Allow for adding items to the list.
Notes:
* It's a good idea to limit the time for the estimations, especially if you have external staff that is paid by time and material.
* Don't go for ultimate precision, you won't achieve it anyway. Variances will average out.
* You're much better off with an initial, not so precise estimate and early, constant calibration.

S3: See where estimates differ greatly. Discuss or further define the contents, not the estimates.
Notes:
* Everyone's an expert here. It is not about right or wrong estimates, but about understanding the work to be done.
* In the original Delphi process there's an element of anonymity. I can't follow that advice, for the purpose of this process also is to build consensus.

S4: Iterate S2-3 until sufficient consensus is reached.
Notes:
* Niels says no more than one or two iterations are necessary.
* I say, it sometimes needs a dictator approach.
* Careful here, make sure to discuss the content of the work. Also helpful to further specify the objectives!

S5: Add up all the task chunks' estimates: that's how much work the project is.
Note: The single most useful approach to fitting the work in a tight schedule is to work hard to NOT do things that later will be superfluous.
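
A minimal sketch of S2-S5 in code, assuming each estimator returned a number of person-days per task chunk (the chunk names and numbers are invented): the total is simply the sum of the per-chunk averages, and a large spread within a chunk tells you where S3's discussion of content is needed.

  # Hypothetical per-chunk estimates in person-days, one value per estimator.
  estimates = {
      "import legacy data": [5, 8, 20],   # large spread -> discuss the content in S3
      "build search mask":  [3, 4, 3],
      "write user manual":  [6, 5, 7],
  }

  def summarize(estimates):
      for chunk, values in estimates.items():
          spread = max(values) - min(values)
          avg = sum(values) / len(values)
          print(f"{chunk:20} avg {avg:5.1f}  spread {spread:3d}")
      total = sum(sum(v) / len(v) for v in estimates.values())
      print(f"total (S5): about {total:.0f} person-days")

  summarize(estimates)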

Related Posts:
Earned Value Analysis for Feature-Burndown in Iterative Projects
Measuring and Estimating one's Velocity
Calculate Process Efficiency
Quantum Mechanics, Buddhism and Projects

Tuesday, July 29, 2008

Why and How to State the Credibility of Estimations

Name: Why and How to State the Credibility of Estimations
Type: Principles
Status: final
Version: 2008-09-08
Sources: T. Gilb's work on Impact Estimation Tables, see http://www.gilb.com/; my own experience

Gist: Have you seen presentations proposing a project or other endeavour, with the purpose of convincing someone that the project would be the right thing to do? More often than not these presentations include shiny numbers of relatively low cost and relatively high benefit (of course). But how often does the presenter tell you anything about the credibility of those numbers?
I believe every decision maker needs some assessment of credibility. If you don't tell him how believable your evidence is, the decision maker might turn to other heuristics, like "what's this guy's success record?", or "I don't believe her, so I will resort to a very tight budget in order not to risk too much", etc.

It's the same problem with any estimate you prepare and present. These Principles explain why you should always give credibility information, and the Rules explain how you can do it.

" An estimate that is 100% certain is a commitment." -- Unknown

Credibility information DEFINED AS any evidence or number that expresses how certain we believe a statement is

Principles
P1: Estimates are assumptions about the future, therefore they must be uncertain.
"Unfortunately, tomorrow never gets here." - Leo Babauta

P2: Predictions are a difficult matter. The farther away the state we predict, the more difficult, the riskier.

P3: Credibility information is a measure of how certain we are about a predicted state.

P4: Decision makers need some assessment of the risk involved in the decision. What can happen in the worst case? (It is equally valid to ask for the best case, but seriously, the best case happens far too seldom to take it into account)

"Most people really like to be able to sleep at night." -- Heuser's Rule

P5: The clearer you state uncertainty,
* the easier it is to distinguish between different strategies
* the more likely other people believe you
* the clearer you yourself see the risk involved
* the better you can argue about a strategy
* the easier it is to learn something about the risk involved and to do something about it
* the clearer you give or take responsibility for an endeavour
Note: While the decision maker obviously accepts a risk involved by seeing uncertainty numbers, there are decision makers who don't like the idea at all. I guess this is a reason why credibility information isn't requested very often.

P6: If credibility information is missing, we by default assume the number or the source is not credible at all.
Note: Maybe the presenter just forgot to provide the information, and then it shouldn't be a problem. We can send him to get it.

Rules
R1: Any number that expresses some future state should be accompanied by credibility information.

R2: If you predict a value on a scale, like "end date", "targeted budget", "performance improvement", or "maintenance cost", give a range of that number.
Notes:
* It's useful to do this in a plus/minus X% fashion. The percentage is a clear and comparable signal to the audience.
* It is not mandatory to give the best/worst case estimates an equal range. How about "we are sure to complete this in 10 weeks, minus 1 plus 2"?

* In Rapid Development, Steve McConnell actually suggests that you communicate estimates in a range that gets smaller over time, with the larger number first. "Six to four months" sounds strange -- but if you say the smaller number first, people tend to forget the larger one.

R3: If you provide evidence, say where you got it from, or who said so, and what facts (numbers, history, written documentation) lead to this conclusion.
Note: Do this in the backup slides, for example. At the very least you should be able to pull it out of the drawer if requested.

R4: Consider using a safety margin, like factor 2 (bridge builders) or factor 4 (spacecraft engineers).
Notes:
* The margin is an expression of how much risk you are willing to take. Thus, it is a way of controlling risk.
* Use margins whenever you find a number or source is not very credible (like < 0.4), you don't have any historic data, or if there's no economic way of achieving higher credibility.
* Safety margins do not necessarily increase real costs (only planned costs).

R5: Always adjust your cost/benefit-ratios by a credibility factor.
Notes:
A simple scale would be {0.0 - guess, no real facts available; 0.5 - some facts or past experience available; 1.0 - scientific proof available}
A more sophisticated scale would be
{0.0 Wild guess, no credibility
0.1 We know it has been done somewhere (outside the company)
0.2 We have one measurement somewhere (from outside the company)
0.3 There are several measurements in the estimated range (outside the company)
0.4 The measurements are relevant to our case because <fact1, fact2> with credibility <x, y> (don't get trapped in recursion here)
0.5 The method of measurement is considered reliable by <whom>
0.6 We have used the method in-house
0.7 We have reliable measurements in-house
0.8 Reliable in-house measurements correlate to independent external measurements
0.9 We have used the idea on this project and measured it
1.0 Perfect credibility, we have rock solid, contract-guaranteed, long-term, credible experience with this idea on this project and, the results are unlikely to disappear}
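
As a small illustration of R2 and R5 together (my own sketch, with invented numbers): a claimed benefit/cost ratio looks much less shiny once it is multiplied by a credibility factor from the scale above, and the schedule estimate itself is presented as a range with the larger number first.

  def adjusted_ratio(benefit, cost, credibility):
      """R5: scale the claimed benefit/cost ratio by how credible the evidence is (0.0 - 1.0)."""
      return (benefit / cost) * credibility

  # Hypothetical proposal: benefit 400k, cost 100k, but only one external measurement (0.2).
  print(adjusted_ratio(400, 100, credibility=0.2))   # 0.8 -- not so shiny any more

  # R2: present the schedule estimate as a range, larger number first (see the McConnell note).
  best, worst = 4, 6
  print(f"Expected duration: {worst} to {best} months")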

Thursday, July 24, 2008

10 Critical Requirements Principles for 0-Defects-Systems

Name: 10 Critical Requirements Principles for 0-Defects-Systems
Type: Principles
Status: final
Version: 2008-07-24

Gist: It might sound esoteric to you, but it is actually possible to build non-trivial systems with (close to) zero defects. At the very least, we should try to get close to zero defects as early in the development cycle as possible, because of the relative defect removal costs (factors of 1-10-100-1000, remember?).
I had the luck to be part of such a system development: a multi-user database system with about six releases over a ten-year period and the strictest requirements on data integrity and data availability. We detected 5 (five) defects in total during user acceptance testing, and the system still runs without interruptions and obviously performs perfectly according to the requirements. These are the top ten rules we (un)consciously followed, as we found out in a Lessons Learned earlier this month. I added principles P7-P10 because of recent and not-so-recent experience in other projects. However, following these latter principles (while ignoring principles P1-P6) did not lead to zero-defect systems. I guess all these principles represent necessary, not sufficient, conditions. After all, this is anecdotal evidence.

P1: Customer and software developer who like each other.
P2: Small and easy scope of your requirements, increasing in small steps.
P3: High-value requirements are the initial focus.
High-value DEFINED AS anything that is more important to a primary stakeholder than other things (relative definition intentional).
P4: Acceptance criteria are paramount.
P5: Templates rule.
P6: Quality control for your requirements (inspections).
Requirements inspection DEFINED AS a method of process control through sampling measurement of specification quality.
P7: Top management's requirements (aka objectives).
Top management DEFINED AS every manager who is higher up in the food chain than the project manager and has impact on project success or failure.
P8: Problems first (then the requirements).
P9: Performance requirements first.
Performance requirement DEFINED AS a specification of the stakeholder requirements for 'how well' a system should perform.
P10: Replacing requirement-based estimation with planning.
I. e. use some capacity throughput metric and promote the concept of variation and fluctuation.

Friday, July 04, 2008

7 Useful Viewpoints for Task Planning Oriented Towards Quick Results

Name: 7 Useful Viewpoints for Task Planning Oriented Towards Quick Results
Type: Rules
Status: final
Version: 2008-07-04

Gist: While planning tasks or activities, and at the same time focusing on quick results, it is a good idea to describe each task by means of different viewpoints. Here's my suggestion, derived from Tom Gilb's work, the BABOK and my own experience.

R1: Each task should be described in terms of
Purpose. Why are you doing this? This is the single most important category, because it may lead you to the understanding that this is not the right task.
Entry. What do you need in order to begin and complete the task? This prevents you from doing things without the proper basis, which would most likely cause unnecessary re-work.
Process. Which steps do you follow? This is a little plan for your task. It may be only one step, but most of the times there are more than one, even for small tasks. Splitting tasks into steps allows you to return to previous steps in case you learn something new.
Exit. What are the results of the task? What will be the benefit? This is what counts, and what can give you the beautiful chance of realizing benefits 'along the way' of some greater set of tasks.
Stakeholders. With whom will you have to speak? Whose requirements will you have to take into account? This prevents re-work caused by ignoring influential people.
Methods/Techniques. How can you do it? Note anything that may help you completing the task.
Time-Limit. How long do you plan to work on it? If you need substantially longer, you will have a clear indicator that something is wrong. Short deadlines help you focus (80/20 principle).
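
A tiny, hypothetical example of how the seven viewpoints can be captured per task (the field names mirror the list above; all content is invented):

  # Hypothetical task description using the seven viewpoints from R1.
  task = {
      "purpose":      "Find out why order processing is slow before proposing any fix",
      "entry":        "Access to production logs; last month's complaint tickets",
      "process":      ["sample 20 slow orders", "trace each through the system", "rank causes"],
      "exit":         "Ranked list of the top 3 causes, shared with the team",
      "stakeholders": ["operations lead", "customer support"],
      "methods":      ["log analysis", "5-whys"],
      "time_limit":   "2 working days",   # exceeding this is the clear warning signal
  }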

Thursday, July 03, 2008

7 Rules for High Effectiveness

Name: 7 Rules for High Effectiveness
Type: Rules
Status: final
Version: 2008-07-03

Gist: Effective problem solvers develop a mindset of effectiveness. Here are the 7 Habits of Highly Effective People, from Stephen R. Covey, Simon & Schuster, Inc., 1989.

R1: Be Proactive - take the initiative
R2: Visualize the end from the start - know where you're going; it is better to do this quantitatively
Note: A good technique for doing R2 and R3 is the Kepner-Tregoe method, which is quite well described here. Better still is using Impact Estimation Tables (because of their much better handling of quantitative information).
R3: List priorities; it is better, still, to do this quantitatively
R4: Think WIN/WIN
R5: Understand - listen, listen, listen / learn, learn, learn
R6 Synergize - make the whole more than the sum of the parts
R7: Seek Personal Renewal in various areas:
- Physical: exercise, nutrition, stress management
- Mental: reading, thinking
- Spiritual: value clarification, meditation
- Social/Emotional: empathy, self-esteem

Monday, June 02, 2008

Link Fest on Non-Functional Requirements

Name: Link Fest on Non-Functional Requirements
Type: References
Status: final
Version: 2008-06-02

Gist: As there are no posts from June 2007 that I could remind you of, I decided to direct you to material which proved useful to me for propagating non-functional requirements. It is nearly all from the Gilb family's website; that's solely because there are few people with in-depth knowledge on the topic.


To start with, here's a highly recommended source of structural information on requirements in general, processes, work units etc.: the Open Process Framework by Don Firesmith. Alongside a useful taxonomy of non-functional requirements there are a lot of examples which can be used as ideas for your own requirements writing.
Unfortunately I haven't figured out how to link to specific pages within the Framework Repository itself, so I'll give you directions: in the left column, under 'OPFRO Repository', expand 'Work Products', then 'by type', and click 'Requirements'.
On the page shown, scroll down a bit and click 'Quality Requirement'.

And here's the Gilb stuff:


And on this page:
  • Making Metrics Practical
  • Designing Maintainability in Software Engineering
  • How to Quantify Quality

Here are some slides etc.:
  • Quantifying Security
  • Designing Maintainability in Software Engineering
  • Making Metrics Practical: 20 Principles
  • Quantifying Quality
Links to social knowledge-gathering sites:

Friday, May 30, 2008

6 Ways to Propagate Non-Functional Requirements

Name: 6 Ways to Propagate Non-Functional Requirements
Type: Rules
Status: final
Version: 2008-05-30

Gist: Most BAs and a lot of other project people would immediately agree that "nonfunctional requirements (NFRs) are important in almost any software development." However, most often these people would also agree with sentences like "but it is hard to do", "we have a special situation here, we can't do it", "I haven't managed to convince the others", or the exceptionally honest "I don't really know where to start", or "Why should we invest in NFRs?". I want to show how to convince others to REALLY do nonfunctional requirements, and how to REALLY start with it yourself, if that is your concern.

R1: Don't do it alone. At least get some like-minded chap to talk to regularly. Who are the opinion leaders? Who are the early / quick adopters in your area? Does it help to get somebody else on board? How about somebody from higher up the food chain? From my own experience I'd say it's likely that you know what goes wrong and what to do about it, but that someone else is better at marketing ideas.
Note: I saw people hype an unimpressive topic so much that everybody started talking about it. (I'm not talking about 'offshoring' or 'SOA' here, but smaller, company-internal things.) However, there is a certain risk you then cannot control it anymore :-)

R2: Use all available official means, e.g. corporate idea management, any SPI programme, your regular talk with your superior, the chair you hold on any related board. Which department is responsible?
Note: You don't want to fight department borders here, especially in large organizations. Maybe it's better to get a job at this other department.

R3: Get outside help. "A Prophet Hath No Honour in His Own Country". Maybe some external expert can spark the fire within your organization. These people specialize in quick conviction! Get them in and find a way to make more senior management listen to them.
Note: I could recommend the Gilb family, but that would be surreptitious advertising. However, Tom Gilb's book 'Competitive Engineering' is full of examples, his website full of papers that could help you out.

R4: Do baby steps (OK, that one shows up in nearly all how-to guides nowadays). Can you "just do a pilot" on some project? Or start with a few requirements you really feel comfortable about. Advance with the ones most commonly used, like customer satisfaction, availability, maintainability.
Note: While there are good maintainability metrics to apply on systems, it seems nobody can show a measurable relationship between these metrics and, say, the future cost of maintenance. Maybe that is because the metrics focus on the code too much (analyzability), and maintenance has to be seen as a more holistic process.

R5: Let (bad) examples speak for you. Where are the bad examples? I'm sure you will find some quickly. Can you show missing or bad NFRs being a root cause for some problem? Does anyone else think so (see R1)? Really get into a few examples so you know them inside-out.

Note: I suggest you use examples that show that it does not help the user a single bit if these requirements are met. Example: some detailed requirement saying it should take x seconds from the button click until something is persistently saved. I bet the user is not really interested in saving in 8 out of 10 cases. He cares about the time he needs to accomplish some task greater than saving, or about some way to reuse work.

R6: Talk about it. Once you have started your endeavor to push NFRs, maybe with a pilot, make a public commitment towards your goal and keep updating an audience about your progress. Talk about your proceedings at the coffee machine, in the cafeteria or the casino, in order to keep yourself focused and the audience listening.

Monday, May 19, 2008

How to Personally Get the Most Out of Finished Projects

Name: How to Personally Get the Most Out of Finished Projects
Type: Process
Status: final
Version: 2008-05-19

Gist: Dustin Wax wrote an awesome article at the Lifehack web log. Topic: Getting Past Done: What to Do After You've Finished a Big Project
As it contains significantly more stuff than fluff, I will essentially post a link only, for once. However, you might like the following definition.

reflective thinking DEFINED AS active, persistent, and careful consideration of any belief or supposed form of knowledge in the light of the grounds that support it and the further conclusions to which it tends [that] includes a conscious and voluntary effort to establish belief upon a firm basis of evidence and rationality <- Dewey, J. 1933. How We Think: A Restatement of the Relation of Reflective Thinking to the Educative Process. Lexington, MA: Heath.

S1: Read: Getting Past Done: What to Do After You've Finished a Big Project <http://www.lifehack.org/articles/productivity/getting-past-done-what-to-do-after-youve-finished-a-big-project.html> and extract the questions.

S2: Answer the questions in a quiet moment. You could also use them for a guideline in a lessons learned meeting.
Note: Reflection is a method of learning with scientific proof of its effectiveness.

S3: Prepare a checklist for future use on finished projects (or link here).

Friday, May 16, 2008

8 Useful Tips to Pick an Analyst

Name: 8 Useful Tips to Pick an Analyst
Type: Rules
Status: final
Version: 2008-05-16

Gist: Every now and then I run into the task of picking a business analyst or systems analyst for a project or customer. Sometimes this starts with writing a job offer, sometimes with a colleague asking if I could take part in a job interview with some prospect. What should you look for?

R1: Clarify what tasks the candidate will have. OK, this is a no-brainer.

R2: Clarify what the challenge in this particular project or this particular job will be.

R3: Clarify what (analyst-specific) professional skills will be needed, now, and in the future, in this particular job.

R4: Clarify what soft skills will be needed, now, and in the future, in this particular job.

R5: Make sure the candidate knows these things (and has a sound approach to dealing with them):

  • Stakeholders seem to change their minds on a regular basis. I think they don't; it just looks like a change.
  • Goals are seldom properly understood.
  • Demand and Supply speak two different sociolects.
  • Stakeholders (among other people...) have trouble communicating about future, i.e. the benefits, or the system-to-be.
  • Other team members, including fellow analysts, architects, and QA-folk, deserve proper appreciation.
  • A problem is a solution to a problem is a solution to... (also see the means-and-ends post)
  • Many people (the prospect, too? Would be a bad sign for an analyst.) have a well developed answer reflex, so finding out what the problem really is can be quite a challenge.
  • Any required product quality can be described in a meaningful, quantified way. (Refer to Tom Gilb's extensive and highly useful work on the subject.)

R6: Clarify what testing skills will be needed. An analyst who does not know how to design acceptance criteria?

R7: Look for someone who uses words like 'now and then', 'generally', 'frequently', 'some', 'somewhat', 'arguable', 'among other', 'on the other hand', 'also', 'able', 'allow'.

Note: The language gives you hints on the prospect's experience level. Experts tend to have many answers.

R8: Avoid people who use words like 'steady', 'always', 'every', 'all', 'absolute', 'totally', 'without doubt', 'nothing', 'only', 'entirely', 'without exception', 'must', 'have to'.

Tuesday, May 13, 2008

Update on Being Agile on Offshore Projects

I updated

Being Agile on Offshore Projects,

with some information advocating NOT doing offshoring. Research shows that team productivity can be doubled if the team is in one room. While this is an argument that is often heard, the point is that there's proof.

Why Managing Too Many Risks is Too Risky

I wrote a guest post over at the Raven's Brain weblog. The topic is 'Heaps of Risks - Why Managing Too Many Risks is Too Risky'.

Risk management is a source of unusual human behavior: euphoria or excessive gambling when risk is underestimated, and panic attacks or depression when we predict that things are riskier than they really are. Both are risks by themselves.
In the post I address a risk that is itself closely related to human behavior: overly extensive risk registers, i.e. lists of several dozens of risks for a given project.

Read more

If it feels too troublesome to leave a comment there, why not leave it here?

Monday, May 05, 2008

A year ago in May on Clear Conceptual Thinking

Use cases

Stuff

Calculate / Measure

Citations

9 Rules for Requirements as Means of Communication

Name: 9 Rules for Requirements as Means of Communication
Type: Rules
Status: final
Version: 2008-05-05

Gist: What are the requirements for? From time to time I run into a project where the business analyst in charge (or requirements manager, or whatever they call him) does not know the answer. These projects do requirements because some governing company rule says they should. To my mind, requirements are all about transferring ideas from one person to another, or from one group of people to another group. Here's what a BA should think about concerning communication.

R1) Always identify the readers. Make sure you sort out primary and secondary readership. Go for the group that benefits from the requirements NOW, if in doubt.

R2) Don't forget to ask your primary readers how they want to read requirements. Spoken or written prose, pictures, tables? Do they know UML? What's the common language of all your readers? Go check the SOPHIST's material on language and requirements if 'plain natural language' is the answer to the latter question.

R3) Always check if you can reduce the need to write things down by including people in requirement workshops with your stakeholders.

R4) Challenge arguments like 'we need the requirements after the project's finished, for maintenance'. Will there be an organization maintaining the system? Will they use your requirements?
Note: I know this rule hurts. However, I haven't seen a single specification that was still useful years (or even months) after the project's end, unless the spec was maintained as well.

R5) Don't overestimate your memory. If you will be on a project for several months you yourself will need some aid for remembering that darn detail.

R6) Adjust your requirements style to what you're specifying. Is your system user interface-heavy? Database-strong? Functions? Constraints? Highly interactive? Real-time? Product family?

R7) Think of it: can the team (of designers, programmers, testers, users) be allowed to ask for the information they need, in contrast to you providing them with 'one-size-fits-all' requirements?

R8) Make sure everybody understands that perfect communication is not possible. The best thing you can do is speak with everybody a lot. The worst thing you can do is write a specification from beginning to end and throw that piece of paper over the fence (also see BUFR).

R9) You don't need requirements at all? You cannot not communicate! (Watzlawick)

Friday, April 25, 2008

A Personal Matter

If you like this blog or me, or just want to have good karma, please help:

Click on my charity widget on the right or visit my Firstgiving website (see photo of me!) and support my non-profit donation project. I'm raising funds for the school education of a child in the third world. Room to Read is a super-efficient organization that provides under-privileged children with opportunities to gain the lifelong gift of education, in order to break the cycle of poverty and provide the means for self-determination.

12 Helpful Rules for Refactoring a System

Name: 12 Helpful Rules for Refactoring a System
Type: Rules
Status: final
Version: 2008-04-25

Gist: to provoke second thoughts if you are about to take some part in a refactoring. Refactorings seem to come about more frequently these days, maybe because even the most stolid of the business people have started to talk about 'agile'. Like all these now-buzzwords, there's a lot of potential misunderstanding hiding in the very word.

Refactoring DEFINED AS modifying something without changing its externally observable behaviour with the purpose of making maintenance easier. Note that you can refactor many things, ranging from systems to architectures to design to code. However, relevant literature suggests that Refactoring is meant to relate to Design and Implementation. See Martin Fowler's Website. Also see Code Refactoring.

R1) Whenever you hear the word refactoring, run! Just joking...

R2) Precisely understand what the various stakeholders think a refactoring is. You'll be surprised. Correct the wording if needed.

R3) Precisely understand what purpose the refactoring has, from each of the stakeholder's point of view. Align expectations before you start.

R4) Assume a great deal of resistance from the people who pay. Refactoring for them is another word for "Give me $$$, I'll give you what you already have!" Find out who will pay for maintenance.

R5) By all means reduce the risk of having a system with new bugs after the refactoring. This can be done by rigorous test-driven or even behaviour-driven development, for example. Any change in behaviour should be treated as a bug (a small characterization-test sketch follows after R12).

R6) Challenge the business case (if any ...) of the refactoring. At least, do proper cost-benefit analysis. For example, the system may not live long enough to amortize refactoring.

R7) Prepare for the situation that nobody can provide exact and dependable information on how future budgets will be affected by the refactoring.

R8) Back off from refactoring if significant portions of the code will be both refactored and worked over due to new requirements. Do one thing at a time and bear in mind R5.

R9) Prepare for 3 different points of view if you ask 2 people what specific refactorings should be done.

R10) Think of short-term and long-term effects.

R11) The improvement must be both quantified beforehand and measured afterwards, at least for non-local changes to the system.


R12) Be suspicious if someone suggests only one or two specific refactoring techniques will suffice to reach a project-level goal.
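
Picking up R5: one cheap way to treat any change in behaviour as a bug is a characterization test, written before the refactoring starts and kept green throughout. A hedged sketch, assuming a hypothetical legacy function compute_rebate that is about to be restructured:

  import unittest

  # Hypothetical legacy function that is about to be refactored.
  def compute_rebate(order_value, is_returning_customer):
      if is_returning_customer and order_value > 1000:
          return order_value * 0.05
      return 0.0

  class CharacterizationTest(unittest.TestCase):
      """Pins down today's externally observable behaviour; any change after the refactoring is a bug (R5)."""

      def test_large_order_returning_customer(self):
          self.assertEqual(compute_rebate(2000, True), 100.0)

      def test_small_order_gets_no_rebate(self):
          self.assertEqual(compute_rebate(500, True), 0.0)

      def test_new_customer_gets_no_rebate(self):
          self.assertEqual(compute_rebate(2000, False), 0.0)

  if __name__ == "__main__":
      unittest.main()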

Monday, April 21, 2008

5 Powerful Principles to Challenge Arguments

Name: 5 Powerful Principles to Challenge Arguments
Type: Principles
Status: final
Version: 2008-04-21

Gist: to provide a general checklist for the suspicious. The 5 principles show behavioural patterns people use to make you do things in their favour, but not necessarily in yours. Use this on (in alphabetical order) advisors, colleagues, consultants, counsels, managers, salespersons, vendors.

Sources: Scott Berkun: How to Detect Bullshit; and my all-time-favourite checklist: 12 Tough Questions by Tom Gilb.

P1) People are uncertain and tend to ignorantly stretch the facts.

  • How do you know?
  • What are your sources? How can I check them?
  • Who told you that?
  • Can you quantify the improvement?

Note: Carefully watch the answerer. If he needs a while, maybe uncomfortably shifting position, there's a good chance he's either making something up or needs time to figure out how to disguise a weak argument.

P2) People with weak arguments often have not done their homework on the topic.

  • What is the counter argument?
  • Who else shares this point of view?
  • What are the risks of this, and what will you do about it?
  • Can you quantify the improvement?
  • How does your idea affect my goals and my budgets?
  • What would make you change your mind?
  • Have we got a complete solution?

Note: As with any facts, one can draw a set of reasonable interpretations, not just one. Anyone with intimate knowledge won't have great difficulties taking a different point of view for a while.

P3) People tend towards urgency when you are asked to make a decision with some hidden consequence.

  • Can I sleep on this?
  • When do we need to have a decision made? Why?
  • I'd like to consult Person A first.
  • Expert B, what do you think?

Note: People pressing ahead may try to throw you off your guard.

P4) People without a clear understanding of their point of view tend to inflate the language used.

  • Please break this into smaller pieces, so I can understand.
  • Explain this in simpler terms, please.
  • I refuse to accept this until I, or someone I trust, fully understand it.
  • Are you trying to say <...>? Then speak more plainly next time, please.

Note: Mark Twain once wrote in a letter to a friend: 'I'm sorry this letter is so long; I didn't have enough time to make it shorter.'

P5) People tend to have stronger arguments if they know someone is present who is hard to deceive.

  • Use your network.
  • Invite colleagues who have worked with these people.

Note: Simply help each other, like your family would do.

Friday, April 18, 2008

6 of the Dalai Lama's Leadership Principles

Name: 6 of the Dalai Lama's Leadership Principles
Type: Principles
Status: final
Version: 2008-04-18

Gist: to reflect upon the behavior of the 14th Dalai Lama when appearing publicly. It seems to me the following principles can also be applied in anyone's work in the position of some kind of leader. It may be a project leader, a company's boss, the head of a department, or 'just' a thought leader within a group of people.

Sources: inspired by Cheri Baker at the Enlightened Manager weblog; the 'Seeds of Compassion' webcast from an event in Seattle, USA; some of the books the 14th Dalai Lama wrote; what people told me who had seen the Dalai Lama live.

P1: Don't direct, facilitate instead.
Note: This way, you find delight in the success and happiness of others.

P2: Make sure you want to learn something, so inquire rather than advocate. Listen, explore.
Note: In doing so, you'll show appreciation.

P3: Don't assume you know everything on the topic you're discussing. If someone brings something up that you think is wrong, show equanimity.
Note: There's the word 'equal' in equanimity.

P4: Be (and stay) humorous. Don't get angry, show compassion instead. Compassion means communication from your heart.
Note: This will facilitate creativity among the people you talk to.

P5: Encourage people to find their own answers, instead of giving all the answers you might have.
Note: also a good way of supporting P2, Learning.

P6: Be accountable.
Note: That means, you should pay attention to your actions and understand their consequences.

Monday, April 14, 2008

The Ominous Bill of Rights

Name: The Ominous Bill of Rights
Type: Principles
Status: final
Version: 2008-04-14

Gist: To present the Bill of Rights for a project development team (or contractor; originated by Tom Gilb), and to contrast it with a Bill from the customer / client point of view. I intend to balance Tom's view. PLEASE NOTE that these principles do not represent any official terms or principles used by my employer. This is, like all the other posts, a completely private matter.
Find a German version below.

Sources: The Contractor's Bill is from Gilb, Tom. 1988. Principles of Software Engineering Management. Wokingham and Reading, MA: Addison-Wesley. Page 23.
The Customer's Bill is out of my head.
Note: Tom did not call it the 'Contractor's Bill of Rights', but only the 'Bill of Rights'. It's clear what he had in mind if you read the specific principles.

Contractor's Bill of Rights:
P1: You have a right to know precisely what is expected of you.
P2: You have a right to clarify things with colleagues, anywhere in the organization.
P3: You have a right to initiate clearer definitions of objectives and strategies.
P4: You have a right to get objectives presented in measurable, quantified formats.
P5: You have a right to change your objectives and strategies, for better performance.
P6: You have a right to try out new ideas for improving communication.
P7: You have a right to fail when trying, but must kill your failures quickly.
P8: You have a right to challenge constructively higher-level objectives and strategies.
P9: You have a right to be judged objectively on your performance against measurable objectives.
P10 You have a right to offer constructive help to colleagues to improve communication.

Customer's Bill of Rights:
P11: We have a right to set the objectives for the endeavour.
P12: We have a right to change our minds about objectives at any time.
P13: We have a right to set the criteria by which we will measure your success.
P14: We have a right to be informed about exactly what you have accomplished at any time.
P15: We have a right to hear your arguments if you plan to change a decision that affects our objectives.
P16: We have a right to be informed of the consequences of a planned change, in measurable, quantified formats.
P17: We have a right to constructively debate your planned change.
P18 We have a right to get clear documentation about any changed decision.

German Versions:
Rechte des Auftragnehmers:
Sie haben das Recht, genau zu wissen, was wir von Ihnen erwarten.
Sie haben das Recht, Dinge mit Kollegen aus der gesamten Projektorganisation zu klären.
Sie haben das Recht, klarere Definitionen von Zielen und Strategien anzustoßen.
Sie haben das Recht, Ziele in messbarer, quantifizierter Form präsentiert zu bekommen.
Sie haben das Recht, unsere Ziele und Strategien konstruktiv zu hinterfragen, um insgesamt ein besseres Ergebnis zu erzielen.
Sie haben das Recht, objektiv anhand messbarerer Ziele bewertet zu werden.
Sie haben das Recht, zu entscheiden über die Maßnahmen zur Erreichung der Ziele und ihre Reihenfolge.

Rechte des Auftraggebers:
Wir haben das Recht, die Ziele für das Vorhaben vorzugeben.
Wir haben das Recht, diese Ziele jederzeit zu ändern.
Wir haben das Recht, die Kriterien vorzugeben, anhand derer wir Erfolg messen.
Wir haben das Recht, jederzeit darüber informiert zu werden, was Sie bereits erledigt haben.
Wir haben das Recht, Ihre Argumente zu hören, falls Sie vorhaben Entscheidungen zu treffen, die sich auf die Zielerreichung auswirken.
Wir haben das Recht, über die Auswirkungen informiert zu werden, und zwar in objektiv überprüfbarer Art und Weise.
Wir haben das Recht, Ihre Entscheidung konstruktiv zu hinterfragen.
Wir haben das Recht, getroffene Entscheidungen klar dokumentiert zu bekommen.

Sunday, April 13, 2008

Quantum Mechanics, Buddhism, and Projects

I'm pleased to announce a guest post I wrote over at the pm411.org Project Management Podcast / Weblog. Ron Holohan was broad-minded enough to let me share my thoughts on the holy project trinity (Time, Budget, Scope). In essence, it's about Quantum Mechanics, Buddhism, and Projects

Get over there if you'd like to read about my more fluffy thoughts. Thank you!

Friday, April 11, 2008

11 Rules for Prioritizing Features in a Customer-Oriented Organization

Name: 11 Rules for Prioritizing Features in a Customer-Oriented Organization
Type: Rules
Status: final
Version: 2008-04-11

Gist: While prioritizing features, people quite often take a ranking approach in order to balance different stakeholders' perspectives. This may actually prevent satisfied stakeholders (or a short time to market), and may lead to a mediocre release plan. Why? The math used in most ranking approaches is wrong. Here are 11 rules for proper prioritization.

Source: http://tynerblain.com/blog/2008/04/09/improved-prioritization, thanks to Scott Sehlhorst for yet another great idea. Go there if you need a more visual explanation. I added my bits and pieces.

Prerequisites:
E1: You have a list of features gathered from various stakeholders.

E2: Your stakeholders are of different importance: maybe one represents the customer of your product, another is development/manufacturing, yet another is some governing technical department. Read about Finding Key Stakeholders.

E3: Each stakeholder has ranked all features, with smaller numbers for the more important ones.

Rules:
R0: Make sure you have a proper understanding of the goals (read Specifying Goals and Decomposing Goals) of the project or product. Make sure you understand your set of stakeholders in the light of the goals.

R1: Ignoring E2 above, the simplest approach would be to add up the ranks for each feature and - voila - the features with the lowest sums (the best average ranks) win.

R2: Not ignoring E2, don't make the mistake of giving each stakeholder a weight and multiplying the respective ranks by that weight before you sum the rank numbers. Weights follow a 'bigger means more important' scale, while the ranks from E3 follow a 'smaller means more important' scale; mixing the two produces meaningless sums. It only works if you have lucky numbers.

R3: You can do what R2 suggests, but then you have to reverse the ranking numbers from E3, giving large numbers to the more important features (see the sketch after these rules).

R4: Think hard about what the stakeholders' weights represent. If one stakeholder is more important than the others, it's paramount to make him happy, or at least not to upset him. Do this by delivering his important features first, and only after that think about features for the other stakeholders.
Note: Take the example from E2. I bet the purpose of your business is to make the customer happy.

R5: Make sure you do a Kano analysis in order to find features nobody thought of in the first place. This may be your advantage over competitors.

R6: If confronted with a large (> 20 items) list of features, let the stakeholders break it down into importance classes first. It's virtually impossible for the average stakeholder to truly rank so many items.

R7: Don't give in if your stakeholders say they need to know the costs before they can rank the features. You want to know what they want to have! You can always work things out to be cheaper, especially if your features are real (business) requirements, not solutions disguised as requirements. Bear in mind: "If you don't know the value of a feature, it does not make sense to ask what it costs." (Tom DeMarco)

R8: If your features are Use Cases, go here.

R9: If all of the above seems too simple, turn to Karl Wiegers (First Things First), Alan Davis (The Art of Requirements Triage), Don Firesmith (Prioritizing Requirements), Lena Karlsson (An Experiment on Exhaustive Pair-Wise Comparisons versus Planning Game Partitioning), or QFD.

R10: Read Mike Cohn (Agile Estimating and Planning).
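
To make R2 and R3 concrete, here is the minimal Python sketch referenced in R3: reverse the ranks so that larger means more important, then apply the stakeholder weights. The features, stakeholders, weights, and rankings below are invented purely for illustration.

    # A sketch of R2/R3: reverse the ranks, then apply stakeholder weights
    # and sum per feature. All names, weights, and ranks are invented.

    features = ["login", "reporting", "export"]

    # Ranks as in E3: smaller number = more important.
    ranks = {
        "customer":    {"login": 1, "reporting": 2, "export": 3},
        "development": {"login": 3, "reporting": 1, "export": 2},
    }

    weights = {"customer": 3, "development": 1}  # relative stakeholder importance

    def weighted_scores(ranks, weights, features):
        n = len(features)
        scores = {f: 0.0 for f in features}
        for stakeholder, ranking in ranks.items():
            for f in features:
                reversed_rank = n + 1 - ranking[f]   # rank 1 becomes n, rank n becomes 1
                scores[f] += weights[stakeholder] * reversed_rank
        return scores

    scores = weighted_scores(ranks, weights, features)
    for f in sorted(scores, key=scores.get, reverse=True):
        print(f, scores[f])

With these made-up numbers the customer's top-ranked feature comes out first, which is exactly what R4 asks for.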

Thursday, April 10, 2008

Calculate Process Efficiency

Name: Calculate Process Efficiency
Type: Process
Status: final
Version: 2008-04-10

Gist: to find out where in a process you can improve efficiency, with respect to the value added for the customer
Sources: http://www.shmula.com/458/the-hidden-factory-would-the-customer-pay-for-that

Process DEFINED AS a systematic chain of activities with a customer-observable outcome.
Note: this may be a use case, a scenario used to plan the user experience for a product, a company's process chart ...

Process efficiency DEFINED AS the time spent on value-adding activities in a process divided by the time spent on all activities, measured in seconds, minutes, hours, days ...
Note: You could also measure it in money, or people, or other resources.

S1: Plot the process, maybe using a UML activity diagram

S2: For each activity in the diagram, decide whether it is
- adding value to the customer,
- not adding value to the customer, but absolutely necessary, or
- not adding value to the customer

S3: Measure with a stopwatch how long each activity takes.

S4: Sum the measured times for each category of S2

S5: Divide the sum for the 'adding value' category by the total of all sums from S4; that's your Process Efficiency (see the sketch at the end of this post).

Note: Most likely you will come up with a number smaller than 1, because of the 'not adding value' activities that are nevertheless necessary.
Note: the 'not adding value' category is cost you impose on your customer without adding any value. (uh!)
Note: Obviously, from this point on, work to eliminate the not-adding-value activities, or at the very least reduce the time needed for them.
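
To make the arithmetic of S2-S5 concrete, here is the minimal Python sketch referred to in S5. The activities, durations, and categorisations are invented for illustration.

    # A sketch of S2-S5: classify each activity, sum the times per category,
    # and divide the value-adding time by the total. All numbers are invented.

    # (activity, duration in minutes, category)
    activities = [
        ("enter order",        5, "value"),      # adds value to the customer
        ("compliance check",  10, "necessary"),  # no value, but absolutely necessary
        ("wait for approval", 45, "waste"),      # no value at all
        ("assemble product",  30, "value"),
    ]

    totals = {"value": 0, "necessary": 0, "waste": 0}
    for _, minutes, category in activities:
        totals[category] += minutes

    efficiency = totals["value"] / sum(totals.values())
    print("Process efficiency: %.2f" % efficiency)   # 0.39 for the numbers above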

Friday, April 04, 2008

7 Rules for Explaining Requirements to Colleagues 'Downstream'

Name: 7 Rules for Explaining Requirements to Colleagues 'Downstream'
Type: Rules
Status: final
Version: 2008-04-04

Gist: As a business/systems analyst, you will inevitably be confronted with the need to tell somebody about the requirements. The challenge is to convey your knowledge effectively.

colleague downstream DEFINED AS someone who takes your requirements as an input for her work, e.g. architects, developers, designers, testers, technical authors.

R1: Include your audience as early and as frequently as possible. Don't fall for the BUFR principle, even if you are part of an organization that follows that principle. YOU can always do things a little differently, e.g. use unofficial channels: CC people, invite them to your stakeholder meetings, talk to them at the coffee machine.

R2: In general, use feedback cycles. It is not sufficient if your audience nods or exclaims 'understood!'.

R3: Explain the requirements, and let the others do a design right on the spot. The idea is not to come up with a good design, but to see if you have effectively conveyed your ideas. Do the same with testers: let them lay out their test plans on the spot. Expand this idea by doing the proven, highly efficient JAD.

R4: Don't be misled by R3; it may not be sufficient. If you also have to explain your requirements to someone who is not available in person (offshoring?), consider a formal review by that colleague.

R5: There's a classic but sometimes forgotten technique to make sure 'the other side' has understood: you write the business requirements, they write the system requirements. Again, this is not sufficient.

R6: Make sure your 'downstream colleagues' work with the subject matter for years. (I.e. stretch projects as long as you can ;-) This way, they become subject matter experts (SMEs). Weird, but efficient. However, beware of the risk that the new SMEs end up running the show instead of your business stakeholders.

R7: Make sure YOU understand that you cannot not communicate (see Watzlawick). However, the reverse is also true: you can never fully communicate. You will present your model of the knowledge, and the receiver will inevitably end up with a different one.

Friday, March 28, 2008

Measuring and Estimating Personal Velocity

Name: Measuring and Estimating Personal Velocity
Type: Process
Status: final
Version: 2008-03-28

Gist: to improve on estimating how long a given task will take; to give a well-grounded answer to the question: "When will this be finished?"
Note: although this process aims at personal velocity, it can be adapted to team velocity. But mind entry condition E1: every team member will have to fulfill it.

velocity DEFINED AS the number of task-points done per day.

Entry Conditions:
E1: You must be willing to take notes of all your tasks and review them regularly, e.g. once a week.

Process:
S1: For every task you get or generate for yourself, do a rough estimate of its complexity. Simply give points ('gummi bears'); use Fibonacci numbers for that, i.e. 1, 2, 3, 5, 8, 13, 21, 34, ... (each number is the sum of the two preceding ones). "If task A was worth 5 points, task B is a bit more complex, so I will give it an 8."
Note: It is not important to estimate, for example, the hours you'll probably need. A relative measure is sufficient.
Note: For me, this takes 5 seconds longer than just jotting down a task. With about 30 tasks a week, this adds up to a whopping 2.5 minutes subtracted from my weekly work time.

S2: whenever you finish the task, just mark it 'done'. No need to recalculate your original estimate.
Note: This works because of a levelling effect over time and the number of tasks. See Notes below.

S3: Regularly (I go for once a week), sum the points of all tasks you finished since last time (e.g. the past 7 days). Do it for at least a couple of weeks to get credible results.
Note: I do it all the time because I found that my velocity changes with the projects I work on. It takes 3 minutes per week.

S4: Divide the sum by the number of days you actually worked on tasks, so that holidays etc. are not factored in. The result is your velocity: gummi bears done per day.
Note: You can decide to factor in holidays and the like, but then you will need much more data to come up with useful answers.

S5: Whenever you are asked the magic question "When will this be finished?", do the rough estimate of S1 on 'this' and divide it by your velocity. The result is the time you will need, in days (see the sketch at the end of this post). Don't forget to think about when you will start the task before answering...
Note: I use the average velocity of the last 5 weeks, to take changing velocities into account. Maybe after a year I'll find that the year's average velocity is good enough.

Notes:
The steady estimation process of S1 will level out a couple of unwanted effects:
- you will have good days and bad days
- you will have to estimate rather small AND rather large tasks
- one day you will finish that monster task you estimated at 55 gummi bears, and it will boost that week's sum
- you are always optimistic (or pessimistic) with your estimates
- tasks sometimes have subtasks that appear in your task list side by side; this is not a problem unless you do a complete work breakdown

To give you a hint, tasks in my list range from 'Call Andy' to 'Write a new requirements management plan for Project A'.

I misuse MS Outlook's® task details to keep track of the estimates. I don't care that the field says 'x hours'; for me it's just 'x'.
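
For the arithmetic of S3-S5, a minimal Python sketch with invented numbers:

    # A sketch of S3-S5: compute velocity from tasks marked 'done' and answer
    # "When will this be finished?". All numbers are invented.

    done_points_per_week = [13, 8, 21, 16, 11]   # S3: points finished per week
    worked_days_per_week = [5, 4, 5, 5, 3]       # S4: days actually worked on tasks

    # Average velocity over the last weeks, in points ('gummi bears') per day.
    velocity = sum(done_points_per_week) / sum(worked_days_per_week)

    # S5: rough estimate for the new task, divided by the velocity.
    new_task_points = 8
    days_needed = new_task_points / velocity
    print("Velocity: %.1f points/day, expect about %.1f working days"
          % (velocity, days_needed))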

BA's weaponry: Standards on Business Analysis

Name: BA's weaponry: Standards on Business Analysis
Type: citation
Status: draft
Version: 2008-03-28

Gist: to give a list of standards that are available and can be useful for business analysts

ISO/IEC 42010:2007, Recommended practice for architectural description of software-intensive systems

ISO/IEC TR 19759:2005; Guide to the Software Engineering Body of Knowledge (SWEBOK)
The Guide to the SWEBOK

BABOK Version 1.6, The Business Analysis Body of Knowledge, The IIBA 2007

ISO/IEC 15288:2002, System life cycle processes

Product Development Body of Knowledge (PNBOK)

Enterprise Unified Process, Enterprise Business Modeling
Enterprise Unified Process, Enterprise Architecture

You can also try websearches with these keywords:

Business analysis standards it process ieee

Enjoy!

Update on the Evaluation of Tenders

Today, I updated the Rules for the Evaluation of Tenders with thoughts on how to check references.
Have a look.

There's also a category for all the blog posts concerning tenders.

Wednesday, March 26, 2008

Earned Value Analysis for Feature-Burndown in Iterative Projects

Name: Earned Value Analysis for Feature-Burndown in Iterative Projects
Type: Process
Status: final
Version: 2008-03-27

Gist: to explain how to obtain information about the Earned Value of an agile project that uses a backlog of {user stories, features, requirements, change requests}

Source: T. Sulaiman, H. Smits in Measuring Integrated Progress on Agile Software Development Projects. In addition, there's tons of information out there on Earned Value Analysis, check out:
http://en.wikipedia.org/wiki/Earned_value_management
www.earnedvalueanalysis.com/
www.projectlearning.net/pdf/I2.1.pdf

feature FOR THIS POST DEFINED AS a thing to be implemented, could well be a requirement, a user story, a use case. Whatever your backlog consists of.

Entry-Conditions:
E1: Each feature in your backlog needs an estimate, best measured in points, gummi bears, etc. (Make the sum over all features your parameter TSPP, total sum of planned feature points.)
Note: It's not important that these estimates represent actual budget, time or similar. They have to be consistent with each other and properly represent the relative relations between the features.
Note: You can adjust the numbers later, but you will have to recalculate things (see S3-S5).
E2: You need to know how many iterations you will undertake. (Make this your parameter TNI, total number of iterations).
Note: Ideally, iterations are all the same length, i.e. timeboxes. You can adjust the number later but you will have to recalculate things (see S3-S5).
E3: You need to know the budget you are about to spend, from now to the end of the last iteration (make this your parameter TBP, total budget planned).
E4: You need an established way of tracking how much budget you have actually spent on each of the iterations.
Note: This can be obtained by maintaining a log of workhours spent on the project by each of its members.

Process:
S1: After every iteration, find out how much budget you have actually used for the iteration. Sum it up over all finished iterations. Make this your ABS, actual budget spent.
S2: After every iteration, find out how many points you have actually implemented across all iterations (= ASI, actual sum implemented). A feature's points count as implemented if and only if the feature is fully in effect.
Note: this means you have to have a clear understanding of what 'done' means for a feature.

S3: Do some calculations
- Expected Percentage Completed = number of already finished iterations / TNI
- Actual Percentage Completed = ASI / TSPP
- Planned Value = TBP * Expected Percentage Completed
- Earned Value = TBP * Actual Percentage Completed

S4: Find out about your project's cost performance and cost estimate:
- Cost Performance Index = Earned Value / ABS
Note: > 1 means under budget, = 1 means on budget, < 1 means over budget
- Cost Estimate to Completion = TBP / Cost Performance Index

S5: Find out about your project's schedule performance and schedule estimate:
- Schedule Performance Index = Earned Value / Planned Value
Note: > 1 means ahead of schedule, = 1 means on schedule, < 1 means behind schedule
- Schedule Estimate to Completion (in iterations) = TNI / Schedule Performance Index
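
A minimal Python sketch of S3-S5, using invented numbers (10 iterations planned, 200 feature points, a budget of 100 units), just to show the calculations end to end:

    # A sketch of S3-S5 with invented numbers.

    TSPP = 200        # total sum of planned feature points (E1)
    TNI = 10          # total number of iterations (E2)
    TBP = 100.0       # total budget planned (E3)

    finished_iterations = 4
    ABS = 45.0        # actual budget spent so far (S1)
    ASI = 70          # actual sum of implemented points (S2)

    # S3
    expected_pct = finished_iterations / TNI   # 0.40
    actual_pct = ASI / TSPP                    # 0.35
    planned_value = TBP * expected_pct         # 40.0
    earned_value = TBP * actual_pct            # 35.0

    # S4: cost performance (< 1 means over budget)
    cpi = earned_value / ABS
    cost_estimate_to_completion = TBP / cpi

    # S5: schedule performance (< 1 means behind schedule)
    spi = earned_value / planned_value
    schedule_estimate_iterations = TNI / spi

    print("CPI = %.2f, cost estimate to completion = %.0f"
          % (cpi, cost_estimate_to_completion))
    print("SPI = %.2f, schedule estimate = %.1f iterations"
          % (spi, schedule_estimate_iterations))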

Friday, March 14, 2008

How to figure out what the problem really is

Name: How to figure out what the problem really is
Type: Principles, Process, Rules
Status: final
Version: 2008-03-27
Source: Don Gause & Gerald Weinberg, Are Your Lights On?; two own thoughts

Problem DEFINED AS difference between things as desired and things as conceived
Note: Don's and Gerald's original definition said 'perceived' instead of my 'conceived'. I changed this due to a somewhat Buddhism-oriented mindset of mine. Thanks to my friend Sven for reminding me.

Principles
P1) Each solution is the source of the next problem.
P2) Moral issues tend to melt in the heat of a juicy problem.
P3a) You can never be sure you have a correct problem definition, even after the problem is solved.
P3b) You can never be sure you have a correct problem definition, but don't ever stop trying to get one.
P4) The trickiest part of certain problems is just recognizing their existence.
P5) There are problem solvers and solution problemers.
P6) In spite of appearances, people seldom know what they want until you give them what they ask for.
P7) The fish is always last to see the water.
P8) In the valley of the problem solvers, the problem creator is king.

Process
S1) Ask: Who has the problem? Then, for each party, ask: What is the essence of your problem?
S2) Think of at least three things that might be wrong with your understanding of the problem. If you can't think of at least three things that might be wrong with your understanding of the problem, you don't understand the problem.
S3) Test your problem definition on a foreigner, someone blind, or a child, or make yourself foreign, blind, or childlike.
S4) Generate points of view. Every point of view will produce new misfits between problem and solution.
S5) As you wander the weary path of problem definition, check back home once in a while to see if you haven't lost your way.
S6) Once you have a problem statement in words, play with the words until the statement is in everyone's head.
S7) Ask: Where does the problem come from? Who sent this problem? What is she trying to do to me? The source of the problem is most often within you.
S8) Ask yourself: Do I really want a solution?
S9) Ask Why 5 times. Normally this leads you to the essence of the problem.
S10) If a person is in a position to do something about a problem, but doesn't have the problem, then do something so he does.
S11) Try blaming yourself for a change - even for a moment.

Rules
R1) Don't take someone's solution method for a problem definition - especially if it's your own solution method.
R2) Don't solve other people's problems when they can solve them perfectly well themselves. If it's their problem, make it their problem.
R3) If people really have their lights on, a little reminder may be more effective than your complicated solution.
R4) If you solve someone's problem too readily, he'll never believe you've solved his real problem.
R5) Don't leap to conclusions, but don't ignore your first impression.
R6) We never have enough time to do it right, but we always have enough time to do it over.
R7) We never have enough time to consider whether we want it, but we always have enough time to regret it.

Tuesday, February 12, 2008

Finding the Right Cycle Time - 10 Rules

Name: Finding the Right Cycle Time - 10 Rules
Type: Rules
Status: final, revised
Version: 2008-08-08

Gist: to explain what influences cycle time in an agile development and how to find the right time for a given situation
Sources: Implementing Lean Software Development, Mary and Tom Poppendieck, Addison Wesley, 2007; own experience; a talk by Mishkin Berteig at Agile 2008

R1: Think about 1 day, 1 week, or 1 month for a start.
Notes:

  • You don't have to predict a useful cycle time for a whole 1-year project in advance. Start with anything that seems useful for the next, say, 25% of the project time, but start now.
  • 1 day may seem ridiculous. It isn't: it is likely that people will agree to such an 'experiment', you get things rolling, everybody is happy about the exercise, and you can gather data from your team! Also see R9.

R2: Listen to what your team says. Ask why every time someone comes up with an argument against a specific cycle time. Account for the natural resistance to change. Take baby steps towards a change.
Note: there's also a valid argument against baby steps. If you try to reduce cycle time from 3 months to 2 weeks, it's tempting to go for 4 weeks first. However, if you decide to *start* with 2 weeks, you will double your chances of learning anything useful.

R3: If your cycles get hectic near the end, you should reduce cycle time to even out the workload.

R4: If your customers try to change things while a cycle is underway because they cannot wait until the next cycle, then your cycle time is too long.

R5: If you cannot release software very often, think about doing smaller cycles (iterations) within larger cycles (increments, releases). However, make sure that you really can't release more often.
Note: In some organisations it is painful to change processes so that certain acceptance tests consume less time. Or maybe you have a great many users of your product and you can't find an efficient way to train them on new features. But be careful, these arguments may not be the real ones. Keep asking why.

R6: Resist the temptation to do parallel cycles unless you know exactly what you are doing. It's very hard to maintain a strong notion of 'done' if you do it. Do not assume you need it because otherwise the team would not be utilized properly.
Note: Full utilization will slow your team down. Full stop.

R7: Make sure everyone, especially your customer, understands this one: "Time, quality, scope - choose any two." -- Greg Larman

R8: If your managers want quarterly reports, surprise them with a telling set of data about your progress, not just one cycle's experience or less. Prove that you are really in control and really getting things done.

R9: If you seem to have a gigantic amount of things to be done in your project, go for a shorter cycle time. You will soon have a pretty good sense of your velocity, and thus be able to predict what you will be able to achieve.
Note: However, you have to adjust your prediction after the first 2-4 cycles. A very effective way of gathering this amount of data is doing EXTREMELY short iterations (1 day) during the first, say, two weeks. This is EXTREMELY useful for getting a waterfall team to do short iterations. Most people are willing to do an experiment that short. See R1.

R10: If more than 1 or 2 features can't be completed within one cycle, break the features down.

Tuesday, February 05, 2008

Simple Solid Decision Making

Name: Simple Solid Decision Making
Type: Process
Status: final
Version: 2008-02-01

Gist: how to make well-grounded decisions even if you cannot use Tom Gilb's Impact Estimation method for some reason (see Gilb, Competitive Engineering, Elsevier 2005). Decision making is the process of choosing a solution to a problem from a set of solutions, given a set of goals.

Sources: Stefan Brombach (http://www.dreizeit.de/), Kai-Jürgen Lietz (Das Entscheider-Buch, Hanser 2007), Tom Gilb, own thoughts.

S1: Make clear the goals of your endeavour. You need to understand what you want to achieve, what you want to keep, and what you want to avoid. Consider using the Decomposing Goals principles.

S2: If you have many 'small' goals, say more than 12, consider integrating some of them into a more abstract goal. Example: 'low costs for running the application' and 'don't exceed the budget' could be combined into a 'cost' goal. If you have to be quick and can't afford step S1 (please really consider doing it!), take the common three goals {time, scope, quality}.

S3: Compare each goal with every other goal. The more important goal gets a point. If you can't decide which one is more important, give 0.5 points to each. In the end, add 1 point to every goal, so that every goal has at least 1 point. Now you know the ranking of your goals in terms of importance.

S4: Go find at least three different realistic 'solutions', or ways of achieving your goals. You need three or more to have a little room for manoeuvre. Bear in mind: if you only have one solution, there's nothing to decide.

S5: Using a 0..3 scale, iterate through all your solutions and goals and again give points to the solutions: 0 means solution X does not help achieve goal Y at all, 3 means it helps a great deal. Multiply these points by the importance of the goal from step S3.

S6: For each solution, add all products from step S5. The solution with the largest sum wins.
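
A minimal Python sketch of S3, S5 and S6; the goals, solutions, and all points below are assumed purely for illustration:

    # A sketch of S3, S5 and S6: pairwise goal comparison, 0..3 scoring of
    # solutions, and the weighted sum. All values are invented.

    goals = ["time", "scope", "quality"]
    solutions = ["buy", "build", "outsource"]

    # S3: pairwise comparison; the more important goal gets 1 point,
    # a tie gives 0.5 points to each. The outcomes below are simply assumed.
    pairwise_winner = {
        ("time", "scope"): "time",
        ("time", "quality"): None,      # tie
        ("scope", "quality"): "quality",
    }

    importance = {g: 1.0 for g in goals}          # every goal starts with 1 point
    for (a, b), winner in pairwise_winner.items():
        if winner is None:
            importance[a] += 0.5
            importance[b] += 0.5
        else:
            importance[winner] += 1.0

    # S5: 0..3 points for how much each solution helps each goal (assumed numbers).
    help_points = {
        "buy":       {"time": 3, "scope": 1, "quality": 2},
        "build":     {"time": 1, "scope": 3, "quality": 3},
        "outsource": {"time": 2, "scope": 2, "quality": 1},
    }

    # S6: weighted sum per solution; the largest sum wins.
    totals = {s: sum(help_points[s][g] * importance[g] for g in goals)
              for s in solutions}
    print(max(totals, key=totals.get), totals)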

Lean Development Principles

Name: Lean Development Principles
Type: Principles
Status: final
Version: 2008-02-26

Sources:
See: http://www.shmula.com/340/lean-for-software-interview-with-mary-poppendieck
Chris's article on InfoQ

P1: Flow (or low inventory, or just-in-time) (inventory DEFINED AS something partially done)

P2: No Workarounds, problem exposure
Note: if you encounter problems, solve them, don't take detours. This may mean you have to stop what you are doing.

P3: Eliminate Waste (waste DEFINED AS {something without value to the user, work partially done, extra features you don't need right now, stuff that causes delay})

P4: Delayed Commitments (DEFINED AS the idea of scheduling decisions for the last moment at which they need to be made, and not making them before that, because then you have the most information. -- Preston Smith)
Cite: "Never make a decision early unless you know why." -- Chris Matts

P5: Deliver Fast (DEFINED AS being able to quickly release software. You need proper procedures for that, like {excellent testing procedures, automated testing, stuff that is disjoint from each other (so that as you add features you don't add complexity)})
Note: A rigid process means fewer choices you have to make every day. Fewer choices mean more output. Routine enables innovation where it’s most valuable.