Today a post by Michael Dubakov at The Edge of Chaos provoked some thoughts. He basically argues that a zero-defect mentality causes several unpleasant effects in a development team:
- Not enough courage to refactor complex, messy, buggy, but important pieces of code.
- Inability to make important decisions; instead, making less risky but wrong decisions.
- Doing everything to avoid responsibility, which leads to cowardly and stupid behavior.
Well, I guess the Solomonic answer to this critique is 'it depends'. Actually, I'm quite enthusiastic about the zero-defects idea. However, I also know that zero defects are extremely hard to attain, or can easily be attained by chance ;-)
You may remember the post where I detail 10 Requirements Principles for 0-defect systems. Therefore I posted the following as a comment to Michael's post, arguing
- that we should not call defects 'bugs',
- that zero-defects can be attained by using principles that have almost nothing to do with testing, but with requirements,
- that testing alone is insufficient for a decent defect removal effectiveness.
Hi Michael,
thanks for another thought-provoking post! I enjoy reading your blog.
Hmm, several issues here.
1) First of all, I think we should move toward a culture where we do not call more or less serious flaws in IT systems 'bugs'. To me, it does not make sense to belittle the fact. Let's call them what they are: defects (I'm not going into the possible definitions of a defect here).
This said, look here: scroll down a bit, and laugh.
And here's a funny cartoon.
2) A couple of years ago I had the luck to be part of a team that delivered (close to) zero defects. I wrote down the 10 principles we used from a requirements perspective. See
a longer version on Raven's Brain.
Interestingly enough, only 1 out of the 10 principles is directly related to a defect-finding activity. We had a 'zero defect mentality' in the sense that we all seriously thought it would be a good idea to deliver excellent quality to the stakeholders. Fear or frustration was not at all part of the game. Mistakes were tolerable, just not in the product at delivery date. Frankly, we were a bit astonished to find out we had successfully delivered a nearly defect-free, non-trivial system over a couple of releases.
3) While it seems to be a good idea to use the different strategies that you proposed in the original post, I'm missing some notion of how effective the various defect-reducing strategies are. The only relevant quantitative research I know of comes from metrics guru Capers Jones. If I remember correctly, he states that each single strategy is only about 30% effective, meaning that you need to combine 6-8 strategies in order to end up with a 'professional' defect removal effectiveness. AND you cannot reach a net removal effectiveness of, say, 95% with testing alone. From an economic standpoint it is wise to reduce the number of defects that 'arrive' at testing in the first place, and this is done most effectively by formal inspections (of requirements, design and code).
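To see why you need so many strategies, here is a back-of-the-envelope sketch. It assumes (my simplification, not Jones's actual model) that each strategy independently removes about 30% of the defects still remaining, so the combined effectiveness is 1 - 0.7^n:

```python
# Naive independence model: each strategy removes ~30% of the
# defects that are still present when it is applied.
def combined_effectiveness(per_strategy: float, n_strategies: int) -> float:
    """Fraction of defects removed after n independent strategies."""
    return 1 - (1 - per_strategy) ** n_strategies

for n in (1, 4, 6, 8):
    print(f"{n} strategies: {combined_effectiveness(0.30, n):.1%}")
# 1 strategy  -> 30.0%
# 4 strategies -> 76.0%
# 6 strategies -> 88.2%
# 8 strategies -> 94.2%
```

Under this toy model, even eight 30%-effective strategies only just approach the 95% mark, which matches the intuition that testing alone (one or two strategies) cannot get you there.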
Greetings!
Rolf
PS: if you find a defect in my comment, you can keep it ;-)