Monday, August 15, 2011
Testing is indispensable. Or is it?
I added a set of rules (questions, really) to my wiki, concerning product testing as a measure of product quality. Please check it out and spread the word: Article on PlanetProject.
Sunday, July 26, 2009
Intro to Statistical Process Control (SPC)
While exploring the web for Deming material I stumbled upon this site by Steve Horn. It introduces Statistical Process Control (SPC).
There are a number of pages on various aspects.
The articles are quick to read and easy to understand; they are written to the point.
I'd like to add that Point 3 of Deming's 14 Points needs a special interpretation for software and IT project people.
The point says:
3. Cease dependence on inspection to achieve quality. Eliminate the need for inspection on a mass basis by building quality into the product in the first place.
Deming was talking about (mass) production. In that field, "inspection" means roughly the same as "testing" in our profession.
In fact, inspections (of requirements, designs, code and test cases) are among the most effective activities you can do in software, for building quality into the product in the first place.
BTW, if you worry about your software development organisation NOT really testing their products, it is a very wise idea to first introduce a couple of inspection stages, for this is both more efficient (more economical) and more effective (fewer defects). It is also a sensible way to introduce learning.
Testing is about finding defects; inspections are about pointing out systematic errors and giving people a real chance to prevent them in the future.
Here's one quote from Steve I like in particular:
Confident people are often assumed to be knowledgeable. If you want to be confident the easiest way is to miss out the 'Study' phase (of the Plan-Do-Study-Act-Cycle) altogether and never question whether your ideas are really effective. This may make you confident but it will not make you right.
(words in brackets are mine)
Friday, July 03, 2009
A Quest for Up-Front Quality
Today I'd like to point you to the presentation slides of a talk I gave at this year's Gilb Seminar on Culture Change in London.
You'll find the file if you go to this PlanetProject page on Zero Defects, navigate to principle P6 and click the link titled "presentation".
The title was "A Quest for Up-Front Quality".
Short outline:
- Why I wanted to have a rigorous QA effort for the first steps of a real-life project
- What I did to achieve this (Tom Gilb's Extreme Inspections, aka Agile Inspections, aka Specification Quality Control (SQC))
- What the outcomes were, in terms of both quality and budget (with detailed data)
- What the people said about the effort
- What the lessons learned are
If you want to see truly amazing results from one of the most effective methods in software development history, don't miss the slides. Any questions? Just ask!
The talk was warmly welcomed by the great Value-Thinkers of the seminar, special thanks to Ryan Shriver, Allan Kelly, Giovanni Asproni, Niels Malotaux, Matthew Leitch, Jerry Durant, Jenny Stuart, Lars Ljungberg, Renze Zijlstra, Clifford Shelley, Lorne Mitchell, Gio Wiederhold, Marilyn Bush, Yazdi Bankwala, and all others I forgot to mention.
Thursday, April 16, 2009
0-defects? I'm not kidding!
Today a post by Michael Dubakov at The Edge of Chaos provoked some thoughts. He basically argues that a zero-defect mentality causes several unpleasant effects in a development team:
- Not enough courage to refactor complex, messy, buggy, but important piece of code.
- Can’t make important decision, instead make less risky, but wrong decision.
- Do everything to avoid responsibility, that leads to coward and stupid behavior.
Well, I guess the Solomonic answer to this critique is 'it depends'. Actually, I'm quite enthusiastic about the zero-defects idea. However, I also know that zero defects are extremely hard to attain, or can easily be attained by chance ;-)
You may remember the post where I detail 10 Requirements Principles for 0-defect systems. Therefore I posted the following as a comment to Michael's post, arguing
- that we should not call defects 'bugs',
- that zero-defects can be attained by using principles that have almost nothing to do with testing, but with requirements,
- that testing alone is insufficient for a decent defect removal effectiveness.
Hi Michael,
thanks for another thought-provoking post! I enjoy reading your blog.
Hmm, several issues here.
1) First of all I think we should come to a culture where we do not call more or less serious flaws in IT systems 'bugs'. To me, it does not make sense to belittle the fact. Let's call them what they are: defects (I'm not going into the possible definitions of a defect here).
This said, look here: scroll down a bit, and laugh.
And here's a funny cartoon.
2) A couple of years ago I had the luck to be part of a team that delivered (close to) zero defects. I wrote down the 10 principles we used from a requirements perspective. See
a longer version on Raven's Brain.
Interestingly enough, only 1 out of the 10 principles is directly related to a defect-finding activity. We had a 'zero defect mentality' in the sense that we all seriously thought it would be a good idea to deliver excellent quality to the stakeholders. Fear or frustration was not at all part of the game. Mistakes were tolerable, just not in the product at delivery date. Frankly, we were a bit astonished to find out we had successfully delivered a nearly defect-free non-trivial system over a couple of releases.
3) While it seems to be a good idea to use the different strategies that you proposed in the original post, I'm missing some notion of how effective the various defect-reducing strategies are. The only relevant quantitative research I know of comes from metrics guru Capers Jones. If I remember correctly, he states that each single strategy is only about 30% effective, meaning that you need to combine 6-8 strategies in order to end up with a 'professional' defect removal effectiveness. AND you cannot reach a net removal effectiveness of, say, 95% with testing alone. From an economic standpoint it is wise to reduce the number of defects that 'arrive' at testing in the first place, and this is done most effectively by formal inspections (of requirements, design and code).
Greetings!
Rolf
PS: if you find a defect in my comment, you can keep it ;-)
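To make the Capers Jones arithmetic from the comment concrete (a rough sketch only; the 30% per-strategy effectiveness is the approximate figure cited above, from memory, not exact data): if each strategy independently removes about 30% of the defects that reach it, then n combined strategies remove 1 - 0.7^n of them. In Python:

def combined_effectiveness(per_strategy: float, n_strategies: int) -> float:
    # Fraction of defects removed when each of n independent strategies
    # catches the given fraction of the defects that reach it.
    return 1.0 - (1.0 - per_strategy) ** n_strategies

for n in (1, 4, 6, 8):
    print(f"{n} strategies: {combined_effectiveness(0.30, n):.0%}")
# -> 30%, 76%, 88%, 94%: hence the 6-8 combined strategies needed to get into the 90-95% range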
Thursday, November 27, 2008
Failure by delivering more than was specified?
I stumbled upon a puzzle. Please help me solve it.
It strikes me as odd, but there might be situations where a provider fails (i.e. the system will not be accepted by the customer) because he delivered MORE than was specified. I'm not talking about bells and whistles here, which (only) waste resources.
Imagine a hard- or software product that is designed to serve more purposes than required by the single customer. Any COTS product and any product line component should fit this definition.
Which kind of requirements could be exceeded: scalar and/or binary requirements? I think scalar requirements (anything you can measure on some scale) cannot be exceeded if they do not constrain the required target on the scale from two sides. I haven't seen that; it's always "X or better", e.g. 10,000 transactions per second or more.
Even if a requirement were constrained on two sides, exceeding it would simply mean a defect.
But there can be a surplus of binary qualities, i.e. functions. A surplus function can affect other functions and/or scalar qualities, I think.
Say, as a quite obvious example, the system sports a sorting function which was not required. A complex set of data can be sorted, and sorting may take some time. A user can trigger the function that was not required.
- This might degrade overall system availability (response time), a user-required quality.
- It might open a security hole.
- It might affect data integrity, if some neighboring system does not expect the data to be sorted THAT way.
- It might change the output of another function, that was required, and that does not expect the data to be sorted THAT way.
(First flight of fancy ends here.)
So, if you find a surplus function in a system, what do you do? Call it a defect and refuse to accept the system?
Eager for your comments!
Thursday, July 26, 2007
Reviewing for content
Title: Reviewing for content
Type: principles
Status: final
Version: 2007-07-26
Gist: Conduct reviews in the right order.
P1: Check formal characteristics first.
P2: Only if the object is formally clean is it useful to check whether its content is right.
Sunday, July 15, 2007
Good Practice in Writing Acceptance Criteria
Title: Good Practice in Writing Acceptance Criteria
Type: Rules
Status: draft
Version: 2007-07-15
Gist: To explain how to produce useful acceptance criteria.
acceptance criteria (AC) DEFINED AS a means to express the important tests for a product or system from a behavioral point of view.
Sources: Dan North, Chris Matts, Dave Astels, my own practice.
Note: Writing acceptance criteria is not about specifying tests and not even about preparing for writing test cases. Instead, it's writing specifications of behavior.
R1: Use the standard form "given [Context] when [Event, Feature, Function] then [(expected) Outcome]".
R2: Write one AC for every single expected result. If you end up with more results per AC, try making the contexts more specific. Gist: By keeping the AC small and focused, they are easier to refactor.
R3: Don't just have one AC per feature that says "everything turns out perfect". You need one per specific context.
Note: This rule seems ridiculously self-evident, but my recent experience while working with an "experienced" requirements analyst shows that it isn't...
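To illustrate R1 to R3, here is a small sketch for a hypothetical "password reset" feature (the feature and the expected outcomes are invented for illustration, not taken from a real specification):
AC1: Given a registered user with a verified e-mail address, when the user requests a password reset, then a reset link is sent to that address.
AC2: Given a registered user with a verified e-mail address, when the user requests a password reset, then the old password stays valid until the reset link is used.
AC3: Given an e-mail address that is not registered, when a password reset is requested, then no mail is sent and the response does not reveal whether the address is known.
Note how AC1 and AC2 share a context but each states exactly one expected result (R2), and AC3 covers a separate context instead of one catch-all "everything turns out perfect" criterion (R3).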
Friday, May 04, 2007
Testcase Numbers based on Function Points
Name: Testcase Numbers based on Function Points
Type: RuleSet
Status: finished
Version: 2007-05-04
Source: Internet, IFPUG-Website, Capers Jones
Gist: Calculate the number of test cases for a reasonably tested product.
R1: # of test cases = (# adjusted function points)^1.2
R2: # of acceptance tests = (# adjusted function points) * 1.2
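A quick sketch in Python of how the two rules work out numerically (the 200-function-point count is an invented example, not data from a real project):

def test_case_count(adjusted_fp: float) -> float:
    return adjusted_fp ** 1.2   # R1: total test cases

def acceptance_test_count(adjusted_fp: float) -> float:
    return adjusted_fp * 1.2    # R2: acceptance tests

print(round(test_case_count(200)))        # ~577 test cases
print(round(acceptance_test_count(200)))  # 240 acceptance tests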
Wednesday, April 11, 2007
Principles of Clear Thinking
Type: Rules
Status: Draft
Version: 2007-12-20
Source: Principles of Clear Thinking. <- Blog on www.gilb.com 2007-03-27 (R1 to R10); rest: own thoughts
R1. You have to have a clear set of objectives and constraints, to evaluate proposed solutions or strategies against.
R2. You have to have a reasonable set of facts about the benefits and costs of any proposed idea, so that you can relate it to your current outstanding requirements.
R3. You have to have some notion of the risks associated with the idea, so that you can understand and take account of the worst possible case.
R4. You have to have some ideas about how to test the ideas gradually, early and on a small scale before committing to full scale implementation.
R5. If there are more than very few factors involved (2 to 4) then you are going to have to use a written model of the objectives, constraints, costs, benefits, and risks.
R6. If you want to check your thinking with anyone else, then you will need a written model to safely and completely share your understanding with anyone else.
R7. You will need to make a clear distinction between necessities (constraints) and desirables (targets).
R8. You will need to state all assumptions clearly, in writing, and to challenge them, or ask ‘what if they are not true?’
R9. You will want to have a backup plan, contingencies, for the worst case scenarios – failure to fund, failure for benefits to materialize, unexpected risk elements, political problems.
R10. Assume that information from other people is unreliable, slanted, incomplete, risky – and needs checking.
R11. Assume that your models are incomplete and wrong, so check the evidence to support, modify or destroy your models.