Saturday, February 19, 2011
Improvement Principles
Wednesday, June 23, 2010
What are the most valuable specifications?
There is also a reproduction of a poster that was used to market the car in its day. It clearly states the most important values the car delivers. Reading through it, and thinking of today's extreme focus on functions, I ask you: Have we forgotten to focus on the really important things, like value?
The poster reads:
It is our belief that the most important "specifications" of the Cord Front-Drive are: Do you like its design? Does the comfort appeal to you? Does it do what you want a car to do better and easier? When a car, built by a reputable and experienced company, meets these three requirements, you can safely depend on that company to adequately provide for all mechanical details. -- E. L. Cord
Thursday, March 04, 2010
Adding to Acceptance Testing
"Acceptance testing tools cost more than they're worth. I no longer use it or recommend it."
But James expanded on his statement in another post. And there he impressed me.
"When it comes to testing, my goal is to eliminate defects. At least the ones that matter. [...] And I'd much rather prevent defects than find and fix them days or weeks later."
Yes! Prevention generally has a better ROI than cure. Not producing any defects in the first place is a much more powerful principle than going to great lengths to find them. That's why I like the idea of inspections so much, especially if inspections are applied to early artifacts.
James also very beautifully laid out the idea that in order to come close to zero defects in the finished product, you need several defect elimination steps. It is not sufficient to rely on a few, or even one. Capers Jones has the numbers.
I'd love to see this discussion grow. Let's turn away from single techniques and methods and see the bigger picture. Let's advance the engineering state of the art (sota, which is a SET of heuristics). Thank you, James!
Thursday, February 25, 2010
Problem to Solution - The Continuum Between Requirements and Design
Requirements written that imply a problem end up biasing the form of the solution, which in turn kills creativity and innovation by forcing the solution in a particular direction. Once this happens, other possibilities (which may have been better) cannot be investigated. [...] This mistake can be avoided by identifying the root problem (or problems) before considering the nature or requirements of the solution. Each root problem or core need can often be expressed in a single statement. [...] The reason for having a simple unbiased statement of the problem is to allow the design team to find path to the best solution. [...] Creative and innovative solutions can be created through any software development process as long as the underlying mechanics of how to go from problem to solution are understood.
Saturday, December 05, 2009
Who is your hero systems engineer?
Nothing is as practical as a good theory.
The Engineering Method (Design) is the use of heuristics to cause the best change in an uncertain situation within the available resources.
Saturday, October 03, 2009
Why High-level Measurable Requirements Speed up Projects by Building Trust
(Allow 5 minutes or less reading time)
Stephen M. R. Covey's The Speed of Trust caused me to realize that trust is an important subject in the field of Requirements Engineering.
Neither the specification of high-level requirements (a.k.a. objectives) nor the specification of measurable requirements are new practices in requirements engineering after all, just solid engineering practice. However, they both are extremely helpful for building trust between customer and supplier.
The level of trust between customer and supplier determines how much rework will be necessary to reach the project goals. Rework – one of the great wastes that software development allows abundantly – will add to the duration and cost of the project, especially if it happens late in the development cycle, i. e. after testing or even after deployment.
Let me explain.
If you specify high-level requirements – sometimes called objectives or goals – you make your intentions clear: You explicitly say what it is you want to achieve, where you want to be with the product or system.
If you specify requirements measurably, by giving either test method (binary requirements) or scale and meter parameters (scalar requirements), you make your intentions clear, too.
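A scalar requirement of the kind just described can be sketched as data. This is a hypothetical illustration of the scale/meter/goal idea, not a canonical template; the names and numbers are my own:

```python
# Sketch: a scalar requirement carries a scale (what we measure),
# a meter (how we measure it), and a goal (the target level), so a
# delivered result can be checked without argument.
from dataclasses import dataclass

@dataclass
class ScalarRequirement:
    tag: str      # unique, unchanging name
    scale: str    # unit of measurement
    meter: str    # test method / measurement procedure
    goal: float   # target level on the scale

    def is_met(self, measured: float) -> bool:
        # Assumes "higher is better" on this scale.
        return measured >= self.goal

availability = ScalarRequirement(
    tag="Availability",
    scale="% of scheduled hours the system answers queries",
    meter="monthly uptime report from the monitoring service",
    goal=99.5,
)

print(availability.is_met(99.7))  # True
print(availability.is_met(98.0))  # False
```

A binary requirement would carry a test method instead of a scale; either way, the supplier can see in advance exactly how his work will be assessed.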
With intentions clarified, the supplier can see how the customer is going to assess his work. The customer's agenda will be clear to him. Knowing agendas enables trust. Trust is a prerequisite for speed and therefore for low cost.
“Trust is good, control is better.” says a German proverb that is not quite exact in its English form. If you have speed and cost in mind as dimensions of “better,” then the sentence could not be more wrong! Imagine all the effort needed to continuously check somebody’s results and control everything he does. On the other hand, if you trust somebody, you can relax and concentrate on improving your own job and yourself. It’s obvious that trust speeds things up and therefore consumes less resources than suspiciousness.
Let's return to requirements engineering and the two helpful practices, namely specifying high-level requirements and specifying requirements measurably.
High-level Requirements
Say the customer writes many low-level requirements but fails to add the top level. By top level I mean the 3 to 10 possibly complex requirements that describe the objectives of the whole system or product. These objectives are then hidden implicitly among the many low-level requirements. The supplier has to guess (or ask). Many suppliers assume the customer knew what he was doing when he decomposed his objectives into the requirements given in the specification. They trust him. More often than not he didn't; otherwise, why were the objectives not stated in the requirements specification document in the first place?
So essentially the customer might have – at best – implicitly said what he wants to achieve and where he is headed. Chances are the supplier’s best guesses missed the point. Eventually he provides the system for the customer to check, and then the conversation goes on like this:
Customer: "Hm, so this ought to be the solution to my problem?"
Supplier: "Er, ... yes. It delivers what the requirements said!"
Customer: "OK, then I want my problem back."
In this case the supplier had better take the system back, and the customer had better work on making his real agenda explicit and on rebuilding the abused trust. However, more often than not what follows is a lengthy phase of working the system or product over, in an attempt to fix it according to requirements that were unclear, or not even there, when the supplier began working.
Every bit of rework is a bit of wasted effort. We could have done it right the first time and used the extra budget for a joint weekend fishing trip.
Measurable Requirements
Nearly the same line of reasoning can be used to promote measurable requirements.
Say the customer specifies requirements but fails to say, at the same time, how he will test them. The supplier most likely grants him a leap of faith; the customer can then prove trustworthy or not. Assume the customer decides to specify acceptance criteria and test procedures long after development began, just before testing starts. Maybe he didn't find the time to do it earlier. Quite possibly, by adding the acceptance criteria and test procedures, he changes to some degree the possible interpretations of his requirements specification. From the supplier's angle, the customer NOW shows his real agenda, and it's different from what the supplier thought it was. The customer has abused the supplier's trust, unintentionally in most cases.
Besides this apparent strain on the relationship between the customer and the supplier, the system sponsor now will have to pay the bill. Quite literally so, as expensive rework to fix things has to be done. Hopefully the supplier delivered early, for more time is needed now.
So...
Trust is an important prerequisite to systems with few or even zero defects. I experienced this the one (and probably only) time I was part of a system development that resulted in close to zero defects: one of the prerequisites to zero defects is trust between customer and supplier, as we found out by root cause analysis in a post mortem (ref. principles P1, P4, P7, and P8). Zero defects means zero rework after the system has been deployed. In the project I was working on, it even meant zero rework after the system was delivered for acceptance testing. You see, it makes perfect business sense to build trust by working hard on both quantified and high-level requirements.
In fact, both practices are signs of a strong competence in engineering. Competence is – in turn – a prerequisite to trust, as Mr. Covey rightly points out in his aforementioned book.
If you want to learn more on how to do this, check out these sources:
- Measurable Value (with Agile), by Ryan Shriver.
- How to rewrite a requirement / How to make it measurable (See a live example of a bad and a good high-level specification), by Tom and Kai Gilb.
- How to Specify High-level Requirements (aka goals).
- Requirements Hierarchies and Types of Requirements.
Thursday, September 10, 2009
Quantum Mechanics, Buddhism, and Projects - Again!
Tuesday, July 28, 2009
Refurbished: Non-Functional Requirements and Levels of Specification
- One school of thought describes requirements decomposition as a process to help us select and evaluate appropriate designs.
- The other school describes requirements decomposition as being a form of the design process.
Sunday, July 26, 2009
Intro to Statistical Process Control (SPC)
3. Cease dependence on inspection to achieve quality. Eliminate the need for inspection on a mass basis by building quality into the product in the first place.
Confident people are often assumed to be knowledgeable. If you want to be confident the easiest way is to miss out the 'Study' phase (of the Plan-Do-Study-Act-Cycle) altogether and never question whether your ideas are really effective. This may make you confident but it will not make you right.
Monday, July 13, 2009
Levels of Spec Principle, Non-Functional Requirements
Friday, July 03, 2009
A Quest for Up-Front Quality
- Why I wanted to have a rigorous QA effort for the first steps of a real-life project
- What I did to achieve this (Tom Gilb's Extreme Inspections, aka Agile Inspections, aka Specification Quality Control (SQC))
- What the outcomes were, in terms of both quality and budget (with detailed data)
- What the people said about the effort
- What the lessons learned are
Monday, June 01, 2009
History Repeating?, or A Case for Real QA
- Why do we put testing in the next time box? Because it consumes too much time.
- Why does it consume a lot of time? Because there is a significant number of defects to find and fix (and analyse and deploy and...), before we consider the product good enough for release.
- Why is there a significant number of defects to be found in the testing stage? Because the product enters testing with a significant number of them.
- Why does the test-ready product have a significant number of "inherent" defects? Because we have not reduced them significantly further upstream.
- Why didn't we reduce them further upstream? Because we think testing is very effective in finding all kinds of defects, so testing alone (or along with very few other practices) is sufficient for high defect removal efficiency.
Tuesday, May 05, 2009
Atomic Functional Requirements
Thursday, April 16, 2009
0-defects? I'm not kidding!
Monday, April 13, 2009
Value Thinking: Using Scalpels not Hatchets
Sunday, December 21, 2008
Why Re-work isn't a Bad Thing in all Situations.
Alistair Cockburn wrote an interesting article on what to do with the excess production capability of non-bottleneck sub-processes in a complex production process. In the Crosstalk issue of Jan 2009, he explores new strategies for optimizing the process as a whole. One idea is to consciously use re-work (you know, the bad thing you want to avoid like the plague) to improve quality earlier rather than later.
In terms of what to do with excess capacity at a non-bottleneck station, there is a strategy different from sitting idle, doing the work of the bottleneck, or simplifying the work at the bottleneck; it is to use the excess capacity to rework the ideas to get them more stable so that less rework is needed later at the bottleneck station.
Emphasis is mine. You see the basic idea.
In conclusion he lists the following four ways of using capacity that would otherwise be spent idling, waiting for the bottleneck to finish its work:
* Have the workers who would idle do the work of the bottleneck sub-process.
Note: Although this is sometimes mandated by parts of the Agile community, I have seldom seen workers in the software industry who can produce excellent quality results in more than one or two disciplines.
* Have them simplify the work at the bottleneck sub-process.
Note: An idea would be to help out the BAs by scanning through documents and giving pointers to relevant information.
* Have them rework material to reduce future rework required at the bottleneck sub-process.
Note: See the introductory quote by A. Cockburn. Idea: Improve documents that are going to be built upon "further downstream" to approach a quality level of less than 1 major defect per page (more than 20 per page is very common in documents that already have a 'QA passed' stamp).
* Have them create multiple alternatives for the bottleneck sub-process to choose from.
Note: An example would be to provide different designs to stakeholders. Say, GUI designs. This can almost take the form of concurrently developing solutions to one problem by different engineering teams, so that the best solution can be evaluated and consciously chosen.
Monday, October 13, 2008
How to Tell Principles from Paradigms
Type: Principles (or Paradigms :-?)
Status: final
Version: 2008-10-13
Source: Covey, 7 Habits of Highly Effective People
Gist: If you - like me - like to be picky with words sometimes, here's a nice explanation of two words we use over and over again. It started me thinking about what deserves the 'principles' tag here on my blog. I will put more work into all the existing principles here, in order to better distinguish principles from other things.
Principles
Principles describe the way things are, like the law of gravity.
Principles represent natural laws, the reality
Principles are sustainable and timeless
Principles are self-evident & self-validating
Principles can't be faked, you're not in control of them
Principles will remain constant and universal
Principles ultimately govern
Paradigms
Paradigms are mental images of the way things are. Hence, a paradigm can be someone's mental image of a principle.
Paradigms represent implicit assumptions of our lives
Paradigms are projections of our background
Paradigms reveal my world view, my autobiography
Paradigms can change with my thinking
Monday, September 08, 2008
The 3 Most Powerful Requirements Principles
Type: Principles
Status: final
Version: 2008-09-08
Gist: A lot has been written about what requirements should look like, on what they are and what they are not, on what you should do to capture them, etc. These 3 principles take you 80% of the way. They are grounded in sound systems engineering.
P1: My requirements are re-useable.
This means
- every requirement is unique. No copying please! I. e. I write requirements so that they have a unique name, and use the name in order to re-use the requirement in a different context, instead of copying the requirement's text to the other context. Dick Karpinsky calls this the DRY rule: Don't Repeat Yourself.
- every requirement (not the whole document) provides information on version, author, stakeholder, owner, and status
- every requirement refers to something and is referred to by something, e. g. designs, tests, suggestions/solutions, use cases, ...
P2: My requirements are intelligible.
This means
- every requirement has a type, e. g. one of {vision, function requirement, performance requirement, resource requirement, design constraint, condition constraint} (courtesy of T. Gilb's basic requirement types)
- every requirement has a clear structure, i. e. unique, unchanging number or tag, scale, goal (= core specification attributes)
- complex requirements have a clear hierarchy, not some cloudy abstract
- no requirement in any non-commentary part uses words that are just fraught with significance, like 'high performance', 'good reliability', 'reduced costs', 'flexible'
P3: My requirements are relevant.
This means
- every requirement clearly indicates owner, author, stakeholder (in order to be able to judge relevance)
- every requirement clearly indicates its sources (in order to be able to check them)
- every requirement clearly indicates its relative priority (for a specific release, for a specific stakeholder)
- every requirement clearly indicates its constraints and goals
- every requirement clearly indicates what happens if it is not satisfied
- no requirement states a design for some problem; every requirement declares a future state
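To make the three principles concrete, here is a minimal sketch of a requirement record that carries the attributes P1-P3 call for. The field names and values are my own illustration, not a canonical template:

```python
# Illustrative only: one way to hold the attributes P1-P3 ask for.
requirement = {
    "tag": "Order.Throughput",          # unique name: reference it, don't copy it (P1)
    "type": "performance requirement",  # one of Gilb's basic types (P2)
    "version": "2008-09-08",
    "author": "rg",
    "owner": "product manager",
    "stakeholder": "call-centre agents",
    "status": "final",
    "scale": "orders processed per agent per hour",
    "goal": 40,
    "sources": ["call-centre time study, Q2"],       # checkable origin (P3)
    "priority": "must-have for release 1",
    "if_not_met": "agents fall back to the paper process",
    "linked_to": {"tests": ["T-017"], "designs": ["D-003"]},  # traceability (P1)
}

# Re-use by name instead of copying text (the DRY rule):
print(requirement["tag"])
```

Whether you keep such records in a tool, a spreadsheet, or plain text matters less than that every attribute above has an explicit home.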
Suggested Reading:
Tuesday, July 29, 2008
Why and How to State the Credibility of Estimations
Type: Principles
Status: final
Version: 2008-09-08
Sources: T. Gilb's work on Impact Estimation Tables, see http://www.gilb.com/; my own experience
Gist: Have you seen presentations suggesting a project or other endeavour with the purpose of convincing someone that the project would be the right thing to do? More often than not these presentations include shiny numbers of relatively low cost and relatively high benefit (of course). But how often does the presenter tell something about the Credibility of the numbers?
I believe every decision maker needs some assessment of credibility. If you don't tell him how believable your evidence is, the decision maker might turn to other heuristics, like "what's this guy's success record?", or "I don't believe her, so I will resort to a very tight budget in order not to risk too much."
Same problem with any estimate you prepare and present. These Principles explain why you should always give credibility information, and the Rules explain how you can do it.
" An estimate that is 100% certain is a commitment." -- Unknown
Credibility information DEFINED AS any evidence or number that expresses how certain we believe a statement is
Principles
P1: Estimates are assumptions about the future, therefore they must be uncertain.
"Unfortunately, tomorrow never gets here." - Leo Babauta
P2: Predictions are a difficult matter. The farther away the state we predict, the more difficult, the riskier.
P3: Credibility information is a measure of how certain we are about a predicted state.
P4: Decision makers need some assessment of the risk involved in the decision. What can happen in the worst case? (It is equally valid to ask for the best case, but seriously, the best case happens far too seldom to take it into account)
"Most people really like to be able to sleep at night." -- Heuser's Rule
P5: The clearer you state uncertainty,
* the easier it is to distinguish between different strategies
* the more likely other people believe you
* the clearer you yourself see the risk involved
* the better you can argue about a strategy
* the easier it is to learn something about the risk involved and to do something about it
* the clearer you give or take responsibility for an endeavour
Note: While the decision maker who sees uncertainty numbers obviously has to accept the risk involved, there are decision makers who don't like that idea at all. I guess this is one reason why credibility information isn't requested very often.
P6: If credibility information is missing, we by default assume the number or the source is not credible at all.
Note: Maybe the presenter just forgot to provide the information, and then it shouldn't be a problem. We can send him to get it.
Rules
R1: Any number that expresses some future state should be accompanied by credibility information.
R2: If you predict a value on a scale, like "end date", "targeted budget", "performance improvement", or "maintenance cost", give a range of that number.
Notes:
* It's useful to do this in a plus/minus X% fashion. The percentage is a clear and comparable signal to the audience.
* It is not mandatory to give the best/worst case estimates an equal range. How about "we are sure to complete this in 10 weeks, minus 1 plus 2"?
* In Rapid Development, Steve McConnell actually suggests that you communicate estimates in a range that gets smaller over time, with the larger number first. "Six to four months" sounds strange - but if you say the smaller number first, people tend to forget the larger one.
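The arithmetic of R2 is trivial, which is exactly why there is no excuse for omitting it. A small sketch (the numbers are invented), using the asymmetric "10 weeks, minus 1 plus 2" example from the note above:

```python
# Asymmetric range around a nominal estimate, per R2: the best-case
# and worst-case offsets need not be equal.
def estimate_range(nominal, minus, plus):
    """Return (best_case, worst_case) for a scalar estimate."""
    return nominal - minus, nominal + plus

best, worst = estimate_range(10, minus=1, plus=2)  # weeks
# Larger number first, per McConnell's suggestion:
print(f"{worst} to {best} weeks")
```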
R3: If you provide evidence, say where you have it from, or who said so, and what facts (numbers, history, written documentation) lead to this conclusion.
Note: Do this in the backup slides for example. At least you should be able to pull it out of the drawer if requested.
R4: Consider using a safety margin, like factor 2 (bridge builders) or factor 4 (spacecraft engineers).
Notes:
* The margin is an expression of how much risk you are willing to take. Thus, it is a way of controlling risk.
* Use margins whenever you find a number or source is not very credible (like < 0.4), you don't have any historic data, or if there's no economic way of achieving higher credibility.
* Safety margins do not necessarily increase real costs (only planned costs)
R5: Always adjust your cost/benefit-ratios by a credibility factor.
Notes:
A simple scale would be {0.0 - guess, no real facts available, 0.5 - some facts or past experience available, 1.0 - scientific proof available}
A more sophisticated scale would be
{0.0 Wild guess, no credibility
0.1 We know it has been done somewhere (outside the company)
0.2 We have one measurement somewhere (from outside the company)
0.3 There are several measurements in the estimated range (outside the company)
0.4 The measurements are relevant to our case because <fact1, fact2> with credibility <x, y> (don't get trapped in recursion here)
0.5 The method of measurement is considered reliable by <whom>
0.6 We have used the method in-house
0.7 We have reliable measurements in-house
0.8 Reliable in-house measurements correlate to independent external measurements
0.9 We have used the idea on this project and measured it
1.0 Perfect credibility, we have rock solid, contract-guaranteed, long-term, credible experience with this idea on this project, and the results are unlikely to disappear}
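R4 and R5 can be combined in one small sketch (the numbers are invented for illustration): the claimed benefit is discounted by the credibility factor from the scale above, and the planned cost is inflated by a safety margin, before any ratio is compared.

```python
def adjusted_benefit_cost(benefit, cost, credibility, safety_margin=1.0):
    """Discount the claimed benefit by its credibility (0.0-1.0, per
    the scale above) and inflate the planned cost by a safety margin
    (R4), then return the adjusted benefit/cost ratio (R5)."""
    return (benefit * credibility) / (cost * safety_margin)

# A shiny proposal: benefit 100, cost 20 -> raw ratio 5.0
raw = adjusted_benefit_cost(100, 20, credibility=1.0)
# The same proposal with only in-house measurements (0.7)
# and a bridge-builder's factor-2 margin:
cautious = adjusted_benefit_cost(100, 20, credibility=0.7, safety_margin=2.0)
print(raw, cautious)  # 5.0 1.75
```

The point is not the particular formula but that the shiny number and the cautious number land on the table side by side.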
Thursday, July 24, 2008
10 Critical Requirements Principles for 0-Defects-Systems
Type: Principles
Status: final
Version: 2008-07-24
Gist: It might sound esoteric to you, but it is actually possible to build non-trivial systems with (close to) zero defects. At the very least, we should try to get close to zero defects as early in the development cycle as possible, because of the relative defect removal costs (factors 1-10-100-1000, remember?).
I had the luck to be part of such a system development: a multi-user database system with about six releases over a ten-year period and the strictest requirements on data integrity and data availability. We detected 5 (five) defects in total during user acceptance testing, and the system still runs without interruptions, obviously performing according to the requirements. These are the top ten principles we (un)consciously followed, as we found out in a Lessons Learned session earlier this month. I added principles P7-P10 because of recent and not so recent experience in other projects. However, following these later principles (while ignoring principles P1-P6) did not lead to zero-defect systems. I guess all these principles represent necessary, not sufficient, conditions. After all, this is anecdotal evidence.
P1: Customer and software developer who like each other.
P2: Small and easy scope of your requirements, increasing in small steps.
P3: High-value requirements are the initial focus.
High-value DEFINED AS anything that is more important to a primary stakeholder than other things (relative definition intentional).
P4: Acceptance criteria are paramount.
P5: Templates rule.
P6: Quality control for your requirements (inspections).
Requirements inspection DEFINED AS a method of process control through sampling measurement of specification quality.
P7: Top management's requirements (aka objectives).
Top management DEFINED AS every manager who is higher up in the food chain than the project manager and has impact on project success or failure.
P8: Problems first (then the requirements).
P9: Performance requirements first.
Performance requirement DEFINED AS a specification of the stakeholder requirements for 'how well' a system should perform.
P10: Replacing requirement-based estimation with planning.
I. e. use some capacity/throughput metric and promote the concepts of variation and fluctuation.
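P6 above defines inspection as process control through sampling measurement, and that reduces to simple arithmetic. A sketch (the effectiveness figure and thresholds are illustrative assumptions; the commonly cited exit level is under 1 major defect per page):

```python
def extrapolate_majors(majors_found, pages_sampled, effectiveness=0.5):
    """Estimate total major defects per page from a sample, assuming a
    checker typically finds only a fraction of the defects present."""
    return (majors_found / pages_sampled) / effectiveness

# 6 majors found in a 3-page sample, assuming ~50% checker effectiveness:
density = extrapolate_majors(majors_found=6, pages_sampled=3)
print(density)        # 4.0 estimated majors per page
print(density < 1.0)  # exit criterion not met
```

The decision is then a process-control decision: a document above the threshold goes back for rework before anything is built on top of it.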