Requirements written so that they imply a solution end up biasing the form of the solution, which in turn kills creativity and innovation by forcing the solution in a particular direction. Once this happens, other possibilities (which may have been better) cannot be investigated. [...] This mistake can be avoided by identifying the root problem (or problems) before considering the nature or requirements of the solution. Each root problem or core need can often be expressed in a single statement. [...] The reason for having a simple, unbiased statement of the problem is to allow the design team to find a path to the best solution. [...] Creative and innovative solutions can be created through any software development process, as long as the underlying mechanics of how to go from problem to solution are understood.
Thursday, February 25, 2010
Problem to Solution - The Continuum Between Requirements and Design
Wednesday, February 03, 2010
Using Extreme Inspections to Significantly Improve Requirements Practice
It will be showcased next week in the Feb issue of the eJournal of ModernAnalyst.
From the introduction:
Extreme Inspections are a low-cost, high-impact way to assure specification quality, to teach good specification practice effectively, and to make informed decisions about the requirements specification process and its output, in any project. The method is not restricted to requirements-related material; this article, however, is limited to requirements specification. It gives firsthand experience and hard data to support the above claim. Using an industry case study I conducted with one of my clients, I will describe the Extreme Inspection method - enough to understand what it is and why its use is almost mandatory, but not how to do it. I will also give evidence of its strengths and limitations, as well as recommendations for its use and other applications.
Thank you, Adrian, for your support!
Saturday, October 03, 2009
Why High-level Measurable Requirements Speed up Projects by Building Trust
(Reading time: 5 minutes or less)
Stephen M.R. Covey‘s The Speed of Trust caused me to realize that trust is an important subject in the field of Requirements Engineering.
Neither the specification of high-level requirements (a.k.a. objectives) nor the specification of measurable requirements is a new practice in requirements engineering; both are simply solid engineering practice. They are, however, extremely helpful for building trust between customer and supplier.
The level of trust between customer and supplier determines how much rework will be necessary to reach the project goals. Rework - one of the great wastes that software development allows in abundance - adds to the duration and cost of the project, especially if it happens late in the development cycle, i.e. after testing or even after deployment.
Let me explain.
If you specify high-level requirements – sometimes called objectives or goals – you make your intentions clear: You explicitly say what it is you want to achieve, where you want to be with the product or system.
If you specify requirements measurably, by giving either a test method (binary requirements) or scale and meter parameters (scalar requirements), you make your intentions clear, too.
With intentions clarified, the supplier can see how the customer is going to assess his work. The customer's agenda is clear to him. Knowing agendas enables trust. Trust is a prerequisite for speed, and therefore for low cost.
"Trust is good, control is better," says a German proverb that loses some precision in its English form. If you have speed and cost in mind as dimensions of "better," then the sentence could not be more wrong! Imagine all the effort needed to continuously check somebody's results and control everything he does. If, on the other hand, you trust somebody, you can relax and concentrate on improving your own job and yourself. It's obvious that trust speeds things up and therefore consumes fewer resources than suspicion.
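To make "measurable" concrete, here is a minimal sketch in Python. The requirement texts, scale, meter, and target numbers are invented for illustration; they are not from any real specification.

```python
# Illustrative sketch: a binary requirement names a test method;
# a scalar requirement names a scale (what to measure), a meter
# (how to measure it), and a target on that scale.
from dataclasses import dataclass

@dataclass
class BinaryRequirement:
    statement: str
    test_method: str  # how acceptance will be decided, stated up front

@dataclass
class ScalarRequirement:
    statement: str
    scale: str     # the dimension of measurement
    meter: str     # the measurement procedure
    target: float  # "target or better"; here, lower is better

    def met_by(self, measured: float) -> bool:
        return measured <= self.target

login = BinaryRequirement(
    statement="A registered user can log in with name and password.",
    test_method="Acceptance test TC-17, run on the staging system.",
)

search_speed = ScalarRequirement(
    statement="Search results appear quickly.",
    scale="seconds from submitting a query to the first result",
    meter="95th percentile over 1,000 test queries on reference hardware",
    target=2.0,
)

print(search_speed.met_by(1.4))  # True: 1.4 s meets the 2.0 s target
```

Either way, the supplier can read off exactly how his work will be judged before writing a line of code.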
Let‘s return to requirements engineering and the two helpful practices, namely specifying high-level requirements and specifying requirements measurably.
High-level Requirements
Say the customer writes many low-level requirements but fails to add the top level. By top level I mean the 3 to 10 (possibly complex) requirements that describe the objectives of the whole system or product. These objectives are then only implicit, hidden among the many low-level requirements. The supplier has to guess (or ask). Many suppliers assume the customer knew what he was doing when he decomposed his objectives into the requirements given in the specification. They trust him. More often than not he didn't know; otherwise, why were the objectives not stated in the requirements specification document in the first place?
So essentially the customer has, at best, implicitly said what he wants to achieve and where he is headed. Chances are the supplier's best guesses missed the point. Eventually the supplier delivers the system for the customer to check, and the conversation goes like this:
Customer: "Hm, so this ought to be the solution to my problem?"
Supplier: "Er, ... yes. It delivers what the requirements said!"
Customer: "OK, then I want my problem back."
In this case the supplier had better take it back, work on the customer's real agenda, and start rebuilding the misused trust. More often than not, however, what follows is a lengthy phase of reworking the system or product, in an attempt to fix it according to requirements that were unclear, or not even there, when the supplier began working.
Every bit of rework is a bit of wasted effort. We could have done it right the first time and used the extra budget for a joint weekend fishing trip.
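As a toy illustration of what the missing top level might look like, here is a hedged Python sketch: a handful of explicit objectives, with every low-level requirement traced back to the objective it serves. All names and texts are invented.

```python
# Invented example: explicit top-level objectives, plus a trace
# from each low-level requirement to the objective it serves.
objectives = {
    "OBJ-1": "Cut order processing time in half",
    "OBJ-2": "New clerks are productive within one day",
}

requirements = [
    # (id, text, serves objective)
    ("REQ-101", "Orders can be imported as CSV files.", "OBJ-1"),
    ("REQ-102", "Duplicate orders are detected automatically.", "OBJ-1"),
    ("REQ-201", "Every screen offers context-sensitive help.", "OBJ-2"),
]

# Simple consistency check: every requirement must serve a stated objective.
for req_id, text, obj in requirements:
    assert obj in objectives, f"{req_id} serves no stated objective"
    print(f"{req_id} -> {obj}: {objectives[obj]}")
```

With such a trace in the specification, the supplier never has to guess the intent behind a low-level requirement.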
Measurable Requirements
Nearly the same line of reasoning can be used to promote measurable requirements.
Say the customer specifies requirements but fails to say, at the same time, how he will test them. The supplier most likely gives him a leap of faith; the customer can then prove trustworthy or not. Assume the customer specifies the acceptance criteria and test procedures long after development began, just before testing starts. Maybe he didn't find the time to do it earlier. Quite possibly, by adding the acceptance criteria and test procedures, he changes the possible interpretations of his requirements specification to some degree. From the supplier's angle, the customer NOW shows his real agenda, and it's different from what the supplier thought it was. The customer has misused the supplier's trust, unintentionally in most cases.
Besides this obvious strain on the relationship between customer and supplier, the system sponsor now has to pay the bill, quite literally, as expensive rework has to be done to fix things. Hopefully the supplier delivered early, for more time is needed now.
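Here is a small, hedged sketch in Python of how a late meter can silently change what a requirement means. The scenario and numbers are invented: the requirement said "responses within 2 seconds" but named no meter, the supplier built to the average, and the late acceptance criterion measures the 95th percentile.

```python
import math

# Invented sample of measured response times in seconds.
measured = [0.4, 0.6, 0.8, 1.1, 1.3, 1.6, 1.9, 2.4, 2.8, 3.5]

average = sum(measured) / len(measured)

# 95th percentile via the nearest-rank method.
ranked = sorted(measured)
rank = math.ceil(0.95 * len(ranked))
p95 = ranked[rank - 1]

print(f"average:  {average:.2f} s -> passes: {average <= 2.0}")  # 1.64 s, passes
print(f"95th pct: {p95:.2f} s -> passes: {p95 <= 2.0}")          # 3.50 s, fails
```

Same system, same requirement text, opposite verdicts. Naming the meter together with the requirement would have prevented the surprise.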
So...
Trust is an important prerequisite for systems with few or even zero defects. I experienced that the one (and probably last) time I was part of a system development that resulted in close to zero defects. Trust between customer and supplier was one of the prerequisites for zero defects, as our root cause analysis in the post mortem showed (ref. principles P1, P4, P7, and P8). Zero defects means zero rework after the system has been deployed; in the project I was working on, it even meant zero rework after the system was delivered for acceptance testing. You see, it makes perfect business sense to build trust by working hard on both quantified and high-level requirements.
In fact, both practices are signs of strong engineering competence. Competence is, in turn, a prerequisite for trust, as Mr. Covey rightly points out in his aforementioned book.
If you want to learn more about how to do this, check out these sources:
- Measurable Value (with Agile), by Ryan Shriver.
- How to rewrite a requirement / How to make it measurable (See a live example of a bad and a good high-level specification), by Tom and Kai Gilb.
- How to Specify High-level Requirements (aka goals).
- Requirements Hierarchies and Types of Requirements.
Wednesday, July 29, 2009
Give Kelly Waters a Hand
Tuesday, July 28, 2009
Why it is stupid to write specifications and leave out background or commentary information
Refurbished: Non-Functional Requirements and Levels of Specification
- One school of thought describes requirements decomposition as a process to help us select and evaluate appropriate designs.
- The other school describes requirements decomposition as itself being a form of the design process.
Monday, July 13, 2009
Levels of Spec Principle, Non-Functional Requirements
Friday, July 03, 2009
A Quest for Up-Front Quality
- Why I wanted to have a rigorous QA effort for the first steps of a real-life project
- What I did to achieve this (Tom Gilb's Extreme Inspections, aka Agile Inspections, aka Specification Quality Control (SQC))
- What the outcomes were, in terms of both quality and budget (with detailed data)
- What the people said about the effort
- What the lessons learned are
Friday, May 22, 2009
Estimation with Use Cases: Deeper Thoughts
If you have ever tried estimating with use cases, you know that the various levels of decomposition encountered in the wild are troublesome. John does an excellent job of conceptualizing this problem.
The article seems to be from 1999, and I'm not sure whether John's ideas made it into the various estimation tools. For me, reading them brought a great deal of clarity to the levels-of-use-cases concept, so I think it's worth reading anyway.
Tuesday, May 05, 2009
Atomic Functional Requirements
Tuesday, April 28, 2009
"Shall" or "must" to mark up mandatory requirements?
Saturday, April 11, 2009
Extracting Business Rules from Use Cases
Monday, April 06, 2009
Rules for Checking Use Cases
For all readers who face the challenge of finding a BA job, or at least a (second) job where BA skills are applicable, there's an interesting discussion going on at the modernAnalyst forum. I suggested moving to a QA perspective. This idea fits with the above rules for checking use cases.
Thursday, March 05, 2009
Update on Specifying Goals
- User does some inputs
- User "sends" the input, i.e. confirms his inputs
- System checks inputs against rules X and Y
- System shows an error message in case the checks fail.
- ...
- User does some inputs
- (System has to make sure rules X and Y are not broken; see the sketch after this list)
- "I think I am an expert user."
- "I do these simple inputs. Damn I'm good!"
- "Now send the stuff to the stupid machine." *klick*
- "Ooops ... What the ...."
- "The stupid machine says I did it wrong?!"
- "Why didn't it prevent me from doing it wrong, is there anything this machine is useful for?"
Friday, January 30, 2009
Estimating with Use Case Points - FREE template
- An introduction to use case point estimation with good context for use case newbies
- A how-to for the final calculations (the core arithmetic is also sketched below)
- The free template
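If you just want the gist of the arithmetic before opening the template, here is a minimal sketch of the standard use case points formula (after Karner). All the counts and factor sums below are invented.

```python
# Standard use case points (UCP) arithmetic; all counts invented.

# Unadjusted use case weight: simple=5, average=10, complex=15.
uucw = 3 * 5 + 4 * 10 + 2 * 15          # 85

# Unadjusted actor weight: simple=1, average=2, complex=3.
uaw = 2 * 1 + 1 * 2 + 1 * 3             # 7

# The factor sums normally come from weighted checklists;
# here we simply assume the weighted sums.
tcf = 0.6 + 0.01 * 38                   # technical complexity factor
ecf = 1.4 - 0.03 * 17                   # environmental complexity factor

ucp = (uucw + uaw) * tcf * ecf
effort_hours = ucp * 20                 # 20 h/UCP is a common starting ratio

print(f"UCP: {ucp:.1f}, estimated effort: {effort_hours:.0f} hours")
```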
Thursday, December 25, 2008
Software Development Practice in the 21st (22nd?) Century
This software never crashes. It never needs to be re-booted. This software is bug-free. It is perfect, as perfect as human beings have achieved. Consider these stats: the last three versions of the program, each 420,000 lines long, had just one error each. The last 11 versions of this software had a total of 17 errors. -- Charles Fishman on the software product the On-Board Shuttle Group puts out
BTW, they seem to use three key practices: very detailed requirements, inspections, and continuous improvement.
The way the process works, it not only finds errors in the software. The process finds errors in the process. -- Charles Fishman on the On-Board Shuttle Group's process
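To put the quoted stats in perspective, here is a quick back-of-the-envelope calculation in Python, using only the numbers from the quote:

```python
# Back-of-the-envelope defect density from the quoted stats.
lines_per_version = 420_000

# "The last three versions ... had just one error each."
errors_per_kloc = 1 / (lines_per_version / 1_000)
print(f"{errors_per_kloc:.4f} errors per KLOC per version")  # 0.0024
```

Commonly cited industry figures run to several defects per KLOC, so this is orders of magnitude better than typical commercial software.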
PS: I'm going to go on vacation from Dec 28 to Jan 20. More interesting topics still to come, please be patient.
Wednesday, December 10, 2008
Use Case Content Patterns Updated
Monday, December 01, 2008
Patterns for Use Case CONTENT, anyone? Yes, finally!
Visit Planet Project to see the respective Process.
Note: I proudly contributed the idea of another pattern to Martin's collection, the DESTRUCTOR pattern ;-) Go have a look.
Thursday, November 27, 2008
Failure by delivering more than was specified?
It strikes me as odd, but there might be situations where a provider fails (i.e. the system will not be accepted by the customer) because he delivered MORE than was specified. I'm not talking about bells and whistles here, which (only) waste resources.
Imagine a hardware or software product that is designed to serve more purposes than required by a single customer. Any COTS product and any product line component should fit this definition.
Which kinds of requirements could be exceeded: scalar requirements, binary requirements, or both? I think scalar requirements (anything you can measure on some scale) cannot be exceeded if they do not constrain the required target on two sides of the scale. I haven't seen that; it's always "X or better", e.g. 10,000 transactions per second or more.
And even if a requirement were constrained on two sides, exceeding it would simply be a defect.
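A tiny sketch of that distinction in Python; the numbers are invented:

```python
# One-sided target ("X or better"): a surplus is impossible by definition.
def meets_one_sided(measured_tps: float, goal_tps: float = 10_000) -> bool:
    return measured_tps >= goal_tps

# Two-sided constraint: a value can fail by being "too good".
def meets_two_sided(measured_tps: float,
                    lower: float = 10_000, upper: float = 15_000) -> bool:
    return lower <= measured_tps <= upper

print(meets_one_sided(20_000))   # True: exceeding is simply "better"
print(meets_two_sided(20_000))   # False: outside the band is a defect
```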
But there can be a surplus of binary qualities, i.e. functions. A surplus function can affect other functions and/or scalar qualities, I think.
Say, as a quite obvious example, the system sports a sorting function which was not required. A complex set of data can be sorted, and sorting may take some time. A user can trigger the function that was not required.
- This might derail overall system availability (response time), a quality the user required.
- It might open a security hole.
- It might affect data integrity, if some neighboring system does not expect the data to be sorted THAT way.
- It might change the output of another, required function that does not expect the data to be sorted THAT way.
(First flush of fantasy ends here.)
So, if you find a surplus function in a system, what do you do? Call it a defect and refuse to accept the system?
Eager for your comments!
Friday, October 31, 2008
Templates available in GERMAN
- A template of an impact estimation table
- A set of 3 requirements templates for functions, scalar requirements and designs.
Both are Tom Gilb's work, although I took the table from Ryan Shriver.
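For readers who haven't seen an impact estimation table, here is a minimal hedged sketch of the idea in Python; the design ideas, objectives, and percentages are all invented.

```python
# Sketch of an impact estimation table (after Gilb); numbers invented.
# Rows: candidate designs. Columns: scalar objectives.
# Each cell: estimated % of the way from baseline to target
# that the design is expected to move us.
impacts = {
    "Caching layer":   {"Speed": 60, "Usability": 0,   "Cost": -10},
    "New UI wizard":   {"Speed": 0,  "Usability": 80,  "Cost": -20},
    "Remove features": {"Speed": 20, "Usability": -10, "Cost": 40},
}

for design, row in impacts.items():
    total = sum(row.values())
    print(f"{design:15s} total impact: {total:4d}%  {row}")
```

Comparing the totals (ideally divided by each design's cost) is what makes the table useful for choosing between competing designs.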