Showing posts with label Principles.

Saturday, February 19, 2011

Improvement Principles

I assembled this set of principles to guide me and others through the sometimes stormy times of change. To me, 'times of change' equals 'always'. If there's one common theme among all my different job positions and private challenges, it's the wish for improvement. Use these principles whenever you embark on a journey.

Wednesday, June 23, 2010

What are the most valuable specifications?

During my last vacation I came across evidence of good technical value specifications. At VW's Autostadt there is a display of this wonderful old Phaeton Sedan (sorry, the photo is blurred).
There is also a reproduction of a poster that was used to market the car in its time. It clearly states the most important values the car delivers. Reading through it, and thinking of the extreme focus on functions nowadays, I ask you: Have we forgotten to focus on the really important things, like value?

The poster reads:
It is our belief that the most important "specifications" of the Cord front-drive are: Do you like its design? Does the comfort appeal to you? Does it do what you want a car to do better and easier? When a car, built by a reputable and experienced company, meets these three requirements, you can safely depend on that company to adequately provide for all mechanical details. -- E. L. CORD

Thursday, March 04, 2010

Adding to Acceptance Testing

James Shore has sparked a lively discussion about the sense and nonsense of acceptance testing. To be honest, my very first reaction to this statement of his was to reject it:

"Acceptance testing tools cost more than they're worth. I no longer use it or recommend it."

But James expanded on his statement in another post. And there he impressed me.

"When it comes to testing, my goal is to eliminate defects. At least the ones that matter. [...] And I'd much rather prevent defects than find and fix them days or weeks later."

Yes! Prevention generally has a better ROI than cure. Not producing any defects in the first place is a much more powerful principle than going to great lengths to find them. That's why I like the idea of inspections so much, especially if inspections are applied to early artifacts.

James also very beautifully laid out the idea that in order to come close to zero defects in the finished product, you need several defect elimination steps. It is not sufficient to rely on only a few, let alone one. Capers Jones has the numbers.

I'd love to see this discussion grow. Let's turn away from single techniques and methods and see the bigger picture. Let's advance the engineering state of the art (SOTA, which is a SET of heuristics). Thank you, James!

Thursday, February 25, 2010

Problem to Solution - The Continuum Between Requirements and Design

Christopher Brandt, a relatively new blogger from Vancouver, Canada, has an awesome piece about the difference between problem and solution. If he keeps blogging about problem solving at this outstanding level of quality, he is definitely someone to follow.

While I do not completely agree with the concepts, I believe everybody doing requirements or design should at least have this level of understanding.

Requirements written in a way that implies a solution end up biasing the form of the solution, which in turn kills creativity and innovation by forcing the solution in a particular direction. Once this happens, other possibilities (which may have been better) cannot be investigated. [...] This mistake can be avoided by identifying the root problem (or problems) before considering the nature or requirements of the solution. Each root problem or core need can often be expressed in a single statement. [...] The reason for having a simple unbiased statement of the problem is to allow the design team to find a path to the best solution. [...] Creative and innovative solutions can be created through any software development process as long as the underlying mechanics of how to go from problem to solution are understood.

Which brings me to promote, again and again, the single most powerful question for deriving requirements from a given design: WHY?

Don't forget to have fun ;-)




Saturday, December 05, 2009

Who is your hero systems engineer?

These days, I'm inquiring about engineering. I'd like to know who your hero systems engineer was or is (or will be?). Please comment, or send me a Twitter message. Thank you!

Thank you again for making it past the first paragraph. ;-) It all started when, a couple of weeks ago, I was again confronted with a colleague's opinion that I am a theorist. I reject this opinion vehemently; quite the contrary, I believe my work, especially my work for the company, is a paragon of pragmatism. ;-)
So my argument against the opinion usually is 'No, I'm not a theorist!', in a more or less agitated voice. Obviously, I need a better basis for making that point. Kurt Lewin said:
Nothing is as practical as a good theory.
I used this sentence for a while as a footer for emails that I suspected would raise the 'theorist' criticism again.
After all, this sentence is only a claim as well, just like my stubborn phrase above. It may be stronger because it carries Lewin's weight. Unfortunately, very few people instantly know who Kurt Lewin was, and that he most of all used experience - not theory - to advance humankind.

Then a friend of mine, a U.S. citizen who at some point in time chose to live in Norway (which is a very practical move in my eyes :-), pointed me to the work of Billy V. Koen, Professor of Mechanical Engineering at the University of Texas. You should watch this 1-hour movie if you are in the least interested in engineering, philosophy and art. Or in more mundane things like best practices, methods, techniques, recipes, and checklists, many of which, concerning business analysis and project management, can be found at Planet Project, in case you don't know it.

Here is Prof. Koen's definition of engineering from the movie:
The Engineering Method (Design) is the use of heuristics to cause the best change in an uncertain situation within the available resources.
Causing change in an uncertain situation within the available resources sounds a lot like project management to me. Like program or portfolio management, too. Maybe like management in general.

Whenever an improbable connection opens up between two different fields of my thinking, I find there is useful truth in it. Next, I want to learn about engineering and me, and one way to approach this is to find out whom I admire for his or her engineering skills. I tried thinking of a couple of candidate names and found surprisingly few. I've already identified Leonardo da Vinci (who is not a Dan Brown invention, as my nephew once suggested :-). A quick request to the twitterverse yielded nothing, no new name.

So this is my next take: I'd like to know who your hero systems engineer was or is (or will be?), and why. Please comment, or send me a (Twitter) message. Thank you!

Saturday, October 03, 2009

Why High-level Measurable Requirements Speed up Projects by Building Trust

(Allow 5 minutes or less of reading time)


Stephen M.R. Covey's The Speed of Trust caused me to realize that trust is an important subject in the field of Requirements Engineering.

Neither the specification of high-level requirements (a.k.a. objectives) nor the specification of measurable requirements is a new practice in requirements engineering; both are just solid engineering practice. However, both are extremely helpful for building trust between customer and supplier.

The level of trust between customer and supplier determines how much rework will be necessary to reach the project goals. Rework – one of the great wastes that software development allows abundantly – adds to the duration and cost of the project, especially if it happens late in the development cycle, i.e. after testing or even after deployment.


Let me explain.


If you specify high-level requirements – sometimes called objectives or goals – you make your intentions clear: You explicitly say what it is you want to achieve, where you want to be with the product or system.

If you specify requirements measurably, by giving either a test method (binary requirements) or scale and meter parameters (scalar requirements), you make your intentions clear, too.
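
To make the two flavors concrete, here is a minimal sketch of one binary and one scalar requirement, modeled as plain data (the tags and numbers are mine, purely for illustration):

# Illustrative only: two ways of making a requirement's intent testable.
binary_requirement = {
    "tag": "Report.PDFExport",  # hypothetical name
    "statement": "The system shall export reports as PDF.",
    "test_method": "Export the sample report and open it in two common PDF viewers.",
}

scalar_requirement = {
    "tag": "Usability.Learning",  # hypothetical name
    "statement": "Novice users can place an order quickly.",
    "scale": "minutes for a novice user to place a first order",
    "meter": "usability test with 10 novice users, median time",
    "goal": 15,  # target level on the scale
}

Either way, the supplier can see in advance how the result will be judged.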

With intentions clarified, the supplier can see how the customer is going to assess his work. The customer's agenda will be clear to him. Knowing agendas enables trust. Trust is a prerequisite for speed, and therefore for low cost.


"Trust is good, control is better," says a German proverb that is not quite exact in its English form. If you have speed and cost in mind as dimensions of "better," then the sentence could not be more wrong! Imagine all the effort needed to continuously check somebody's results and control everything he does. On the other hand, if you trust somebody, you can relax and concentrate on improving your own job and yourself. It's obvious that trust speeds things up and therefore consumes fewer resources than suspicion.

Let's return to requirements engineering and the two helpful practices, namely specifying high-level requirements and specifying requirements measurably.


High-level Requirements

Say the customer writes many low-level requirements but fails to add the top level. By top level I mean the 3 to 10 possibly complex requirements that describe the objectives of the whole system or product. These objectives are then hidden, implicitly, among the many low-level requirements. The supplier has to guess (or ask). Many suppliers assume the customer knew what he was doing when he decomposed his objectives into the requirements given in the requirements specification. They trust him. More often than not he didn't - or why were the objectives not stated in the requirements specification document in the first place?


So essentially the customer might have – at best – implicitly said what he wants to achieve and where he is headed. Chances are the supplier's best guesses missed the point. Eventually he delivers the system for the customer to check, and then the conversation goes like this:


You: “Hm, so this ought to be the solution to my problem?”

He: “Er, … yes. It delivers what the requirements said!”

You: “OK, then I want my problem back.”


In this case he had better take it back and work on his real agenda, and on how to rebuild the abused trust. However, more often than not what follows is a lengthy phase of working the system or product over, in an attempt to fix it according to requirements that were not clear, or not even there, when the supplier began working.


Every bit of rework is a bit of wasted effort. We could have done it right the first time and used the extra budget for a joint weekend fishing trip.



Measurable Requirements

Nearly the same line of reasoning can be used to promote measurable requirements.

Say the customer specified requirements but failed to give, AT THE SAME TIME, a clue about how he will test them. The supplier most likely granted him a leap of faith; the customer could then prove either trustworthy or not. Assume the customer decides to specify the acceptance criteria, and how he intends to test, long after development has begun, just before testing starts. Maybe he didn't find the time to do it before. Quite possibly, by adding the acceptance criteria and test procedures, he changes to some degree the possible interpretations of his requirements specification. From the supplier's angle the customer NOW shows his real agenda, and it's different from what the supplier thought it was. The customer has abused the supplier's trust, unintentionally in most cases.

Besides this apparent strain on the relationship between customer and supplier, the system sponsor will now have to pay the bill. Quite literally so, as expensive rework has to be done to fix things. Hopefully the supplier delivered early, for more time is needed now.


So...

Trust is an important prerequisite for systems with few or even zero defects. I experienced that the one and probably only time I was part of a system development that resulted in (close to) zero defects: one of the prerequisites for zero defects is trust between customer and supplier, as we root-cause-analyzed in a post mortem (ref. principles P1, P4, P7, and P8). Zero defects means zero rework after the system has been deployed. In the project I was working on it even meant zero rework after the system was delivered for acceptance testing. You see, it makes perfect business sense to build trust by working hard on both quantified and high-level requirements.

In fact, both practices are signs of a strong competence in engineering. Competence is – in turn – a prerequisite to trust, as Mr. Covey rightly points out in his aforementioned book.


If you want to learn more about how to do this, check out the sources and related material on Planet Project.


Thursday, September 10, 2009

Quantum Mechanics, Buddhism, and Projects - Again!

Today I'm proud to announce that my QMBP :-) article was again published by a major web site on business analysis, requirements engineering and product management: The Requirements Network. The site is full of interesting material for beginners and experts. I recommend reading it.

You'll find my piece and some interesting comments here.

And - ha! - mind the URL of my article: ... node/1874.
Did you know that Winston Churchill was born that year? Must say something ... :-D
Find out more about 1874.


Tuesday, July 28, 2009

Refurbished: Non-Functional Requirements and Levels of Specification

Follow me on Twitter: @rolfgoetz

After an interesting and insightful discussion with Tom Gilb and Sven Biedermann about one of my latest PlanetProject articles, I decided to work it over. It is a how-to for a good requirements hierarchy. A good requirements hierarchy is an important prerequisite for a more conscious and logical design or architecture process. This is because real requirements drive design, not designs in requirements' clothing. (Thanks, Tom, for yet another clarification!)

It seems that, in trying to write as briefly as possible, I was swept away from good writing practice and got sloppy with wording. I also found out that there is a philosophical, hence essential, difference in how the process of 'requirements decomposition' can be seen.
  1. One school of thought describes requirements decomposition as a process to help us select and evaluate appropriate designs.
  2. The other school describes requirements decomposition as being a form of the design process.
Personally, I subscribe to the second meaning, because of a belief of mine: mankind is very used to solution-thinking, but very new to problem-thinking (Darwin weighs in). So most forms of thinking, including requirements decomposition, are outputs of a solution-finding or design process. Or, as Sven put it, requirements decomposition is a shortcut to designing, where 'designing' takes on the meaning suggested by number 1 above.

However, it can be very useful to assume there actually is an essential and clear distinction between requirements decomposition and design processes. The point here is: you need to define it that way, and then all is well :-) I didn't define it properly, and thus gave rise to many arguments, written and exchanged.

Anyway, I hope the article now sheds even more light on the constant quarrel about so-called 'non-functional requirements', i.e. what they are, what they are for, and why they are so sparse in common requirements specification documents.

Most requirements specification documents I see these days have lots of required functions, and few (if any) requirements for other attributes of the system, like all the -ilities. AND many analysts (including me, from time to time) confuse non-functional with scalar. Read the article on PlanetProject to learn more.


Sunday, July 26, 2009

Intro to Statistical Process Control (SPC)

In exploring the web for Deming stuff I stumbled upon this site by Steve Horn. It introduces Statistical Process Control (SPC).
There are a number of pages on various aspects.

The articles are quick to read and easy to understand; they are written to the point.

I'd like to add that in Deming's 14 Points, Point 3 needs a special interpretation for software or (IT) project people.
The point says: 
3. Cease dependence on inspection to achieve quality. Eliminate the need for inspection on a mass basis by building quality into the product in the first place.

Deming was talking about (mass) production. In that field, "inspection" means roughly the same as "testing" in our profession.
In fact, inspections (of requirements, designs, code and test cases) are among the most effective activities you can do in software, for building quality into the product in the first place.

BTW, if you worry about your software development organisation NOT really testing its products, it is a very wise idea to first introduce a couple of inspection stages, for this is both more efficient (economic) and more effective (fewer defects). It is also a sensible way to introduce learning.
Testing is about finding defects; inspections are about pointing out systematic errors and giving people a real chance to prevent them in the future.

Here's one quote from Steve I like in particular:
Confident people are often assumed to be knowledgeable. If you want to be confident the easiest way is to miss out the 'Study' phase (of the Plan-Do-Study-Act-Cycle) altogether and never question whether your ideas are really effective. This may make you confident but it will not make you right.

(words in brackets are mine)


Monday, July 13, 2009

Levels of Spec Principle, Non-Functional Requirements

Follow me on Twitter: @rolfgoetz

Just a quick remark: I added a grammar representation to the "levels of specification principle" on PlanetProject. For those of you who like precision. 
If my boss sees this, he again will call me a "theorist." :-)


Friday, July 03, 2009

A Quest for Up-Front Quality

Today I'd like to point you to the presentation slides of a talk I gave at this year's Gilb Seminar on Culture Change in London. 

You'll find the file if you go to this PlanetProject page on Zero Defects, navigate to principle P6 and click the link titled "presentation".

The title was "A Quest for Up-Front Quality".
In short: if you want to see truly amazing results from one of the most effective methods in software development history, don't miss the slides. Any questions? Just ask!

The talk was warmly welcomed by the great Value-Thinkers of the seminar; special thanks to Ryan Shriver, Allan Kelly, Giovanni Asproni, Niels Malotaux, Matthew Leitch, Jerry Durant, Jenny Stuart, Lars Ljungberg, Renze Zijlstra, Clifford Shelley, Lorne Mitchell, Gio Wiederhold, Marilyn Bush, Yazdi Bankwala, and all others I forgot to mention.


Monday, June 01, 2009

History Repeating?, or A Case for Real QA

Jeff Patton of AgileProductDesign.com has an excellent article on Kanban Development. He does a great job of explaining the kanban idea in relation to the main line of thinking in Agile. The article is about using a pull system rather than a push system like traditional (waterfall) development, or agile development with estimation, specification and implementation of user stories.

Reading the first couple of sections got me thinking about another topic: isn't this agile thing "history repeating"?
First of all, I'd like to question how common agile development really is. It's clear that the agile community talks a lot about agile (sic!), but what do the numbers tell us? I don't have any, so I'm speculating. Let's assume the percentage of agile developments is significant. (I don't want to argue against Agile, but I want to know if my time is spent right ;-)

Jeff writes:
"Once you’ve shrunk your stories down, you end up with different problems. User story backlogs become bigger and more difficult to manage. (...) Prioritizing, managing, and planning with this sort of backlog can be a nasty experience. (...) Product owners are often asked to break down stories to a level where a single story becomes meaningless."

This reminds me so much of waterfall development, a line of thinking in which, to be honest, I spent most of my professional career.
First, this sounds a lot like the opening line of any requirements management argument: you have so many requirements to manage that you need extra discipline, extra processes, and (of course ;-) an extra tool. We saw (and see) these kinds of arguments all over the place in waterfall projects. Second, estimation, a potentially risky view into the future, has always been THE problem in big-bang developments. People try to predict minute detail, with the prediction covering many months or even years. This is beyond most people's capacity. Third and last, any single atomic requirement in a waterfall spec really IS meaningless. I hope we learn from the waterfall experience.

Jeff goes on:
"Shrinking stories forces earlier elaboration and decision-making."
Waterfall-like again, right?

If user stories get shrunk in order to fit into a time-box, there's another solution to the problem (besides larger development time-boxes, or using a pull system as Jeff has beautifully laid out): don't make the user story the main planning item. How about "fraction of value delivered" instead, in percent?

Jeff again:
"It’s difficult to fit thorough validation of the story into a short time-box as well. So, often testing slips into the time-box after. Which leaves the nasty problem of what to do with bugs which often get piped into a subsequent time-box."

This is nasty indeed, as I know from personal experience. BTW, the same problem exists on the other side of the development time-box if you need or want to thoroughly specify stories/features/requirements. Typical solution: you put it in the time-box BEFORE.
Let's again find a different solution using one of my favorite tools, the 5 Whys.
  • Why do we put testing in the next time box? Because it consumes too much time.
  • Why does it consume a lot of time? Because there is a significant number of defects to find and fix (and analyse and deploy and...), before we consider the product good enough for release. 
  • Why is there a significant number of defects to be found in the testing stage? Because the product arrives with a significant number of them.
  • Why does the test-ready product have a significant number of "inherent" defects? Because we have not reduced their number further upstream.
  • Why didn't we reduce them further upstream? Because we think testing is very effective in finding all kinds of defects, so testing alone (or along with very few other practices) is sufficient for high defect removal efficiency.
It is not. Period.
From an economic standpoint it is wise to do proper QA upstream, in order to arrive at all subsequent defect removal stages (including testing) with a smaller number of defects, hence with fewer testing hours needed. This works because defects are removed cheapest and fastest as close as possible to their origin.
What do I mean by proper upstream QA? Well, I've personally seen that inspections (of requirements/stories, design, code, and tests) deliver jaw-dropping results in terms of defects reduced and ROI. I'm sure there are a couple more practices; just ask your metrics guru of choice. The point is: see what really helps, by facts and numbers, not opinions, and make a responsible decision.

Tuesday, May 05, 2009

Atomic Functional Requirements

The discussion on shall vs. must at Tyner Blain, on which I blogged recently, prompted me to wrap up what I have learned about atomic functional requirements over the years.
See 3 principles and 4 rules on PlanetProject. Feel free to make responsible changes.
The article explains how to write atomic functional system requirements so that the spec is easy to read and ambiguity is kept to a minimum.
Note that it is NOT about the even more important (non-)functional user requirements, in the sense of what a stakeholder expects from the system, which problems shall be solved, what the inherent qualities of the solution shall be, etc. I do not intend to argue that every spec must include atomic functional requirements. But sometimes the setup is such that it must. Don't forget to establish the context first.
Now enjoy the article on how to write atomic functional requirements. Thanks for your comments.

Thursday, April 16, 2009

0-defects? I'm not kidding!

Today a post by Michael Dubakov at The Edge of Chaos provoked some thoughts. He basically argues that a zero-defect mentality causes several unpleasant effects in a development team:

- Not enough courage to refactor complex, messy, buggy, but important piece of code.
- Can’t make important decision, instead make less risky, but wrong decision.
- Do everything to avoid responsibility, that leads to coward and stupid behavior.

Well, I guess the Solomonic answer to this critique is 'it depends'. Actually, I'm quite enthusiastic about the zero-defects thought. However, I also know that zero defects are extremely hard to attain, or can easily be attained by chance ;-)

You may remember the post where I detail 10 Requirements Principles for 0-defect systems. Therefore I posted the following as a comment to Michael's post, arguing
- that we should not call defects 'bugs', 
- that zero-defects can be attained by using principles that have almost nothing to do with testing, but with requirements, 
- that testing alone is insufficient for a decent defect removal effectiveness.

Hi Michael, 

thanks for another thought-provoking post! I enjoy reading your blog.

Hmm, several issues here.

1) First of all, I think we should come to a culture where we do not call more or less serious flaws in IT systems 'bugs'. To me, it does not make sense to belittle the matter. Let's call them what they are: defects (I'm not going into the possible definitions of a defect here).
This said, look here: scroll down a bit, and laugh.
And here's a funny cartoon.

2) A couple of years ago I had the luck to be part of a team that delivered (close to) zero defects. I wrote down the 10 principles we used, from a requirements perspective. See the short version on PlanetProject, more or less uncommented, or Part 1 and Part 2 of a longer version on Raven's Brain.
Interestingly enough, only 1 out of the 10 principles is directly related to a defect-finding activity. We had a 'zero defect mentality' in the sense that we all seriously thought it would be a good idea to deliver excellent quality to the stakeholders. Fear or frustration was not at all part of the game. Mistakes were tolerable, just not in the product at delivery date. Frankly, we were a bit astonished to find out we had successfully delivered a nearly defect-free, non-trivial system over a couple of releases.

3) While it seems to be a good idea to use the different strategies that you proposed in the original post, I'm missing some notion of how effective the various defect-reducing strategies are. The only relevant quantitative research I know of comes from metrics guru Capers Jones. If I remember correctly, he states that each single strategy is only about 30% effective, meaning that you need to combine 6-8 strategies in order to end up with a 'professional' defect removal effectiveness. AND you cannot reach a net removal effectiveness of, say, 95% with testing alone. From an economic standpoint it is wise to reduce the number of defects that 'arrive' at testing in the first place, and this is done most effectively by formal inspections (of requirements, design and code).


Greetings!

Rolf

PS: if you find a defect in my comment, you can keep it ;-)
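
Back to the post: a quick sanity check of the Capers Jones arithmetic quoted above. If you model each defect removal activity as an independent filter that catches about 30% of the defects reaching it (the 30% figure is from the comment; the independence assumption is mine), the numbers line up:

# Cumulative defect removal efficiency of n stacked filters,
# each catching a fraction e of the defects that reach it.
def cumulative_efficiency(e: float, n: int) -> float:
    return 1 - (1 - e) ** n

for n in (1, 4, 6, 8, 9):
    print(n, round(cumulative_efficiency(0.30, n), 3))
# 1 -> 0.3, 4 -> 0.76, 6 -> 0.882, 8 -> 0.942, 9 -> 0.96

So reaching ~95% net removal effectiveness really does take on the order of 6-8 (or more) combined strategies; a single stage, testing included, won't get you there.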

Monday, April 13, 2009

Value Thinking: Using Scalpels not Hatchets

Ryan Shriver, managing consultant at Dominion Digital, has put out an excellent article on Value Thinking. I think his views are extremely relevant, as we see cost reduction programs all around the IT globe. I can't tell you how spot-on Ryan is, considering the organization I work for (sorry, I can't tell you more in public).

The article is short and gives ample information on the WHAT and (more importantly) the WHY.
Ryan writes in due detail about 4 policies and 5 practices, his advice to struggling IT shops and IT departments. Thanks, Ryan!

It's on gantthead.com, and I recommend signing up if you haven't already; it's free.

Sunday, December 21, 2008

Why Rework isn't a Bad Thing in all Situations

Software development has many characteristics of production. I'm thinking of the various steps that as a whole comprise the development process, in which artifacts are passed from one sub-process to another. Requirements are passed on to design, code is passed on to test (or vice versa, in test-driven development), and so on.

Alistair Cockburn wrote an interesting article on what to do with the excess production capacity of non-bottleneck sub-processes in a complex production process. In the Crosstalk issue of January 2009, he explores new strategies for optimizing a process as a whole. One idea is to consciously use rework (you know, the bad thing you want to avoid like the plague) to improve quality earlier rather than later.

In terms of what to do with excess capacity at a non-bottleneck station, there is a strategy different from sitting idle, doing the work of the bottleneck, or simplifying the work at the bottleneck; it is to use the excess capacity to rework the ideas to get them more stable so that less rework is needed later at the bottleneck station.


Mark-up is mine. You see the basic idea.
In conclusion, he lists the following four ways of using capacity that would otherwise have been spent idling around, waiting for the bottleneck to finish its work:

* Have the workers who would idle do the work of the bottleneck sub-process.

Note: Although this is sometimes mandated by parts of the Agile community, I have seldom seen workers in the software industry who can produce excellent-quality results in more than one or two disciplines.

* Have them simplify the work at the bottleneck sub-process.

Note: An idea would be to help out the BAs by scanning through documents and giving pointers to relevant information.

* Have them rework material to reduce future rework required at the bottleneck sub-process.

Note: See the introductory quote by A. Cockburn. Idea: Improve documents that are going to be built upon "further downstream" to approach a quality level of less than 1 major defect per page (more than 20 per page is very common in documents that already have a 'QA passed' stamp).

* Have them create multiple alternatives for the bottleneck sub-process to choose from.

Note: An example would be to provide different designs to stakeholders, say, GUI designs. This can almost take the form of different engineering teams concurrently developing solutions to one problem, so that the best solution can be evaluated and consciously chosen.

Monday, October 13, 2008

How to Tell Principles from Paradigms

Name: How to Tell Principles from Paradigms
Type: Principles (or Paradigms :-?)
Status: final
Version: 2008-10-13
Source: Covey, 7 Habits of Highly Effective People

Gist: If you - like me - like to be picky with words sometimes, here's a nice explanation of two words we use over and over again. It started me thinking about what I give the 'principles' tag here on my blog. I will put more work into all the existing principles here, in order to better distinguish principles from other things.

Principles
Principles describe the way things are, like the law of gravity.
Principles represent natural laws, the reality
Principles are sustainable and timeless
Principles are self-evident & self-validating
Principles can't be faked, you're not in control of them
Principles will remain constant and universal
Principles ultimately govern

Paradigms
Paradigms are mental images of the way things are. Hence, a paradigm can be someone's mental image of a principle.
Paradigms represent implicit assumptions of our lives
Paradigms are projections of our background
Paradigms reveal my world view, my autobiography
Paradigms can change with my thinking

Monday, September 08, 2008

The 3 Most Powerful Requirements Principles

Name: The 3 Most Powerful Requirements Principles
Type: Principles
Status: final
Version: 2008-09-08

Gist: A lot has been written about what requirements should look like, about what they are and what they are not, about what you should do to capture them, etc. These 3 principles take you 80% of the way. They are grounded in sound systems engineering.

P1: My requirements are re-usable.
This means
- every requirement is unique. No copying please! I.e., I write requirements so that they have a unique name, and use the name in order to re-use the requirement in a different context, instead of copying the requirement's text to the other context. Dick Karpinsky calls this the DRY rule: Don't Repeat Yourself.
- every requirement (not just the whole document) provides information on version, author, stakeholder, owner, and status
- every requirement refers to something and is referred to by something, e.g. designs, tests, suggestions/solutions, use cases, ...

P2: My requirements are intelligible.
This means
- every requirement has a type, e.g. one of {vision, function requirement, performance requirement, resource requirement, design constraint, condition constraint} (courtesy of T. Gilb's basic requirement types)
- every requirement has a clear structure, i.e. a unique, unchanging number or tag, a scale, and a goal (= core specification attributes)
- complex requirements have a clear hierarchy, not some cloudy abstract
- no requirement in any non-commentary part uses words that are merely fraught with significance, like 'high performance', 'good reliability', 'reduced costs', 'flexible'

P3: My requirements are relevant.
This means
- every requirement clearly indicates owner, author, stakeholder (in order to be able to judge relevance)
- every requirement clearly indicates its sources (in order to be able to check them)
- every requirement clearly indicates its relative priority (for a specific release, for a specific stakeholder)
- every requirement clearly indicates its constraints and goals
- every requirement clearly indicates what happens if it is not satisfied
- no requirement states a design for some problem; instead, every requirement declares a future state
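
To illustrate, here is a minimal sketch of what a single requirement record satisfying P1-P3 could look like; the field names are mine, not any standard:

from dataclasses import dataclass, field

# Illustrative sketch: one record per requirement, re-used by tag (P1),
# typed and structured (P2), carrying the attributes needed to judge relevance (P3).
@dataclass
class Requirement:
    tag: str                # unique, unchanging name; reference it, never copy it (DRY)
    type: str               # e.g. "performance requirement" (one of Gilb's basic types)
    version: str
    author: str
    stakeholder: str
    owner: str
    status: str
    scale: str              # what we measure
    goal: float             # target level on the scale
    sources: list           # where the requirement comes from, so it can be checked
    priority: str           # relative priority for a release or stakeholder
    consequence: str        # what happens if it is not satisfied
    refers_to: list = field(default_factory=list)  # designs, tests, use cases, ...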

Suggested Reading:

4 Strategies Towards Good Requirements

Tuesday, July 29, 2008

Why and How to State the Credibility of Estimations

Name: Why and How to State the Credibility of Estimations
Type: Principles
Status: final
Version: 2008-09-08
Sources: T. Gilb's work on Impact Estimation Tables, see http://www.gilb.com/; my own experience

Gist: Have you seen presentations pitching a project or other endeavour with the purpose of convincing someone that the project would be the right thing to do? More often than not these presentations include shiny numbers showing relatively low cost and relatively high benefit (of course). But how often does the presenter say anything about the credibility of those numbers?
I believe every decision maker needs some assessment of credibility. If you don't tell him how believable your evidence is, the decision maker might turn to other sources, like "What's this guy's success record?", or "I don't believe her, so I will resort to a very tight budget in order not to risk too much", etc.

Same problem with any estimate you prepare and present. These Principles explain why you should always give credibility information, and the Rules explain how you can do it.

" An estimate that is 100% certain is a commitment." -- Unknown

Credibility information DEFINED AS any evidence or number that expresses how certain we believe a statement is

Principles
P1: Estimates are assumptions about the future, therefore they must be uncertain.
"Unfortunately, tomorrow never gets here." - Leo Babauta

P2: Predictions are a difficult matter. The farther away the state we predict, the more difficult, the riskier.

P3: Credibility information is a measure of how certain we are about a predicted state.

P4: Decision makers need some assessment of the risk involved in the decision. What can happen in the worst case? (It is equally valid to ask for the best case, but seriously, the best case happens far too seldom to take it into account)

"Most people really like to be able to sleep at night." -- Heuser's Rule

P5: The clearer you state uncertainty,
* the easier it is to distinguish between different strategies
* the more likely other people believe you
* the clearer you yourself see the risk involved
* the better you can argue about a strategy
* the easier it is to learn something about the risk involved and to do something about it
* the clearer you give or take responsibility for an endeavour
Note: While a decision maker who sees uncertainty numbers obviously accepts the risk involved, there are decision makers who don't like the idea at all. I guess this is one reason why credibility information isn't requested very often.

P6: If credibility information is missing, we by default assume the number or the source is not credible at all.
Note: Maybe the presenter just forgot to provide the information, and then it shouldn't be a problem. We can send him to get it.

Rules
R1: Any number that expresses some future state should be accompanied by credibility information.

R2: If you predict a value on a scale, like "end date", "targeted budget", "performance improvement", or "maintenance cost", give a range of that number.
Notes:
* It's useful to do this in a plus/minus X% fashion. The percentage is a clear and comparable signal to the audience.
* It is not mandatory to give the best/worst case estimates an equal range. How about "we are sure to complete this in 10 weeks, minus 1 plus 2"?

* In Rapid Development, Steve McConnell actually suggests that you communicate estimates in a range that gets smaller over time, with the larger number first. "Six to four months" sounds strange, but if you say the smaller number first, people tend to forget the larger one.

R3: If you provide evidence, say where you have it from, or who said so, and what facts (numbers, history, written documentation) lead to this conclusion.
Note: Do this in the backup slides for example. At least you should be able to pull it out of the drawer if requested.

R4: Consider using a safety margin, like factor 2 (bridge builders) or factor 4 (spacecraft engineers).
Notes:
* The margin is an expression of how much risk you are willing to take. Thus, it is a way of controlling risk.
* Use margins whenever you find a number or source is not very credible (like < 0.4), you don't have any historic data, or if there's no economic way of achieving higher credibility.
* Safety margins do not necessarily increase real costs (only planned costs)

R5: Always adjust your cost/benefit ratios by a credibility factor (see the small sketch after the scales below).
Notes:
A simple scale would be {0.0 - guess, no real facts available, 0.5 - some facts or past experience available, 1.0 - scientific proof available}
A more sophisticated scale would be
{0.0 Wild guess, no credibility
0.1 We know it has been done somewhere (outside the company)
0.2 We have one measurement somewhere (from outside the company)
0.3 There are several measurements in the estimated range (outside the company)
0.4 The measurements are relevant to our case because <fact1, fact2> with credibility <x, y> (don't get trapped in recursion here)
0.5 The method of measurement is considered reliable by <whom>
0.6 We have used the method in-house
0.7 We have reliable measurements in-house
0.8 Reliable in-house measurements correlate to independent external measurements
0.9 We have used the idea on this project and measured it
1.0 Perfect credibility, we have rock-solid, contract-guaranteed, long-term, credible experience with this idea on this project, and the results are unlikely to disappear}
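
Here is a small sketch of what R5 looks like in practice; all numbers are invented:

# Illustrative sketch of R5: deflate the estimated benefit by its credibility
# before looking at the benefit/cost ratio. All numbers are invented.
estimated_benefit = 400_000  # claimed yearly savings
credibility = 0.3            # "several measurements outside the company"
cost = 150_000

adjusted_ratio = estimated_benefit * credibility / cost
print(adjusted_ratio)  # 0.8 -- the shiny idea no longer pays for itself

The unadjusted ratio (400,000 / 150,000, roughly 2.7) looks like an easy decision; the credibility-adjusted one tells a different story.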

Thursday, July 24, 2008

10 Critical Requirements Principles for 0-Defects-Systems

Name: 10 Critical Requirements Principles for 0-Defects-Systems
Type: Principles
Status: final
Version: 2008-07-24

Gist: It might sound esoteric to you, but it is actually possible to build non-trivial systems with (close to) zero defects. At the very least, we should try to get close to zero defects as early in the development cycle as possible, because of the relative defect removal costs (factors 1-10-100-1000, remember?).
I had the luck to be part of such a system development (a multi-user database system with about six releases over a ten-year period, with the strictest requirements on data integrity and data availability; we detected 5 (five) defects in total during user acceptance testing, and the system still works with no interruptions and obviously perfectly according to the requirements). These are the top ten rules we (un)consciously followed, as we found out in a Lessons Learned session earlier this month. I added principles P7-10 because of recent and not so recent experience in other projects. However, following these later principles (and ignoring principles P1-6) did not lead to zero-defect systems. I guess all these principles represent necessary, not sufficient, conditions. After all, this is anecdotal evidence.
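
To see what the 1-10-100-1000 factors mean in money, here is a tiny illustration (the unit cost is an assumption of mine, not a measured number):

# Illustrative arithmetic for the 1-10-100-1000 relative defect removal costs.
cost_factor = {"requirements": 1, "design/code": 10, "test": 100, "production": 1000}
unit_cost = 50  # assumed cost of removing one defect found during requirements work

for phase, factor in cost_factor.items():
    print(f"{phase}: {unit_cost * factor} per defect")
# The same defect that costs 50 to remove in the requirements phase
# costs 50,000 once the system is in production.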

P1: Customer and software developer who like each other.
P2: Small and easy scope of your requirements, increasing in small steps.
P3: High-value requirements are the initial focus.
High-value DEFINED AS anything that is more important to a primary stakeholder than other things (relative definition intentional).
P4: Acceptance criteria are paramount.
P5: Templates rule.
P6: Quality control for your requirements (inspections).
Requirements inspection DEFINED AS a method of process control through sampling measurement of specification quality.
P7: Top management's requirements (aka objectives).
Top management DEFINED AS every manager who is higher up in the food chain than the project manager and has an impact on project success or failure.
P8: Problems first (then the requirements).
P9: Performance requirements first.
Performance requirement DEFINED AS a specification of the stakeholder requirements for 'how well' a system should perform.
P10: Replacing requirement-based estimation with planning.
I.e. use some capacity/throughput metric and promote the concepts of variation and fluctuation.