Tuesday, April 28, 2009

Tweets from Rolf

In case you're interested and hadn't noticed: you can follow me on Twitter at http://twitter.com/rolfgoetz

Shall or must to mark up mandatory requirements?

Generations of business analysts before me have had to decide which auxiliary verb to use in a mandatory system requirement. "Shall", as in "The system shall...", is probably the most widely used in the English spec-writing world. But is it a good idea?
Scott Sehlhorst of Tyner Blain used a novel research method to investigate how ambiguous shall and must are. The bottom line: if for any reason it might not be clear to some readers that shall means mandatory, use must (consistently throughout the spec, of course, and put the convention in your requirements management plan).

I think the use of shall (as opposed to must) dates back to the old MIL-STD-498, or even DOD-STD-2167A. Anyone who "grew up", BA-wise, in the military or air traffic control fields, as I did, is very accustomed to shall, and must might sound too harsh. But hey, here's my favourite answer to that: "You're writing a spec; you're not going to win the Nobel Prize in Literature for it!"

Interestingly enough, as Scott points out in his article, in many languages other than English the equivalent of shall does not really carry the meaning of mandatory. In German, for instance, the meaning of shall is much closer to should than to must. And we all know what happens in waterfall projects when acceptance testing is due and the developer has implemented only half of the should requirements...
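One low-tech way to act on this advice is to lint the spec for ambiguous auxiliary verbs before review. Here's a minimal sketch in Python; the word patterns and the "mandatory requirements always say must" convention are my own assumptions for illustration, not something from Scott's article:

```python
import re

# Assumed convention: mandatory requirements use "must"; anything phrased
# with "shall", "should", "will" or "may" gets flagged for review.
AMBIGUOUS = re.compile(r"\b(shall|should|will|may)\b", re.IGNORECASE)

def flag_ambiguous_verbs(lines):
    """Return (line_number, verb) pairs for lines using an ambiguous verb."""
    findings = []
    for number, line in enumerate(lines, start=1):
        match = AMBIGUOUS.search(line)
        if match:
            findings.append((number, match.group(1).lower()))
    return findings

spec = [
    "The system must log every failed login attempt.",
    "The system shall notify the operator within 5 seconds.",
    "The report should include a summary page.",
]
print(flag_ambiguous_verbs(spec))  # → [(2, 'shall'), (3, 'should')]
```

A real checker would of course need to skip headings and prose, and allow deliberate uses of may; the point is only that a convention written into the requirements management plan is also trivial to check mechanically.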

Writing Many High-Quality Artefacts - Efficiently

I have finished a process description on writing many artefacts (use cases, requirements, book chapters, test cases) in high quality with the least possible effort. Please have a look; I'm eager to see your comments. After all, it's a Planet Project wiki article, so if you feel like it, just edit.

This, again, is about evolution, or learning by feedback, and it makes heavy use of inspections. Inspections, by the way, are among the most effective QA methods known in software engineering history, according to metrics expert Capers Jones.

Friday, April 17, 2009

Understanding the exponential function

I was pointed to a set of videos that explore the exponential function and what most people believe about its consequences. Sounds like a boring topic.
It isn't. I watched the first 10 minutes, only to realize my jaw had dropped to the floor. This is stuff I knew and yet didn't grasp at the same time. I strongly suggest you invest the roughly 75 minutes and watch it.
It hit me two days after I had started to drivel on about liking sustainable solutions. Obviously I wasn't in full command of my senses. ;-) 

Thursday, April 16, 2009

0-defects? I'm not kidding!

Today a post by Michael Dubakov at The Edge of Chaos provoked some thoughts. He basically argues that a zero-defect mentality causes several unpleasant effects in a development team:

- Not enough courage to refactor complex, messy, buggy, but important pieces of code.
- Inability to make important decisions; making less risky but wrong decisions instead.
- Doing everything to avoid responsibility, which leads to cowardly and stupid behavior.

Well, I guess the Solomonic answer to this critique is 'it depends'. Actually, I'm quite enthusiastic about the zero-defect idea. However, I also know that zero defects are extremely hard to attain, or can easily be attained by chance ;-)

You may remember the post where I detail 10 Requirements Principles for 0-defect systems. I therefore posted the following as a comment to Michael's post, arguing
- that we should not call defects 'bugs', 
- that zero defects can be attained by using principles that have almost nothing to do with testing, but everything to do with requirements, 
- that testing alone is insufficient for a decent defect removal effectiveness.

Hi Michael, 

thanks for another thought-provoking post! I enjoy reading your blog.

Hmm, several issues here.

1) First of all, I think we should move toward a culture in which we do not call more or less serious flaws in IT systems 'bugs'. To me, it does not make sense to belittle them. Let's call them what they are: defects (I'm not going into the possible definitions of a defect here).
That said, look here: scroll down a bit, and laugh.
And here's a funny cartoon.

2) A couple of years ago I was lucky enough to be part of a team that delivered (close to) zero defects. I wrote down the 10 principles we used from a requirements perspective. See the short version on PlanetProject, more or less uncommented, or Part 1 and Part 2 of a longer version on Raven's Brain.
Interestingly enough, only 1 of the 10 principles is directly related to a defect-finding activity. We had a 'zero-defect mentality' in the sense that we all seriously thought it would be a good idea to deliver excellent quality to the stakeholders. Fear or frustration was not at all part of the game. Mistakes were tolerable, just not in the product at delivery date. Frankly, we were a bit astonished to find that we had successfully delivered a nearly defect-free, non-trivial system over a couple of releases.

3) While it seems to be a good idea to use the different strategies you proposed in the original post, I'm missing some notion of how effective the various defect-reducing strategies are. The only relevant quantitative research I know of comes from metrics guru Capers Jones. If I remember correctly, he states that each single strategy is only about 30% effective, meaning that you need to combine 6-8 strategies in order to end up with a 'professional' defect removal effectiveness. And you cannot reach a net removal effectiveness of, say, 95% with testing alone. From an economic standpoint it is wise to reduce the number of defects that 'arrive' at testing in the first place, and this is done most effectively by formal inspections (of requirements, design, and code).


Greetings!

Rolf

PS: if you find a defect in my comment, you can keep it ;-)

Monday, April 13, 2009

Value Thinking: Using Scalpels not Hatchets

Ryan Shriver, managing consultant at Dominion Digital, has put out an excellent article on Value Thinking. I think his views are extremely relevant, as we see cost-reduction programs all around the IT globe. I can't tell you how spot-on Ryan is, considering the organization I work for (sorry, I can't say more in public).

The article is short and gives ample information on the WHAT and (more importantly) the WHY.
Ryan writes in due detail about 4 policies and 5 practices, his advice to struggling IT shops and IT departments. Thanks, Ryan!

It's on gantthead.com; I recommend signing up if you haven't already. It's free.

Saturday, April 11, 2009

Extracting Business Rules from Use Cases

A while ago I posted a process for Extracting Business Rules from Use Cases, on this blog and on PlanetProject. I'm proud to announce an article on the very same subject. While the original form was quite condensed, the new version has more background and examples, and of course still contains the core process. Find it at modernAnalyst.com as a featured article.
I'd be happy to discuss the topic here or there, or to work with you on improvements on PlanetProject.

Monday, April 06, 2009

Rules for Checking Use Cases

A couple of months ago I assumed the role of quality manager for one of my projects. I learned that document quality control requires a set of defined rules to check documents against. Because the documents to quality-control were 90% use cases, I defined a set of 8 rules for checking use cases, working with authors and checkers as well as subject matter experts (read: customers). You can find them on Planet Project.
For all readers who face the challenge of finding a BA job, or at least a (second) job where BA skills are applicable, there's an interesting discussion going on at the modernAnalyst forum. I suggested moving to a QA perspective. This idea fits with the use case checking rules above.