
Thursday, July 08, 2010

Praise! Use numbers to show quality impact.

I just stumbled upon a very nice blog entry by Fergal McGovern of Visible Thread. It is about my article on "Extreme Inspections". Fergal appreciates that I showed ROI figures:
What was doubly exciting for me in this article is that you see the actual ROI of inspection in clear terms. Utilising the approach of ‘extreme inspection’ in one case Rolf cites, we see a reduction in the number of defects per page by 50%. Clear empirical evidence of actual defect reduction as a consequence of inspections in real projects is hard to come by and so Rolf’s case studies are useful to consider.
Thank you Fergal!

I took a looong look at Fergal's company and website. Their solutions look very promising in terms of document quality assessment and improvement. Visible Thread does detailed quality checks on Office documents and shows the results; grounding decisions in data is always better than relying on gut feeling.
Head over and see for yourself.

Wednesday, June 23, 2010

What are the most valuable specifications?

During my last vacation I came across evidence of good technical value specifications. At VW's Autostadt there is a display of this wonderful old Phaeton Sedan (sorry, the photo is blurred).
There is also a reproduction of a poster that was used to market the car in its time. It clearly states the most important values the car delivers. Reading through it, and thinking of the extreme focus on functions nowadays, I ask you: did we forget to focus on the really important things, like value?

The poster reads:
It is our belief that the most important "specifications" of the Cord front-drive are: Do you like its design? Does the comfort appeal to you? Does it do what you want a car to do better and easier? When a car, built by a reputable and experienced company, meets these three requirements, you can safely depend on that company to adequately provide for all mechanical details. -- E. L. Cord

Thursday, March 04, 2010

Adding to Acceptance Testing

James Shore has sparked a lively discussion about the sense and nonsense of acceptance testing. To be honest, my very first reaction to this piece of text was dismissive:

"Acceptance testing tools cost more than they're worth. I no longer use it or recommend it."

But James expanded on his statement in another post. And there he impressed me.

"When it comes to testing, my goal is to eliminate defects. At least the ones that matter. [...] And I'd much rather prevent defects than find and fix them days or weeks later."

Yes! Prevention generally has a better ROI than cure. Not producing any defects in the first place is a much more powerful principle than going to great lengths to find them. That's why I like the idea of inspections so much, especially if inspections are applied to early artifacts.

James also laid out very beautifully the idea that, in order to come close to zero defects in the finished product, you need several defect elimination steps. It is not sufficient to rely on just a few, let alone one. Capers Jones has the numbers.

I'd love to see this discussion grow. Let's turn away from single techniques and methods and see the bigger picture. Let's advance the engineering state of the art (the SOTA, which is a SET of heuristics). Thank you, James!

Thursday, February 25, 2010

Problem to Solution - The Continuum Between Requirements and Design

Christopher Brandt, a relatively new blogger from Vancouver, Canada, has an awesome piece about the difference between problem and solution. If he keeps blogging about problem solving at this outstanding level of quality, he definitely is someone to follow.

While I do not completely agree with the concepts, I believe everybody doing requirements or design should at least have this level of understanding.

Requirements written that imply a problem end up biasing the form of the solution, which in turn kills creativity and innovation by forcing the solution in a particular direction. Once this happens, other possibilities (which may have been better) cannot be investigated. [...] This mistake can be avoided by identifying the root problem (or problems) before considering the nature or requirements of the solution. Each root problem or core need can often be expressed in a single statement. [...] The reason for having a simple unbiased statement of the problem is to allow the design team to find a path to the best solution. [...] Creative and innovative solutions can be created through any software development process as long as the underlying mechanics of how to go from problem to solution are understood.

Which brings me to promote, again and again, the single most powerful question for finding the requirements behind a given design: WHY?

Don't forget to have fun ;-)




Wednesday, February 03, 2010

Using Extreme Inspections to Significantly Improve Requirements Practice

Today I'm proud to announce that one of my latest articles has been published on modernanalyst.com:

Using Extreme Inspections to Significantly Improve Requirements Practice

It will be showcased next week in the February issue of the eJournal of ModernAnalyst.

From the introduction:
Extreme Inspections are a low-cost, high-improvement way to assure specification quality, effectively teach good specification practice, and make informed decisions about the requirements specification process and its output, in any project. The method is not restricted to requirements-analysis-related material; this article, however, is limited to requirements specification. It gives firsthand experience and hard data to support the above claim. Using an industry case study I conducted with one of my clients, I will give information about the Extreme Inspection method - sufficient to understand what it is and why its use is almost mandatory, but not how to do it. I will also give evidence of its strengths and limitations, as well as recommendations for its use and other applications.

Thank you, Adrian, for your support!

Thursday, December 24, 2009

Project management template for FREE download

Projects by the book or from the trenches?

The PRINCE2 project management standard by the OGC is a very practical piece of work. There is no obligation to do it by the book as long as you meet certain criteria. And it is supposed to be helpful, not an obstacle. I believe it is indeed helpful, and that is exactly why I based the design of a quite central document for some projects I work on upon it: the Project Initiation Document (German: Projektleitdokument).

I wanted to fulfill the following requirements:
• Mustn't be lengthy.
• Mustn't become shelf-ware.
• Must be easy to maintain.
• Must be interesting for the steering committee, the project manager and the project team.
• Must include everything relevant to guide projects of 5–20 weeks' duration.
• Mustn't shut out readers who are not familiar with PRINCE2.
• Must facilitate checking defect density, i.e. the number of defects per page.
• Must combine the PRINCE2 Project Initiation Document, Business Case, Project Mandate and Project Brief into one (1) manageable document (huh!).

I decided to design a standard document structure for one of my clients, along with guiding descriptions of every document section and a checklist for each section. As you might suspect from my background, my former work and my friends, I not only rewrote the respective PRINCE2 product, but also included my 12+ years of experience with projects and some relevant pieces of good engineering, as taught by the honorable Tom and Kai Gilb.

Out came the RTF template you can download for free from PlanetProject. Be aware that RIGHT NOW IT IS IN GERMAN only. If you would like to have it in English, please contact me and provide me with a good reason to work on the translation :-) (or maybe YOU translate it and provide it on Planet Project, too?)

Let me know what you think, please!

German Planguage Concept Glossary

I published a German version of Gilb's Planguage Concept Glossary on Google Docs. It comprises the 178 core terms of systems engineering and RealQA concepts that are part of the Competitive Engineering book. In the future I plan to translate the rest of the over 500 carefully defined terms.

The German words have been chosen to form a body of terms that is similar to the English original in coherence and completeness. Therefore, it is not a simple word-for-word dictionary translation. To the German reader, some of the words may seem badly translated. However, if you study the original glossary carefully, you will see that they make sense.

Furthermore, the terms have been tested in corporate practice and professional German conversation over a period of about a year. Most of them can be intuitively understood by Germans or German-speaking people; others need a quick look into the glossary definitions (which, for now, is something reserved for English-speaking readers ;-).

I'm happy to take your comments, critique and encouragement!

Saturday, December 05, 2009

Who is your hero systems engineer?

These days, I'm inquiring about engineering. I'd like to know who your hero systems engineer was or is (or will be?). Please comment, or send me a Twitter message. Thank you!

Thank you again for making it past the first paragraph. ;-) It all started when, a couple of weeks ago, I was once again confronted with a colleague's opinion that I am a theorist. I reject this opinion vehemently; quite the contrary, I believe my work, especially my work for the company, is a paragon of pragmatism. ;-)
So my argument against the opinion usually is 'No, I'm not a theorist!', in a more or less agitated voice. Obviously, I need a better basis for making that point. Kurt Lewin said:
Nothing is as practical as a good theory.
I used this sentence for a while as a footer for emails that I suspected would raise the 'theorist' criticism again.
After all, this sentence is only a claim as well, just like my stubborn phrase above. It may be stronger because it carries Lewin's weight. Unfortunately, very few people instantly know who Kurt Lewin was, and that he most of all used experience - not theory - to advance humankind.

Then a friend of mine, a U.S. citizen who at some point in time chose to live in Norway (which is a very practical move in my eyes :-), pointed me to the work of Billy V. Koen, Professor of Mechanical Engineering at the University of Texas. You should watch this 1-hour movie if you are at least interested in engineering, philosophy and art. Or in more mundane things like best practices, methods, techniques, recipes, and checklists, many of which, concerning business analysis and project management, can be found at Planet Project, in case you don't know it.

Here is Prof. Koen's definition of engineering from the movie:
The Engineering Method (Design) is the use of heuristics to cause the best change in an uncertain situation within the available resources.
Causing change in an uncertain situation within the available resources sounds a lot like project management to me. Like program or portfolio management, too. Maybe like management in general.

It is always when an improbable connection opens up between two different fields of my thinking that I find there is useful truth in it. Next, I want to learn about engineering and me, and one way to approach this is to find out who I admire for his or her engineering skills. I tried thinking of a couple of candidate names and found surprisingly few. I've already identified Leonardo da Vinci (who is not a Dan Brown invention, as my nephew once suggested :-). A quick request to the twitterverse yielded nothing, no new name.

So this is my next take: I'd like to know who your hero systems engineer was or is (or will be?), and why. Please comment, or send me a (Twitter) message. Thank you!

Saturday, October 03, 2009

Why High-level Measurable Requirements Speed up Projects by Building Trust

(Allow 5 minutes or less reading time)


Stephen M. R. Covey's The Speed of Trust caused me to realize that trust is an important subject in the field of Requirements Engineering.

Neither the specification of high-level requirements (a.k.a. objectives) nor the specification of measurable requirements is a new practice in requirements engineering; both are just solid engineering practice. However, both are extremely helpful for building trust between customer and supplier.

The level of trust between customer and supplier determines how much rework will be necessary to reach the project goals. Rework – one of the great wastes that software development allows abundantly – will add to the duration and cost of the project, especially if it happens late in the development cycle, i.e. after testing or even after deployment.


Let me explain.


If you specify high-level requirements – sometimes called objectives or goals – you make your intentions clear: You explicitly say what it is you want to achieve, where you want to be with the product or system.

If you specify requirements measurably, by giving either a test method (for binary requirements) or scale and meter parameters (for scalar requirements), you make your intentions clear, too.
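To make "scale and meter" concrete, here is a minimal sketch of what a scalar requirement might look like in Gilb-style Planguage; the tag, numbers and qualifiers are invented for illustration:

    Learnability:
      Ambition: New users master the basic tasks quickly, without training.
      Scale: Minutes needed by a defined [User Type] to complete a defined [Task] without assistance.
      Meter: Usability test with 10 randomly selected first-time users.
      Past [Release 1.0, Novice, Create Order]: 35 minutes.
      Goal [Release 2.0, Novice, Create Order]: 10 minutes.

The Scale states what we measure, the Meter states how we will measure (and later test) it, and Past/Goal make the intended level of improvement explicit. A binary requirement would instead carry a named test method as its acceptance criterion.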

With intentions clarified, the supplier can see how the customer is going to assess his work. The customer's agenda will be clear to him. Knowing agendas enables trust. Trust is a prerequisite for speed and therefore low cost.


“Trust is good, control is better,” says a German proverb that is not quite exact in its English form. If you have speed and cost in mind as dimensions of “better,” then the sentence could not be more wrong! Imagine all the effort needed to continuously check somebody’s results and control everything he does. On the other hand, if you trust somebody, you can relax and concentrate on improving your own job and yourself. It’s obvious that trust speeds things up and therefore consumes fewer resources than suspicion.

Let's return to requirements engineering and the two helpful practices, namely specifying high-level requirements and specifying requirements measurably.


High-level Requirements

Say the customer writes many low-level requirements but fails to add the top level. By top level I mean the 3 to 10, possibly complex, requirements that describe the objectives of the whole system or product. These objectives are then hidden, implicitly, among the many low-level requirements. The supplier has to guess (or ask). Many suppliers assume the customer knew what he was doing when he decomposed his objectives into the requirements given in the requirements specification. They trust him. More often than not he didn't know - or why were the objectives not stated in the requirements specification document in the first place?


So essentially the customer might have – at best – implicitly said what he wants to achieve and where he is headed. Chances are the supplier’s best guesses missed the point. Eventually the supplier provides the system for the customer to check, and then the conversation goes like this:


Customer: “Hm, so this ought to be the solution to my problem?”

Supplier: “Er, … yes. It delivers what the requirements said!”

Customer: “OK, then I want my problem back.”


In this case the supplier had better take the system back, work on the customer's real agenda, and think about how to rebuild the misused trust. However, more often than not what follows is a lengthy phase of working the system or product over, in an attempt to fix it according to requirements that were not clear, or not even there, when the supplier began working.


Every bit of rework is a bit of wasted effort. We could have done it right the first time and used the extra budget for a joint weekend fishing trip.



Measurable Requirements

Nearly the same line of reasoning can be used to promote measurable requirements.

Say the customer specified requirements but failed to give, AT THE SAME TIME, a clue about how he will test them. The supplier most likely extended him a leap of faith and trusted him anyway. The customer could then prove trustworthy or not. Assume he specifies acceptance criteria and test procedures long after development began, just before testing starts. Maybe the customer didn't find the time to do it before. Quite possibly, by adding the acceptance criteria and test procedures, he changes to some degree the possible interpretations of his requirements specification. From the supplier's angle the customer NOW shows his real agenda, and it's different from what the supplier thought it was. The customer misused the supplier's trust, unintentionally in most cases.

Besides this apparent strain on the relationship between customer and supplier, the system sponsor will now have to pay the bill. Quite literally so, as expensive rework has to be done to fix things. Hopefully the supplier delivered early, for more time is needed now.


So...

Trust is an important prerequisite for systems with few or even zero defects. I experienced that the one (and probably last) time I was part of a system development that resulted in (close to) zero defects: trust between customer and supplier was one of the prerequisites, as we root-cause-analyzed in a post mortem (ref. principles P1, P4, P7, and P8). Zero defects means zero rework after the system has been deployed. In the project I was working on it even meant zero rework after the system was delivered for acceptance testing. You see, it makes perfect business sense to build trust by working hard on both quantified and high-level requirements.

In fact, both practices are signs of strong engineering competence. Competence is, in turn, a prerequisite to trust, as Mr. Covey rightly points out in his aforementioned book.


If you want to learn more on how to do this, check out these sources; you can also find related material on Planet Project.


Tuesday, July 28, 2009

Why it is stupid to write specifications and leave out background or commentary information

Follow me on Twitter: @rolfgoetz

I wrote a set of rules over at PlanetProject that explains why it is a good idea to give commentary or background information when specifying requirements (or designs), and how to do it.

The article is based on a rather tiring discussion with a hired requirements specification writer in one of my projects. She seems to get defensive as soon as I suggest she should supply more 'context' for the readers of her request-for-proposal document.

More education, please; it will bring us closer to the 'professional' bit of 'professional business analyst'.

Read the article here. Comments or additions welcome! (It's a wiki...)


Sunday, July 26, 2009

Intro to Statistical Process Control (SPC)

While exploring the web for Deming material I stumbled upon this site by Steve Horn. It introduces Statistical Process Control (SPC).
There are a number of pages on its various aspects.

The articles are quick to read and easy to understand. They are written to the point.

I'd like to add that in Deming's 14 Points, Point 3 needs a special interpretation for software or (IT) project people.
The point says: 
3. Cease dependence on inspection to achieve quality. Eliminate the need for inspection on a mass basis by building quality into the product in the first place.

Deming was talking about (mass) production. In that field, "inspection" means roughly the same as "testing" in our profession.
In fact, inspections (of requirements, designs, code and test cases) are among the most effective activities you can do in software, for building quality into the product in the first place.

BTW, if you worry that your software development organisation is NOT really testing its products, it is a very wise idea to first introduce a couple of inspection stages, for this is both more efficient (economical) and more effective (fewer defects). It is also a sensible way to introduce learning.
Testing is about finding defects; inspections are about pointing out systematic errors and giving people a real chance to prevent them in the future.

Here's one quote from Steve I like in particular:
Confident people are often assumed to be knowledgeable. If you want to be confident the easiest way is to miss out the 'Study' phase (of the Plan-Do-Study-Act-Cycle) altogether and never question whether your ideas are really effective. This may make you confident but it will not make you right.

(words in brackets are mine)


Friday, July 03, 2009

A Quest for Up-Front Quality

Today I'd like to point you to the presentation slides of a talk I gave at this year's Gilb Seminar on Culture Change in London. 

You'll find the file if you go to this PlanetProject page on Zero Defects, navigate to principle P6 and click the link titled "presentation".

The title was "A Quest for Up-Front Quality".
Short outline: if you want to see truly amazing results from one of the most effective methods in software development history, don't miss the slides. Any questions? Just ask!

The talk was warmly welcomed by the great Value-Thinkers of the seminar, special thanks to Ryan Shriver, Allan Kelly, Giovanni Asproni, Niels Malotaux, Matthew Leitch, Jerry Durant, Jenny Stuart, Lars Ljungberg, Renze Zijlstra, Clifford Shelley, Lorne Mitchell, Gio Wiederhold, Marilyn Bush, Yazdi Bankwala, and all others I forgot to mention.


Monday, June 01, 2009

History Repeating?, or A Case for Real QA

Jeff Patton of AgileProductDesign.com has an excellent article on Kanban Development. He does a great job in explaining the kanban idea, in relation to the main line of thinking in Agile. The article is about using a pull system rather than a push system like traditional (waterfall) development or agile development with estimation, specification and implementation of user stories.

Reading the first couple of sections got me thinking about another topic: isn't this agile thing "history repeating"?
First of all, I'd like to question how common agile development really is. It's clear that the agile community talks a lot about agile (sic!), but what do the numbers tell us? I don't have any, so I'm speculating. Let's assume the percentage of agile developments is significant. (I don't want to argue against Agile, but I want to know if my time is spent right ;-)

Jeff writes:
"Once you’ve shrunk your stories down, you end up with different problems. User story backlogs become bigger and more difficult to manage. (...) Prioritizing, managing, and planning with this sort of backlog can be a nasty experience. (...) Product owners are often asked to break down stories to a level where a single story becomes meaningless."

This reminds me so much of waterfall development, a line of thinking I spent most of my professional career in, to be honest.
First, this sounds a lot like the initial point of any requirements management argument: you have so many requirements to manage that you need extra discipline, extra processes, and (of course ;-) an extra tool. We saw (and see) these kinds of arguments all over the place in waterfall projects. Second, estimation, a potentially risky view into the future, has always been THE problem in big-bang developments. People try to predict minute detail, the prediction covering many months or even years. This is beyond most people's capacity. Third and last, any single atomic requirement in a waterfall spec really IS meaningless. I hope we learn from the waterfall experience.

Jeff goes on:
"Shrinking stories forces earlier elaboration and decision-making."
Waterfall-like again, right?

If user stories get shrunk in order to fit into a time-box, there is another solution to the problem (besides larger development time-boxes, or using a pull system as Jeff has beautifully laid out): don't make the user story the main planning item. How about "fraction of value delivered", in percent, instead?

Jeff again:
"It’s difficult to fit thorough validation of the story into a short time-box as well. So, often testing slips into the time-box after. Which leaves the nasty problem of what to do with bugs which often get piped into a subsequent time-box."

This is nasty indeed, as I know from personal experience. BTW, the same problem exists on the other side of the development time-box if you need or want to thoroughly specify stories/features/requirements. Typical solution: you put it in the time-box BEFORE.
Let's again find a different solution using one of my favorite tools, the 5 Whys.
  • Why do we put testing in the next time box? Because it consumes too much time.
  • Why does it consume a lot of time? Because there is a significant number of defects to find and fix (and analyse and deploy and...), before we consider the product good enough for release. 
  • Why is there a significant number of defects to be found in the testing stage? Because the product brings a significant number with it.
  • Why does the test-ready product have a significant number of "inherent" defects? Because we have not reduced them significantly further upstream.
  • Why didn't we reduce them further upstream? Because we think testing is very effective in finding all kinds of defects, so testing alone (or along with very few other practices) is sufficient for high defect removal efficiency.
It is not. Period.
From an economic standpoint it is wise to do proper QA upstream, in order to arrive at all subsequent defect removal stages (including testing) with a smaller number of defects, hence with fewer testing hours needed. This works because defects are removed cheapest and fastest as close as possible to their origin.
What do I mean by proper upstream QA? Well, I have personally seen that inspections (of requirements/stories, design, code, and tests) deliver jaw-dropping results in terms of defects reduced and ROI. I'm sure there are a couple more methods; just ask your metrics guru of choice. The point is: see what really helps, by facts and numbers, not opinions, and make a responsible decision.
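To make the economics tangible, here is a minimal Python sketch. Every removal rate and relative fix cost below is an assumption for illustration (not Capers Jones' data); the point is only the mechanism: cheap upstream stages drain the defect pool before the expensive downstream stages see it.

# Toy model: defects flowing through a series of removal stages.
# All numbers are illustrative assumptions, not measured data.

stages = [
    # (stage name, share of incoming defects removed, relative cost per fix)
    ("requirements inspection", 0.60,  1),
    ("design inspection",       0.55,  2),
    ("code inspection",         0.55,  5),
    ("testing",                 0.35, 20),
]

def run(injected, pipeline):
    remaining, total_cost = float(injected), 0.0
    for name, removal_rate, fix_cost in pipeline:
        removed = remaining * removal_rate   # defects caught at this stage
        total_cost += removed * fix_cost     # each fix costs more downstream
        remaining -= removed
        print(f"{name:25} removed {removed:6.1f} -> {remaining:6.1f} left")
    print(f"total fix cost: {total_cost:8.1f}, escaped defects: {remaining:.1f}\n")

run(1000, stages)       # upstream inspections plus testing
run(1000, stages[-1:])  # testing alone

With these made-up numbers the full pipeline lets about 53 of 1000 defects escape at a total fix cost of roughly 2100 units, while testing alone lets 650 escape at a cost of 7000 units - the upstream argument in a nutshell.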

Tuesday, May 05, 2009

Atomic Functional Requirements

The discussion on shall vs. must at Tyner Blain, on which I have blogged recently, triggered me to wrap up what I have learned about atomic functional requirements over the years.
See the 3 principles and 4 rules on PlanetProject. Feel free to make responsible changes.
The article explains how to write atomic functional system requirements so that the spec is easy to read and ambiguity is kept to a minimum.
Note that it is NOT about the even more important (non-)functional user requirements, in the sense of what a stakeholder expects from the system: which problems shall be solved, what the inherent qualities of the solution shall be, etc. I do not intend to argue that every spec must include atomic functional requirements. But sometimes the setup is such that it must. Don't forget to establish the context first.
Now enjoy the article on how to write atomic functional requirements. Thanks for your comments.

Tuesday, April 28, 2009

Writing Many High Quality Artefacts - Efficiently

I finished a process description on writing many artefacts (use cases, requirements, book chapters, test cases) in high quality with the least effort possible. Please have a look; I'm eager to see your comments. After all, it's a Planet Project wiki article, so if you feel like it, just edit.

This, again, is about evolution, or learning by feedback. It makes heavy use of inspections. Inspections, BTW, are one of the most effective QA methods known in software engineering history, according to metrics expert Capers Jones.

Thursday, April 16, 2009

0-defects? I'm not kidding!

Today a post by Michael Dubakov at The Edge of Chaos provoked some thoughts. He basically argues that a zero-defect mentality causes several unpleasant effects in a development team:

- Not enough courage to refactor complex, messy, buggy, but important pieces of code.
- Inability to make important decisions; making less risky, but wrong, decisions instead.
- Doing everything to avoid responsibility, which leads to cowardly and stupid behavior.

Well, I guess the Solomonic answer to this critique is 'it depends'. Actually, I'm quite enthusiastic about the zero-defects thought. However, I also know that zero defects are extremely hard to attain, or can easily be attained by chance ;-)

You may remember the post where I detail 10 Requirements Principles for 0-defect systems. Therefore I posted the following as a comment to Michael's post, arguing
- that we should not call defects 'bugs',
- that zero defects can be attained by using principles that have almost nothing to do with testing, but with requirements,
- that testing alone is insufficient for a decent defect removal effectiveness.

Hi Michael, 

thanks for another thought-provoking post! I enjoy reading your blog.

Hmm, several issues here.

1) First of all I think we should come to a culture where we do not call more or less serious flaws in IT systems 'bugs'. To me, it does not make sense to belittle the fact. Let's call them what they are: defects (I'm not going into the possible definitions of a defect here).
This said, look here: scroll down a bit, and laugh.
And here's a funny cartoon.

2) A couple of years ago I had the luck to be part of a team that delivered (close to) zero defects. I wrote down the 10 principles we used from a requirements perspective. See the short version on PlanetProject, more or less uncommented, or Part 1 and Part 2 of a longer version on Raven's Brain.
Interestingly enough, only 1 out of the 10 principles is directly related to a defect-finding activity. We had a 'zero-defect mentality' in the sense that we all seriously thought it would be a good idea to deliver excellent quality to the stakeholders. Fear or frustration was not at all part of the game. Mistakes were tolerable, just not in the product at delivery date. Frankly, we were a bit astonished to find out we had successfully delivered a nearly defect-free non-trivial system over a couple of releases.

3) While it seems to be a good idea to use the different strategies that you proposed in the original post, I'm missing some notion of how effective the various defect-reducing strategies are. The only relevant quantitative research I know of comes from metrics guru Capers Jones. If I remember correctly, he states that each single strategy is only about 30% effective, meaning that you need to combine 6-8 strategies in order to end up with a 'professional' defect removal effectiveness. AND you cannot reach a net removal effectiveness of, say, 95% with testing alone. From an economic standpoint it is wise to reduce the number of defects that 'arrive' at testing in the first place, and this is done most effectively by formal inspections (of requirements, design and code).


Greetings!

Rolf

PS: if you find a defect in my comment, you can keep it ;-)
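A side note on the arithmetic in point 3: if each strategy removes about 30% of the defects that reach it, and you assume (simplistically) that the stages are independent, then n stages combined remove 1 - 0.7^n of all defects. Testing alone: 30%. Six stages: 1 - 0.7^6, about 88%. Eight stages: 1 - 0.7^8, about 94%. So even eight average stages barely approach 95%, and testing alone never will.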

Monday, April 06, 2009

Rules for Checking Use Cases

A couple of months ago I assumed the role of quality manager for one of my projects. I learned that document quality control requires a set of defined rules to check documents against. Because the documents to be quality-controlled were 90% use cases, I defined a set of 8 rules for checking use cases, working with authors and checkers as well as subject matter experts (read: customers). You can find them on Planet Project.
For all readers who face the challenge of finding a BA job, or at least a (second) job where BA skills are applicable, there's an interesting discussion going on at the modernAnalyst forum. I suggested moving to a QA perspective. This idea fits with the above use case checking rules.

Thursday, December 25, 2008

Software Development Practice in the 21st (22nd?) Century

A couple of months ago I published an article about 10 Critical Requirements Principles for 0-Defects-Systems (on Ravens Brain). It is anecdotal evidence of how I once achieved zero defects in one of my projects.

If you like the thought of reliable software (sic!) you might also like Charles Fishman's article on the On-Board Shuttle Group, the producer of the space shuttle's software. Yes, they have a lot of money to spend, but many lives and much expensive equipment are at stake.
This software never crashes. It never needs to be re-booted. This software is bug-free. It is perfect, as perfect as human beings have achieved. Consider these stats: the last three versions of the program -- each 420,000 lines long -- had just one error each. The last 11 versions of this software had a total of 17 errors. -- Charles Fishman on the software product the Group puts out


BTW, they seem to use three key practices: very detailed requirements, inspections and continuous improvement.
The way the process works, it not only finds errors in the software. The process finds errors in the process. -- Charles Fishman on the On-Board Shuttle Group's process
Happy New Year! (Note that 2009 is a year of the 21st century. Can't we do any better?)

PS: I'm going on vacation from Dec 28 to Jan 20. More interesting topics are still to come; please be patient.

Saturday, December 13, 2008

Lack of Engineering Practices Makes 'Agile' Teams Fail

James Shore wrote a blog post that is among my TOP 5 this quarter. Go read it! He argues that the industry sees so many agile projects fail because most teams that call themselves 'agile' confuse the real idea with the wrong set of practices. Maybe this quote says best what his article is about:
These teams say they're Agile, but they're just planning (and replanning) frequently. Short cycles and the ability to re-plan are the benefit that Agile gives you. It's the reward, not the method. These pseudo-Agile teams are having dessert every night and skipping their vegetables. By leaving out all the other stuff--the stuff that's really Agile--they're setting themselves up for rotten teeth, an oversized waistline, and ultimate failure. They feel good now, but it won't last.

This speaks from my heart. It's exactly what happened when I tried to turn a project 'agile' in a super-waterfall organization (picking up this metaphor, I'd say it's an Angel Falls organization). We will have technical debt to pay off for at least a decade. However, we found that out quite quickly ;-)
If you're looking for a method that is strong in both the management and the engineering field, check out Evo by Tom Gilb. I know I have recommended Mr. Gilb's work over and over again in the past 2 years, and it almost seems ridiculous. Be assured, I'm not getting paid by Tom :-)
To get a very clear perspective on iterative planning check out Niels Malotaux' Time Line.

Wednesday, December 10, 2008

Use Case Content Patterns Updated

Martin Langlands incorporated "my" DESTRUCTOR pattern into one article, now version 2 of his Use Case Content Patterns description. Have a look here; the whole thing is easier to handle now.
Hint: Click 'files' at the bottom of the wiki page to see the pattern file. Sorry for the inconvenience!