Monday, June 01, 2009

History Repeating?, or A Case for Real QA

Jeff Patton has an excellent article on Kanban Development. He does a great job of explaining the kanban idea in relation to the main lines of thinking in Agile. The article is about using a pull system rather than a push system like traditional (waterfall) development, or agile development with estimation, specification, and implementation of user stories.

Reading the first couple of sections got me thinking about another topic. Isn't this agile thing "history repeating"?
First of all, I'd like to question how common agile development really is. It's clear that the agile community talks a lot about agile (sic!), but what do the numbers tell us? I don't have any, so I'm speculating. Let's assume the percentage of agile projects is significant. (I don't want to argue against Agile, but I want to know if my time is spent right ;-)

Jeff writes:
"Once you’ve shrunk your stories down, you end up with different problems. User story backlogs become bigger and more difficult to manage. (...) Prioritizing, managing, and planning with this sort of backlog can be a nasty experience. (...) Product owners are often asked to break down stories to a level where a single story becomes meaningless."

This reminds me so much of waterfall development, a line of thinking I spent most of my professional career in, to be honest.
First, this sounds a lot like the opening of any requirements management argument: you have so many requirements to manage that you need extra discipline, extra processes, and (of course ;-) an extra tool. We saw (and still see) this kind of argument all over the place in waterfall projects. Second, estimation, a potentially risky view into the future, has always been THE problem in big-bang development. People try to predict minute detail over many months or even years, which is beyond most people's capacity. Third and last, any single atomic requirement in a waterfall spec really IS meaningless. I hope we learn from the waterfall experience.

Jeff goes on:
"Shrinking stories forces earlier elaboration and decision-making."
Waterfall-like again, right?

If user stories get shrunk in order to fit into a time-box, there's another solution to the problem (besides larger development time-boxes, or a pull system like Jeff has beautifully laid out): don't make the user story the main planning item. How about "fraction of value delivered", in per cent, instead?

Jeff again:
"It’s difficult to fit thorough validation of the story into a short time-box as well. So, often testing slips into the time-box after. Which leaves the nasty problem of what to do with bugs which often get piped into a subsequent time-box."

This is nasty indeed, as I know from personal experience. BTW, the same problem exists, but on the other side of the development time-box, if you need or want to thoroughly specify stories/features/requirements. The typical solution: you put it in the time-box BEFORE.
Let's again find a different solution using one of my favorite tools, the 5 Whys.
  • Why do we put testing in the next time box? Because it consumes too much time.
  • Why does it consume a lot of time? Because there is a significant number of defects to find and fix (and analyse and deploy and...), before we consider the product good enough for release. 
  • Why is there a significant number of defects to be found in the testing stage? Because the product enters testing with a significant number of defects.
  • Why does the test-ready product have a significant number of "inherent" defects? Because we have not reduced them significantly further upstream.
  • Why didn't we reduce them further upstream? Because we think testing is very effective at finding all kinds of defects, so testing alone (or along with very few other practices) is sufficient for high defect removal efficiency.
It is not. Period.
From an economic standpoint it is wise to do proper QA upstream, in order to arrive at all subsequent defect removal stages (including testing) with a smaller number of defects, hence with fewer testing hours needed. This works because defects are removed cheapest and fastest as close as possible to their origin.
What do I mean by proper upstream QA? Well, I've seen personally that inspections (of requirements/stories, design, code, and tests) deliver jaw-dropping results in terms of defects removed and ROI. I'm sure there are a couple more such practices; just ask your metrics guru of choice. The point is: see what really helps, by facts and numbers, not opinions, and make a responsible decision.
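To make the economic argument concrete, here is a minimal back-of-the-envelope sketch. All numbers (per-stage removal costs, defect counts, defect-removal efficiencies) are hypothetical assumptions for illustration only; the one claim they encode is the point above: removing a defect gets more expensive the further it travels from its origin.

```python
# Hypothetical cost to remove one defect, by the stage where it is caught.
# These figures are assumptions, not measured data; only the ordering matters.
COST_PER_DEFECT = {"inspection": 1.0, "testing": 5.0, "production": 25.0}

def total_removal_cost(defects_injected, inspection_dre, testing_dre):
    """Total cost of removing defects, given per-stage defect-removal
    efficiency (DRE): the fraction of incoming defects a stage catches."""
    found_by_inspection = defects_injected * inspection_dre
    remaining = defects_injected - found_by_inspection
    found_by_testing = remaining * testing_dre
    escaped_to_production = remaining - found_by_testing
    return (found_by_inspection * COST_PER_DEFECT["inspection"]
            + found_by_testing * COST_PER_DEFECT["testing"]
            + escaped_to_production * COST_PER_DEFECT["production"])

# Same 100 injected defects, two strategies:
# testing alone vs. inspections first, then the same testing stage.
testing_only = total_removal_cost(100, inspection_dre=0.0, testing_dre=0.85)
with_inspections = total_removal_cost(100, inspection_dre=0.6, testing_dre=0.85)
print(testing_only, with_inspections)  # 800.0 380.0
```

Under these assumed numbers, adding upstream inspections more than halves the total removal cost, even though the testing stage itself is unchanged — fewer defects arrive there, and fewer escape past it.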