Showing posts with label Planning. Show all posts

Monday, June 07, 2010

Hyper-Productive Teams

The Agile Engineer Ryan Shriver has stunning news from the productivity front. After he 'produced' a whopping 1,750 attendees for his webinar on the topic, he was kind enough to share the presentation slides and a companion article. I quote from the article:

Hyper-productivity describes a state of being where teams are working at much higher levels of performance such as two, three and four times more productive than their peers. 
The interesting question for agilists and other method-interested people alike is: How can I do it? Well, research shows that hyper-productivity seems to come with a couple of practices, all to be found in Ryan's slide set and article. Be aware: don't get trapped into the logic that you only need to follow these practices to be hyper-productive.
I'd like to highlight one theme here, and Ryan probably says it best (mark-up is mine):
[...] the difference with the hyper-productive teams is their ability to stick to the practices over the project while continually removing impediments limiting performance.  
This resonates quite a bit with something Eric Ries posted about a startup lessons learned conference, in which Kent Beck happened to develop another manifesto, apparently (mark-up is mine):
Team vision and discipline over individuals and interactions (or processes and tools)
Validated learning over working software (or comprehensive documentation)
Customer discovery over customer collaboration (or contract negotiation)
Initiating change over responding to change (or following a plan)
I wouldn't go as far as calling this the new agile manifesto. This is more a set of paradigms for startups, so one could call it a startup manifesto ;-).

Thursday, February 25, 2010

Problem to Solution - The Continuum Between Requirements and Design

Christopher Brandt, a relatively new blogger from Vancouver, Canada, has an awesome piece about the difference between problem and solution. If he keeps blogging about problem solving in this outstanding quality, he is definitely someone to follow.

While I do not completely agree with the concepts, I believe everybody doing requirements or design should at least have this level of understanding.

Requirements written that imply a problem end up biasing the form of the solution, which in turn kills creativity and innovation by forcing the solution in a particular direction. Once this happens, other possibilities (which may have been better) cannot be investigated. [...] This mistake can be avoided by identifying the root problem (or problems) before considering the nature or requirements of the solution. Each root problem or core need can often be expressed in a single statement. [...] The reason for having a simple unbiased statement of the problem is to allow the design team to find path to the best solution. [...] Creative and innovative solutions can be created through any software development process as long as the underlying mechanics of how to go from problem to solution are understood.

Which brings me to promote again and again the single-most powerful question for finding requirements from a given design: WHY?

Don't forget to have fun ;-)




Thursday, December 24, 2009

German Planguage Concept Glossary

I published a German version of Gilb's Planguage Concept Glossary on Google Docs. It comprises the 178 core terms of systems engineering and Real QA concepts that are part of the Competitive Engineering book. In the future I plan to translate the rest of the more than 500 carefully defined terms.

The German words have been chosen to form a body of terms that is similar to the English original in coherence and completeness. It is therefore not a simple word-for-word dictionary translation. To the German reader, some of the words may seem badly translated. However, if you study the original glossary carefully, you will see that they make sense.

Furthermore, the terms have been tested in corporate practice and professional German conversation over a period of about a year. Most of them can be intuitively understood by Germans or German-speaking people; others need a quick look into the glossary definitions (which is something that is restricted to English-speaking readers ;-).

I'm happy to take your comments, critique and encouragement!

Saturday, December 05, 2009

Who is your hero systems engineer?

These days, I'm inquiring about engineering. I'd like to know who your hero systems engineer was or is (or will be?). Please comment, or send me a twitter message. Thank you!

Thank you again for making it past the first paragraph. ;-) It all started a couple of weeks ago, when I was once again confronted with a colleague's opinion that I am a theorist. I reject this opinion vehemently; quite the contrary, I believe my work, especially my work for the company, is a paragon of pragmatism. ;-)
So my argument against the opinion usually is 'No, I'm not a theorist!', in a more or less agitated voice. Obviously, I need a better basis for making that point. Kurt Lewin said:
Nothing is as practical as a good theory.
I used this sentence for a while as a footer for emails that I suspected would raise the 'theorist' criticism again.
After all, this sentence is only a claim as well, just like my stubborn phrase above. It may be stronger because it carries Lewin's weight. Unfortunately, very few people instantly know who Kurt Lewin was, and that he most of all used experience, not theory, to advance humankind.

Then a friend of mine, a U.S. citizen who at some point chose to live in Norway (a very practical move in my eyes :-), pointed me to the work of Billy V. Koen, Professor of Mechanical Engineering at the University of Texas. You should watch this 1-hour movie if you are at least interested in engineering, philosophy and art. Or in more mundane things like best practices, methods, techniques, recipes, and checklists; many of those concerning business analysis and project management can be found at Planet Project, in case you don't know it.

Here is Prof. Koen's definition of engineering from the movie:
The Engineering Method (Design) is the use of heuristics to cause the best change in an uncertain situation within the available resources.
Causing change in an uncertain situation within the available resources sounds a lot like project management to me. Like program or portfolio management, too. Maybe like management in general.

It is always when an improbable connection opens up between two different fields of my thinking that I find there is useful truth in it. Next, I want to learn about engineering and me, and one way to approach this is to find out whom I admire for his or her engineering skills. I tried thinking of a couple of candidate names and found surprisingly few. I've already identified Leonardo da Vinci (who is not a Dan Brown invention, as my nephew once suggested :-). A quick request to the twitterverse offered nothing, no new name.

So this is my next take: I'd like to know who your hero systems engineer was or is (or will be?), and why. Please comment, or send me a (twitter) message. Thank you!

Thursday, September 10, 2009

Quantum Mechanics, Buddhism, and Projects - Again!

Today I'm proud to announce that my QMBP :-) article was again published by a major web site on business analysis, requirements engineering and product management: The Requirements Network. The site is full of interesting material for beginners and experts. I recommend reading it.

You'll find the piece of mine and some interesting comments here.

And - ha! - mind the URL of my article: ... node/1874.
Did you know that Winston Churchill was born that year? Must say something ... :-D
Find out more about 1874.


Sunday, July 26, 2009

Intro to Statistical Process Control (SPC)

In exploring the web about Deming stuff I stumbled upon this site from Steve Horn. It introduces Statistical Process Control (SPC).
There are a number of pages on various aspects.

The articles are quick to read and easy to understand. They are written to the point.

I'd like to add that in Deming's 14 Points, Point 3 needs a special interpretation for software or (IT) project people.
The point says: 
3. Cease dependence on inspection to achieve quality. Eliminate the need for inspection on a mass basis by building quality into the product in the first place.

Deming was talking about (mass) production. In that field, "inspection" means roughly the same as "testing" in our profession.
In fact, inspections (of requirements, designs, code and test cases) are among the most effective activities you can do in software, for building quality into the product in the first place.

BTW, if you worry that your software development organisation is NOT really testing its products, it is a very wise idea to first introduce a couple of inspection stages, for this is both more efficient (economic) and more effective (fewer defects). It is also a sensible way to introduce learning.
Testing is about finding defects; inspections are about pointing out systematic errors and giving people a real chance to prevent them in the future.

Here's one quote from Steve I like in particular:
Confident people are often assumed to be knowledgeable. If you want to be confident the easiest way is to miss out the 'Study' phase (of the Plan-Do-Study-Act-Cycle) altogether and never question whether your ideas are really effective. This may make you confident but it will not make you right.

(words in brackets are mine)


Monday, June 01, 2009

History Repeating?, or A Case for Real QA

Jeff Patton of AgileProductDesign.com has an excellent article on Kanban Development. He does a great job in explaining the kanban idea, in relation to the main line of thinking in Agile. The article is about using a pull system rather than a push system like traditional (waterfall) development or agile development with estimation, specification and implementation of user stories.

Reading the first couple of sections got me thinking about another topic. Isn't this agile thing "history repeating"?
First of all, I'd like to question how common agile development really is. It's clear that the agile community talks a lot about agile (sic!), but what do the numbers tell us? I don't have any, so I'm speculating. Let's assume the percentage of agile developments is significant. (I don't want to argue against Agile, but I want to know if my time is well spent ;-)

Jeff writes:
"Once you’ve shrunk your stories down, you end up with different problems. User story backlogs become bigger and more difficult to manage. (...) Prioritizing, managing, and planning with this sort of backlog can be a nasty experience. (...) Product owners are often asked to break down stories to a level where a single story becomes meaningless."

This reminds me so much of waterfall development, a line of thinking I spent most of my professional career in, to be honest.
First, this sounds a lot like the initial point of any requirements management argument: you have so many requirements to manage that you need extra discipline, extra processes, and (of course ;-) an extra tool to do that. We saw (and see) this kind of argument all over the place in waterfall projects. Second, estimation, a potentially risky view into the future, has always been THE problem in big-bang developments. People try to predict minute detail, with the prediction covering many months or even years. This is outside most people's capacity. Third and last, any single atomic requirement in a waterfall spec really IS meaningless. I hope we learn from the waterfall experience.

Jeff goes on:
"Shrinking stories forces earlier elaboration and decision-making."
Waterfall-like again, right?

If user stories get shrunk in order to fit into a time-box, there's another solution to the problem (besides larger development time-boxes, or using a pull system as Jeff has beautifully laid out): don't make the user story the main planning item. How about "fraction of value delivered" instead, in per cent?

Jeff again:
"It’s difficult to fit thorough validation of the story into a short time-box as well. So, often testing slips into the time-box after. Which leaves the nasty problem of what to do with bugs which often get piped into a subsequent time-box."

This is nasty indeed, I know from personal experience. BTW, the same problem exists, but on the other side of the development time-box, if you need or want to thoroughly specify stories/features/requirements. Typical solution: you put it in the time-box BEFORE.
Let's again find a different solution using one of my favorite tools, the 5 Whys.
  • Why do we put testing in the next time box? Because it consumes too much time.
  • Why does it consume a lot of time? Because there is a significant number of defects to find and fix (and analyse and deploy and...), before we consider the product good enough for release. 
  • Why is there a significant number of defects to be found in the testing stage? Because the product arrives with a significant number of them.
  • Why does the test-ready product have a significant number of "inherent" defects? Because we have not reduced them further upstream.
  • Why didn't we reduce them further upstream? Because we think testing is very effective at finding all kinds of defects, so testing alone (or along with very few other practices) is sufficient for high defect removal efficiency.
It is not. Period.
From an economic standpoint it is wise to do proper QA upstream, in order to arrive at all subsequent defect removal stages (including testing) with a smaller number of defects, hence with fewer testing hours needed. This works because defects are removed cheapest and fastest as close as possible to their origin.
What do I mean by proper upstream QA? Well, I've personally seen that inspections (of requirements/stories, design, code, and tests) deliver jaw-dropping results in terms of defects reduced and ROI. I'm sure there are a couple more practices; just ask your metrics guru of choice. The point is: see what really helps, by facts and numbers, not opinions, and make a responsible decision.
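A toy model can illustrate why upstream defect removal pays off. Every number below (stage names, removal rates, fix-cost multipliers) is a made-up assumption for illustration, not measured data; only the shape of the result matters:

```python
# Illustrative assumptions: each stage removes a fraction of the defects
# it sees, and fixing a defect costs more the later it is found.
stages = [
    # (stage, removal_rate, hours_per_fix)
    ("requirements inspection", 0.60, 0.5),
    ("design inspection",       0.55, 1.0),
    ("code inspection",         0.60, 2.0),
    ("testing",                 0.30, 8.0),
]

def run(pipeline, injected=100.0):
    """Push defects through the removal stages, accumulating fix hours."""
    remaining, hours = injected, 0.0
    for _name, rate, cost in pipeline:
        removed = remaining * rate
        hours += removed * cost
        remaining -= removed
    return remaining, hours

with_inspections = run(stages)      # all four stages
testing_only = run(stages[-1:])     # testing as the only removal stage
print(with_inspections)  # roughly 5 defects escape, 91 fix hours
print(testing_only)      # roughly 70 defects escape, 240 fix hours
```

With these hypothetical numbers the inspected pipeline ships far fewer defects and spends fewer total fix hours: the "cheapest close to origin" argument in miniature.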

Thursday, April 16, 2009

0-defects? I'm not kidding!

Today a post by Michael Dubakov at The Edge of Chaos provoked some thoughts. He basically argues that a zero-defect mentality causes several unpleasant effects in a development team:

- Not enough courage to refactor complex, messy, buggy, but important pieces of code.
- Can't make important decisions; instead make less risky, but wrong decisions.
- Do everything to avoid responsibility, which leads to cowardly and stupid behavior.

Well, I guess the Solomonic answer to this critique is 'it depends'. Actually, I'm quite enthusiastic about the zero-defects thought. However, I also know that zero defects are extremely hard to attain, or can easily be attained by chance ;-)

You may remember the post where I detail 10 Requirements Principles for 0-defect systems. I therefore posted the following as a comment to Michael's post, arguing
- that we should not call defects 'bugs', 
- that zero-defects can be attained by using principles that have almost nothing to do with testing, but with requirements, 
- that testing alone is insufficient for a decent defect removal effectiveness.

Hi Michael, 

thanks for another thought-provoking post! i enjoy reading your blog.

Hmm, several issues here.

1) First of all I think we should come to a culture where we do not call more or less serious flaws in IT systems 'bugs'. To me, it does not make sense to belittle the fact. Let's call them what they are: defects (I'm not going into the possible definitions of a defect here).
This said, look here:  scroll down a bit, and laugh.
And here's a funny cartoon.

2) A couple of years ago I had the luck to be part of a team that delivered (close to) zero defects. I wrote down the 10 principles we used, from a requirements perspective. See the short version on PlanetProject, more or less uncommented, or Part 1 and Part 2 of a longer version on Raven's Brain.
Interestingly enough, only 1 out of the 10 principles is directly related to a defect-finding activity. We had a 'zero-defect mentality' in the sense that we all seriously thought it would be a good idea to deliver excellent quality to the stakeholders. Fear or frustration was not at all part of the game. Mistakes were tolerable, just not in the product at delivery date. Frankly, we were a bit astonished to find out we had successfully delivered a nearly defect-free non-trivial system over a couple of releases.

3) While it seems to be a good idea to use the different strategies that you proposed in the original post, I'm missing some notion of how effective the various defect-reducing strategies are. The only relevant quantitative research I know of comes from metrics guru Capers Jones. If I remember correctly, he states that each single strategy is only about 30% effective, meaning that you need to combine 6-8 strategies in order to end up with a 'professional' defect removal effectiveness. AND you cannot reach a net removal effectiveness of, say, 95% with testing alone. From an economic standpoint it is wise to reduce the number of defects that 'arrive' at testing in the first place, and this is done most effectively by formal inspections (of requirements, design and code).


Greetings!

Rolf

PS: if you find a defect in my comment, you can keep it ;-)
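The 30%-per-strategy arithmetic in point 3 of the comment is easy to sanity-check. Assuming each removal strategy independently catches about 30% of the defects it sees (both the figure and the independence are assumptions here), n strategies combine as 1 - (1 - 0.3)^n:

```python
import math

# Assumed per-strategy effectiveness (a rough reading of the figures
# attributed to Capers Jones above); independence is also assumed.
per_strategy = 0.30

def combined(n, e=per_strategy):
    """Net removal efficiency of n independent strategies."""
    return 1 - (1 - e) ** n

print(f"{combined(1):.0%}")  # testing alone: 30%
print(f"{combined(6):.0%}")  # 88%
print(f"{combined(8):.0%}")  # 94%

# Strategies needed to reach 95% net removal efficiency:
needed = math.ceil(math.log(1 - 0.95) / math.log(1 - per_strategy))
print(needed)  # 9
```

Under these assumptions no single strategy, testing included, gets anywhere near 95%, which is exactly the comment's point.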

Sunday, December 21, 2008

Why Re-work isn't a Bad Thing in all Situations.

Software development has many characteristics of production. I'm thinking of the various steps that as a whole comprise the development process, in which artifacts are passed from one sub-process to another. Requirements are passed on to design, code is passed on to test (or vice versa, in test-driven development), and so on.

Alistair Cockburn wrote an interesting article on what to do with the excess production capability of non-bottleneck sub-processes in a complex production process. In the Crosstalk issue of January 2009, he explores new strategies for optimizing a process as a whole. One idea is to consciously use re-work (you know, the bad thing you want to avoid like the plague) to improve quality earlier rather than later.

In terms of what to do with excess capacity at a non-bottleneck station, there is a strategy different from sitting idle, doing the work of the bottleneck, or simplifying the work at the bottleneck; it is to use the excess capacity to rework the ideas to get them more stable so that less rework is needed later at the bottleneck station.


Mark-up is mine. You see the basic idea.
In conclusion he lists the following four ways of using capacity that would otherwise be spent idling, waiting for a bottleneck to finish its work:

* Have the workers who would idle do the work of the bottleneck sub-process.

Note: Although this is sometimes mandated by parts of the Agile community, I have seldom seen workers in the software industry who can produce excellent-quality results in more than one or two disciplines.

* Have them simplify the work at the bottleneck sub-process.

Note: An idea would be to help out the BAs by scanning through documents and giving pointers to relevant information.

* Have them rework material to reduce future rework required at the bottleneck sub-process.

Note: See the introductory quote by A. Cockburn. Idea: improve documents that are going to be built upon "further downstream" to approach a quality level of less than 1 major defect per page (more than 20 per page is very common in documents that already carry a 'QA passed' stamp).

* Have them create multiple alternatives for the bottleneck sub-process to choose from.

Note: An example would be to provide different designs to stakeholders, say, GUI designs. This can almost take the form of different engineering teams concurrently developing solutions to one problem, so that the best solution can be evaluated and consciously chosen.

Saturday, December 13, 2008

Lack of Engineering Practices Make 'Agile' Teams Fail

James Shore wrote a blog post that is among my TOP 5 in this quarter. Go read it! He argues that the industry sees so many agile projects fail because most teams that call themselves 'agile' mistake the real idea with the wrong set of practices. Maybe this quote says best what his article is about:
These teams say they're Agile, but they're just planning (and replanning) frequently. Short cycles and the ability to re-plan are the benefit that Agile gives you. It's the reward, not the method. These pseudo-Agile teams are having dessert every night and skipping their vegetables. By leaving out all the other stuff--the stuff that's really Agile--they're setting themselves up for rotten teeth, an oversized waistline, and ultimate failure. They feel good now, but it won't last.

This speaks from my heart. It's exactly what happened when I tried to turn a project 'agile' in a super waterfall organization (picking up this metaphor I'd say it's an Angel Falls organization). We will have technical debt to pay off for at least a decade. However, we found that out quite quickly ;-)
If you're looking for some method that is strong both in the management and engineering field, check out Evo by Tom Gilb. I know I recommended Mr. Gilb's work over and over again in the past 2 years, and it almost seems ridiculous. Be assured, I'm not getting paid by Tom :-)
To get a very clear perspective on iterative planning check out Niels Malotaux' Time Line.

Monday, November 24, 2008

All you want to know about Stakeholder Management

Alec Satin of the "Making Project Management Better" weblog has a brilliant list of links to Stakeholder management resources.

From his explanatory text:

"In this list are articles to get you thinking about some of the less obvious issues in pleasing stakeholders. Primary selection criteria: (a) quick to read, (b) good value for your time, and (c) of interest whatever your level of experience." Markup is mine.


Enjoy!

Wednesday, October 01, 2008

Impact Estimation Table Template for free download

Courtesy of Ryan Shriver, I have translated his Impact Estimation Table template into German. Go here to download it.

The above link to the German table will also give you a glimpse of the wiki project I'm doing: all the posts, i.e. principles, processes and rules, will shortly be available in a wiki format.

Friday, August 08, 2008

Update on Finding the right cycle time

I updated

Finding the right cycle time

with the idea of 1-day iterations. Thanks to Mishkin Berteig for injecting this idea.
For those of you who think 1-week iterations are ridiculous, and 1-day iterations can't reasonably be a serious contribution of mine, think again.

Friday, August 01, 2008

5 Steps Towards Simple and Effective Project Estimation

Name: 5 Steps Towards Simple and Effective Project Estimation
Type: Process
Status: final
Version: 2008-07-31
Sources: my practice, Niels Malotaux (see http://www.maloteaux.nl/), www.iit.edu/~it/delphi.html

Gist: Every once in a while we start projects, or are about to start one, and somebody asks "How long will it take?" (The better question would be "How long will it take to have benefit A and B?", but that's another story.) The following simple 5-step process has provided sufficient precision while not being too time-consuming. An additional benefit of the process: a group of people does it, most commonly the group that has to work on it anyway.

Entry1: You need to have clear objectives.
Note: If you don't have clear objectives, you will notice in Step 4.

S1: Sit down - alone, or better with one colleague - and break the work down into one or two dozen task-chunks. List them.
Notes:
* If it's a well-known problem go find someone who did it before!
* If it's a rather small project you can break it down to single tasks, but you don't have to. Roughly 10-25 list items.
* Best effort. Don't worry too much about forgetting things or about the uncertainty because you don't know every constraint.

S2: Distribute the list to the people who will work on the project or know enough about the work to be done. Ask for their best estimates for each chunk. Allow for adding items to the list.
Notes:
* It's a good idea to limit the time for the estimations, especially if you have external staff that is paid by time and material.
* Don't go for ultimate precision; you won't achieve it anyway. Variances will average out.
* You're much better off with an initial, not so precise estimate and early, constant calibration.

S3: See where estimates differ greatly. Discuss or further define the contents, not the estimates.
Notes:
* Everyone's an expert here. It is not about right or wrong estimates, but about understanding the work to be done.
* In the original Delphi process there's an element of anonymity. I can't follow that advice, for the purpose of this process also is to build consensus.

S4: Iterate S2-3 until sufficient consensus is reached.
Notes:
* Niels says not more than one or two iterations are necessary.
* I say it sometimes needs a dictator approach.
* Careful here: make sure to discuss the content of the work. It is also helpful to further specify the objectives!

S5: Add up all the task-chunks' estimates: that's how much work the project is.
Note: The single most useful approach to fitting the work into a tight schedule is to work hard NOT to do things that will later turn out to be superfluous.
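For larger lists, S2-S5 are easy to mechanize. The sketch below is one possible shape, with made-up chunk names and numbers: the spread check flags chunks whose estimates differ greatly (S3, discuss the content, not the numbers), and the total is the sum of the per-chunk averages (S5):

```python
# Hypothetical estimates (in hours) from three estimators per task-chunk.
estimates = {
    "data migration":   [16, 20, 60],
    "login screen":     [8, 10, 12],
    "reporting module": [40, 44, 48],
}

def needs_discussion(values, spread_limit=2.0):
    """S3: flag chunks whose max/min ratio exceeds the limit; the content
    of those chunks, not the numbers, should then be discussed."""
    return max(values) / min(values) > spread_limit

to_discuss = [chunk for chunk, v in estimates.items() if needs_discussion(v)]
total = sum(sum(v) / len(v) for v in estimates.values())  # S5: sum of averages

print(to_discuss)  # ['data migration']
print(total)       # 86.0
```

In a real round you would iterate: re-estimate the flagged chunks after discussion (S4) and only then trust the total.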

Related Posts:
Earned Value Analysis for Feature-Burndown in Iterative Projects
Measuring and Estimating one's Velocity
Calculate Process Efficiency
Quantum Mechanics, Buddhism and Projects

Friday, July 04, 2008

7 Useful Viewpoints for Task Planning Oriented Towards Quick Results

Name: 7 Useful Viewpoints for Task Planning Oriented Towards Quick Results
Type: Rules
Status: final
Version: 2008-07-04

Gist: While planning tasks or activities, and at the same time focusing on quick results, it is a good idea to describe each task by means of different viewpoints. Here's my suggestion, derived from Tom Gilb's work, the BABOK and my own experience.

R1: Each task should be described in terms of
Purpose. Why are you doing this? This is the single most important category, because it may lead you to the understanding that this is not the right task.
Entry. What do you need in order to begin and complete the task? This prevents you from doing things without the proper basis, which would most likely cause unnecessary re-work.
Process. Which steps do you follow? This is a little plan for your task. It may be only one step, but most of the times there are more than one, even for small tasks. Splitting tasks into steps allows you to return to previous steps in case you learn something new.
Exit. What are the results of the task? What will be the benefit? This is what counts, and what can give you the beautiful chance of realizing benefits 'along the way' of some greater set of tasks.
Stakeholders. With whom will you have to speak? Whose requirements will you have to take into account? This prevents re-work caused by ignoring influential people.
Methods/Techniques. How can you do it? Note anything that may help you completing the task.
Time-Limit. How long do you plan to work on it? If you need substantially longer, you will have a clear indicator that something is wrong. Short deadlines help you focus (80/20 principle).
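One lightweight way to keep R1 honest is to make the seven viewpoints the fields of a task record, so a task cannot be written down without them. The record shape is my sketch, and everything in the example task is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Task:
    """A task described by the seven viewpoints of R1."""
    purpose: str              # Why are you doing this?
    entry: str                # What do you need to begin and complete it?
    process: list[str]        # The little plan: one or more steps
    exit_criteria: str        # Results and benefit
    stakeholders: list[str]   # With whom will you have to speak?
    methods: list[str]        # Anything that may help you complete it
    time_limit_hours: float   # Short deadlines help you focus

interview = Task(
    purpose="Understand why release reports are late",
    entry="Report samples and access to the release manager",
    process=["prepare questions", "hold interview", "write up findings"],
    exit_criteria="List of candidate root causes, shared with the team",
    stakeholders=["release manager", "QA lead"],
    methods=["5 Whys", "open interview"],
    time_limit_hours=4.0,
)
print(interview.purpose)
```

Because all fields are required, writing the task forces you through every viewpoint, including the purpose question that may tell you the task is wrong.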

Friday, May 16, 2008

8 Useful Tips to Pick an Analyst

Name: 8 Useful Tips to Pick an Analyst
Type: Rules
Status: final
Version: 2008-05-16

Gist: Every now and then I run into the task of picking a business analyst or systems analyst for a project or customer. Sometimes this starts with writing a job offer, sometimes with a colleague asking if I could take part in a job interview with some prospect. What should you look for?

R1: Clarify what tasks the candidate will have. 'Ok, this is a no-brainer

R2: Clarify what the challenge in this particular project or this particular job will be.

R3: Clarify what (analyst-specific) professional skills will be needed, now, and in the future, in this particular job.

R4: Clarify what soft skills will be needed, now, and in the future, in this particular job.

R5: Make sure the candidate knows these things (and has a sound approach to dealing with them):

  • Stakeholders seem to change their mind on a regular basis. 'I think they don't, it just looks like a change
  • Goals are seldom properly understood.
  • Demand and Supply speak two different sociolects.
  • Stakeholders (among other people...) have trouble communicating about the future, i.e. the benefits, or the system-to-be.
  • Other team members, including fellow analysts, architects, and QA-folk, deserve proper appreciation.
  • A problem is a solution to a problem is a solution to... (also see the means-and-ends post)
  • Many people (the prospect, too? Would be a bad sign for an analyst.) have a well developed answer reflex, so finding out what the problem really is can be quite a challenge.
  • Any required product quality can be described in a meaningful, quantified way. (Refer to Tom Gilb's extensive and highly useful work on the subject.)

R6: Clarify what testing skills will be needed. 'An analyst that does not know how to design acceptance criteria?

R7: Look for someone who uses words like 'now and then', 'generally', 'frequently', 'some', 'somewhat', 'arguable', 'among other', 'on the other hand', 'also', 'able', 'allow'.

Note: The language gives you hints on the prospect's experience level. Experts tend to have many answers.

R8: Avoid people who use words like 'steady', 'always', 'every', 'all', 'absolute', 'totally', 'without doubt', 'nothing', 'only', 'entirely', 'without exception', 'must', 'have to'.

Tuesday, May 13, 2008

Why Managing Too Many Risks is Too Risky

I wrote a guest post over at the Raven's Brain weblog. The topic is 'Heaps of Risks - Why Managing Too Many Risks is Too Risky'.

Risk management is a source of unusual human behavior: euphoria or excessive gambling when risk is underestimated, and panic attacks or depression when we predict that things are riskier than they really are. Both are risks in themselves.
In the post I address a risk that is also closely related to human behavior: overly extensive risk registers, i.e. lists of several dozen risks for a given project.

Read more

If it feels too troublesome to leave a comment there, why not leave it here?

Friday, April 25, 2008

12 Helpful Rules for Refactoring a System

Name: 12 Helpful Rules for Refactoring a System
Type: Rules
Status: final
Version: 2008-04-25

Gist: to provoke second thoughts if you are to take part in a refactoring. Refactorings seem to come about more frequently these days, maybe because even the most stolid business people are starting to talk about 'agile'. Like all these now-buzzwords, there's a lot of potential misunderstanding hiding in the very word.

Refactoring DEFINED AS modifying something without changing its externally observable behaviour with the purpose of making maintenance easier. Note that you can refactor many things, ranging from systems to architectures to design to code. However, relevant literature suggests that Refactoring is meant to relate to Design and Implementation. See Martin Fowler's Website. Also see Code Refactoring.

R1) Whenever you hear the word refactoring, run! 'just joking...

R2) Precisely understand what the various stakeholders think a refactoring is. You'll be surprised. Correct the wording if needed.

R3) Precisely understand what purpose the refactoring has, from each of the stakeholder's point of view. Align expectations before you start.

R4) Assume a great deal of resistance from the people who pay. Refactoring, for them, is another word for "Give me $$$, I'll give you what you already have!" Find out who will pay for maintenance.

R5) By all means reduce the risk of ending up with new bugs after the refactoring. This can be done, for example, by rigorous test-driven or even behaviour-driven development. Any change in behaviour should be treated as a bug.
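Rule R5 can be illustrated with a tiny, entirely hypothetical example (the function names and the discount rule are made up): the original and the refactored version are run against the same cases, and any difference in behaviour is treated as a bug.

```python
# Illustrative sketch only: a small refactoring protected by a
# characterization test. Names and the discount rule are hypothetical.

def total_price_before(items):
    # Original version: duplicated logic, harder to maintain.
    total = 0.0
    for price, qty in items:
        if qty >= 10:
            total += price * qty * 0.9  # bulk discount
        else:
            total += price * qty
    return total

def total_price_after(items):
    # Refactored version: same externally observable behaviour,
    # clearer structure.
    def line_total(price, qty):
        discount = 0.9 if qty >= 10 else 1.0
        return price * qty * discount
    return sum(line_total(p, q) for p, q in items)

# Characterization test in the spirit of R5: any behavioural
# difference between old and new is a bug.
cases = [[], [(2.0, 3)], [(1.5, 10)], [(2.0, 3), (1.5, 12)]]
for case in cases:
    assert total_price_before(case) == total_price_after(case)
```

The pair of implementations exists only during the transition; once the characterization tests pass, the old version can be deleted.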

R6) Challenge the business case (if any ...) of the refactoring. At least, do a proper cost-benefit analysis. For example, the system may not live long enough to amortize the refactoring.
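A back-of-the-envelope calculation, with entirely made-up figures, shows how a refactoring can fail R6's cost-benefit test:

```python
# Hypothetical numbers for a quick cost-benefit check of a refactoring (R6).
refactoring_cost = 40_000.0           # one-off cost in $
annual_maintenance_saving = 15_000.0  # expected yearly saving in $
expected_remaining_lifetime = 2.0     # years the system will still be used

# How long until the savings have paid back the one-off cost?
payback_years = refactoring_cost / annual_maintenance_saving

# Total benefit over the system's remaining lifetime, minus the cost.
net_benefit = (annual_maintenance_saving * expected_remaining_lifetime
               - refactoring_cost)

print(f"payback after {payback_years:.1f} years, net benefit ${net_benefit:,.0f}")
# With these numbers the payback period exceeds the remaining lifetime,
# so the refactoring would not amortize.
```

The point is not the arithmetic but the habit: if nobody can even sketch such numbers, rule R7 below applies in full force.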

R7) Prepare for the situation that nobody can provide exact and dependable information on how future budgets will be affected by the refactoring.

R8) Back off from refactoring if significant portions of the code will be both refactored and worked over due to new requirements. Do one thing at a time and bear in mind R5.

R9) Prepare for 3 different points of view if you ask 2 people which specific refactorings should be done.

R10) Think of short-term and long-term effects.

R11) The improvement must be both quantified beforehand and measured afterwards, at least for non-local changes to the system.


R12) Be suspicious if someone suggests only one or two specific refactoring techniques will suffice to reach a project-level goal.

Monday, April 21, 2008

5 Powerful Principles to Challenge Arguments

Name: 5 Powerful Principles to Challenge Arguments
Type: Principles
Status: final
Version: 2008-04-21

Gist: to provide a general checklist for the suspicious. The 5 principles describe behavioural patterns people use to make you do things in their favour, but not necessarily in yours. Use this on (in alphabetical order) advisors, colleagues, consultants, counsels, managers, salespersons, vendors.

Sources: Scott Berkun: How to Detect Bullshit; and my all-time-favourite checklist: 12 Tough Questions by Tom Gilb.

P1) People are uncertain and tend to ignorantly stretch the facts.

  • How do you know?
  • What are your sources? How can I check them?
  • Who told you that?
  • Can you quantify the improvement?

Note: Carefully watch the answerer. If he needs a while, maybe uncomfortably shifting position, there's a good chance he's either making something up or needs time to figure out how to disguise a weak argument.

P2) People with weak arguments often have not done their homework on the topic.

  • What is the counter argument?
  • Who else shares this point of view?
  • What are the risks of this, and what will you do about it?
  • Can you quantify the improvement?
  • How does your idea affect my goals and my budgets?
  • What would make you change your mind?
  • Have we got a complete solution?

Note: From any set of facts one can draw several reasonable interpretations, not just one. Anyone with intimate knowledge of the topic should have little difficulty taking a different point of view for a while.

P3) People tend to create urgency when they ask you to make a decision that has some hidden consequence.

  • Can I sleep on this?
  • When do we need to have a decision made? Why?
  • I'd like to consult Person A first.
  • Expert B, what do you think?

Note: People pressing ahead may try to throw you off your guard.

P4) People without a clear understanding of their point of view tend to inflate the language used.

  • Please break this into smaller pieces, so I can understand it.
  • Explain this in simpler terms, please.
  • I refuse to accept this until I, or someone I trust, fully understand it.
  • Are you trying to say <...>? Then please speak more plainly next time.

Note: Blaise Pascal (the line is often misattributed to Mark Twain) once wrote in a letter to a friend: 'I'm sorry this letter is so long; I didn't have enough time to make it shorter.'

P5) People tend to have stronger arguments if they know someone is present who is hard to deceive.

  • Use your network.
  • Invite colleagues who have worked with these people.

Note: Simply help each other, like your family would do.

Monday, April 14, 2008

The Ominous Bill of Rights

Name: The Ominous Bill of Rights
Type: Principles
Status: final
Version: 2008-04-14

Gist: To present the Bill of Rights for a project development team (or contractor), originated by Tom Gilb, and to contrast it with a Bill from the customer's / client's point of view. I intend to balance Tom's view. PLEASE NOTICE that these principles do not represent any official terms or principles used by my employer. This is, as are all the other posts, a completely private matter.
Find a German version below.

Sources: The Contractor's Bill is from Gilb, Tom. 1988. Principles of Software Engineering Management. Wokingham and Reading, MA: Addison-Wesley. Page 23.
The Customer's Bill is out of my own head.
Note: Tom did not call it 'Contractor's Bill of Rights', but only 'Bill of Rights'. It's clear what he had in mind if you read the specific principles.

Contractor's Bill of Rights:
P1: You have a right to know precisely what is expected of you.
P2: You have a right to clarify things with colleagues, anywhere in the organization.
P3: You have a right to initiate clearer definitions of objectives and strategies.
P4: You have a right to get objectives presented in measurable, quantified formats.
P5: You have a right to change your objectives and strategies, for better performance.
P6: You have a right to try out new ideas for improving communication.
P7: You have a right to fail when trying, but must kill your failures quickly.
P8: You have a right to challenge constructively higher-level objectives and strategies.
P9: You have a right to be judged objectively on your performance against measurable objectives.
P10: You have a right to offer constructive help to colleagues to improve communication.

Customer's Bill of Rights:
P11: We have a right to set the objectives for the endeavour.
P12: We have a right to change our minds about objectives at any time.
P13: We have a right to set the criteria by which we will measure your success.
P14: We have a right to be informed about exactly what you have accomplished at any time.
P15: We have a right to hear your arguments if you plan to change a decision that affects our objectives.
P16: We have a right to be informed of the consequences of a planned change, in measurable, quantified formats.
P17: We have a right to constructively debate your planned change.
P18: We have a right to get clear documentation about any changed decision.

German Versions:
Rechte des Auftragnehmers:
Sie haben das Recht, genau zu wissen, was wir von Ihnen erwarten.
Sie haben das Recht, Dinge mit Kollegen aus der gesamten Projektorganisation zu klären.
Sie haben das Recht, klarere Definitionen von Zielen und Strategien anzustoßen.
Sie haben das Recht, Ziele in messbarer, quantifizierter Form präsentiert zu bekommen.
Sie haben das Recht, unsere Ziele und Strategien konstruktiv zu hinterfragen, um insgesamt ein besseres Ergebnis zu erzielen.
Sie haben das Recht, objektiv anhand messbarer Ziele bewertet zu werden.
Sie haben das Recht, zu entscheiden über die Maßnahmen zur Erreichung der Ziele und ihre Reihenfolge.

Rechte des Auftraggebers:
Wir haben das Recht, die Ziele für das Vorhaben vorzugeben.
Wir haben das Recht, diese Ziele jederzeit zu ändern.
Wir haben das Recht, die Kriterien vorzugeben, anhand derer wir Erfolg messen.
Wir haben das Recht, jederzeit darüber informiert zu werden, was Sie bereits erledigt haben.
Wir haben das Recht, Ihre Argumente zu hören, falls Sie vorhaben, Entscheidungen zu treffen, die sich auf die Zielerreichung auswirken.
Wir haben das Recht, über die Auswirkungen informiert zu werden, und zwar in objektiv überprüfbarer Art und Weise.
Wir haben das Recht, Ihre Entscheidung konstruktiv zu hinterfragen.
Wir haben das Recht, getroffene Entscheidungen klar dokumentiert zu bekommen.