
Wednesday, July 29, 2009

Give Kelly Waters a Hand

Follow me on Twitter: @rolfgoetz

Kelly Waters, the creator of the wonderful All About Agile weblog, is striving for more reach in a recent post.
I can recommend his work: it's all about agile software development and agile project management. He highlights interesting things about agile from all over the web and writes original content explaining some of the key agile principles, how to implement Scrum, user stories, and lots more. A really rich source of information on the topic, and easy to read and grasp. He also has an extensive home page there, so you'll find things easily.
Go check him out and stay with him!

PS: In case you somehow lose the link to the All About Agile Weblog, you can always come back to my site and scroll down to the "Links" section on the right. It will be there.





Monday, June 01, 2009

History Repeating?, or A Case for Real QA

Jeff Patton of AgileProductDesign.com has an excellent article on Kanban Development. He does a great job of explaining the kanban idea in relation to the main line of thinking in Agile. The article is about using a pull system rather than a push system like traditional (waterfall) development or agile development with estimation, specification and implementation of user stories.

Reading the first couple of sections got me thinking about another topic. Isn't this agile thing "history repeating"?
First of all, I'd like to question how common agile development really is. It's clear that the agile community talks a lot about agile (sic!), but what do the numbers tell us? I don't have any, so I'm speculating. Let's assume the percentage of agile developments is significant. I don't want to argue against Agile, but I do want to know whether my time is well spent ;-)

Jeff writes:
"Once you’ve shrunk your stories down, you end up with different problems. User story backlogs become bigger and more difficult to manage. (...) Prioritizing, managing, and planning with this sort of backlog can be a nasty experience. (...) Product owners are often asked to break down stories to a level where a single story becomes meaningless."

This reminds me so much of waterfall development, a line of thinking I spent most of my professional career in, to be honest.
First, this sounds a lot like the opening of any requirements management argument: you have so many requirements to manage that you need extra discipline, extra processes, and (of course ;-) an extra tool. We saw (and still see) this kind of argument all over the place in waterfall projects. Second, estimation, a potentially risky view into the future, has always been THE problem in big-bang developments. People try to predict minute detail, with the prediction covering many months or even years. This is beyond most people's capacity. Third and last, any single atomic requirement in a waterfall spec really IS meaningless. I hope we learn from the waterfall experience.

Jeff goes on:
"Shrinking stories forces earlier elaboration and decision-making."
Waterfall-like again, right?

If user stories get shrunk in order to fit into a time-box, there is another solution to the problem (besides larger development time-boxes, or using a pull system as Jeff has beautifully laid out): not making the user story the main planning item. How about "fraction of value delivered" instead, in percent?

Jeff again:
"It’s difficult to fit thorough validation of the story into a short time-box as well. So, often testing slips into the time-box after. Which leaves the nasty problem of what to do with bugs which often get piped into a subsequent time-box."

This is nasty indeed, as I know from personal experience. BTW, the same problem exists on the other side of the development time-box if you need or want to thoroughly specify stories/features/requirements. Typical solution: you put that work in the time-box BEFORE.
Let's again find a different solution using one of my favorite tools, the 5 Whys.
  • Why do we put testing in the next time box? Because it consumes too much time.
  • Why does it consume a lot of time? Because there is a significant number of defects to find and fix (and analyse and deploy and...), before we consider the product good enough for release. 
  • Why is there a significant number of defects to be found in the testing stage? Because the product arrives with a significant number of them.
  • Why does the test-ready product have a significant number of "inherent" defects? Because we have not significantly reduced them further upstream.
  • Why didn't we reduce them further upstream? Because we think testing is very effective in finding all kinds of defects, so testing alone (or along with very few other practices) is sufficient for high defect removal efficiency.
It is not. Period.
From an economic standpoint it is wise to do proper QA upstream, so that all subsequent defect removal stages (including testing) start with a smaller number of defects and hence need fewer testing hours. This works because defects are removed cheapest and fastest as close as possible to their origin.
What do I mean by proper upstream QA? Well, I have personally seen inspections (on requirements/stories, design, code, and tests) deliver jaw-dropping results in terms of defects removed and ROI. I'm sure there are a couple more practices; just ask your metrics guru of choice. The point is: see what really helps, by facts and numbers, not opinions, and make a responsible decision.

Tuesday, April 28, 2009

Writing Many High Quality Artefacts - Efficiently

I finished a process description on writing many artefacts (use cases, requirements, book chapters, test cases, ...) in high quality with the least effort possible. Please have a look; I'm eager to see your comments. After all, it's a Planet Project wiki article, so if you feel like it, just edit.

This, again, is about evolution, or learning by feedback, and makes heavy use of inspections. Inspections, BTW, are one of the most effective QA methods known in software engineering history, according to metrics expert Capers Jones.

Monday, April 13, 2009

Value Thinking: Using Scalpels not Hatchets

Ryan Shriver, managing consultant at Dominion Digital, has put out an excellent article on Value Thinking. I think his views are extremely relevant, as we see cost reduction programs all around the IT globe. I can't tell you how spot on Ryan is, considering the organization I work for (sorry, can't say more in public).

The article is short and gives ample information on the WHAT and (more importantly) the WHY.
Ryan writes in due detail about 4 policies and 5 practices, his advice to struggling IT shops and IT departments. Thanks, Ryan!

It's on gantthead.com, and I recommend signing up if you haven't already; it's free.

Sunday, February 15, 2009

Do you care about your customer, every day?

There's a wonderful parable on real customer satisfaction on the "i must be an acrobat" weblog by Joshua Hoover.

He compares eating out in a restaurant with inattentive waiters to developing software without delivering real results to a customer.

Thursday, December 25, 2008

Software Development Practice in the 21st (22nd?) Century

A couple of months ago I published an article about 10 Critical Requirements Principles for 0-Defects-Systems (on Ravens Brain): anecdotal evidence on how I once achieved zero defects in one of my projects.

If you like the thought of reliable software (sic!) you might also like Charles Fishman's article on the On-Board Shuttle Group, the producer of the space shuttle's software. Yes, they have a lot of money to spend, but many lives and expensive equipment are at stake.
This software never crashes. It never needs to be re-booted. This software is bug-free. It is perfect, as perfect as human beings have achieved. Consider these stats: the last three versions of the program -- each 420,000 lines long -- had just one error each. The last 11 versions of this software had a total of 17 errors. -- Charles Fishman on the software product the Group puts out


BTW, they seem to use three key practices: very detailed requirements, inspections, and continuous improvement.
The way the process works, it not only finds errors in the software. The process finds errors in the process. -- Charles Fishman on the On-Board Shuttle Group's process
Happy New Year! (Note that 2009 is a year of the 21st century. Can't we do any better?)

PS: I'm going to go on vacation from Dec 28 to Jan 20. More interesting topics still to come, please be patient.

Sunday, December 21, 2008

Why Re-work isn't a Bad Thing in all Situations.

Software development has many characteristics of production. I'm thinking of the various steps that as a whole comprise the development process, in which artifacts are passed from one sub-process to another: requirements are passed on to design, code is passed on to test (or vice versa, in test-driven development), and so on.

Alistair Cockburn wrote an interesting article on what to do with the excess capacity of non-bottleneck sub-processes in a complex production process. In the Crosstalk issue of January 2009, he explores new strategies for optimizing a process as a whole. One idea is to consciously use re-work (you know, the thing you normally want to avoid like the plague) to improve quality earlier rather than later.

In terms of what to do with excess capacity at a non-bottleneck station, there is a strategy different from sitting idle, doing the work of the bottleneck, or simplifying the work at the bottleneck; it is to use the excess capacity to rework the ideas to get them more stable so that less rework is needed later at the bottleneck station.


Mark-up is mine. You see the basic idea.
In conclusion he lists the following four ways of using capacity that would otherwise be spent idling, waiting for a bottleneck to finish its work:

* Have the workers who would idle do the work of the bottleneck sub-process.

Note: Although this is sometimes mandated by parts of the Agile community, I have seldom seen workers in the software industry who can produce excellent-quality results in more than one or two disciplines.

* Have them simplify the work at the bottleneck sub-process.

Note: An idea would be to help out the BAs by scanning through documents and giving pointers to relevant information.

* Have them rework material to reduce future rework required at the bottleneck sub-process.

Note: See the introductory quote by A. Cockburn. Idea: improve documents that are going to be built upon further downstream so that they approach a quality level of less than 1 major defect per page (more than 20 per page is very common in documents that already carry a 'QA passed' stamp).

* Have them create multiple alternatives for the bottleneck sub-process to choose from.

Note: An example would be to provide different designs to stakeholders, say, GUI designs. This can almost take the form of different engineering teams concurrently developing solutions to one problem, so that the best solution can be evaluated and consciously chosen.

Saturday, December 13, 2008

Lack of Engineering Practices Makes 'Agile' Teams Fail

James Shore wrote a blog post that is among my TOP 5 this quarter. Go read it! He argues that the industry sees so many agile projects fail because most teams that call themselves 'agile' confuse the real idea with the wrong set of practices. Maybe this quote says best what his article is about:
These teams say they're Agile, but they're just planning (and replanning) frequently. Short cycles and the ability to re-plan are the benefit that Agile gives you. It's the reward, not the method. These pseudo-Agile teams are having dessert every night and skipping their vegetables. By leaving out all the other stuff--the stuff that's really Agile--they're setting themselves up for rotten teeth, an oversized waistline, and ultimate failure. They feel good now, but it won't last.

This resonates deeply with me. It's exactly what happened when I tried to turn a project 'agile' in a super-waterfall organization (picking up this metaphor, I'd say it's an Angel Falls organization). We will have technical debt to pay off for at least a decade. However, we found that out quite quickly ;-)
If you're looking for a method that is strong in both the management and the engineering field, check out Evo by Tom Gilb. I know I've recommended Mr. Gilb's work over and over again in the past 2 years, and it almost seems ridiculous. Rest assured, I'm not getting paid by Tom :-)
To get a very clear perspective on iterative planning, check out Niels Malotaux's Time Line.

Wednesday, October 01, 2008

Impact Estimation Table Template for free download

Courtesy of Ryan Shriver, I have translated his Impact Estimation Table template into German. Go here to download it.

The above link to the German table will also give you a glimpse of the wiki project I'm doing: all the posts, i.e. principles, processes and rules, will be available in a wiki format shortly.

Friday, August 08, 2008

Update on Finding the right cycle time

I updated

Finding the right cycle time

with the idea of 1-day iterations. Thanks to Mishkin Berteig for injecting this idea.
For those of you who think 1-week iterations are ridiculous, and that 1-day iterations can't possibly be a serious suggestion of mine, think again.

Monday, May 19, 2008

How to Personally Get the Most Out of Finished Projects

Name: How to Personally Get the Most Out of Finished Projects
Type: Process
Status: final
Version: 2008-05-19

Gist: Dustin Wax wrote an awesome article at the Lifehack web log. Topic: Getting Past Done: What to Do After You've Finished a Big Project
As it is significantly more stuff than fluff I will essentially post a link only, for once. However, you might like the following definition.

reflective thinking DEFINED AS active, persistent, and careful consideration of any belief or supposed form of knowledge in the light of the grounds that support it and the further conclusions to which it tends [that] includes a conscious and voluntary effort to establish belief upon a firm basis of evidence and rationality -- Dewey, J. 1933. How We Think: A Restatement of the Relation of Reflective Thinking to the Educative Process. Lexington, MA: Heath.

S1: Read: Getting Past Done: What to Do After You've Finished a Big Project <http://www.lifehack.org/articles/productivity/getting-past-done-what-to-do-after-youve-finished-a-big-project.html> and extract the questions.

S2: Answer the questions in a quiet moment. You could also use them for a guideline in a lessons learned meeting.
Note: Reflection is a method of learning with scientifically demonstrated effectiveness.

S3: Prepare a checklist for future use on finished projects (or link here).

Friday, April 11, 2008

11 Rules for Prioritizing Features in a Customer-Oriented Organization

Name: 11 Rules for Prioritizing Features in a Customer-Oriented Organization
Type: Rules
Status: final
Version: 2008-04-11

Gist: While prioritizing features, people quite often take a ranking approach in order to balance different stakeholders' perspectives. This may actually prevent satisfied stakeholders (or a short time to market) and lead to a mediocre release plan. Why? The math used in most ranking approaches is wrong. Here are 11 rules for proper prioritization.

Source: http://tynerblain.com/blog/2008/04/09/improved-prioritization, thanks to Scott Selhorst for yet another great idea. Go there if you need a more visual explanation. I added my bits and pieces.

Prerequisites:
E1: You have a list of features gathered from various stakeholders.

E2: Your stakeholders are of different importance: maybe one represents the customer of your product, another is development/manufacturing, yet another is some governing technical department. Read about Finding Key Stakeholders.

E3: Each stakeholder has ranked all features, with smaller numbers for the more important ones.

Rules:
R0: Make sure you have a proper understanding of the goals (read Specifying Goals and Decomposing Goals) of the project or product. Make sure you understand your set of stakeholders in the light of the goals.

R1: Ignoring E2 above, the simplest approach would be to add up the ranks for each feature and, voila, the features with the best (lowest) average rank win.

R2: Not ignoring E2, don't make the mistake of giving each stakeholder a weight and multiplying the respective ranks by that weight before you sum the rank numbers. This only works if you get lucky with the numbers.

R3: You can do as R2 suggests, but then you have to reverse the ranking numbers of E3, giving large numbers to the more important features (see the sketch after R10).

R4: You should think hard about what the stakeholders' weights represent. If you have one stakeholder who is more important than the others, it's paramount to make them happy, or at least not to upset them. Do this by delivering their important features first; only after that should you think of features for the other stakeholders.
Note: Take the example from E2. I bet the purpose of your business is to make the customer happy.

R5: Make sure you do a Kano analysis in order to find features nobody thought of in the first place. This may be your advantage over competitors.

R6: If confronted with a large (> 20 items) list of features, let the stakeholders break it down into importance classes first. It's virtually impossible for the average stakeholder to truly rank so many items.

R7: Don't give in if your stakeholders say they need to know the costs before they can rank the features. You want to know what they want to have! You can always work out ways to make things cheaper, especially if your features are real (business) requirements, not solutions disguised as requirements. Bear in mind: "If you don't know the value of a feature, it does not make sense to ask what it costs." (Tom DeMarco)

R8: If your features are Use Cases, go here.

R9: If all of the above seems too simple, revert to Karl Wiegers (First Things First), Alan Davis (The Art of Requirements Triage), Don Firesmith (Prioritizing Requirements), Lena Karlsson (An Experiment on Exhaustive Pair-Wise Comparisons versus Planning Game Partitioning), or QFD.

R10: Read Mike Cohn (Agile Estimating and Planning).
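
To make R1-R3 concrete, here is a minimal Python sketch. The feature names, ranks and weights are made up for illustration (none of this is from the original post). It shows how reading the biggest weighted rank total as "most important" inverts the intended order, and how reversing ranks into scores, as R3 demands, fixes it.

    # Hypothetical data; ranks follow E3 (1 = most important to that stakeholder).
    features = ["login", "reports", "export"]
    ranks = {                                    # stakeholder -> feature -> rank
        "customer":    {"login": 1, "reports": 2, "export": 3},
        "development": {"login": 3, "reports": 2, "export": 1},
    }
    weights = {"customer": 3, "development": 1}  # bigger = more important (E2)

    # R2 (the mistake): weight * rank, with the biggest total read as "most important".
    # The big weight inflates the rank numbers, so the key stakeholder's favourites
    # end up at the bottom of the list.
    wrong = {f: sum(weights[s] * ranks[s][f] for s in ranks) for f in features}

    # R3 (the fix): reverse ranks into scores first (large = important), then weight.
    n = len(features)
    right = {f: sum(weights[s] * (n + 1 - ranks[s][f]) for s in ranks) for f in features}

    print(sorted(features, key=wrong.get, reverse=True))  # ['export', 'reports', 'login'] - customer's favourite last
    print(sorted(features, key=right.get, reverse=True))  # ['login', 'reports', 'export'] - customer's favourite first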

Thursday, April 10, 2008

Calculate Process Efficiency

Name: Calculate Process Efficiency
Type: Process
Status: final
Version: 2008-04-10

Gist: to find out where in a process you can improve efficiency with respect to the value added for the customer
Sources: http://www.shmula.com/458/the-hidden-factory-would-the-customer-pay-for-that

Process DEFINED AS a systematic chain of activities with a customer observable outcome.
Note: this may be a use case, a scenario used to plan the user experience for a product, a company's process chart ...

Process efficiency DEFINED AS the time spent on value-adding activities in a process divided by the time spent on all activities, in seconds, minutes, hours, days ...
Note: You could also measure it in money, or people, or other resources.

S1: Plot the process, maybe using a UML activity diagram

S2: For each activity in the diagram, decide if it's either
- adding value to the customer,
- not adding value to the customer, but is absolutely necessary, or
- not adding value to the customer

S3: Measure with a stopwatch how long each activity takes.

S4: Sum the measured times for each category of S2

S5: Divide the sum for the 'adding value' category by the total over all categories from S4; that's your Process Efficiency.

Note: Most likely you will come up with a number smaller than 1, because of the 'not adding value' activities that are necessary.
Note: the 'not adding value' categories are cost that you pass on to your customer without adding any value for them. (Ouch!)
Note: Obviously, from this point on, work to eliminate the non-value-adding activities, or at the very least reduce the time they need.
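
For illustration, here is a tiny Python sketch of S2-S5; the activity names, categories and durations are invented.

    # Hypothetical activities: (name, measured duration in minutes (S3), S2 category).
    activities = [
        ("enter order",            5, "value"),
        ("re-key order into ERP",  8, "necessary"),   # no value, but unavoidable
        ("wait for approval",     30, "waste"),
        ("pick goods",            12, "value"),
    ]

    # S4: sum the measured times per category.
    totals = {"value": 0, "necessary": 0, "waste": 0}
    for name, minutes, category in activities:
        totals[category] += minutes

    # S5: value-adding time divided by the time of all activities.
    process_efficiency = totals["value"] / sum(totals.values())
    print(f"Process efficiency: {process_efficiency:.0%}")   # 31% in this example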

Wednesday, March 26, 2008

Earned Value Analysis for Feature-Burndown in Iterative Projects

Name: Earned Value Analysis for Feature-Burndown in Iterative Projects
Type: Process
Status: final
Version: 2008-03-27

Gist: to explain how to obtain information about the Earned Value of an agile project that is using a backlog of either {user stories, features, requirements, change requests}

Source: T. Sulaiman, H. Smits in Measuring Integrated Progress on Agile Software Development Projects. In addition, there's tons of information out there on Earned Value Analysis, check out:
http://en.wikipedia.org/wiki/Earned_value_management
www.earnedvalueanalysis.com/
www.projectlearning.net/pdf/I2.1.pdf

feature FOR THIS POST DEFINED AS a thing to be implemented, could well be a requirement, a user story, a use case. Whatever your backlog consists of.

Entry-Conditions:
E1: Each feature in your backlog needs to have an estimate, best measured in points, gummy bears, etc. (Make the sum over all features your parameter TSPP, total sum of planned feature points.)
Note: It's not important that these estimates represent actual budget, time or similar. They have to be consistent with each other and properly represent the relative relations between the features.
Note: You can adjust the number later but you will have to recalculate things (see S3-S5).
E2: You need to know how many iterations you will undertake. (Make this your parameter TNI, total number of iterations).
Note: Ideally, iterations are all the same length, i.e. timeboxes. You can adjust the number later but you will have to recalculate things (see S3-S5).
E3: You need to know the budget you are about to spend, from now to the end of the last iteration (make this your parameter TBP, total budget planned).
E4: You need an established way of tracking how much budget you have actually spent on each of the iterations.
Note: This can be obtained by maintaining a log of workhours spent on the project by each of its members.

Process:
S1: After every iteration, find out how much budget you have actually used for that iteration. Sum it up over all finished iterations. Make this your ABS, actual budget spent.
S2: After every iteration, find out how many points you have actually implemented across all iterations (= ASI, actual sum implemented). All of a feature's points count as implemented if and only if the feature is in full effect.
Note: this means you have to have a clear understanding of what 'done' means for a feature.

S3: Do some calculations
- Expected Percentage Completed = number of already finished iterations / TNI
- Actual Percentage Completed = ASI / TSPP
- Planned Value = TBP * Expected Percentage Completed
- Earned Value = TBP * Actual Percentage Completed

S4: Find out about your project's cost performance and cost estimate:
- Cost Performance Index = Earned Value / ABS
Note: > 1 means under budget, = 1 means on budget, < 1 means over budget
- Cost Estimate to Completion = TBP / Cost Performance Index

S5: Find out about your project's schedule performance and schedule estimate:
- Schedule Performance Index = Earned Value / Planned Value
Note: > 1 means before schedule, = 1 means on schedule, < 1 means behind schedule
- Schedule Estimate to Completion (in iterations) = TNI / Schedule Performance Index
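
Here is a small Python sketch of S3-S5 that simply codes up the formulas above; all input numbers are invented for the example.

    # Invented example inputs (see E1-E4 and S1-S2).
    TSPP = 200        # total sum of planned feature points (E1)
    TNI = 10          # total number of iterations (E2)
    TBP = 500_000     # total budget planned (E3)

    finished_iterations = 4
    ASI = 70          # actual sum of points implemented so far (S2)
    ABS = 230_000     # actual budget spent so far (S1)

    # S3
    expected_pct_complete = finished_iterations / TNI    # 0.40
    actual_pct_complete = ASI / TSPP                     # 0.35
    planned_value = TBP * expected_pct_complete          # 200,000
    earned_value = TBP * actual_pct_complete             # 175,000

    # S4: cost performance
    cpi = earned_value / ABS                             # ~0.76 -> over budget
    cost_estimate_to_completion = TBP / cpi              # ~657,000

    # S5: schedule performance
    spi = earned_value / planned_value                   # 0.875 -> behind schedule
    schedule_estimate_iterations = TNI / spi             # ~11.4 iterations

    print(f"CPI={cpi:.2f}  cost estimate to completion={cost_estimate_to_completion:,.0f}")
    print(f"SPI={spi:.2f}  schedule estimate={schedule_estimate_iterations:.1f} iterations")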

Tuesday, February 12, 2008

Finding the Right Cycle Time - 10 Rules

Name: Finding the Right Cycle Time - 10 Rules
Type: Rules
Status: final, revised
Version: 2008-08-08

Gist: to explain what influences cycle time in an agile development and how to find the right time for a given situation
Sources: Implementing Lean Software Development, Mary and Tom Poppendieck, Addison Wesley, 2007; own experience; a talk of Mishkin Berteig at Agile 2008

R1: Think about 1 day, 1 week, or 1 month for a start.
Notes:

  • You don't have to predict a useful cycle time for a whole 1-year project in advance. Start with anything that seems useful for the next, say, 25% of the project time, but start now.
  • 1 day may seem ridiculous. It isn't. It is likely that people will agree to do such an 'experiment', you get things rolling, everybody is happy about the exercise, and you can gather data from your team! Also see R9.

R2: Listen to what your team says. Ask why every time someone comes up with an argument against a specific cycle time. Account for the natural resistance to change. Take baby steps towards a change.
Note: there's also a valid argument against baby steps. If you try to reduce cycle time from 3 months to 2 weeks, it's tempting to go for 4 weeks first. However, if you decide to *start* with 2 weeks, you will double your chances of learning something useful.

R3: If your cycles get hectic near the end, you should reduce cycle time to even out the workload.


R4: If your customers are trying to change things while a cycle is underway, if they cannot wait until the next cycle, then your cycle time is too long.


R5: If you cannot release software very often, think about doing smaller cycles (iterations) within larger cycles (increments, releases). However, make sure that you really can't release more often.
Note: In some organisations it is painful to change processes so that certain acceptance tests would consume less time. Or maybe you have a great many users of your product and you can't find an efficient way to train them on new features. But be careful, these arguments may not be the real ones. Keep asking why.

R6: Resist the temptation to do parallel cycles unless you know exactly what you are doing. It's very hard to maintain a strong notion of 'done' if you do it. Do not assume you need it because otherwise the team would not be utilized properly.
Note: Full utilization will slow your team down. Full stop.

R7: Make sure everyone, especially your customer, understands this one "Time, quality, scope - choose any two." -- Greg Larman

R8: If your managers want quarterly reports, surprise them with a telling set of data about your progress, not just one cycle's experience or less. Prove that you are really in control and really getting things done.

R9: If you seem to have a rather gigantic amount of things to be done in your project, go for a shorter cycle time. You will soon have a pretty good sense of your velocity and thus be able to predict what you will be able to achieve.
Note: However, you have to adjust your prediction after the first 2-4 cycles. A very effective way of gathering this data is doing EXTREMELY short iterations (1 day) during the first, say, two weeks. This is EXTREMELY useful for getting a waterfall team to do short iterations. Most people are willing to do an experiment that short. See R1.

R10: If more than 1 or 2 features can't be completed within one cycle, break the features down.

Tuesday, February 05, 2008

Simple Solid Decision Making

Name: Simple Solid Decision Making
Type: Process
Status: final
Version: 2008-02-01

Gist: how to make well-grounded decisions even if you cannot use Tom Gilb's Impact Estimation method for some reason (see Gilb, Competitive Engineering, Elsevier 2005). Decision making is the process of choosing a solution to a problem from a set of solutions, given a set of goals.

Sources: Stefan Brombach (http://www.dreizeit.de/), Kai-Jürgen Lietz (Das Entscheider-Buch, Hanser 2007), Tom Gilb, own thoughts.

S1: Make clear the goals of your endeavour. You need to understand what you want to achieve, what you want to keep, and what you want to avoid. Consider using the Decomposing Goals principles.

S2: If you have many 'small' goals, say more than 12, then consider integrating some of them into a more abstract goal. Example: 'low costs for running the application' and 'don't exceed the budget' could be combined into a 'cost' goal. If you have to be quick and can't afford to do step S1 (please really consider doing it!), take the common three goals {time, scope, quality}.

S3: Compare each goal with every other goal. The more important goal gets a point; if you can't decide which one is more important, give 0.5 points to each. In the end, add 1 point to every goal, so that every goal has at least 1 point. Now you know the ranking of your goals in terms of importance.

S4: Go find at least three different realistic 'solutions', or ways of achieving your goals. You need three or more to have a little room for manoeuvre. Bear in mind: if you only have one solution, there's nothing to decide.

S5: Using a 0..3 scale, iterate through all your solutions and goals and again give points to the solutions: 0 means solution X does not help achieve goal Y at all, 3 means it helps a great deal. Multiply these points by the importance of the goal from step S3.

S6: For each solution, add all products from step S5. The solution with the largest sum wins.
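
A compact Python sketch of S3-S6 follows; the goals, pairwise comparison results, solutions and 0..3 scores are all made up for illustration.

    # S3: hypothetical pairwise comparison results; None means "can't decide" -> 0.5 each.
    goals = ["time", "scope", "quality"]
    pairwise_winner = {
        ("time", "scope"): "time",
        ("time", "quality"): None,
        ("scope", "quality"): "quality",
    }

    importance = {g: 1.0 for g in goals}   # the extra point so every goal has at least 1
    for (a, b), winner in pairwise_winner.items():
        if winner is None:
            importance[a] += 0.5
            importance[b] += 0.5
        else:
            importance[winner] += 1.0

    # S5: rate each solution against each goal on a 0..3 scale (invented numbers).
    impact = {
        "buy":   {"time": 3, "scope": 1, "quality": 2},
        "build": {"time": 1, "scope": 3, "quality": 3},
        "defer": {"time": 2, "scope": 0, "quality": 1},
    }

    # S6: weighted sum per solution; the largest sum wins.
    totals = {s: sum(impact[s][g] * importance[g] for g in goals) for s in impact}
    print(max(totals, key=totals.get), totals)   # 'buy' wins in this example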

Lean Development Principles

Name: Lean Development Principles
Type: Principles
Status: final
Version: 2008-02-26

Sources:
http://www.shmula.com/340/lean-for-software-interview-with-mary-poppendieck
Chris's article on InfoQ

P1: Flow (or low inventory, or just-in-time) (inventory DEFINED AS something partially done)

P2: No Workarounds, problem exposure
Note: if you encounter problems, solve them, don't take detours. This may mean you have to stop what you are doing.

P3: Eliminate Waste (waste DEFINED AS {something without value to the user, work partially done, extra features you don't need right now, stuff that causes delay})

P4: Delayed Commitments (DEFINED AS the idea of scheduling decisions for the last moment when they need to be made, and not making them before that, because that is when you have the most information. -- Preston Smith)
Cite: "Never make a decision early unless you know why." -- Chris Matts

P5: Deliver Fast (DEFINED AS being able to quickly release software. You need proper procedures for that, like {excellent testing procedures, automated testing, stuff that is disjoint from each other (so that as you add features you don't add complexity)})
Note: A rigid process means fewer choices you have to make every day. Fewer choices mean more output. Routine enables innovation where it's most valuable.

Thursday, December 20, 2007

Course of Action

Name: Course of Action
Type: Process
Status: final
Version: 2007-12-20

Gist: to explain how to act upon a problem or complex, dynamic, invisible situation

S1: work out your goals (see Specifying Goals), to be used as guides and as criteria for evaluating your actions (see step S5).
S2: gather information about the situation and about how it has changed over time, in order to build a structural model of the problem
S3: build hypotheses about how the situation has changed and deduce how it will change
S4: plan your actions, decide on your actions, act (don't forget to consider doing nothing)
S5: check the effects (momentary facts AND tendencies!) of your actions and refine your strategy with the new evidence
Note: this is not a linear process. Trace back to any step whenever you feel you should.

Friday, November 09, 2007

Specifying Goals

Name: Specifying Goals
Type: Rules
Status: final
Version: 2007-12-20
Source: various authors, including D. Doering ('Die Logik des Misslingens')
Gist: to describe goals (of an organization, of an IT system, …) in a clear and useful fashion
Note: as always, goals are a kind of requirement, so it doesn't hurt at all to apply these rules to lower levels of requirements, e.g. system requirements.
R1: Separate different goals, even if they seem interrelated, so that you can reference them individually.
R2: Name each goal unambiguously, use a short name.
R3: use the PAM structure:
Purpose
Advantage
Measure
Purpose: Supplies a higher level rationale for the goal. Understanding the why gives you freedom in finding proper solutions.
Advantage: Differentiates the goal from others and from non-goals. So you also know what NOT to achieve.
Measure: Keeps you on track and supports your communication with the higher management. Gives you a clear sense of 'done' and provides satisfaction at the end. Makes it possible to evaluate measures concerning suitability and effectiveness.
R4: Iterate! There's nothing bad about changing goals along the way. Please restate the set of goals, though.
R5: Avoid comparatives, work until you know what they really mean. Try decomposing a comparative goal into a number of specific goals.
R6: Mind whether you specify positive goals (get to x) or negative goals (avoid y). A positive goal gives you fewer opportunities to succeed than a negative goal, which can be good or bad. Be aware of the possibility that someone states a goal negatively because he does not know how it can be stated positively; then you're bound to fail, not within the scope of the stated goals but within the scope of the real goals. Negative goals tend to be too global.
R7: Global and specific goals are good, unclear goals are bad. Try to make a global goal specific, however. If your global goal is insufficient and you cannot come to a specific goal, try stating intermediate goals that maximise the opportunity of success. How? By specifying many subgoals with a good chance of reaching them. Another way to put it: try to set goals such that by adhering to them you will be in a better tactical position towards your strategy.
global: the future state is determined by few (sometimes only one) criteria
specific: the future state is determined by many criteria
unclear: the future state is not determined by a sufficient number of criteria
R8: Be extremely suspicious if you seem to have only one goal. This situation is very rare, and the phenomenon indicates you forgot a number of goals.

Tuesday, September 04, 2007

Do the Right Things

Name: Do the Right Things
Type: Process
Status: final
Version: 2007-09-04

Gist: to provide verification for a feature of a product, for a plan of actions concerning the development of a product, or for the development of real options.
Note: you can apply this process on various levels of problems and or solutions.
Note: This process optimizes the success in the success equation

stakeholder differentiating: a function or property of the product (or an action of a plan) is stakeholder differentiating if it adds to the overall capability to satisfy the main stakeholders
Note: This means you have to have some notion of who the product's main stakeholders are. The stakeholders will accept the product more readily if it helps them reach their goals.
Note: YOU could be the main stakeholder. Then 'differentiating' means options for a wealth of future situations.


mission critical: a function or property of a product (or an action of a plan) is mission critical if the stakeholders will reach one or more of their goals only if the product has it (or the plan caters for it)
Note: This means you have to have some notion of the stakeholders' goals. The stakeholders will accept the product more readily if it helps them reach their goals.
Note: careful, you might presume that some way is the only way.

S1: produce a 2-dimensional space using the above dimensions (mission critical/not mission critical, stakeholder differentiating/ not stakeholder differentiating). This gives you 4 quadrants.
S2: for each feature or sub-feature of the planned product (or action of your plan, or possible step towards your personal goal), ask and answer the following questions: "is it stakeholder differentiating? y/n" and "is it mission critical? y/n".
S3: place it in the quadrants of the space you created in step S1. You may express more nuanced answers than y/n along the axes of your space.
S4: The verification rules for the four quadrants are:

stakeholder differentiating AND mission critical:
Invest and excel. You should do lots of things here; put your main effort here and provide the highest internal quality.
Note: this is the place where the rewarding things to do yourself are. It is nice to harvest the options you have created here.
stakeholder differentiating AND NOT mission critical:
"good enough" will do. use Pareto. do cheap and easy things.
Note: Will you have the option anyway, whether you do a lot now or not?

NOT stakeholder differentiating AND mission critical:
Find a quality partner and use your partner's services; don't do it on your own.
Note: Do not expect that your partner gives you many options. However, it is nice to share success (with a business partner or some other partner). Be honest and thankful.

NOT stakeholder differentiating AND NOT mission critical:
Do nothing about it. Don't waste time and/or money.
Note: You can use the options that lie here anyway.
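
To show how S1-S4 can be applied mechanically, here is a rough Python sketch; the feature names and the y/n answers are invented, and the recommendation strings just paraphrase the four quadrant rules above.

    # Quadrant rules from S4, keyed by (stakeholder differentiating?, mission critical?).
    RULES = {
        (True, True):   "invest and excel: main effort and highest internal quality here",
        (True, False):  "'good enough' will do: cheap and easy, use Pareto",
        (False, True):  "find a quality partner instead of doing it yourself",
        (False, False): "do nothing: don't waste time or money",
    }

    # S2: answer the two questions for each (invented) feature.
    features = {
        "one-click checkout": (True, True),
        "gift wrapping":      (True, False),
        "payment processing": (False, True),
        "animated splash":    (False, False),
    }

    # S3/S4: place each feature in its quadrant and apply the rule.
    for name, answers in features.items():
        print(f"{name:20s} -> {RULES[answers]}")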