Monday, January 4, 2010

How Many Bugs Will We Find?

When we do an iteration or a release or a sprint or a [insert unit of work here], it generally goes something like this:
  • Figure out what you're going to do and commit to it
  • Start building it
  • Start testing it
  • Find an urgent bug in the previous release and squeeze the fix in
  • Finish "first implementation"
  • [scramble to fix issues found, finish implementation, etc]
  • Done
  • Repeat
I'll note that most of the books you'll read on development methodologies skip one or two of those steps, in particular the defect handling and near-the-end juggling steps. Life is generally less orderly than the books.

After a few tries at this, someone generally comes to engineering and says something like, "I've noticed we have a crunch at the end and we're not really finishing what we signed up for because we're having to deal with issues, and testers are saying that with features rolling off the line right up until the end of the sprint we're going to keep having integration issues after the iteration is over. What do y'all think we should do about it?" After talking for a while, the answer generally comes down to one of two things:
  1. Leave a bit of slack in each iteration and use it to fix issues found in the field or during the iteration. Generally this slack falls at the end of the iteration.
  2. Create inter-iterations or some other slack period between iterations for resolving issues, whether found in the field or in the iteration. For example, an iteration might be 12 days long (2.5 weeks) and leave 3 days "between iterations" for resolving issues.
The basic idea in either case is to leave some room in the schedule for handling the issues that will inevitably come up. The next question, of course, will be:

How many bugs are we going to find?

This is not an easy question. Keep in mind we're in planning and scheduling mode, so the real question is how much time the team is going to need to allocate to bug fixing. After all, finding 5 bugs could mean 15 minutes of work, or it could mean 15 days of work, depending on what the bugs are. So let's start thinking in terms of time. How much time should the team plan to spend on bugs?

There are three major inputs to this equation:
  • historical data
  • contents of the iteration
  • tolerance for risk of things slipping out of the iteration
Historical data is your most valuable way of keeping yourself honest. This is what you look at to know how well you - your team, your testers, your management - will actually do in a given scenario. Look at past iterations and ask yourself how much time you spent resolving bugs. Break your iterations into two types: those right after a release hits the field, and those with no major new field work. You'll probably find your time spent on field issues is higher right after a release, so these two numbers might be quite different. This is your baseline number, and you want it in man days. As a rough baseline, this is probably on the order of 10 - 20% of your total man days (at least, in my experience).
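
To make the baseline concrete, here's a minimal sketch of the arithmetic in Python, using made-up numbers (a 6-person team, a 12-day iteration, and a 15% historical rate) - plug in your own team's history instead:

    # Rough baseline bug-fix budget from historical data.
    # All numbers here are hypothetical; use your own team's past iterations.
    team_size = 6            # people on the iteration
    iteration_days = 12      # working days in the iteration
    historical_rate = 0.15   # fraction of man days spent on bugs, from past data

    total_man_days = team_size * iteration_days            # 72 man days
    baseline_bug_days = total_man_days * historical_rate   # 10.8 man days

    print(f"Total capacity: {total_man_days} man days")
    print(f"Baseline bug-fix budget: {baseline_bug_days:.1f} man days")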

The contents of the iteration will help identify any unusual characteristics of the iteration. A major new feature or work in a problem area will increase the number of issues (and possibly the time to fix them). An iteration that's fairly safe may require less budgeted bug fix time. Generally this will change your baseline number by up to 25%. You can do this by function points or task hours, but in the end also listen to your gut. That "eek!" you hear is your gut saying, "bug time needed here!". Listen to it.

Lastly, consider the "slippability" of the iteration. Sometimes it's okay if things change in the middle of an iteration. Sometimes it's okay if bugs wait. If that's the culture or the iteration you have, that's fine. Schedule less time for defect handling. Other times there are major things riding on hitting an iteration end, no matter what. If your culture is that conservative, or if revenue depends on hitting the targets you say you're going to hit, then schedule more time for defect handling during the release, because the alternative - not making it - is less acceptable.
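
Putting the three inputs together, here's one hedged sketch of how the adjustment might be computed; the content and risk multipliers are illustrative assumptions, not numbers from any particular methodology:

    # Adjust the historical baseline for iteration contents and risk tolerance.
    # The multipliers are illustrative guesses - tune them to your own situation.
    baseline_bug_days = 10.8   # from your historical data (see the sketch above)

    content_factor = 1.25      # risky new feature or problem area: up to +25%
                               # (a "safe" iteration might use less than 1.0)
    risk_factor = 1.1          # hard deadline or revenue on the line: pad more
                               # (1.0 or less if it's okay for bugs to wait)

    bug_fix_budget = baseline_bug_days * content_factor * risk_factor
    print(f"Planned bug-fix budget: {bug_fix_budget:.1f} man days")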

Much as we'd all like to have perfect code that does everything that the customer wants, there will be defects and we have to handle them. How you handle them is up to you, but better to think about that before you're in the thick of it.
