Tuesday, February 3, 2009

Test Estimates

Usually when I work with estimates, I'm very careful to include estimates for the entire effort: design, implementation, testing, bugfixing, shipping. After all, just part of that is no good to our customer; only the whole feature is worthwhile, and ultimately an estimate is an attempt to answer the question "When will our customer be able to have and use feature X?".

Within that, however, I'm ultimately responsible for testing. I'm not going to implement the feature, I don't run the team that designs it, and no one in their right mind would put me in charge of the marketing launch! So how do we do test estimates? How do we estimate the part that we are directly responsible for?

There are a number of things to consider:
  • How we will approach this feature or change
  • How long the actual tests will take
  • How likely we are to find issues and how much of our testing they will delay
  • How much testing we're likely to have to redo
  • The level of acceptable risk to the company for this feature
  • Other pressures, in particular around dates
Let's break each of these down a little bit:

How we will approach this feature
We have to design our tests for a feature. Will we need a new tool? Are we doing exploratory testing? Is this a regulated API with a compliance program already provided? Is there a spec we have to meet or is this something with a little bit of give in the requirements?

Here you're looking, at a relatively coarse-grained level, for large items and long lead times. New tools, new techniques, or major changes to your overall test strategy will extend your estimate.

How long the actual tests will take
At some point you're going to sit down and actually run your tests. How long will this take? Is this an installer that takes 20 minutes to run? How many missions do you have, and how long does each take? If having a full system is a test prerequisite, how long will it take you to fill a system?

This is where you add up your test design and see how long just the doing will take.
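To make that concrete, here's a minimal sketch in Python of "adding up the doing". The test names and durations below are invented for illustration; the point is simply that raw execution time is an inventory-and-sum exercise:

    # Hypothetical test inventory: (test name, hours per run).
    tests = [
        ("installer run", 0.33),            # about 20 minutes
        ("fill system to capacity", 8.0),   # long prerequisite
        ("exploratory mission: quotas", 1.5),
        ("exploratory mission: upgrade", 2.0),
    ]

    # Raw execution time is just the sum of one pass through everything.
    raw_execution_hours = sum(hours for _, hours in tests)
    print(f"One full test pass: {raw_execution_hours:.1f} hours")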

How likely we are to find issues and how much of our testing they will delay
Here's where things get fuzzy. Are you going to find problems? Probably. Will some of those problems prevent you from doing tests you'd like to do? Probably. How many problems, and how much delay? Is development going to fix the problems immediately, or will there be a delay before dev gets to them?

This is very organization dependent. Sometimes we simply don't include this in the test estimate at all (it goes into the implementation estimate). Other times we take a stab at an estimate for it. If you're including it in your test estimate, I'd strongly suggest calling it out as a separate line item and talking to dev about it before you present your estimate. You're likely to get a lot of pushback in this area, either due to a belief that this time the software will be better/less buggy, or because they think you've underestimated how quickly development will react to bug reports. Let history be your guide here as much as possible; it's a lot harder for reasonable people to push back when faced with data.
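If you do take that stab, one simple model - and it is only a model, with every number below invented rather than taken from a real project - is to multiply an expected bug count by the fraction that block testing and by your historical fix turnaround:

    # Hypothetical history-based figures; substitute your own data.
    expected_bugs = 12           # bugs typically found in a feature this size
    blocking_fraction = 0.25     # fraction of bugs that block further testing
    fix_turnaround_hours = 16.0  # historical hours from report to fix in hand

    # Assumes blocking bugs stall you one at a time, which is pessimistic
    # but easy to defend when the inputs come from historical data.
    expected_delay_hours = expected_bugs * blocking_fraction * fix_turnaround_hours
    print(f"Expected delay waiting on fixes: {expected_delay_hours:.0f} hours")

Presented as its own line item, this is exactly the number to walk through with dev before the estimate goes out.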

How much testing we're likely to have to redo
This comes directly from the point above. Since the software is likely to change, some of our testing is likely to have to be redone. However much that is, you have to add that one-hour (or one-day, or one-week) test in again.
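One way to express this, again with invented numbers: take whatever fraction of your testing history says gets repeated after fixes land, and add that fraction of the raw execution time back in:

    # Hypothetical rework factor: the fraction of testing we expect
    # to repeat after fixes land, based on past releases.
    raw_execution_hours = 11.8
    rework_fraction = 0.4

    rework_hours = raw_execution_hours * rework_fraction
    print(f"Rework to add back in: {rework_hours:.1f} hours")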

The level of acceptable risk to the company for this feature
Testing can, of course, go on pretty much forever. At some point the benefits of shipping outweigh the risk that something will go wrong. To provide a good test estimate you need some knowledge of that point. For example, it's no good including time to check every code comment for spelling errors if you never license your source code, so the risk that such a defect will matter to a customer is very low. At the same time, if you're testing the space shuttle, you probably don't want to skip a whole lot of tests.

Other pressures, in particular around dates
It's not uncommon to find that a feature has been promised to a customer on a date, and that date is essentially immovable; maybe it's due to revenue recognition, or some compliance date, or whatever. In these situations, go ahead and do a test estimate, but if it comes out longer than the fixed date allows, well, then you've got a dilemma. In extreme cases, when the date doesn't move and all preceding items are either already done or also immovable, consider simply not providing an estimate. After all, if everything else is fixed - inputs, ship date, and the thing to be shipped - then risk is the only moving variable. In these cases, it's better to just note the situation and start testing.

Tying it all together
In the end a test estimate is just part of an overall feature estimate. A number of the variables are very company specific; unfortunately, there simply isn't a universal formula for a test estimate. The main point is consistency: make sure your estimates account for the same variables every time, and then treat your task as getting more and more accurate at hitting your estimates. After all, what goes in doesn't matter nearly as much as whether the outcome is consistent and accurate.
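That said, if you want the line items above in one place, a sketch like the following keeps the bookkeeping consistent from estimate to estimate. The function name, the line items, and the buffer are all illustrative, not a universal formula:

    # A consistent, company-specific roll-up of the line items above.
    def test_estimate(design_hours, execution_hours,
                      delay_hours, rework_hours, risk_buffer=1.1):
        """Sum the line items, padded by a buffer that reflects how much
        risk is acceptable for this feature (1.0 = no buffer)."""
        return (design_hours + execution_hours
                + delay_hours + rework_hours) * risk_buffer

    total = test_estimate(design_hours=16, execution_hours=11.8,
                          delay_hours=48, rework_hours=4.7)
    print(f"Test estimate: {total:.0f} hours")

What matters isn't these particular terms; it's that the same terms show up, in the same way, in every estimate you produce.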
