Monday, May 4, 2009

Estimating Verification

One of the things I've been looking at recently is how long it takes us to verify that a defect is fixed. Specifically, I'm trying to put a real estimate on our defect verification work, for a couple of different purposes.

Why Bother Looking?
For any effort greater than zero, I like to first figure out whether it's going to be worth it. Otherwise I'm sure I can come up with something else to do! So, why do we care how long it takes us to verify a defect?

There are a couple of reasons I'm looking at this:
  • Going into a release, we have a target for what defects we're going to fix. How long it takes us to verify them directly affects how long it takes to get the release out the door.
  • All of dev has a goal of getting under 25 bugs across all the queues. This means QA has to verify fixes as dev resolves them, and we have to figure out how much time to allot to this.

Consider Defect Types
So now that we know we're looking, let's figure out what we're looking at. The most obvious thing about the bugs we have to verify is that some of them are going to take far longer than others. Verifying that the domain controller is running Windows 2003 instead of Windows 2000 is quick and easy. Verifying that some obscure race condition is fixed is going to take rather longer.

Unfortunately, that doesn't get me off the hook. I still have to estimate the release testing, including bug verification. Fortunately, I'm doing an overall estimate for "verifying defects", not one for each defect (which would take forever). So, what do we have to consider here?
  • How much can we verify at various development stages? The more bugs we can verify during the development cycle, the fewer we have to verify during the later stages of the release. This will cut down the amount of time needed for release testing overall.
  • How stable are the dependencies? If the installer, for example, doesn't work, it's going to be hard to verify any defects on an installed system. 
  • What percentage are automated test failures? If a lot of your bugs are found by automated tests, it's a good guess they can be verified by automated tests. This often means verification is relatively simple (check that the test passed, basically), but it also means you have to wait for the test to run again before you can verify. This affects your estimate by increasing the duration but reducing the effort required during that duration.
  • How many need install and upgrade testing? I find this easiest to break down by component, and to note that there are X bugs in the system management code that will need to be separately tested. These we have to count twice - once for install and once for upgrade. (There's a rough sketch of rolling all this up just after this list.)
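
To make this concrete, here's a rough sketch of rolling those categories up into a hands-on work number. The bucket names, counts, and per-bug times are made up for illustration - plug in whatever your own bug list actually breaks down into.

# Made-up buckets: name -> (bug count, minutes of hands-on work per bug)
buckets = {
    "automated test failures": (40, 5),    # just confirm the test passes on its next run
    "quick manual checks": (25, 15),
    "complex manual repros": (10, 60),
    "system management (install + upgrade)": (8, 30),
}

total_minutes = 0
for name, (count, minutes_each) in buckets.items():
    # install/upgrade bugs get counted twice - once per scenario
    multiplier = 2 if "install + upgrade" in name else 1
    total_minutes += count * minutes_each * multiplier

print(f"Estimated hands-on verification work: {total_minutes / 60:.1f} hours")
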
Don't Forget the Little Stuff
Presumably you have some variant of a bug verification checklist. Glance over it and see how much overhead is involved in verifying each bug - 5 minutes to read the ticket and 2 minutes to update it, times X tickets, adds up.
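
To put a made-up number on it: with, say, 100 tickets to verify, that's (5 + 2) × 100 = 700 minutes - nearly 12 hours of overhead before you've actually tested anything.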

Consider the Past
While "past results are not indicative of future performance", it's a good guide. Mine your defect tracking system: how long did defects sit in the Ready for QA state on average? Unless you've made major process or system changes, that's probably still pretty close to accurate. For this one, I average all bugs over the previous release and use that as my starting number.

Estimate in Two Parts
There are two parts to this estimate: one is how long it will actually take to do the verification work; the other is how much time must elapse before verification can be complete. You can't ship until both are done. Part of verification is waiting - waiting for systems to have more data in them, waiting for automated tests to run, etc. Count up that time (this is where you use the information you got from your defect tracking system). Then figure out how much actual work is involved; this is where you use the average work time you derived from defect types, plus the per-defect overhead. Add up all that time, too.
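
As a sketch, the bookkeeping is just two running totals - one for calendar time that has to pass, one for hands-on hours. The entries below are invented examples:

# Calendar time that has to pass, in days (if waits can overlap,
# take the longest rather than the sum)
waits_in_days = {
    "wait for the nightly automation run": 1,
    "let the upgraded system accumulate data": 7,
    "wait for the next full install build": 2,
}

# Hands-on effort, in hours
work_in_hours = {
    "manual verifications": 20,
    "install and upgrade passes": 6,
    "checking automated test results": 3,
    "reading and updating tickets": 2,
}

elapsed_days = sum(waits_in_days.values())
work_hours = sum(work_in_hours.values())
print(f"At least {elapsed_days} days must elapse; about {work_hours} hours of work")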

The last step is to take your two numbers - say, 3 weeks elapsed time and 30 hours work time - and reconcile them. To fit 30 hours of work into those 3 weeks, you have to devote 10 hours a week to defect verification, on average. If you can only spend, say, 6 hours a week, then you'll take 5 weeks, and that's your estimate. If you could spend 20 hours a week, well, your estimate is still 3 weeks because of the elapsed time required.
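
Written out as a tiny helper, the reconciliation is just "whichever constraint dominates", using the same example numbers:

import math

def verification_estimate_weeks(elapsed_weeks, work_hours, hours_available_per_week):
    # The estimate is the larger of the calendar time that must pass and the
    # number of weeks needed to fit the hands-on work into your schedule.
    weeks_for_the_work = math.ceil(work_hours / hours_available_per_week)
    return max(elapsed_weeks, weeks_for_the_work)

print(verification_estimate_weeks(3, 30, 6))   # 5 weeks: work time dominates
print(verification_estimate_weeks(3, 30, 20))  # 3 weeks: elapsed time dominates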


Estimates are a little bit of black magic, and you'll be wrong the first time. Basing your estimates on actual data and following the same estimation process every time will make them better and better. Happy estimating!
