Wednesday, June 17, 2009

Balancing Coverage and Verification

Okay, you've achieved feature completeness. Hooray! Better yet, you've mostly got things integrated. Now it's time to start stabilizing for release: time to fix some bugs, and testers are probably going to find an increased number of them (hey, the code is stable enough to find problems in - that's a good thing). Depending on your product, this phase is going to last somewhere between a few hours and several months.

Things start off simply enough: testers set up a system (or several) with the candidate code running on it and go through it finding stuff. Combine that with the bugs that just didn't quite get fixed during feature development, and the devs are busy knocking down their bug queues. All well and good so far.

Shortly after that, though, we start to have a bit of a testing dilemma. The list of bugs awaiting verification is growing, and the list of product areas you still have to cover is shrinking but definitely still far from zero. What do you do first? Verify the fixes, or go looking at the areas you haven't touched yet?

In other words, how do you balance coverage and verification?

As with most things we talk about, the answer is "it depends" and "probably not 100% of either". There are good arguments on both sides.

Benefits of increasing coverage:
  • If there's a scary issue you haven't yet found because you haven't looked in that area, well, better to find it sooner rather than later.
  • The less that is untested, the lower your risk overall (I tend to believe that minimizing unknowns generally minimizes risk).
Benefits of verifying bugs:
  • It closes out the lifecycle nicely - you really know it's done.
  • If there are any bugs hiding behind the bugs you're verifying, you're likely to catch them here. Verification also deepens your coverage of that area.

Both increasing your breadth of coverage and verifying bugs give you good things, and doing 100% of either is not a recipe for success. So how do we balance them?

Typically, I'm going to start off a release really worrying about coverage, and over time we'll increase the amount of verification we do, finishing with a coverage-oriented pass. The schedule looks something like this:
  • Beginning of the cycle: 80% of the team's testing time is spent on extending coverage. Until we've done one pass through pretty much everything, I'll err on getting that one pass done. 
  • About 35% of the way through: Now that we've hit the basics, we start worrying more about finishing out some of this work - say 50% on verification and 50% on more general coverage increases. 
  • About 60% of the way through: We've flipped and are spending most of our time verifying defects; we do get some incidental increased coverage but it's less of a focus.
  • About 90% of the way through: There's a big flip at the end of the release cycle. By now you should have verified pretty much everything that's coming in, and your find rate is probably pretty darn low, so we're just looking at coverage again. 95% coverage increase, 5% rounding out those last few bugs.
This is really just a general guideline. Based on what we're finding and what's getting fixed, we'll change it up a little. If our reopen rate is pretty high, we'll err more on the side of defect verification. If this release has touched some deep underlying parts of the code, we may err on the side of increasing coverage sooner to pick up side effects. The point is to introduce some predictability - depending on where you are, it's okay that bugs are piling up a little bit in QA, or that you haven't touched an area yet - and to make sure you're setting expectations correctly.
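
If you want to make that concrete, here's a rough sketch of the schedule as a bit of Python. The phase breakpoints come straight from the list above; the 25% coverage figure in the mostly-verification phase and the reopen-rate weighting are illustrative values of mine, just one way to encode the "err toward verification" heuristic:

    def testing_mix(progress, reopen_rate=0.0):
        """Return (coverage, verification) fractions of testing time
        for a given point in the stabilization cycle.

        progress:    0.0 (start of the cycle) through 1.0 (release)
        reopen_rate: fraction of "fixed" bugs getting reopened; a high
                     rate shifts effort toward verification
        """
        # Baseline schedule from the list above: 80/20 early, 50/50
        # around 35%, mostly verification around 60%, 95/5 at the end.
        if progress < 0.35:
            coverage = 0.80
        elif progress < 0.60:
            coverage = 0.50
        elif progress < 0.90:
            coverage = 0.25  # "most of our time verifying" - a stand-in value
        else:
            coverage = 0.95

        # Heuristic tweak: a high reopen rate argues for spending more
        # of your time on defect verification.
        coverage = max(0.0, coverage - 0.5 * reopen_rate)
        return coverage, 1.0 - coverage

    # Example: 40% of the way through the cycle, 10% of fixes reopening.
    cov, ver = testing_mix(0.40, reopen_rate=0.10)
    print("coverage: %.0f%%, verification: %.0f%%" % (cov * 100, ver * 100))
    # -> coverage: 45%, verification: 55%

Even a toy version like this makes the point: the coverage/verification mix is a function of where you are in the cycle, not a constant you pick once.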

In the end, you're going to need both defect verification and some level of coverage before you're through with a release cycle. Think in advance about how you're going to achieve both, and you're much more likely to actually do so in a manner that gives you the feedback you need as early as possible.

1 comment:

  1. I like how you discuss the forces involved. I wish more commentators would do that.

    I think you forgot to mention a very important force: the programmer. I key my testing choices strongly to the need for providing rapid feedback to the guys who make quality happen.

    Bug fixes tend to be a high priority testing item because the devs want to be done with them. If I delay in handling them, they may lose their train of thought.

    I might verify a fix on the dev's own machine, and then reverify it in the formal build later on. Or I might look over the fixes and triage them into ones that must be verified right now and ones that can wait. Of course, there are also fixes that can't be verified yet because we're waiting for some other component to be completed/fixed/configured first.

    Every day I ask myself, am I working on the things my clients need to hear about?

    -- James