Tuesday, June 2, 2009

Verify Or Cover

So here's an interesting release scenario...

You branch for release.
You start testing.
You find a bunch of bugs.
You keep testing as people fix the bugs.

Now you have a choice: you can verify the bugs, or you can continue to expand your test coverage.

Which do you do?

Arguments for increasing coverage:
  • You found bugs in the first stuff you looked at; you have to assume there are major bugs in the stuff you haven't looked at yet. Getting to those areas sooner helps you find those bugs sooner.
  • Bugs tend to be found in clusters. Emphasizing verification just wears deeper ruts in that clustered area - providing less new information.
Arguments for verifying bugs:
  • Fixing bugs may cause new bugs - you need to find the collateral damage, and it helps to find it early, while the code is still fresh in everyone's mind.
  • Verifying bugs lets you finish out a cycle (in this case, the test -> find -> diagnose -> fix -> verify cycle), and that leaves fewer loose ends. It's satisfying for most people involved to complete something like that.

The real answer, of course, is that you don't do either exclusively. You spend some time on verification (and finding collateral damage), and some time expanding your coverage to areas of the product you haven't hit. When you're balancing the two, though, consider where your risks are. If fixes are frequently causing collateral damage, weight toward bug verification. If you haven't hit much of the product yet, or if you have testers sitting idle, weight toward increasing coverage. Take a look at your team and your product, and adjust your testing strategy accordingly.
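To make that weighing a little more concrete, here's a rough sketch in Python of one way to turn those risk signals into a split of tester time. The function, the weights, and the clamping bounds are all invented for illustration - the point is the shape of the tradeoff, not the numbers.

    # Illustrative sketch only: the name, weights, and bounds below are
    # made up for this post, not taken from any real tool or process.
    def split_test_effort(collateral_damage_rate, coverage_fraction, idle_testers=0):
        """Suggest what fraction of test effort to spend verifying fixes
        versus expanding coverage.

        collateral_damage_rate: fraction of fixes so far that broke something else
        coverage_fraction:      fraction of the product already exercised
        idle_testers:           testers with nothing currently queued up
        """
        verify = 0.5                               # start from an even split
        verify += 0.5 * collateral_damage_rate     # risky fixes -> verify more
        verify -= 0.5 * (1.0 - coverage_fraction)  # untested product -> cover more
        verify -= 0.05 * idle_testers              # idle hands are cheap coverage
        verify = min(max(verify, 0.1), 0.9)        # never do either exclusively
        return {"verify": verify, "cover": 1.0 - verify}

    # Example: 30% of fixes have caused collateral damage, half the product
    # is untested, and one tester is idle.
    print(split_test_effort(0.3, 0.5, idle_testers=1))
    # -> {'verify': 0.35, 'cover': 0.65}

In that example the split lands at roughly 35/65 in favor of new coverage, which matches the intuition above: until you've been around most of the product once, the untested areas are usually the bigger risk.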

6 comments:

  1. One question: why are release discussions happening when you haven't achieved much coverage?

  2. I'd be worried that the areas of the product I haven't yet tested might contain bugs important enough to halt a release.

  3. Of course that's a risk. Right up until the day you actually ship, you might find something that stops you from shipping. I should probably explain the use of the word "release" here. We consider ourselves "in release testing" basically from the point we are feature complete until the point at which we actually ship the code. That can sometimes be weeks. So we may have just started testing these new features while we're talking about whether the release date is likely to slip, and about how to prioritize everything we have to do - getting coverage, verifying fixes - to understand whether this set of code meets all our criteria for releasing it to the world.

    Release discussions happen early because we don't have short release cycles (we plan a release to the field for months away, not days or weeks). If, for example, marketing is doing a big push around a release, they'll want (deservedly so!) to know when the release date is several months in advance. So we have to plan a bit further ahead than someone who's doing a smaller feature set with more incremental releases would. But that's a bit of a separate discussion, I suspect.

  4. So if you have lots of time, why can't you plan to test fixes AND test in areas of new coverage?

  5. We do both. It's more a question of order (or at least focus in the immediate future) than it is a question of doing one and not doing the other.

  6. Ah, I see. Then I think I'd choose to get more coverage, on the basis that those untested areas hold an unknown number of bugs. At least in the areas you've covered, you have some understanding of the bugs, and you'd hope there are no big showstoppers left in there.
