Thursday, June 5, 2008

Failing Around Tests

We run a lot of automated tests. In general, these are fairly straightforward, as you would expect. They basically consist of:
  • Setup
  • Perform Test
  • Teardown
(I warned you it was straightforward!)
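For concreteness, here's a minimal sketch of that shape using Python's unittest. The class name, test name, and data are made up for illustration; the point is just the three phases:

    import unittest

    class WidgetCountTest(unittest.TestCase):
        def setUp(self):
            # Setup: build the state the test needs (this data is invented)
            self.widgets = list(range(2876))

        def test_widget_count(self):
            # Perform Test: the actual assertion
            self.assertEqual(len(self.widgets), 2876)

        def tearDown(self):
            # Teardown: put the world back the way we found it
            self.widgets = None

    if __name__ == "__main__":
        unittest.main()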

The problem comes when we start to talk about test failures. Sometimes this, too, is straightforward: the test itself failed. Maybe you expected to get "2876" and you got "2875"; whoops, obvious test failure. Other times it's not so clear-cut. So the question of the day is:

If a test doesn't set up correctly, did it fail?
If a test doesn't tear down correctly, did it fail?

To address the latter of these first: my opinion is that a test that doesn't tear down cleanly failed. If you can't clean up after yourself, then the test has unintended consequences, and those generally translate to bugs. Usually a failing teardown means you aren't asserting on something, and that something is what changed. So this is a failure.
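Here's a rough, self-contained Python sketch of what I mean: the teardown both cleans up and checks that nothing leaked, so a dirty teardown surfaces as a failed test. The in-memory RECORDS dict is a stand-in for whatever shared state your tests really touch (a database, a service, files on disk, etc.):

    import unittest

    # Hypothetical shared state the tests touch
    RECORDS = {}

    class RecordTest(unittest.TestCase):
        def setUp(self):
            RECORDS["under_test"] = 2875

        def test_increment(self):
            RECORDS["under_test"] += 1
            self.assertEqual(RECORDS["under_test"], 2876)

        def tearDown(self):
            # Remove what we created...
            RECORDS.pop("under_test", None)
            # ...and verify nothing else leaked. A leftover key means the test
            # changed something it never asserted on -- report it as a failure.
            if RECORDS:
                self.fail("teardown found leftover state: %r" % RECORDS)

    if __name__ == "__main__":
        unittest.main()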

Tests that don't set up correctly are trickier. A test that didn't set up correctly may just be the victim of some prior test (one that didn't tear down cleanly), but it might also be missing some setup step. So for these I look to see whether the prior test running on that machine had a clean teardown. If everything in the previous test came down cleanly, then you've got a setup failure, and that's a bug.
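A hypothetical triage helper might look something like this. The record format and field names are made up, but the idea is just "walk back to the most recent test on the same machine and check its teardown":

    def classify_setup_failure(results, failed_index):
        """Guess whether a setup failure is this test's own bug or fallout
        from the previous test on the same machine.

        `results` is a hypothetical list of per-test records, oldest first,
        e.g. {"name": "test_foo", "machine": "build-07", "teardown_ok": True}.
        """
        failed = results[failed_index]
        # Walk backwards to the most recent test that ran on the same machine.
        for prior in reversed(results[:failed_index]):
            if prior["machine"] == failed["machine"]:
                if prior["teardown_ok"]:
                    # The machine was left clean, so the setup itself is suspect.
                    return "setup bug in this test"
                return "possible victim of dirty teardown in " + prior["name"]
        # First test on this machine: nothing else to blame.
        return "setup bug in this test"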

So the moral of today's story is this: 

A test doesn't pass unless every part of it passes. 

Just having passing assertions (the test itself) is not good enough. The parts around the test - setup and teardown - have to pass, too.
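Put another way, a runner shouldn't report a pass unless setup, the test body, and teardown all came through clean. A toy sketch (not our real harness):

    def run_one_test(setup, test, teardown):
        """Run a single test and return a list of (phase, error) failures.
        An empty list is the only thing that counts as a pass."""
        failures = []
        try:
            setup()
        except Exception as exc:
            failures.append(("setup", exc))
            return failures  # nothing to test or tear down if setup never happened
        try:
            test()
        except Exception as exc:
            failures.append(("test", exc))
        try:
            teardown()
        except Exception as exc:
            failures.append(("teardown", exc))
        return failures

    # Example: the assertions blow up, so the test fails in the "test" phase.
    print(run_one_test(lambda: None, lambda: 1 / 0, lambda: None))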

How do you handle failures around your automated tests?
