Thursday, February 25, 2010

Indirect Tests

It's good to test things.

Usually, that means we write tests for things: unit tests, system tests, even tests that test our test code. We produce a lot of tests. Sometimes, though, an explicit test might not be necessary. When you go to write a test, consider whether you already have an indirect test.

For example, we have a system at work by which we can install one of several different operating systems onto a test machine. You simply set the OS you want, run a script, and in about 5-10 minutes the machine shows up, fully configured with the new OS. It's code that does this, so we should have a test, right? Well, sure. We kind of already have a test, though. We have a utility called a migrator. It monitors the lab, and when there are fewer than 10 available machines of any given OS, it automatically reserves a machine and moves it to that OS. The migrator uses the OS installation code, and in doing so it tests that code. If the OS installation code breaks, the migrator starts failing (and emails and Jabbers its distress to the entire dev team).

The migrator is an indirect test for the OS installation code.
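
In spirit, the migrator loop looks something like this (a minimal sketch in Python; every name here is a hypothetical stand-in for our internal tools, not the real thing):

```python
MIN_AVAILABLE = 10  # migrate when a pool drops below this many machines


def install_os(machine, os_name):
    """Stand-in for the OS installation code under (indirect) test."""
    print(f"Installing {os_name} on {machine}...")


def available_machines(os_name):
    """Stand-in for a lab inventory query."""
    return ["machine-01", "machine-02"]  # pretend this pool is running low


def reserve_machine():
    """Stand-in for grabbing an idle machine from the lab."""
    return "machine-42"


def alert(message):
    """Stand-in for the email/Jabber distress call to the dev team."""
    print(f"ALERT: {message}")


def migrate_once(os_names):
    for os_name in os_names:
        if len(available_machines(os_name)) < MIN_AVAILABLE:
            machine = reserve_machine()
            try:
                install_os(machine, os_name)
            except Exception as exc:
                # A failure here is the indirect test "failing":
                # quick, loud, and traceable back to the install code.
                alert(f"Install of {os_name} on {machine} failed: {exc}")


if __name__ == "__main__":
    migrate_once(["linux-a", "linux-b", "windows-x"])
```

The key property is that the migrator calls the very same install code that humans run directly, so breakage there can't hide.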

So, do we still need a test? Maybe, maybe not.

An indirect test is good enough when:
  • The indirect test uses the piece of code under test explicitly and directly
  • The failure of the indirect test is noticeable and traceable (that is, the "test" fails quickly and loudly)
  • The consequences of failure are not catastrophic
An indirect test is insufficient when:
  • A direct test would be much quicker to run and the code changes fairly frequently (in other words, a dev should be able to run the test before checkin, and if the indirect test doesn't give that kind of feedback, it isn't sufficient)
  • The indirect test is difficult to run or runs intermittently
  • The indirect test does not provide the desired level of coverage
More tests aren't always better. In a situation where indirect tests exist, evaluate whether they give you the feedback and coverage you want before you put the effort and time into a direct test.
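
As a toy illustration, here's one way to squash those two checklists into code (a sketch in Python; the class and field names are mine, not from any real tool):

```python
from dataclasses import dataclass


@dataclass
class IndirectTest:
    uses_code_directly: bool       # exercises the code under test, not a copy
    fails_fast_and_loud: bool      # breakage is noticeable and traceable
    failure_not_catastrophic: bool
    pre_checkin_feedback: bool     # a dev could run it before checkin
    runs_reliably: bool            # easy to run, not intermittent
    covers_enough: bool            # provides the desired level of coverage


def is_sufficient(test: IndirectTest) -> bool:
    return all(vars(test).values())


# The migrator, scored against the checklist (my reading, not gospel):
migrator = IndirectTest(
    uses_code_directly=True,
    fails_fast_and_loud=True,
    failure_not_catastrophic=True,
    pre_checkin_feedback=False,  # it runs on the lab's schedule, not the dev's
    runs_reliably=True,
    covers_enough=True,
)
print(is_sufficient(migrator))  # False: the slow feedback loop is the gap
```

Scored this way, the migrator's weak spot is the feedback loop, which is exactly the "maybe, maybe not" above.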

3 comments:

  1. There are also tests that test installing the different OSes.

    I guess a cute way to summarize what you were saying is that a test is only as useful as how frequently it runs (compared to changes).

  2. Assar, thanks for writing that migrator by the way!

    And yes, basically if we're testing something as frequently as it changes, then it's a sufficient test, even if it's not named test_XX or XXTest.

  3. This is good, but I'd go a couple of steps further.

    First, to me, a test is not something that you write. A test is a question that you have in your mind, and a test script is one means of answering that question. That is: the test script is not the test, and the test script is not what finds the bug. As Pradeep Soundararajan once elegantly put it, "the tester finds the bug, and the test plays a role in finding the bug," to which I would add, "and the test script plays a role in the test."

    On the subject of that question that you have in your mind, there's a timing issue. Sometimes the question is prospective: "I want to see what happens when we do THIS, so let's do THIS." Sometimes the question occurs to you in the middle of the test: "I'm running a test that does THIS, and I'm suddenly realizing that while the system is doing THIS, it's also doing THAT." THAT may or may not represent a problem, but problem or not, you're still testing and getting coverage of the question answered by THAT. There's also the retrospective case: you ran THIS test a week ago, and only now realize you can frame THAT test as having already happened. I bring this up because I think the latter two cases are examples of what you're calling indirect tests. Incidental tests provide incidental (or accidental) coverage of a particular test idea.
