Usually, that means we write tests for things. Unit tests, system tests, even tests that test our test code: we produce a lot of tests. Sometimes, though, an explicit test might not be necessary. When you go to write a test, consider whether you already have an indirect test.
For example, we have a system at work for installing one of several different operating systems onto a test machine. You simply set the OS you want, run a script, and the machine shows up in about 5-10 minutes, fully configured with the new OS. It's code that does this, so we should have a test, right? Well, sure. We kind of already have a test, though. We have a utility called a migrator. It monitors the lab, and when there are fewer than 10 available machines of any given OS, it automatically reserves a machine and moves it to that OS. That migrator uses the OS installation code, and in doing so it tests that code. If the OS installation code breaks, the migrator starts failing (and emails and Jabbers its distress to the entire dev team).
The migrator is an indirect test for the OS installation code.
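To make the pattern concrete, here's a minimal sketch of a migrator-style loop. The names and signatures (`run_migrator`, `count_available`, `reserve_machine`, `install_os`) are hypothetical stand-ins, not our actual tooling; the point is that the installation code gets exercised as a side effect of real work, and breakage surfaces quickly and loudly.

```python
import logging
from typing import Callable, Iterable

MIN_AVAILABLE = 10  # migrate when a pool drops below this many free machines


def run_migrator(
    count_available: Callable[[str], int],   # queries lab inventory (hypothetical)
    reserve_machine: Callable[[], str],      # reserves an idle machine (hypothetical)
    install_os: Callable[[str, str], None],  # the OS installation code under indirect test
    os_names: Iterable[str],
) -> None:
    for os_name in os_names:
        if count_available(os_name) >= MIN_AVAILABLE:
            continue
        machine = reserve_machine()
        try:
            # Every migration exercises the install code; if that code
            # breaks, the migrator is the thing that fails.
            install_os(machine, os_name)
        except Exception:
            # Fail quickly and loudly so the breakage is noticeable and
            # traceable -- this is where the email/IM to the dev team goes.
            logging.exception("Install of %s on %s failed", os_name, machine)
            raise
```

Because the migrator runs continuously, a regression in the installation code shows up within one migration cycle rather than waiting for someone to run a dedicated test.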
So, do we still need a test? Maybe, maybe not.
An indirect test is good enough when:
- The indirect test uses the piece of code under test explicitly and directly
- The failure of the indirect test is noticeable and traceable (that is, the "test" fails quickly and loudly)
- The consequences of failure are not catastrophic
An indirect test is insufficient when:
- A direct test would be much quicker to run and the code changes fairly frequently (in other words, if dev ought to run the test before every checkin, and the indirect test is too slow or too heavyweight for that, it's not sufficient)
- The indirect test is difficult to run or only runs intermittently
- The indirect test does not provide the desired level of coverage
More tests aren't always better. Where indirect tests already exist, evaluate whether they give you the feedback and coverage you want before you put the time and effort into a direct test.