- ran and passed
- ran and failed
- didn't run
Often, we focus on the tests that run and fail. Those tests describe problems, or potential problems, with the product. We analyze the results, log them in the defect tracking system or backlog, and so on. All of this is useful.
We sometimes fail to differentiate the other two categories, though. If I look at a test run from two months ago and I don't see a test failure, did the test pass? Maybe. Or maybe it didn't run for one reason or another. Either I got information that the software conforms to our expectations in that respect, or I merely think I did; if the test didn't run, I got no information at all.
When we track test results, we need to track the successes as well as the failures. These should be recorded in a test run database, summary log file, or wiki page (whatever you're using for reporting). This will help you more easily answer questions like:
- How long has this test been failing? (In other words, when is the last time this test passed?)
- What kind of coverage are we getting on the sprocket module? (If you have a test and it's not running, you're not really covered.)
- Looking at a failure: if the problem is with the frobbles, the related test SprocketBend_t1 should also be failing. Is it really passing, or did it simply not run?
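To make those questions answerable, "didn't run" has to be recorded explicitly rather than inferred from silence. Here's a minimal sketch of that idea; the names (`Status`, `Result`, `last_passed`, `ran_on`) and the in-memory list standing in for a test run database are all my own assumptions for illustration.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class Status(Enum):
    PASSED = "passed"
    FAILED = "failed"
    NOT_RUN = "not run"  # recorded explicitly, never inferred from a missing row


@dataclass
class Result:
    test: str
    run_date: date
    status: Status


# Hypothetical history for one test; a real team would keep this in a
# database, summary log file, or wiki page.
log = [
    Result("SprocketBend_t1", date(2024, 3, 1), Status.PASSED),
    Result("SprocketBend_t1", date(2024, 4, 1), Status.NOT_RUN),
    Result("SprocketBend_t1", date(2024, 5, 1), Status.FAILED),
]


def last_passed(log, test):
    """When is the last time this test passed? None if it never has."""
    dates = [r.run_date for r in log
             if r.test == test and r.status is Status.PASSED]
    return max(dates) if dates else None


def ran_on(log, test, run_date):
    """Did the test actually run that day, or is 'no failure' an illusion?"""
    return any(r.test == test and r.run_date == run_date
               and r.status is not Status.NOT_RUN
               for r in log)
```

With the `NOT_RUN` rows present, `ran_on(log, "SprocketBend_t1", date(2024, 4, 1))` is `False`: the April run told us nothing, even though it shows no failure. Without them, April would be indistinguishable from a pass.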
Keep recording your failures. Just don't forget to record your successes as well. The green bar has about as much to say as the red bar.