Thursday, November 1, 2007

Verifying Automated Tests

Let's say you've reached near-nirvana. Not only do you have tests defined, you've also defined which ones should be automated. Then you've gotten developer buy-in, and now every feature you receive comes with an implementation of the tests as you've defined them. There's just one more question....

How do you verify automated tests?

There are several things you need to do before you can trust that an automated test is sufficient:
  1. Define the tests before they're written. This way you know that the developer is working from a good spec. Don't expect the person writing the code to also identify the tests. Having more than one person involved will give you better tests because you won't be at the mercy of a single person's assumptions.
  2. Verify that the tests run. The first step once you receive the tests is to prove that they all run and that they pass. This gives you a baseline showing that the tests should pass; it's a simple sanity check. If the tests don't all pass, go back to the developer. There's either an error in the code or an error in the assumptions behind the tests.
  3. Verify that all defined tests are implemented. This is, in the end, a code review. Every test defined should be implemented completely. Sometimes certain tests or assertions are difficult or time-consuming, but they shouldn't be ignored. Check the setup, the logic, and the assertions in each test. Also, check that the test data is complete and exercises the code.
  4. Test the feature manually. As a sanity check for the tester, use the feature manually, as your users will. This will help catch anything that the pre-defined tests missed. After all, the automated tests are only as good as your original specification. So test that original specification by using the feature. Often you'll find you missed one or more tests. Occasionally you'll find that you have duplicate or extraneous tests.
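When reviewing in step 3, it helps to have a mental model of what "setup, logic, and assertions" look like in a single implemented test. Here's a minimal sketch in pytest style; the feature (`apply_discount`) and the spec it tests are hypothetical examples, not from any particular project:

```python
# Hypothetical spec entry: "Orders over $100 get a 10% discount;
# orders at or below $100 are unchanged."

def apply_discount(total):
    """Feature under test (assumed example): apply a volume discount."""
    if total > 100:
        return round(total * 0.9, 2)
    return total

def test_discount_applied_above_threshold():
    # Setup: an order above the threshold.
    total = 150.00
    # Logic: exercise the feature.
    discounted = apply_discount(total)
    # Assertion: matches the expectation defined in the spec.
    assert discounted == 135.00

def test_no_discount_at_threshold():
    # Boundary data: exactly $100 must not be discounted.
    # A review should confirm edge cases like this are covered,
    # not just the happy path.
    assert apply_discount(100.00) == 100.00
```

A code review of the tests checks each of these pieces against the defined test: does the setup match the specified preconditions, does the logic actually invoke the feature, and do the assertions check the specified outcome with data that exercises the boundaries?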
That you have automated tests to verify is great. Just make sure that you don't trust the green bar* until you know you can trust the tests it's running.

* Disclaimer: Not all test frameworks have a green bar for passing tests, but it makes a good metaphor. No complaining; I fully respect your alternate green bar-like passing indicators.
