How do you verify automated tests?
There are several things you have to check in order to know that an automated test suite is sufficient. You need to do the following:
- Define the tests before they're written. This way you know the developer is working from a good spec. Don't expect the person writing the code to also identify the tests; having more than one person involved gives you better tests, because you won't be at the mercy of a single person's assumptions. (The first sketch after this list shows what such an up-front test plan can look like.)
- Verify that the tests run. The first step once you receive the tests is to prove that they all run and that they pass: the green bar*. This is a simple sanity check that gives you a baseline before you review anything else. If the tests don't all pass, go back to the developer; there's either an error in the code or an error in the assumptions behind the tests. (The second sketch below automates this check.)
- Verify that all defined tests are implemented. This is, in the end, a code review. Every test that was defined should be implemented completely; some tests or assertions are difficult or time-consuming to write, but that's no reason to drop them. Check the setup, the logic, and the assertions in each test, and check that the test data is complete and actually exercises the code. (The third sketch below handles the bookkeeping half of this review.)
- Test the feature manually. As a sanity check for the tester, use the feature the way your users will. This helps catch anything the pre-defined tests missed; after all, the automated tests are only as good as the original specification, so exercise that specification by hand. Often you'll find you missed one or more tests. Occasionally you'll find duplicate or extraneous tests.
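Here's a minimal sketch of what the first item can look like in practice, assuming the tests are written in Python. The feature (a hypothetical apply_discount function), the names, and every value below are illustrative assumptions, not a real spec; the point is only that the cases are written down, by someone other than the developer, before any test code exists.

```python
# An up-front test plan: case names mapped to the behavior each one must prove.
# apply_discount() and all the values are hypothetical, for illustration only.
TEST_PLAN = {
    "test_discount_applied_to_gold_customers":
        "apply_discount(100.00, tier='gold') returns 90.00",
    "test_basic_customers_pay_full_price":
        "apply_discount(100.00, tier='basic') returns 100.00",
    "test_zero_total_never_goes_negative":
        "apply_discount(0.00, tier='gold') returns 0.00",
}

for name, behavior in TEST_PLAN.items():
    print(f"{name}: {behavior}")
```

The developer's job is to turn each entry into a real test; the tester's job, in the next two items, is to confirm that actually happened.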
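For the second item, a small sketch of the baseline check, assuming a pytest suite that lives in a tests/ directory (both are assumptions; adjust to however the tests are delivered):

```python
# Run the delivered suite exactly as CI would and insist on a clean exit
# before reviewing anything else. The tests/ path is an assumption.
import subprocess
import sys

result = subprocess.run([sys.executable, "-m", "pytest", "tests/", "-q"])

if result.returncode != 0:
    # Either the code is wrong or the assumptions behind the tests are;
    # either way, send it back to the developer before going further.
    sys.exit("No baseline: the suite does not pass as delivered.")

print("Baseline established: every test runs and passes.")
```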
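For the third item, a rough sketch of the bookkeeping half of the review: diff the names in the agreed plan against the test functions that actually exist in the delivered module. The test_pricing module name and the plan entries are the same hypothetical ones as above, and this only catches missing or extra tests; reading each test's setup, logic, assertions, and data is still a human job.

```python
# Cross-check: which planned tests were never written, and which written
# tests were never planned? test_pricing is a hypothetical module name.
import importlib

TEST_PLAN = {
    "test_discount_applied_to_gold_customers",
    "test_basic_customers_pay_full_price",
    "test_zero_total_never_goes_negative",
}

module = importlib.import_module("test_pricing")
implemented = {name for name in dir(module) if name.startswith("test_")}

for name in sorted(TEST_PLAN - implemented):
    print("defined but never written:", name)
for name in sorted(implemented - TEST_PLAN):
    print("written but never defined (worth a second look):", name)
```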
* Disclaimer: Not all test frameworks have a green bar for passing tests, but it makes a good metaphor. No complaining; I fully respect your alternate green bar-like passing indicators.