Tuesday, May 18, 2010

Check In But Don't Run

When the system under test is a library or other piece of software with no GUI, you find yourself writing a lot of code. If the only means of using the system under test is an API, pretty much any test you do is going to use that API.

You end up with a lot of automated tests, and the problem of "how much should I automate?" is greatly reduced. Whatever you write just has to be hooked into the test running and reporting system, and the barrier to automation is pretty low. It's a lot of fun.

It's also dangerous.

Running tests willy-nilly can give you a huge headache: creating and maintaining tests and test infrastructure, analyzing results, and reporting on them. Not everything you write belongs in your automation suite.

There are three acceptable things to do with test code you write:
  • Check it in as an automated test.
  • Throw it away.
  • Check the code in but don't run it.
The most obvious course of action once you've written a test is to check it in and make it part of your automation suite. This is often the right answer. Do this when the test is well-written, when it is fairly cheap to run, and when it exercises an area of code not already covered by other automated tests.
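As a sketch of what "cheap and worth checking in" might look like, here is a minimal automated test. `parse_version` is an invented stand-in for your library's API, included only so the example is self-contained:

```python
def parse_version(s):
    # Toy stand-in for the real library API under test; in practice you
    # would import the actual function instead of defining it here.
    return tuple(int(part) for part in s.split("."))

def test_version_ordering():
    # Cheap, deterministic, and covers behavior that is easy to get wrong:
    # numeric (not lexical) comparison of version components.
    assert parse_version("1.10.0") > parse_version("1.9.9")

test_version_ordering()
```

A test like this costs almost nothing to run, so hooking it into the suite is an easy call.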

It's also acceptable to throw code away. Do this for quick scripts that aren't well-written (e.g., ones with hard-coded paths and the like). Also consider throwing a test away if it didn't find anything interesting about the code - bug or no bug. And if the code under test is going away or changing, don't worry about keeping the test; go ahead and delete it.

Lastly, there is a class of tests that you need to run, but only occasionally. These are tests that are too expensive (in time or resources) to run in your general test infrastructure. However, they describe important things about the product, and you will need them again in the future. Scaling tests, or performance tests that require particular hardware, often fall into this category. In this case, write the test well and check in the code, but don't hook it up to run in your test infrastructure. Writing these hand-run tests carefully ensures that they behave consistently when you do run them - manually.
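One way to sketch "checked in but not run" with Python's standard `unittest` module is to gate the expensive test behind an explicit opt-in. The environment variable name and the test itself are assumptions, not anything the post prescribes:

```python
import os
import unittest

class ScalingTests(unittest.TestCase):
    # The test lives in the repository, but is skipped unless someone
    # deliberately opts in - e.g. RUN_EXPENSIVE=1 python -m unittest ...
    @unittest.skipUnless(os.environ.get("RUN_EXPENSIVE") == "1",
                         "expensive; run by hand with RUN_EXPENSIVE=1")
    def test_ten_thousand_clients(self):
        # Placeholder body for a hypothetical expensive scaling check.
        self.assertTrue(True)
```

In a default run the test shows up as skipped rather than silently absent, which keeps it visible without making every run pay its cost. (With pytest, a custom marker deselected by default serves the same purpose.)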

Next time you have some test code, consider whether it's an automated test, a throwaway bit of code, or something you'll want again, just not tomorrow.

1 comment:

  1. I find there are dangers with these tests.

    1. They don't actually work when you need to run them: they might (or will) not get updated when infrastructure or functionality changes.
    2. They don't provide you with the feedback that you need because you don't run them often enough.
    If it's important enough to write a test, don't you want the results?

    The best practice I can think of to control those risks is to make the tests you don't run parametrized versions of tests that run all the time. Try with 3 clients in continuous testing, and run the same test with a higher number of clients when you feel like it.
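The commenter's parametrized approach might look like this in `unittest` terms; `run_clients` and the client counts are invented placeholders, and the same opt-in environment variable idea is only one way to select the expensive variant:

```python
import os
import unittest

def run_clients(n):
    # Invented stand-in for exercising the system with n concurrent clients.
    return n > 0

# The cheap configuration runs every time; the expensive one is opt-in.
# Because both share one test body, the always-on run keeps the shared
# code from rotting between occasional expensive runs.
CLIENT_COUNTS = [3]
if os.environ.get("RUN_EXPENSIVE") == "1":
    CLIENT_COUNTS.append(1000)

class ClientScaleTest(unittest.TestCase):
    def test_clients_connect(self):
        for n in CLIENT_COUNTS:
            with self.subTest(clients=n):
                self.assertTrue(run_clients(n))
```

This directly addresses both dangers: the continuous 3-client run catches infrastructure drift, and the expensive run reuses code that is exercised daily.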