First, some background. As I've mentioned before, I work in an Extreme Programming shop, so when we write a feature, we write tests to expose that feature. Usually these tests are automated*, and the default assumption is that a test will be written in code and run automatically every night or every week. Only if something is incredibly onerous do we start by assuming it will be manual. We also manually test every new feature, looking for usability problems, things the test code missed, and so on.
The particular feature we're discussing here is to "support third-party software WidgetX" (it isn't really WidgetX, although that would be really awesome; the specific software doesn't matter). Now, let's assume for the moment that we use manual exploration to learn enough about WidgetX to create some intelligent tests. There are two paths we could take:
- Manually re-certify WidgetX every release, or
- Write automated tests and run them, probably weekly.
The advantage of automating the tests is that we get feedback more often, as we code, that we haven't broken anything. However, writing the tests costs us money and time, and running them isn't free either.
How long does a test have to run to make the effort of that test automation "worth it"?
In other words, under what circumstances have we recouped our investment? On the benefits side of the automation balance sheet we have:
- the likelihood of regression - pretty high, since XP involves a lot of fearless refactoring
- the importance of the software working
- the time saved by not having QA run the same old tests manually
- not creating an exception to our development process
On the costs side, we have the cost in man-hours to write and maintain the tests, plus the resource overhead of running them.
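To make the balance sheet concrete, here's a rough break-even sketch. Every number in it is hypothetical; the real inputs would be our own estimates of how long the tests take to write, what one manual re-certification pass costs, and the per-run overhead of the automated suite.

```python
import math

def runs_to_break_even(cost_to_automate, cost_per_auto_run, cost_per_manual_run):
    """Smallest number of test runs at which automation becomes cheaper
    than repeating the manual pass. Returns None if it never does."""
    saved_per_run = cost_per_manual_run - cost_per_auto_run
    if saved_per_run <= 0:
        return None  # each automated run costs as much as a manual pass
    return math.ceil(cost_to_automate / saved_per_run)

# Made-up numbers: 40 man-hours to write the tests, 0.5 hours of
# machine/diagnosis overhead per automated run, 8 hours per manual pass.
print(runs_to_break_even(40, 0.5, 8))  # -> 6 runs to recoup the investment
```

With those made-up numbers and a weekly run, we'd break even in about a month and a half. Note that this only counts what's easy to count; the regression-catching and faster-feedback benefits above shift the break-even point earlier, but they're much harder to put a number on.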
I don't know where we're going to end up yet, but we have a framework to figure that out.
* By "these tests are automated" I mean that code is written to set up an environment, perform some action, and assert on the resulting state. This code is then put into our test framework and run automatically every night or every week. Human intervention is required only to diagnose failures.
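As a sketch of what that shape looks like, here's a minimal test in that style. Everything in it is invented for illustration: the class names, the fake WidgetX fixture, and the import API are hypothetical stand-ins, not our real code.

```python
import unittest

# Hypothetical stand-ins for our product and a WidgetX installation.
class FakeWidgetX:
    def sample_document(self):
        return {"pages": ["p1", "p2", "p3"]}

class OurProduct:
    def __init__(self, plugin):
        self.plugin = plugin

    def import_from(self, raw_doc):
        # Pretend import: count the pages in the WidgetX document.
        return len(raw_doc["pages"])

class TestWidgetXSupport(unittest.TestCase):
    def setUp(self):
        # Set up an environment...
        self.widgetx = FakeWidgetX()
        self.app = OurProduct(plugin=self.widgetx)

    def test_import_preserves_page_count(self):
        # ...perform some action...
        page_count = self.app.import_from(self.widgetx.sample_document())
        # ...and assert on the resulting state.
        self.assertEqual(page_count, 3)

if __name__ == "__main__":
    unittest.main()  # the framework would pick this up nightly or weekly
```

A failing assertion here is the only thing that calls a human in to diagnose; a passing run costs no one's time.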