Monday, September 29, 2008

Evaluating Test Code

One of the projects I'm working on is "auditing and evaluating" a set of existing test code. Basically, they want me to walk through the code and put together a list of items that are done wrong, areas that were missed, good practices to continue, etc. In short, it's a perfectly normal project.

So how do we approach a test code evaluation?

There are a few kinds of things to look for here:
  • Things that are flat out wrong. These just don't work. Extensive use of timeouts is a common example.
  • Things that are unmaintainable. These work now, but won't scale as the project continues. Unscalable infrastructure code is a huge culprit here, as is code written so specifically that it can't be adapted later.
  • Poor coverage. No matter how well-written your tests, if you're not testing enough, you're not done.
  • Stylistic things. Consistency of code layout, comment formats, etc. are big here. This is where things get a little more sensitive, since stylistic choices aren't usually wrong so much as a matter of picking one convention and sticking to it.
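To make the first category concrete, here's a sketch of the timeout smell and one way around it. The names (`wait_with_sleep`, `wait_until`) are illustrative, not from any particular codebase:

```ruby
require "timeout"

# Brittle: a hard-coded sleep assumes the work always finishes in
# exactly this long -- it wastes time when things are fast and
# fails anyway when things are slow.
def wait_with_sleep(seconds)
  sleep seconds
  yield
end

# Better: poll for the actual condition, with an upper bound so a
# genuinely broken test still fails instead of hanging.
def wait_until(limit = 5, interval = 0.1)
  Timeout.timeout(limit) do
    sleep interval until yield
    true
  end
rescue Timeout::Error
  false
end
```

A test can then say `wait_until(2) { record_saved? }` and pass as soon as the condition holds, rather than sleeping for a fixed worst-case duration every run.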
Obviously, there's a lot of depth to each of these. Leaving it at this level for the moment, though, there are land mines to watch for:
  • Make sure to distinguish between wrong and non-ideal. Both are worth reporting, but they shouldn't be presented with the same urgency.
  • Don't forget to consider changing standards. For example, two years ago Rails testing was all fixtures, then we went to foxy fixtures, and now many people are avoiding fixtures entirely. If you're not current, don't take this kind of project.
  • Style matters. It's easy to say, "eh, whatever" about naming, spacing, etc., but it's a huge factor for maintaining your code over time.
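The fixtures point above is worth illustrating. Here's a hedged sketch of the same test data expressed both ways; the names and the factory helper are illustrative, not a real library API:

```ruby
require "yaml"

# Old style: shared fixture data, defined in one YAML file far from
# the tests that depend on it.
FIXTURES = YAML.load(<<~YAML)
  active_user:
    name: Alice
    admin: true
YAML

User = Struct.new(:name, :admin, keyword_init: true)

# Newer style: each test builds exactly the object it needs, inline,
# overriding only the attributes that matter for that test.
def build_user(overrides = {})
  User.new({ name: "Alice", admin: false }.merge(overrides))
end
```

The fixture version couples every test to one global data set; the builder version keeps the relevant data visible in the test itself. Neither is "wrong," which is exactly why an evaluation has to note which convention the team has standardized on and when.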
These kinds of projects are really useful for checking your thinking about test code and test structure. However, they can also turn very political very quickly, since people have put a lot of effort into this code. So take them and document how to make things better, but do it politely.
