Wednesday, July 9, 2008

Problems in the Small

Up front disclaimer: this is just an idea. I haven't actually tried this yet.

We've been talking a lot at work about extended tests. The idea is that you get close to a customer's environment (or a potential customer's environment), and then the wacky timing issues, edge cases, and other nefarious bugs will start to show up. The problem with this is twofold: (1) it takes a long time and a lot of resources; and (2) it can't tell you that a thing works. It can only tell you that a given situation is either properly handled or hasn't occurred yet.

What if we looked at the problem the other way around? Basically, we're trying to simulate a large environment with the idea that we'll eventually hit the rough edges of the code. What if, instead, we looked at the problem in the small? Take one tiny area of code, and spend a lot of time putting random things around it.

An example might be useful.

Let's say we have a system that logs data. The client writes a data stream to it, and internally, the system stores files in a given format. At some point in the lifecycle of the system, we have an upgrade that changes the format of the files. This is a fairly standard thing, and many of the unit tests are pretty obvious (a couple are sketched after the list):
  • add file after upgrade
  • read file added before upgrade
  • modify file added before upgrade
  • modify file modified before upgrade
  • delete file added before upgrade
  • delete file added after upgrade
  • modify file added after upgrade
  • etc...
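To make that concrete, here's a rough sketch of a couple of those tests. FileStore, its methods, and the format_version field are all made up for illustration; the real system's internals will look different, but the shape of the tests is the same.

```python
# Toy stand-in for the file store, plus two of the "obvious" upgrade tests.
# Everything here is hypothetical and only meant to show the pattern.
import unittest


class FileStore:
    """Minimal in-memory stand-in: maps names to (format_version, data)."""

    def __init__(self):
        self.format_version = 1
        self.files = {}

    def add(self, name, data):
        self.files[name] = (self.format_version, data)

    def read(self, name):
        return self.files[name][1]

    def modify(self, name, data):
        self.files[name] = (self.files[name][0], data)

    def delete(self, name):
        del self.files[name]

    def upgrade(self):
        # Migrate every stored file to the new format.
        self.format_version = 2
        self.files = {n: (2, d) for n, (_, d) in self.files.items()}


class UpgradeTests(unittest.TestCase):
    def test_add_file_after_upgrade(self):
        store = FileStore()
        store.upgrade()
        store.add("log-001", b"payload")
        self.assertEqual(store.read("log-001"), b"payload")

    def test_modify_file_added_before_upgrade(self):
        store = FileStore()
        store.add("log-001", b"old")
        store.upgrade()
        store.modify("log-001", b"new")
        self.assertEqual(store.read("log-001"), b"new")


if __name__ == "__main__":
    unittest.main()
```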
But there are edge cases, and we're not going to think of them all. And there are race conditions and other things that simply aren't going to be caught by unit tests.

When we do our functional testing through the external client simulation, we're not even working on files; we're working on the log stream. We're just hoping that the system, through unit tests and random luck, happens to cover all the cases. By moving out a layer, we've abstracted ourselves from the thing we're trying to test - it's like trying to type with chopsticks!

So what if we do a really tiny randomized test? It might look something like this (a rough sketch follows the steps):
  1. do a bunch of operations - adding, deleting, modifying, reading, etc. - at the lowest level at which the system handles files.
  2. upgrade
  3. do a bunch of operations - adding, deleting, modifying, reading, etc. - at that same lowest level.
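Here's a rough sketch of what that could look like, reusing the toy FileStore from the sketch above. The operation mix, the counts, and the single invariant check are arbitrary placeholders; the real version would hammer the actual internal file-handling API, ideally from several threads, with a logged seed so any failure can be replayed.

```python
# Rough sketch of the tiny randomized test: random operations, then an
# upgrade, then more random operations, all at the file-handling level
# rather than through the log-stream client.
import random


def random_operations(store, count, rng):
    for _ in range(count):
        op = rng.choice(["add", "modify", "delete", "read"])
        name = f"file-{rng.randint(0, 20)}"
        if op == "add":
            store.add(name, bytes([rng.randint(0, 255)]))
        elif op == "modify" and name in store.files:
            store.modify(name, bytes([rng.randint(0, 255)]))
        elif op == "delete" and name in store.files:
            store.delete(name)
        elif op == "read" and name in store.files:
            assert store.read(name) is not None


def run_randomized_upgrade_test(seed=0):
    rng = random.Random(seed)  # keep the seed so failures can be replayed
    store = FileStore()
    random_operations(store, 500, rng)   # 1. operations before the upgrade
    store.upgrade()                      # 2. upgrade
    random_operations(store, 500, rng)   # 3. operations after the upgrade
    # Simple invariant: after the upgrade, every surviving file should be
    # in the new format, no matter what happened to it before or after.
    assert all(version == 2 for version, _ in store.files.values())


if __name__ == "__main__":
    run_randomized_upgrade_test(seed=42)
```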
It looks a lot like our big test that "simulates a client environment", but it pushes the randomness deeper into the code. This would be done at the internal file-handling level, instead of the more abstracted logging level. We still shake out the timing issues that a semi-random simulation test is good at finding, but without the layer of abstraction that hides them.

I'm interested to try this. We'll see if it works!
