Thursday, December 23, 2010

Just How Much Do We Need?

Sometimes when we write a piece of software, it runs for ages and ages and gets used a lot. Other times, we write very "one-off" software: migrations, data cleanup utilities, and the like often fall into this category.

A friend is working at a company where they license television airing data (think a TV guide) from a third party. They massage this data, put it into a database, and then do stuff with it. They recently switched providers for this data and are having to do a few one-time things to get the data clean throughout. For example, there was about a three-day overlap in the provided data, so for three days they have a lot of duplicates (e.g., two database entries that say, "Rehab Wars aired at 8pm on Tuesday"). So they need a one-time job to reconcile these entries, get rid of the duplicates, and update references to this data.
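
Just to make the shape of the problem concrete, sizing up the duplicates might look something like the query below. The table and column names (airings, show_name, air_time) and the dates are made up; the idea is just to count how many show/time pairs got doubled up during the overlap window.

    -- Made-up schema: airings(id, show_name, air_time, provider).
    -- How many (show, air time) pairs appear more than once inside the
    -- three-day overlap window? (Dates are invented.)
    SELECT show_name, air_time, COUNT(*) AS copies
    FROM airings
    WHERE air_time >= '2010-11-01' AND air_time < '2010-11-04'
    GROUP BY show_name, air_time
    HAVING COUNT(*) > 1
    ORDER BY copies DESC;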

So how much testing is needed?

Well, let's see. On the "we need to test a lot!" side:
  • television data has lots of quirks (weird characters, subtitles for shows, overlapping air times)
  • we can't take down production for this, so it's going to have to be done while other data is changing, which means we can't simply restore from a backup if it fails. And undoing it is going to be non-trivial (aka pretty hard).
On the "eh, let's not be dumb about it, but let's not go overboard" side:
  • it's a relatively small amount of data
  • we wrote it so that it simply stops on any error, and we're pretty aggressive about error checking (so the odds of it running amok and hurting things are fairly low; see the sketch after this list)
  • we can do it by hand (aka someone writing reconciliation SQL in the database) in about two days, so inflating the project much beyond that is kind of silly
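
To put the "stops on any error" point in concrete terms: one common way to get that behavior is to run the whole cleanup in a single transaction and raise on any sanity check that fails, so nothing partial gets committed. This is a sketch only - PostgreSQL syntax, invented table names, and not what they actually wrote.

    BEGIN;

    -- ... the actual merge and cleanup statements would go here ...

    -- Sanity check: is any schedule row now pointing at an airing that no longer exists?
    DO $$
    DECLARE
      orphans integer;
    BEGIN
      SELECT COUNT(*) INTO orphans
      FROM schedule_entries s
      WHERE NOT EXISTS (SELECT 1 FROM airings a WHERE a.id = s.airing_id);
      IF orphans > 0 THEN
        -- raising here aborts the transaction, so the COMMIT below rolls back
        RAISE EXCEPTION 'found % dangling schedule entries, aborting', orphans;
      END IF;
    END $$;

    COMMIT;
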
I don't actually know what they decided to do in this case. I do know that they finished the project (presumably successfully).

So what tests would I actually run?

Here are the parameters of the project:
  • This is a single SQL script that will be run on a production database once.
  • The script will fail on any error or data mismatch (that it finds) and is restartable from the fail point. (allegedly, of course - you may want to test for this)
  • The script identifies duplicates by name and air time and merges their records in the database (a sketch of what that kind of merge might look like follows this list).
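
For the sake of discussion, a merge like the one described might look roughly like the following. The schema is invented (an airings table plus a schedule_entries table that references it), UPDATE ... FROM is PostgreSQL syntax, and the real script is theirs, not this.

    -- Collect the duplicate rows, keeping the lowest id per (show, air time).
    CREATE TEMP TABLE dupes AS
    SELECT a.id AS dup_id, k.keep_id
    FROM airings a
    JOIN (SELECT show_name, air_time, MIN(id) AS keep_id
          FROM airings
          GROUP BY show_name, air_time
          HAVING COUNT(*) > 1) k
      ON  a.show_name = k.show_name
      AND a.air_time  = k.air_time
      AND a.id       <> k.keep_id;

    -- Repoint references at the surviving row, then remove the extra copies.
    UPDATE schedule_entries s
    SET airing_id = d.keep_id
    FROM dupes d
    WHERE s.airing_id = d.dup_id;

    DELETE FROM airings WHERE id IN (SELECT dup_id FROM dupes);

Keeping the lowest id per pair is an arbitrary tie-breaker here; the real script presumably has a rule for which provider's row wins.
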
So now it's your turn: what tests would you run in this situation? What would you skip? Why?

I'll tell you tomorrow what I would do.

1 comment:

  1. I'd write something that could swap in a history range from a temp table, and test it pretty thoroughly because that's simple and reusable, including for open vs closed intervals, what happens if the end of Daylight Savings is in my interval or is the endpoint of my interval, the whole bit. Then I'd write the filter and check (to be terminologically precise) that the results were what I wanted, but not bother with any actual testing at all.
