Wednesday, February 17, 2010

The If It Goes Wrong Principle

Let's say you're doing a date-based release. You have a hard-and-fast date, and whatever happens*, on that date you will ship. (* Yes, we all know there are exceptions for asteroid collision and nuclear war, but barring those, you'll ship on that date.) So you have a date, and you have a rough feature set consisting of whatever the team has committed to. You also have an implied minimum level of quality and stability. After all, you don't want to be embarrassed or have to deal with crashes and the like.

Given all these constraints, what do you test first?

Use the If It Goes Wrong Principle

The If It Goes Wrong principle states that you should consider what would happen if a test fails, and schedule the tests with potentially long fix turnarounds early in the test cycle. The longer a failure would likely take to fix, the earlier you should test for it.

For example, if you have a feature that you outsourced to another company, problems in that feature will probably take longer to fix, simply because you have to go back to the other company, get them to fix the problem, wait for the fix to go through their process, and only then retest it (and hopefully declare it fixed).

For example, if you have a component that has changed a lot, you should put that component near the top of the test list: mistakes happen when you change code, so heavily modified code is both more likely to have problems and a little less well understood, and therefore slower to fix. Depending on your developers, sometimes a fix takes several tries.

For example, if you have a component that is a brand new design, problems in that code are more likely to be design flaws that show up only when you push the system. A major design flaw can take a long time to fix, so push the edges of the design early.

Combining the If It Goes Wrong principle with an understanding of the risk in each feature or component (how likely that feature is to go wrong) yields an implied order of tests:
  • Tests in areas that changed in third-party code
  • Tests in areas with new design or design changes
  • Tests in areas that changed a lot, in particular for teams or developers that tend to have follow-on issues (bug A blocks bug B or the fix for bug A introduces bug B)
  • Tests in areas that didn't change, but that if broken will likely take a while to fix
  • Tests in areas that didn't change and that are generally easy to fix
There are numerous other factors influencing the order in which you run your tests, but as a general rule of thumb, move up the tests that trigger the If It Goes Wrong principle.
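The ordering above can be sketched as a simple scheduler that ranks test areas by expected fix turnaround, weighted by risk. This is a minimal illustration, not a prescribed implementation; the area names, fix-time estimates, and risk scores are hypothetical.

```python
# Sketch of the If It Goes Wrong principle: run first the tests whose
# failures would take longest to fix, weighted by how likely failure is.
# All data below is made up for illustration.

from dataclasses import dataclass

@dataclass
class TestArea:
    name: str
    fix_days: float  # estimated turnaround if a bug is found here
    risk: float      # 0..1 likelihood this area actually has a problem

def schedule(areas):
    """Order areas by expected cost of a late discovery (fix time x risk)."""
    return sorted(areas, key=lambda a: a.fix_days * a.risk, reverse=True)

areas = [
    TestArea("outsourced payment module", fix_days=10, risk=0.4),
    TestArea("redesigned cache layer",    fix_days=7,  risk=0.5),
    TestArea("heavily modified parser",   fix_days=4,  risk=0.6),
    TestArea("stable report generator",   fix_days=1,  risk=0.1),
]

for area in schedule(areas):
    print(area.name)
```

With these made-up numbers, the third-party code sorts first and the stable, easy-to-fix area last, matching the list above. In practice the weights would come from judgment calls, not precise measurement.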


  1. Really well put. I like this priority list -- it jibes with my experience.