Monday, November 22, 2010

Chicken Little Eats Risk

Software testing is highly prone to chicken little syndrome. It's relatively simple to translate "we don't know what would happen if..." to "think of all the terrible things that MIGHT happen if...". This isn't a testing problem, per se. Nervous engineers and managers come from all over an organization.

For example:
"We don't know what would happen if a hard drive failed while the system was doing X"

can easily turn into

"If a hard drive fails while the system is doing X, it might crash! And if that happens at our biggest customer, they might decide not to use the system any more because if it crashes they can't trust it! And then they might tell someone they're not using the system any more! And then our other customers might get nervous and decide to stop using our system because if major customers don't trust it then the system must be bad! And then we won't have any customers and we'll go out of business!"

(I should note that doomsday scenarios tend to include a whole lot of exclamation marks.)

"We don't know" means that we don't know terrible things won't happen. But we also don't know that terrible things will happen. This is where risk comes in. For example, if the thing we don't know much about is the static HTML help, then it's probably not too likely that terrible things will happen. If the thing that we don't know much about is the core algorithm of our product, then that's a bigger deal.

This is where testers come in. Testers, after all, are generally the ones who most frequently work with the software as an entire system. They know which parts of the system are generally well understood, and which parts have a lot of unknowns. Often, that means testers are the first ones to say, "well, we don't really know what would happen if...". This is the point at which we get to head off chicken little syndrome, and we get to do it by describing what we do know.

The panic of chicken little syndrome comes from the perception of a vast area of potential problems; the bigger the grey area, the more likely it is that someone can think of a huge problem. So to avoid chicken littles, make sure you bound the area of the unknown as much as possible.

Using our HTML help example from earlier, we can say, "We don't know much about the static help; we haven't looked at it yet this release." We can stop there, and chicken little will come running through the office looking panicky. Or we can keep going: "However, we do know that this is all static HTML, so the likelihood of side effects or of any problems beyond the content itself is pretty small." By restricting the area of the unknown, we have started to state our risk, and we have also decreased the likelihood that anyone in our reporting circle will panic.

So go ahead and report what you know and what you don't know. Just be sure you report the boundaries clearly. Placing your report in context makes all the difference between calm handling of uncertainty and potential panic.
