- Checklists describe the goal, not the steps.
- Checklists provide room for people to think; scripts deny testers the flexibility to see the whole picture.
- Scripts only check what we explicitly thought to verify; checklists let you adapt your verification to what you're actually seeing in the system.
Checklists are scary!
Wait a minute. Checklists are tools that allow us to hint to testers and increase coverage while providing the freedom for them to think about and dive into a system. That's good!
The problem with checklists is twofold: (1) how do you know you've hit a specific test case, and (2) how do you document what you actually tested? I'm not going to talk about the second of these; that's a whole separate blog post. But how do you make sure you hit a specific test case?
Let's step back for a minute and talk about why we might want to be certain we hit a specific test case. I can think of two reasons:
- We had a bug here and we need to be sure we haven't reintroduced it.
- It's a particularly sensitive action/configuration/setting and we don't want to go to our biggest (or most important, or loudest) customer without being very sure that this will work.
Now, what can we do about it?
There are a lot of options, ranging from "hope you hit it every time" to "Forget checklists! Scripts are safer!". Instead, we can simply acknowledge the realities of the situation. These specific things are important, so we need to write them down in the test plan. They don't have to be in our checklists; they can live elsewhere. They just need to be written down. And then we need to run these specific things every time. They can be manual or automated, as long as they get run.
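For the automated route, one common pattern is to pin a known bug as a permanent regression test, so it runs on every pass instead of relying on a checklist walkthrough happening to cover it. Here's a minimal sketch of that idea; the function, the bug number, and the test names are all hypothetical, invented purely for illustration:

```python
def apply_discount(total, percent):
    """Return total after a percentage discount.

    Hypothetical bug #1234: an earlier version mishandled a 0% discount,
    so that edge case is pinned by a dedicated regression test below.
    """
    return total * (100 - percent) / 100


def test_bug_1234_zero_percent_discount():
    # Regression test for the (hypothetical) bug #1234:
    # a 0% discount must leave the total unchanged.
    assert apply_discount(50.0, 0) == 50.0


def test_regular_discount_still_works():
    # Ordinary behavior, kept alongside the regression test.
    assert apply_discount(100.0, 25) == 75.0


if __name__ == "__main__":
    test_bug_1234_zero_percent_discount()
    test_regular_discount_still_works()
    print("regression tests passed")
```

The point isn't the code itself; it's that the sensitive case is written down and executed mechanically, so the checklist can stay focused on exploration.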
So go ahead, use checklists where you want testers to actually dig in and think, and use scripts where specificity really matters. Take the benefits of your checklists and eliminate the areas that are scary by scripting them away. After all...
Checklists aren't scary!