They're the tools-oriented test plans. Basically, it's a test plan that says, roughly, "we're going to use these tools." The relatively good ones even say, "we're going to use the tools in these configurations."
Reading a tools-oriented test plan is like asking your kid, "what are you going to play in the park?" and getting the answer, "there's a slide and a swing set and a monkey bars!"
Okay. That's nice. But what are you going to actually do when you get there?
The biggest problem with a tools-oriented test plan is that it doesn't describe what you're trying to accomplish. There are no success criteria. There are no intended areas of learning. There are no pass/fail or conformance criteria. Because it doesn't convey intent, you can't know if you've accomplished that intent. There's no way to know if you've succeeded.
The good news is that a tools-oriented test plan is often really good at making you aware of the existence of tools. It's also generally salvageable. You've already written a lot about HOW you're going to accomplish things, so all you need to do is add to and rework it so it describes WHY you're going to use this tool and WHY these configurations are interesting or important. To get from a tools-oriented test plan to something usable, I do the following:
- walk through each section and ask the question, "what is doing this going to tell me?"
- make sure I put the answer to the question into the test plan
- create a section called "tools"
- put the tool name and a description of all of the relevant features into the tools section
- replace all the tool info with a reference to the appropriate feature(s) in the tools section
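As a hypothetical sketch of those steps (the tool, service, and feature names here are invented for illustration, not taken from any real plan), a single tools-oriented line might be reworked like this:

```
Before (tools-oriented):
- Run FuzzTool against the login service using the "aggressive" profile.

After (intent first, tool as a reference):
- Learn whether the login service rejects malformed credentials without
  crashing or leaking internal error details.
  (Tools: FuzzTool, "aggressive" profile -- see Tools section.)

Tools
-----
- FuzzTool: generates malformed requests. The "aggressive" profile adds
  oversized payloads and invalid encodings.
```

The "after" line answers "what is doing this going to tell me?", and the tool details live in one place instead of being scattered through the plan.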
But if you find yourself reading and saying, "I see what to do, but what will that tell me?", you've got a tools-oriented test plan. Tools are great, but you don't have to let them run you. Instead, put them in their proper place as servants to the information you need.