I work a lot with companies doing evaluations of software. It generally goes something like this:
- hear about shiny new technology
- identify the internal need that the new shiny might fit
- schedule a validation
- put together a test plan
- do the validation
- write a report
And then life goes on. Now, there's risk in any validation. There are hidden agendas in any validation. However, there is one thing that I can almost always count on: the test plan will be large.
You see, no matter the real underlying agenda, the people doing the validation need to prove that they were thorough in order to successfully make whatever point they're trying to make. So they have a large (and hopefully impressive-looking) test plan.
A large test plan is not necessarily comprehensive.
Often, in fact, the test plan is limited in the things it does but does each of them in many configurations. For example, I recently reviewed a validation test plan that wanted to test every major UI workflow in each of 8 browsers. That's like a four-week plan! It's huge! It doesn't at all address scaling or deployment or security or extensibility or code complexity or maintainability or... In other words, it's big but not comprehensive. And plans like that aren't uncommon.
The fastest way I know to tell at a glance whether your test plan is comprehensive is to see if it is a matrix. Matrices are great, but filling out a matrix generally isn't enough to tell you about all the relevant aspects of a product.
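To make that concrete, here's a minimal sketch (all names are invented for illustration) of how a matrix plan balloons in size while still probing only one dimension of the product:

```python
# A hypothetical matrix-style plan: workflows crossed with browsers.
browsers = ["Chrome", "Firefox", "Safari", "Edge", "Opera", "IE11", "Brave", "Vivaldi"]
workflows = ["login", "checkout", "search", "profile", "admin"]

# The cross product: every workflow in every browser.
matrix_plan = [(workflow, browser) for workflow in workflows for browser in browsers]
print(len(matrix_plan))  # 40 test cases -- large, but all on the same axis

# Dimensions the matrix never touches, no matter how many rows you add:
uncovered = ["scaling", "deployment", "security", "extensibility", "maintainability"]
```

Adding a ninth browser makes the plan 12% bigger without making it one bit broader; the uncovered dimensions stay uncovered.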
So if you're the evaluator, check your validation test plan to make sure you're really covering multiple aspects of the software. And if you're the one being evaluated, don't necessarily fear a big plan: it may miss all your weak spots.
After all, bigger isn't always broader.