Monday, December 3, 2007

Customers Aren't Testers (Without Your Help)

Imagine if you turned on the TV and you no longer had channels. Instead, you just had lists and lists of programs you could watch. Some of them are available right now. Some of them aren't available yet. Some of them have really detailed descriptions. Some of them just say "comedy, 30 min". Some of them happened in the past and probably won't come on again, but who can really say? It's just a huge flood of information.

This is roughly what happens when you try to define tests in an XP environment. Stories are very short-term in focus and largely isolated from one another. Stories are also defined by the customer (or a customer proxy) rather than by a trained engineer or tester. Lastly, stories are very positive-outcome focused; they describe what will happen, not what shouldn't happen. What does this mean for the tester?

Your job, as the tester, is to help the customer write good acceptance criteria. You also need to make sure that the negative aspects of the story are accounted for: that the system reacts appropriately to all the things that could go wrong. In short, you have to help the customer make sure that the story reflects what he wants, not just what he asked for.

I've come up with a simple framework for helping customers write acceptance tests. The goals of this framework are to:
  • Make testing accessible. If we bring in complexity, even just terminology like "boundary value analysis" or "threat modeling" or "equivalence class partitioning", we'll intimidate and ultimately lose our customer. Even if we're doing these things behind the scenes, we need to make them feel accessible.
  • Address the external first. Describe your acceptance tests first as what the user will see or experience, then as effects on external influencers of the system. Avoid describing how the internals of the system will react, because in XP those could change at any minute. Do keep in mind that external influencers might be other components of the same system: if I'm describing a change in my application, my database might be an external influencer.
  • Describe context. No story works in isolation, and unit tests should handle the story itself as it's being written. Acceptance tests need not only to confirm that the story does what it says, but also to call out areas of integration and other modules whose behavior is affected. This is the context in which your tests run, the environment in which they exist. It could be as simple as a dropped network connection mid-test or as complex as a multi-part system failure with several different processes running in the background.
  • Be precise. Don't test "bad passwords". Test "blank password", "old password", "another user's password", "too long password". Often this alone will help better describe the feature and ultimately influence the implementation of the story (there's a short code sketch of this right after the list).
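To make "be precise" concrete, here's a minimal sketch in Python of what those named cases can look like as tests. The login function, the stored password, and the length limit are all stand-ins invented for this sketch, not a real API; the point is that each vague "bad password" becomes its own named test.

    import unittest

    # Toy stand-in for the system under test; the user table and the
    # length limit are assumptions made up for this sketch.
    KNOWN_USERS = {"alice": "current-password"}
    MAX_PASSWORD_LENGTH = 128

    def login(username, password):
        """Return True only for a well-formed, current password."""
        if not password or len(password) > MAX_PASSWORD_LENGTH:
            return False
        return KNOWN_USERS.get(username) == password

    class BadPasswordTests(unittest.TestCase):
        """One vague 'bad password' split into precise, named cases."""

        def test_blank_password_is_rejected(self):
            self.assertFalse(login("alice", ""))

        def test_old_password_is_rejected(self):
            self.assertFalse(login("alice", "previous-password"))

        def test_another_users_password_is_rejected(self):
            self.assertFalse(login("alice", "bobs-password"))

        def test_too_long_password_is_rejected(self):
            self.assertFalse(login("alice", "x" * 500))

        def test_current_password_is_accepted(self):
            self.assertTrue(login("alice", "current-password"))

    if __name__ == "__main__":
        unittest.main()

Notice that just writing the test names tells you things about the feature: passwords expire, there's a length limit, and so on. That's the precision influencing the implementation.
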
Now that we've figured out what we want to do, and that we don't want to intimidate our customers, what are some things we can do? What kind of very light framework can we build?

I find that the two hardest things for customers are (1) getting precise and (2) understanding context. So let's address those:
  • Getting precise. This is easiest when there is a user interface. Ask the user to draw the interface on a whiteboard, then walk through it. Ask lots of "what if I..." questions. Repeat this often enough and you wind up with the user asking those questions themselves.
  • Context. I actually use (real) buckets for this. I put big labels on each bucket describing typical ways the system might get exercised, then put good test cases in each bucket. For example, I might have a bucket labeled "HA failover" for a high availability system. Then all we have to ask ourselves is "if an HA failover happens during this story, will it change anything?" If the story is "new icon", it won't, and we put the bucket away. If the story is "I can upload a 100MB file", it will, and we run the test cases in that bucket. I'll talk more about the buckets in another post; a rough sketch of the idea in code follows below.
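The buckets don't have to stay physical, either. Here's a rough sketch in Python of the same idea; the bucket labels and canned cases are invented for illustration, and the "will it change anything?" answer still comes from the team, not from code.

    # Each bucket is a context label plus the canned test cases that live
    # in it. The labels and cases below are illustrative, not a real catalog.
    BUCKETS = {
        "HA failover": [
            "kill the primary mid-operation; the operation finishes on the standby",
            "fail over while a client is connected; the client reconnects cleanly",
        ],
        "dropped network connection": [
            "drop the connection mid-transfer; it resumes or fails cleanly",
            "drop the connection during login; no half-open session is left behind",
        ],
    }

    def cases_to_run(affected_contexts):
        """The team answers 'if this happens during the story, will it
        change anything?' for each label. Pass in the labels where the
        answer was yes; get back the canned cases to run."""
        return {label: cases for label, cases in BUCKETS.items()
                if label in affected_contexts}

    # "New icon" isn't affected by any context, so every bucket goes away.
    print(cases_to_run(set()))  # -> {}

    # "I can upload a 100MB file" is affected by both, so we pull those
    # buckets out and run their cases as acceptance tests.
    for label, cases in cases_to_run({"HA failover",
                                      "dropped network connection"}).items():
        print(label)
        for case in cases:
            print("  -", case)

The value is the same as with the physical buckets: the question is cheap to ask, and the canned cases are sitting there ready whenever the answer is yes.
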
Having pre-made test cases tends to help people riff, and walking through things the way the user will helps people feel a story before it's done. All this, and you walk out not only with acceptance criteria, but with a stronger story. Oh, and you haven't scared off your customers with intimidating test jargon.

