This is a (slightly sanitized and simplified) story we've been working on recently:
Story: Cache state information
Motivation: Improve system management response times by not having to retrieve generally static information every time
Details: (proprietary bits here, but basically, cache things like the IP address of all nodes, etc.)
Acceptance Criteria:
- Add/remove/kill a node and ensure that the cache is updated
- Restart the system and ensure that the correct information is returned after restart
- Remove a node while the system is down and ensure that after restart the node is gone
- (Other state change type tests here)
- Ensure that the system responds in under 3 seconds 98% of the time under our defined Load Test
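The caching behavior those criteria describe can be illustrated with a minimal sketch. The `NodeCache` class and its `registry` argument are hypothetical names invented for illustration, not part of the actual (proprietary) story; the point is only that the cache stays in sync on add/remove and is rebuilt from the authoritative source on restart:

```python
# Hypothetical sketch of the cached-state behavior in the acceptance criteria.
# The cache mirrors an authoritative node registry (here, just a dict of
# node name -> IP) so lookups don't re-fetch generally static information.

class NodeCache:
    def __init__(self, registry):
        self._registry = registry          # authoritative source of truth
        self._cache = dict(registry)       # warm the cache at startup

    def add_node(self, name, ip):
        self._registry[name] = ip
        self._cache[name] = ip             # keep the cache in sync on add

    def remove_node(self, name):
        self._registry.pop(name, None)
        self._cache.pop(name, None)        # keep the cache in sync on remove

    def lookup(self, name):
        # Fast path: answer from the cache, no trip to the registry.
        return self._cache.get(name)

    def restart(self):
        # Rebuild from the registry so changes made while the system
        # was down (e.g. a removed node) are reflected after restart.
        self._cache = dict(self._registry)
```

Each acceptance criterion above maps to one path through this sketch: add/remove exercises the sync logic, and the remove-while-down case exercises `restart()`.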
All in all, it's not a bad little story, and not bad acceptance criteria. But here's the important question: in six months, when someone comes back and says, "yeah, I didn't know what that test was getting at," will we be able to explain it? What we have here is another case of the six-month test.
It's useful to state what we're going to do - in this case, what tests we will run - but it's not sufficient. We also have to provide some insight into why. We put some thinking into developing these tests, and into identifying what the relevant things to test were. So let's write it down. In our story above, we're trying to show that introducing caching: (1) really does speed things up; and (2) still provides accurate information. So let's write down our test goal in the story, along with the tests that demonstrate that we've achieved that goal.
In six months, we'll thank ourselves.