This post is one of a series examining the role of test in various software development processes. Today we'll look at SCRUM.
How I got here:
First of all, SCRUM is not a software development process. SCRUM is a project management process; it describes how to control the development of software, not how to actually do it. That being said, SCRUM is becoming quite popular.
The role of test:
SCRUM does not address test directly. Test items can appear on the product backlog, and testers are members of the team who commit to tasks just like developers or any other team member. SCRUM does state that at the end of every sprint there will exist a shippable product. This implies that testing will be complete as well.
SCRUM emphasizes a very flat team structure, which puts QA on the same level as development. This can help eliminate contention between the teams. Ideally you wouldn't have that contention anyway, but it's useful when the process you're using helps prevent it.
SCRUM states that at the end of every iteration, the product should be "shippable". You may not choose to ship, but the option should be open. Back in the real world, where estimates are optimistic and tasks hide complexities that aren't discovered until they're implemented, dev often runs late. How is test to react in this case? There simply isn't time left in a given iteration to perform all the testing you'd need in order to say the product is shippable.
In the end:
I think SCRUM has a lot of potential to retain the accountability and deliberateness of RUP while allowing more flexibility. I also like how adaptable it is - you can have a 2-person team or an 8-person team, and you can have a 2-day sprint or a 2-month sprint. Getting around the problem of test, however, is something that I haven't seen solved effectively. There are several possible resolutions to this that I've considered or attempted:
1. Schedule a test iteration right after the dev iteration. This violates the principle that the end of an iteration leaves something shippable - the end of the dev iteration doesn't give you a shippable product. If you're doing 4 week iterations, you could be quite a ways from shipping.
2. Do not schedule dev work that affects product stability for the last n (% or days or whatever) of an iteration. This way everything is done at iteration end minus n and the last bit of testing can be completed. Meanwhile, dev can do the tasks that don't affect the product itself (setting up servers, environment maintenance, design tasks for future implementation). This one I like in theory. In practice, that time gets squeezed because of overflows in dev. Your test burndown also gets a little scary toward the end.
3. Put dev and test into separate iterations and offset them. This is actually my favorite way to handle this so far, even though it's far from perfect. Basically, dev keeps going to the end of its sprint. Test's sprint is offset, so test still has n days left in their sprint to finish everything off. The big downside to this one is that if test discovers a showstopper defect after dev's sprint is over, you throw off the next sprint.
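To make the offset in option 3 concrete, here's a minimal sketch of how the two sprint calendars line up. Everything here is hypothetical - the function name, the 2-week sprint length, and the 3-day offset are illustration only, not anything SCRUM prescribes:

```python
from datetime import date, timedelta

def offset_sprints(start, sprint_days, offset_days, count):
    """Lay out dev sprints back-to-back, with each test sprint
    trailing its dev sprint by offset_days (hypothetical model)."""
    sprints = []
    for i in range(count):
        dev_start = start + timedelta(days=i * sprint_days)
        dev_end = dev_start + timedelta(days=sprint_days - 1)
        # Test's sprint covers the same length, shifted later.
        test_start = dev_start + timedelta(days=offset_days)
        test_end = dev_end + timedelta(days=offset_days)
        sprints.append((dev_start, dev_end, test_start, test_end))
    return sprints

# Two 2-week sprints, test trailing dev by 3 days.
for dev_s, dev_e, test_s, test_e in offset_sprints(date(2007, 8, 6), 14, 3, 2):
    print(f"dev {dev_s}..{dev_e} | test {test_s}..{test_e}")
```

The point the sketch makes is the one in the text: test always has offset_days of runway after dev's sprint closes - and a showstopper found in that window lands squarely inside dev's next sprint.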
All thoughts on processes are my own and are meant as reflections of my experiences with them, not with the theory of the process itself. No flamewars, please.
Other Posts in this series:
Test in Process: A Series (http://blog.abakas.com/2007/08/test-in-process-series.html)
Test in Process: RUP (http://blog.abakas.com/2007/08/test-in-process-rup.html)
Test in Process: XP (http://blog.abakas.com/2007/08/test-in-process-xp.html)
Test in Process: SCRUM (http://blog.abakas.com/2007/08/test-in-process-scrum.html)