Thursday, May 1, 2008

Development Cycle for QA

Any good software development lifecycle (SDLC) is about rhythms. It's about creating a predictable, repeatable* cycle. There have been a lot of articles written about the development lifecycle, but they're generally either software-oriented - what happens to the code over time? - or developer-oriented - what does dev do now?

So, what does the development cycle look like for QA?

An Overview
Let's start with the system as a whole. I've gone with a basically agile lifecycle here just because it's most relevant to my current thinking.



So What's QA Up To?
On the surface, it sure looks like QA has a lot of free time! There's just one little segment named "testing". I think we all know by now that's not true. So let's break it down. Here's what QA is doing during each phase:

Initial Planning
This is a time for QA planning as well. This is where you start the work that has a long lead time. Some sample tasks:
  • Procure hardware and software.
  • Get a good understanding of your user; interview your clients, etc.
  • Gather your team and get to know each other. This may be new hires, contractors, a group of friends working on this project with you, or an existing team.
  • Figure out the kinds of testing that are going to be important to your product, your customer, and your company. Is security testing important? How about load testing? Does disaster recovery testing matter, or not yet?
Planning/Requirements
This phase is about understanding the details of what the engineering team (including QA) will do. Keep in mind that in QA you have to be prepared to test early, so some of the requirements work translates into implementation work for you. Some sample tasks here:
  • Help define acceptance criteria. These are requirements just as much as the GUI screenshot or the command line option. (There's a sketch of an acceptance criterion written as an executable test after this list.)
  • Define your toolset. Have a log collector that needs some work? Now's the time to think about what it needs to do. Starting implementation is a good thing here.
  • Create your test run infrastructure. If you don't already have it, complete setup (and implementation, if necessary) of your continuous integration system, your test run infrastructure, your defect tracking system, etc. This needs to be in place going into implementation so that development can start using it immediately.
  • Define your system tests. As the system comes together, you'll need to test it as a whole. This is where you create the test plan that defines that.
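To make the acceptance criteria point concrete, here's a minimal sketch of one criterion written as an executable test instead of prose. Everything named report_server below - the module, connect(), generate_report() - is a hypothetical stand-in for whatever your feature actually exposes; the shape of the thing is the point, not the names.

    import unittest

    import report_server  # hypothetical module for the feature being accepted


    class TestEmptyDateRangeReport(unittest.TestCase):
        def test_empty_range_returns_empty_report(self):
            # Hypothetical API: connect to the service and ask for a report
            # covering a range that contains no data.
            server = report_server.connect("localhost", 8080)
            report = server.generate_report(start="2008-05-01", end="2008-05-01")
            # The acceptance criterion itself: an empty range yields an empty
            # report rather than an error.
            self.assertEqual(report.rows, [])


    if __name__ == "__main__":
        unittest.main()

A criterion you can run is a criterion development can run too, which is a big part of being able to test early.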
Implementation
This phase is where QA and development are really tied closely together. The goal of this phase is to have minimal time between doing something and seeing the test results; speed is important here. Patience is also important - things can get pretty broken in this phase, and that's okay as long as it doesn't stay that way. Some example tasks:
  • Accept features. As features are implemented, prove they work or identify the ways in which they don't work.
  • Verify bug fixes. This is the analog of accepting features.
  • Monitor automated tests. As automated tests run, QA should always be cognizant of the results. Are they failing? Are they passing but taking longer than they should? (A sketch of a simple results monitor follows this list.)
  • Continue to improve the test infrastructure. You'll still want to tweak and improve things here.
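Here's the results monitor sketch mentioned above. It assumes your test runs drop JUnit-style XML files into a results/ directory - both the format and the directory are assumptions, so adjust to whatever your continuous integration system actually produces.

    import glob
    import os
    import xml.etree.ElementTree as ET

    SLOW_THRESHOLD_SECONDS = 60.0  # arbitrary; tune to your suite


    def scan_results(results_dir="results"):
        """Collect failing tests and tests that ran longer than expected."""
        failures, slow = [], []
        for path in glob.glob(os.path.join(results_dir, "*.xml")):
            for case in ET.parse(path).iter("testcase"):
                name = "%s.%s" % (case.get("classname"), case.get("name"))
                if case.find("failure") is not None or case.find("error") is not None:
                    failures.append(name)
                elif float(case.get("time", 0)) > SLOW_THRESHOLD_SECONDS:
                    slow.append(name)
        return failures, slow


    if __name__ == "__main__":
        failed, slow = scan_results()
        print("%d failing, %d slower than expected" % (len(failed), len(slow)))
        for name in failed:
            print("FAIL: " + name)
        for name in slow:
            print("SLOW: " + name)

Something this small, run after every automated test pass, is usually enough to keep the whole team looking at results instead of waiting for someone to notice a red build.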
Testing
This phase is about working with development to ensure that the acceptance criteria are complete and adequate for the system as a whole. Example tasks include:
  • Running system tests. These are the tests that didn't pass during implementation because things were just a bit unstable. Now that things are settling down and most of the new features are in, it's time to exercise the system working together.
  • Stress new features. Now that the new features work, find their limits. Break them, use them with other features, stress them. It's all basically functional at this point, so push until you find where it stops being functional (there's a rough sketch of one way to do that below).
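Here's the rough sketch promised above for finding a feature's limits: ramp up concurrent use until the error rate crosses a threshold, and note where that happened. exercise_feature() is a hypothetical placeholder for a single use of the feature under stress; the doubling-until-it-breaks loop is the interesting part.

    import concurrent.futures


    def exercise_feature(i):
        """Hypothetical placeholder: one use of the feature; raise on failure."""
        pass


    def error_rate(workers, calls_per_worker=10):
        """Run the feature concurrently and return the fraction of calls that failed."""
        futures = []
        with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
            for i in range(workers * calls_per_worker):
                futures.append(pool.submit(exercise_feature, i))
        errors = sum(1 for f in futures if f.exception() is not None)
        return errors / float(len(futures))


    def find_limit(max_workers=256, tolerated=0.01):
        """Double concurrency until failures exceed the tolerated rate."""
        workers = 1
        while workers <= max_workers:
            rate = error_rate(workers)
            print("%4d concurrent users -> %.1f%% errors" % (workers, rate * 100))
            if rate > tolerated:
                return workers  # roughly where the feature stops holding up
            workers *= 2
        return None  # never broke within the range we tried


    if __name__ == "__main__":
        limit = find_limit()
        print("limit: %s" % (limit if limit else "not reached in this range"))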
Evaluation
This phase is about assessing how you did, both as a QA team and as part of the overall engineering team.
  • Figure out what you did right. Decide what worked really well. Did you have a great tool? Was the test order just awesome at finding bugs early?
  • Figure out what you need to do better. Decide what just didn't work. Were you too slow accepting stories? Did you spend a lot of time "clarifying" (or defining) requirements during implementation?
  • Figure out what you didn't do. Do you need another tool? Is there some mind-numbing manual process that you ought to automate? Did keeping your massive stress test cluster running cost more time than the value you got out of it?
Deployment
Full disclosure: I depart from classic agile methodologies here. Theoretically, deployment should be just that - sending it to production. I insert a step here that I call "release testing". This is because I find that it's a bit overoptimistic to say that the general agile lifecycle gets code stable early enough for QA to prove it is stable and functional prior to deployment. It simply takes longer than your average iteration to thoroughly exercise a system. This doesn't mean you break process; it just means that QA spends some time working a release behind making sure things are as stable as they should be. Example tasks here:
  • Long-running tests. Run the tests that require a stable build to be up for a week or more; you didn't have nodes up that long during implementation. (A sketch of a simple soak test follows this list.)
  • Manual tests. Run your usability tests and manual tests now. This is a good time to catch how the system feels overall as it works together.
  • Customer-specific tests. Run tests against any applications or any customer scenarios that your automated infrastructure can't handle.
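For the long-running tests, a soak test can be as simple as keeping a stable build under continuous light use and logging its health at intervals for a week. This is only a sketch; check_health() is a hypothetical probe for whatever "healthy" means for your system (memory, response times, queue depths, and so on).

    import logging
    import time

    RUN_FOR_SECONDS = 7 * 24 * 60 * 60  # a week
    PROBE_INTERVAL_SECONDS = 5 * 60     # every five minutes


    def check_health():
        """Hypothetical probe; return a dict of measurements worth trending."""
        return {}


    def soak():
        logging.basicConfig(filename="soak.log", level=logging.INFO)
        deadline = time.time() + RUN_FOR_SECONDS
        while time.time() < deadline:
            logging.info("health: %s", check_health())
            time.sleep(PROBE_INTERVAL_SECONDS)


    if __name__ == "__main__":
        soak()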

And THEN you deploy.

* I started to say repetitive cycle instead, but repeatable sounds much less dreary!
