Tuesday, August 25, 2009

Classification

There are many, many ways to conduct tests - from scripts (executed by you or by a computer) to exploratory tests to ad hoc tests to... All of these, though, ultimately require some division of the system into manageable test parts. Maybe these parts are features, maybe missions, maybe some other aspect of the system. So, how do we divide the system? What is our test strategy?



The living world is a great mass of things, from horses to beetles and from paramecia to roses. So we divide and classify them into categories and subcategories. The kingdom of animals is divided into the vertebrates, which are divided into the class of mammals, and so on. We can do the same with our system.

First we choose our kingdoms. These are the highest-level ways in which we will approach the system. Personally, I'm partial to the FURPS classification (originally out of HP, I believe). So our "kingdoms" are:
  • Functionality
  • Usability
  • Reliability
  • Performance
  • Sustainability (some people say Supportability instead)

And then from there we subdivide. "Usability", for example, is divided into families:
  • End user ease of use
  • Code testability
  • Operational use
  • Deployability

We can subdivide each family again, too. "Maintainability" (a family under Sustainability in the full outline below), for example, is divided into classes:
  • Strength of coupling
  • Code complexity
  • Conformance to coding style
  • Comment accuracy and relevance
  • Build environment and documentation

Please note that none of these items tells you how to test something. You can do exploratory tests. You can write scripts. You can use any technique you like. This merely describes a way to break your system down into testable parts.

For those of you playing along at home, I have put together my current system classification. It's not universal; you will need to make changes. Hopefully this will give you a start, though.


- Functionality
  - Your System Features Go Here
- Usability
  - GUI ease of use
    - Time to accomplish tasks or workflows
    - Ease of identification measures (e.g., "figure out how to log out" takes X seconds)
    - Style guide conformance
  - API ease of use
    - Documentation
    - Clarity of exposed calls and parameters
    - Clarity of messages, configuration, return codes
  - Operational use
    - "Care and feeding" of the system
      - Manual steps (e.g., reboot server every X, or manual log collection)
      - Required downtime (planned, plus the amount of unplanned downtime to anticipate)
    - Notification mechanisms
      - Self-identification of error and warning states
      - Integration into existing notification tools (e.g., SNMP traps, Splunk)
    - Resource requirements
      - Number of servers (fewer are often easier to administer)
      - Power and cooling requirements
      - Hardware and software lifecycle requirements (e.g., upgrading every 6 weeks is harder than upgrading every 6 months)
  - Code testability
    - Availability of interfaces for mocking
    - Integration with common test tools
    - Presence of unit tests
  - Deployability
    - Package for install and upgrade
    - Dependencies on external items (e.g., libraries on the system, other software, specific hardware)
- Reliability
  - Failure sustainability
    - Ability to handle system failures (e.g., crash)
    - Ability to handle external failures (e.g., power loss)
  - Long-running tests
    - Full system behaviors
    - Long-running behaviors (e.g., log rolling abilities)
  - Memory and resource management
    - Valgrind and the like
    - Resource (CPU, memory, disk) usage to benefit ratio
    - Leaks (memory, thread, etc.)
  - Disaster recovery
    - System recovery tools and utilities
    - "Bringing the system back up" protocol and tools
  - Redundancy
    - Replication and mirroring
    - Failover and failback mechanisms between sites
    - Cross-site or cross-system synchronization (data and control)
- Performance
  - Throughput
  - Latency
  - Your System Features Go Here (with a "how fast/how many" twist)
  - Load
    - Sustained client load
    - Peak load handling
- Sustainability
  - Maintainability
    - Code smells (cleanliness)
    - Component interfaces and interactions (e.g., can you upgrade one component without huge overhead?)
  - Upgradeability
    - Upgrade mechanism
    - Re-install mechanism
    - Downtime requirements
  - Supportability
    - Issue diagnosis features
      - Reporting
      - Log collection
      - Support access mechanisms
    - Underlying components
      - Hardware components (e.g., can we still get hardware for this?)
      - Software components (e.g., is this library still supported?)
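
If you like keeping an outline like this in a form a script can walk, here's a minimal sketch (an illustration only, not part of the classification itself): an abbreviated slice of the outline stored as nested Python dictionaries, with a small helper that prints each leaf-level class as a candidate test area. The particular slice shown and the leaf_areas helper are made up for the example.

    # A sketch (not from the original outline): the classification as nested
    # dicts, with leaf-level classes enumerated as candidate test areas.
    CLASSIFICATION = {
        "Usability": {
            "GUI ease of use": [
                "Time to accomplish tasks or workflows",
                "Style guide conformance",
            ],
            "Code testability": [
                "Availability of interfaces for mocking",
                "Presence of unit tests",
            ],
        },
        "Performance": {
            "Load": [
                "Sustained client load",
                "Peak load handling",
            ],
        },
    }

    def leaf_areas(node, path=()):
        """Yield each leaf-level class as a ' > '-joined path through the hierarchy."""
        if isinstance(node, dict):
            for name, child in node.items():
                yield from leaf_areas(child, path + (name,))
        else:  # a list of leaf-level classes
            for item in node:
                yield " > ".join(path + (item,))

    if __name__ == "__main__":
        for area in leaf_areas(CLASSIFICATION):
            print(area)

Running it prints lines like "Usability > GUI ease of use > Style guide conformance", which work nicely as charter titles or as rows in a coverage checklist.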

5 comments:

  1. Great post, Catherine.

    As you said, there is no hard and fast breakdown, but you really put in the effort to compile all the components.

    I really appreciate it, Catherine. I mailed you last week asking for some guidelines; did you receive my mail?

  2. A system like this is known as a heuristic model. Specifically, in my community, we call these "guideword heuristics."

    I first learned about guideword heuristics when I stumbled across the HAZOP system (google it). From that I created my Heuristic Strategy Model, which is like your list but with a lot more stuff in it. (More is not necessarily better.) Guidewords are a special case of "generative heuristics" (heuristics that help you generate ideas).

    Like you, I encourage my students to create their own models, or use and modify mine. Mike Kelly uses an acronym "FCCCUTSVIDS", for instance.

    The idea of a compact outline of words or phrases that help you generate test ideas is extremely powerful. I take a tester much more seriously if he or she uses such a tool.

    Thanks for sharing yours.

    -- James

  3. Kashif: yes, I got your mail. I've just been a bit slow in responding!

    James: I wouldn't even begin to know how to pronounce "FCCCUTSVIDS"! (FURPS sounds a little funny, like maybe it's the sound that bubbling mud makes, but it's sayable. I generally do better when they're sayable.) But hey, everyone has their own and as long as it works I'm all for it.

    And you're right - some of them are bigger, some are smaller. Mine, for example, includes no security or penetration testing (not an area I'm particularly familiar with). Other people skip usability or deployability concerns because they're not relevant or they don't have experience in that area.

    Still, it's fun to see what people have come up with.

  4. Thanks, Catherine, for sharing this and many other thoughtful articles.
    Regards,
    Lohit

  5. Thanks, Catherine, for your response; always looking forward to your responses :)
