I fell into a discussion about software metrics on the software-testing mailing list last week and over the weekend. One assertion that was repeated a few times was the idea that "we need metrics because management wants them".
Let's examine that. Why does "management" want a metric? (By the way, anyone with Lead, Manager, or Director in their title: you're officially management. Welcome to the party.)
Here's the first thing I learned about metrics when I joined the ranks of management: the number isn't what matters. What matters is the information within and around that number that lets me make the decision I need to make. It's a sound bite, and when done well it conveys an entire tone and history that I can easily consume.
A metric is simply information distilled into a manageable chunk.
So if you don't like providing metrics like "number of showstoppers found per release" or "defects per thousand lines of code", that's fine. Find another way to provide "management" with what they need.
It's rather like the "quants" who use detailed statistical analysis of past market information in an attempt to predict future market movements. They're really just creating metrics with a whole lot of math behind them. And those metrics are for one thing only: to predict the likelihood of future events. In the "quant" case, they want to predict whether stock X will increase in price. In our case, we want to predict whether product Y will ship on the currently-targeted date with Z level of acceptable post-release work for handling any problems.
Without using metrics, then, how do we tell management what they need to know?
Let's take as a given the following:
- any number that the team can measure and that is tied to their "worth" (bonus, likelihood of remaining employed, continued donut Fridays) is a number the team will eventually figure out how to game
- "management" wants a measurement that isn't reliant on a single person's "gut" or ability to divine status and release readiness. I don't want my "magic release predictor" to leave the job because then I'm really out of luck.
- measurements are proactive: identified and taken before a release decision is made
Notice that there are certain things we have specifically not excluded:
- qualitative (i.e., non-numeric) metrics
- the reliance on our skills and expertise
Metrics can have room for humans; it's not all numbers and systems (or it doesn't have to be).
Here's a metric:
Every week the engineering team (dev, test, product management, etc.) takes an anonymous vote: "Would you release the software today?" When the percentage of yes votes passes some threshold (say, 85%), the engineering team feels pretty good about the product, and that metric turns to "go". (You could even baseline this by asking that same team, "Knowing what you know now, would you have released the last version? The version before?" After all, some people are so risk-averse they'll probably never say yes; others are so risk-accepting they'll probably say yes immediately.)
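If you wanted to automate the tallying, a minimal sketch might look something like this. It's just an illustration of the vote-and-threshold idea above; the function name and the 85% default are assumptions, not anybody's standard:

```python
def ship_vote_metric(votes, threshold=0.85):
    """Tally an anonymous 'would you release today?' poll.

    votes: iterable of booleans (True = 'yes, ship it').
    Returns the yes-fraction and whether it clears the threshold.
    """
    votes = list(votes)
    if not votes:
        return 0.0, False
    yes_fraction = sum(votes) / len(votes)
    return yes_fraction, yes_fraction >= threshold

# Example: 12 of 14 engineers say yes this week.
fraction, go = ship_vote_metric([True] * 12 + [False] * 2)
print(f"{fraction:.0%} yes -> {'go' if go else 'not yet'}")
```

The interesting part isn't the arithmetic, of course; it's that the inputs are human judgments, collected anonymously and on a regular cadence.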
Here's another metric:
What was the total cost of defects found in the field for the past few releases (say, number of defects, plus cost to handle, plus % of customers affected)? Is that number going up or down? If it's going up, then we've got a problem. If it's going down, then this team is doing better and better. Let's keep doing what we're doing.
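Here's one way you might roll those three inputs up into a single number per release and check the trend. The weights that convert defect counts and customer impact into cost are made-up assumptions; substitute whatever reflects your actual support and goodwill costs:

```python
from dataclasses import dataclass

@dataclass
class ReleaseDefects:
    release: str
    defect_count: int         # defects reported from the field
    handling_cost: float      # money spent triaging, patching, supporting
    pct_customers_hit: float  # fraction of customers affected, 0.0 - 1.0

def field_defect_cost(r, cost_per_defect=500.0, customer_weight=10_000.0):
    # Hypothetical roll-up: convert defect count and customer impact into
    # cost-like units so the three inputs can be summed into one number.
    return (r.defect_count * cost_per_defect
            + r.handling_cost
            + r.pct_customers_hit * customer_weight)

releases = [
    ReleaseDefects("3.1", 42, 30_000.0, 0.12),
    ReleaseDefects("3.2", 35, 22_000.0, 0.08),
    ReleaseDefects("3.3", 28, 15_000.0, 0.05),
]

costs = [field_defect_cost(r) for r in releases]
trend = "down (keep doing what we're doing)" if costs[-1] < costs[0] else "up (we've got a problem)"
print(list(zip([r.release for r in releases], costs)), "->", trend)
```

Again, the point isn't the formula; it's that the direction of the number over several releases is the information management actually needs.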
Are these metrics that can be gamed? Sure - see the above assumption that any metric can be gamed. Do they have risks? Sure - and so does everything we do. If we want to stop hiding behind numbers, then let's do it. But let's recognize what management needs - information so that they can make the best decision they know how to - and let's figure out how to distill our knowledge, thoughts, and opinions down for them.
We're communicators, and a metric is an opportunity for communication.
Use your metrics as a distillation technique; that's all they are - a sound bite. It's lossy, but it's still communication. Embrace the chance to provide information.