Other Guy: "So what are your performance measurements for this?"
Me: "Like speed? We'll take our usual performance suite."
Other Guy: "No, I mean for the release. Like bugs found. What else?"
Me: "Oh! Well, what are you trying to measure? What are you learning from this?"
Other Guy: "Umm...the metrics for the release? How many bugs got found, how many bugs got fixed."
The problem here wasn't that the other guy in the conversation didn't understand how to measure the success of a release. The problem was that we were talking about how to get the information he needed rather than what information he actually needed.
In this case, he wanted to know what the expectation of quality was for the software, something that would help his group answer the ultimate question: "Is this software ready for release?" Now THAT we can work with. And we came up with a way to measure it, based on test plan completion percentage, a risk-based weighting of the open blocker bugs referenced against the fix rate, and the trend of the find rate.
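To make that concrete, here's a minimal sketch of how those three signals could be combined into a single readiness score. The weights, thresholds, and function names are all illustrative assumptions, not what our team actually used:

```python
# Hypothetical sketch of the release-readiness measure described above.
# All weights and scaling choices here are illustrative assumptions.

def readiness_score(plan_completion, blocker_weights, fix_rate, find_rates):
    """Combine three signals into a single 0.0-1.0 readiness score.

    plan_completion: fraction of the test plan executed (0.0-1.0)
    blocker_weights: risk weight for each open blocker bug (e.g. 1-3)
    fix_rate: bugs fixed per day
    find_rates: recent daily bug-find counts, oldest first
    """
    # Risk-weighted blocker load referenced to the fix rate: roughly how
    # many days of fixing remain if nothing new is found.
    weighted_open = sum(blocker_weights)
    days_to_burn_down = weighted_open / fix_rate if fix_rate else float("inf")
    burn_down_signal = 1.0 / (1.0 + days_to_burn_down)

    # Find-rate trend: a declining find rate suggests stabilization.
    if len(find_rates) >= 2 and find_rates[0] > 0:
        trend = (find_rates[0] - find_rates[-1]) / find_rates[0]
    else:
        trend = 0.0
    trend_signal = max(0.0, min(1.0, 0.5 + trend / 2))

    # Equal weighting of the three signals -- an assumption for illustration.
    return (plan_completion + burn_down_signal + trend_signal) / 3

score = readiness_score(
    plan_completion=0.9,
    blocker_weights=[3, 2],   # two open blockers, risk weights 3 and 2
    fix_rate=2.5,             # bugs fixed per day
    find_rates=[8, 6, 5, 3],  # daily finds, trending down
)
```

The point isn't this particular formula; it's that each input exists only because it answers part of the question "is this software ready for release?"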
Metrics is a particularly fraught area of testing (really, of everything). It's fairly easy to generate data. It's a lot harder to generate useful data. This is one case where you need to work backward from the result. Figure out what you want to know, then figure out how to measure it.
Identify the what, then identify the how. It works for metrics, too.
Disclaimer: Yes, I know WHAT versus HOW is often a theme of this blog, but it's important, and in practice it's not always clear where the line between what and how really falls.