I'll pick sets of bugs all opened within a given period (a week, say, or a month) and just look at what their resolutions happened to be. I'll start with the obvious: x% were fixed, y% were marked unreproducible, and so on. Then I look at the specific resolutions - the comments in the tickets themselves.
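That first pass is simple enough to script. A minimal sketch, assuming the tracker can export the batch as a list of records with a `resolution` field (the field name and the sample data here are hypothetical):

```python
from collections import Counter

# Hypothetical export: one dict per ticket, each with a "resolution" field.
tickets = [
    {"id": 101, "resolution": "Fixed"},
    {"id": 102, "resolution": "Unreproducible"},
    {"id": 103, "resolution": "Fixed"},
    {"id": 104, "resolution": "Won't Fix"},
]

# Tally resolutions and print the percentage breakdown.
counts = Counter(t["resolution"] for t in tickets)
total = len(tickets)
for resolution, n in counts.most_common():
    print(f"{resolution}: {n / total:.0%}")
```

With the sample data above this prints `Fixed: 50%` first, then the two 25% buckets. The interesting work starts after this breakdown, in the comments themselves.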
So what am I looking for?
- ssh failures (timeouts, failures to connect, etc.)
- mount/umount failures and other OS issues
- tests that were fixed by extending their timeout periods
- fixes in test assumptions or infrastructure rather than in code
- bugs in certain layers of the code across different tests; these are sometimes not obvious when looking at the functional areas that the tests assert on directly
- numbers of bugs marked "unreproducible"
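Scanning for these patterns by hand works, but a rough keyword pass over the resolution comments can surface the clusters faster. A sketch, assuming the comments are available as plain strings; the patterns and labels here are illustrative, not a fixed taxonomy:

```python
import re
from collections import Counter

# Illustrative patterns for the categories above (hypothetical - tune
# these to whatever phrasing your team actually uses in resolutions).
PATTERNS = {
    "ssh failure": r"ssh|connection (timed out|refused)",
    "mount issue": r"\b(u?mount)\b",
    "timeout extended": r"(increas|extend|bump).*timeout",
    "test-only fix": r"fixed (the )?test\b",
}

# Hypothetical resolution comments from one batch of closed tickets.
comments = [
    "Could not reproduce; ssh connection timed out during setup.",
    "Fixed the test - it assumed the mount was already present.",
    "Resolved by extending the timeout from 30s to 120s.",
]

# Count how often each pattern fires across the batch.
hits = Counter()
for comment in comments:
    for label, pattern in PATTERNS.items():
        if re.search(pattern, comment, re.IGNORECASE):
            hits[label] += 1

for label, n in hits.most_common():
    print(f"{label}: {n}")
```

A single hit means nothing; a category that dominates a batch is what you're looking for.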
No one ticket with any of these resolution comments would raise a red flag on its own, but if you get a cluster of them, well, maybe there's something going on there. The increased timeouts may mean that the product is getting slower. SSH failures can be a sign of an overloaded network. Bugs in specific layers of the code may indicate an underlying weakness that's being masked by the layers above (or below) it. Not always, but sometimes I find that this is where the deep bugs are.
It feels kind of silly sometimes to simply scroll through resolved bugs, but that second-level analysis shows patterns that a first-level analysis simply misses. So I keep scrolling....
* Am I the only one who does this?