Thursday, January 14, 2010

Ticket Clutter

We use a ticketing system (Jira) that displays ticket information in a pretty common format. Basically, it looks like this:

Ticket Summary: Bug in widget
Description: When the frobble in the widget grows to over 32 MB, the widget starts to wobble and it all falls down.
Comment 1: Failed in nightly on 2010-01-05
stack trace here
Comment 2: Failed in nightly on 2010-01-06
stack trace here
Same failure - maybe it's consistent?
Comment 3: (Assigned to Mr. Ed)
Comment 4: I can reproduce this 80% of the time if I grow the frobble over 40MB, but only 50% of the time at 33MB.
Comment 5: Failed in nightly on 2010-01-07
stack trace here
Comment 6: Failed in nightly on 2010-01-08
stack trace here


Jira is of course not the only system that uses this basic format; many systems do. For the most part it makes sense: you get a basic understanding of what's going on at the top, then a running commentary of what happened and what the people working on the bug are thinking and doing. It gets awfully cluttered, though. If a bug is around for a while, or if it causes a lot of test failures, you can end up with pages and pages of comments that basically amount to "yup, failed again", interspersed with comments from humans who are actually working on the issue rather than simply recording that it's still an issue.

So how do we handle this?
  • Combine comments where possible. In particular, the "still happening" comments all basically say the same thing, so they can be combined. If, for example, the same problem takes out 5 tests in the automated suite in one night, I'll make that one comment. That helps some, once you're sure they're the same problem. (See the first sketch after this list.)
  • Defer the test. If the test is failing consistently and not providing more information, I'll defer it until the bug is fixed. It's a little unnerving to decide not to run a test, but if it's not telling you anything new, it's okay to skip it until something changes (probably an attempted fix or some additional debugging code going in). (See the second sketch below.)
  • Filter comments. I haven't actually seen a good defect tracking system that does this, but I'd love it if I could look at a ticket and filter the comments to show only those logged by humans, or by a particular person. That way I could skip the "still happening" status updates (all logged by our automated system) and look only at the actual work being done. (The third sketch below approximates this client-side.)
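
For the first item, here's a minimal sketch of how an automated harness might combine its nightly failures: group them by the ticket they hit and build one comment per ticket. The failure-record fields and the post_comment helper are hypothetical; they stand in for whatever your test results and tracker API actually look like. In Python:

    from collections import defaultdict

    # One night's failure records. In practice these would come out of the
    # automated suite's results; the field names here are made up.
    failures = [
        {"test": "test_frobble_growth", "ticket": "WID-123", "trace": "Traceback ..."},
        {"test": "test_widget_wobble",  "ticket": "WID-123", "trace": "Traceback ..."},
        {"test": "test_frobble_reset",  "ticket": "WID-456", "trace": "Traceback ..."},
    ]

    # Group failures by ticket so each ticket gets a single "failed in
    # nightly" comment instead of one comment per failing test.
    by_ticket = defaultdict(list)
    for failure in failures:
        by_ticket[failure["ticket"]].append(failure)

    for ticket, fails in by_ticket.items():
        tests = ", ".join(f["test"] for f in fails)
        traces = "\n\n".join(f["trace"] for f in fails)
        body = f"Failed in nightly on 2010-01-08: {tests}\n\n{traces}"
        # post_comment(ticket, body)  # hypothetical helper for your tracker
        print(f"--- {ticket} ---\n{body}\n")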
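
For deferring, the mechanics depend on your test framework. Assuming Python's unittest (2.7 or later, which has skip decorators), it can be as simple as a skip with the ticket key in the reason, so the deferral stays visible and searchable. WID-123 is a made-up key:

    import unittest

    class WidgetTests(unittest.TestCase):

        # Deferred: fails every night with the same stack trace and is
        # telling us nothing new. Re-enable once a fix attempt or extra
        # debugging code goes in for the ticket.
        @unittest.skip("deferred until WID-123 (frobble wobble) is fixed")
        def test_frobble_over_32mb(self):
            self.fail("widget wobbled")

    if __name__ == "__main__":
        unittest.main()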
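
And while the tracker itself doesn't offer comment filtering, you can approximate it client-side. This sketch assumes a Jira Server version that exposes the REST v2 comment endpoint (newer than what we were running at the time) and that comment authors carry a name field; the URL, account names, and credentials are all made up:

    import requests

    JIRA_URL = "https://jira.example.com"    # hypothetical server
    AUTOMATION_USER = "nightly-bot"          # account the harness comments under

    def human_comments(issue_key, session):
        """Fetch an issue's comments and drop the automated status updates."""
        resp = session.get(f"{JIRA_URL}/rest/api/2/issue/{issue_key}/comment")
        resp.raise_for_status()
        return [
            c for c in resp.json()["comments"]
            if c["author"]["name"] != AUTOMATION_USER
        ]

    session = requests.Session()
    session.auth = ("me", "secret")
    for comment in human_comments("WID-123", session):
        print(comment["author"]["name"], "-", comment["body"][:72])

The same filter flipped around would show only a particular person's comments, which covers the other half of the wish.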

How do you handle ticket clutter?
