That's interesting. After all, where I work, QA looks at the output of the nightly automated tests and logs bugs (or updates them). But why? What does this exercise tell us?
I want to look at nightly results to:
- identify regressions so they can be fixed
- see how well new features (and their tests) work in the overall context of the system. Ever seen a check-in to module A completely, and accidentally, break module C? Often the automated tests show it.
- catch race conditions or other infrequent bugs, which are more likely to surface the more often I run a test that has a chance of exposing them
- look at coverage reports to see what hasn't been covered at all
- find areas that I should be exploring more. If, say, the upgrade tests break frequently, I know that area is probably fairly brittle, so I go explore that part of the code in my own testing.
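The third point above has a simple numerical intuition behind it. As a sketch (under the assumption that each nightly run is independent and that a hypothetical race condition manifests with some fixed probability `p` per run, numbers chosen purely for illustration):

```python
def detection_probability(p: float, runs: int) -> float:
    """Probability that at least one of `runs` independent test runs
    exposes a bug that manifests with probability `p` per run."""
    return 1.0 - (1.0 - p) ** runs

# A race that shows up only 2% of the time is easy to miss in a single
# run, but the odds compound over repeated nightly runs.
for n in (1, 7, 30, 90):
    pct = detection_probability(0.02, n)
    print(f"{n:3d} nightly runs -> {pct:.1%} chance of catching it")
```

This is why a bug that never shows up in a developer's pre-commit run can still turn up reliably in the nightly results over a few weeks.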
All of those, except the last one, are things that developers or testers could do. But that last one - guidance as to what I should be looking at more deeply - that I'm not willing to give up. I get information out of looking, and that's worth the 15 minutes a day I spend on the task. Of course, development is welcome to look, too! More eyes will see more information.