That's interesting. After all, where I work, QA looks at the output of the nightly automated tests and logs bugs (or updates them). But why? What does this exercise tell us?
I want to look at nightly results to:
- identify regressions so they can be fixed
- see how well new features (and their tests) work in the overall context of the system. Ever seen a check-in to module A completely - and accidentally - break module C? Often the automated tests show it.
- catch race conditions and other infrequent bugs, which are more likely to surface the more often I run a test that has a chance to expose them
- look at coverage reports to see what hasn't been covered at all
- find areas that I should be exploring more. If, say, the upgrade tests break frequently, then I know they're probably fairly brittle, so I go explore that area of the code in my own testing.
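That last point, spotting brittle areas from how often their tests fail across nightly runs, can be sketched as a tiny triage script. The results format and the `brittle_suites` helper here are hypothetical, just one way to tally failures per suite over several nights:

```python
from collections import Counter

def brittle_suites(nightly_runs, threshold=3):
    """Count how often each suite failed across nightly runs and return
    those failing at least `threshold` times -- candidates for deeper
    exploratory testing, not automatic bug reports."""
    failures = Counter()
    for run in nightly_runs:
        for suite, passed in run.items():
            if not passed:
                failures[suite] += 1
    return {suite: n for suite, n in failures.items() if n >= threshold}

# Five hypothetical nights of results: suite name -> did it pass?
runs = [
    {"install": True,  "upgrade": False, "search": True},
    {"install": True,  "upgrade": False, "search": True},
    {"install": True,  "upgrade": False, "search": False},
    {"install": False, "upgrade": False, "search": True},
    {"install": True,  "upgrade": True,  "search": True},
]

print(brittle_suites(runs))  # upgrade failed 4 of 5 nights
```

The output is a pointer, not a verdict: a suite that fails four nights out of five is where a human should go look.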
All of those, except the last one, are things that could be done by developers or testers. But that last one - guidance as to what I should be looking at more deeply - that I'm not willing to give up. I get information out of looking, and that's worth the 15 minutes a day I spend on the task. Of course, development is welcome to look, too! More eyes will see more information.
"Automated Results"? Why capitals? Did someone automate *results*? If it's about automated execution, isn't it "check execution results" or "check execution log"?
Capitalized because I usually put titles in title case. Don't look for a conspiracy where there isn't one. :)
Well put.
In an article I wrote for searchsoftwarequality.com I once said:
"Whenever a UI test fails, that means that something in the application has changed in some unexpected way. And where there is an unexpected change, there are often problems to be found. Remember, UI tests are shallow things. They are merely indicators that something in the software may not be correct. It takes human beings to evaluate the extent of any potential problem."
"Responsible agile teams will treat a failing UI test as a sign that some exploratory testing is needed in that area of the application."
(Also, I'm with you on the conspiracy business.)