During an interview, a candidate was describing how the test automation she had worked on performed user actions against a file store. The tests, as described, were quite extensive, and she was confident they hit all possible combinations.
Interviewer: "How did you find that your tests mapped to what clients actually did? Did the tests find most of the issues?"
Candidate: "Well, duh!"
(It should be noted that the interview took a major turn for the worse with those two words.)
Interviewer: "So, your clients didn't report issues that you hadn't found with your tests?"
Candidate: "They did. But we couldn't reproduce them, so we know our tests covered everything."
Now, confidence I've seen before, but that level of arrogance was unusual. And by the candidate's own account, the arrogance was certainly not justified. Being unable to reproduce issues doesn't show that your tests are good. On the contrary, it shows that you're missing something (maybe a bug, maybe not) and that you haven't yet built a test environment that truly matches what the customer does.
We will not be hiring this candidate.