I always test the system as a user would see it.
Automated tests are great, but your users aren't automatons. They're humans, or other systems, and as such they have their vagaries. Your job is to see a feature as your customer (human or system) will see it. If you don't try it at least once as a user would, how do you know what they'll really experience?
I use these "user proxy" tests to look for:
- Usability issues. Is the time to accomplish a task just too long? Do I find that I'm clicking through to another screen a lot for information? Is three clicks really too many for this common action?
- Perceived performance issues. What does performance feel like to the user? Sure, it may start rendering in 2 seconds, but if it takes another 20 seconds to fully render in my browser on my computer, is that really okay? Note that perceived performance may differ from what my performance measurements report.
- Context. Does this feature make sense with all the other features? Does it hang together when it's being used, no matter how good the screenshot is? Can I get at the feature where I expect to?
- Inconsistencies. Does this feature feel like an integral part of the system? Does it use the same UI metaphors? Do its messages, no matter how correct, match up with the messages other parts of the system display?
I'm certainly not advocating avoiding test automation. I'm simply advocating living with a feature for a while. Just as you don't really know a house until you've lived in it for a bit, you don't really know a feature until you've used it for a bit.
When I first wrote this entry, I framed it as "manual testing" versus "automated testing", but I believe those terms are imprecise. It's more "testing as an end user" versus "suites (scripts and other code) that test the system". I haven't come up with really good shorthand terms for the distinction.