Much of the testing we do is to provide information about the product that is targeted for engineering or others involved directly in releasing and supporting software. We gather information about unexpected states (bugs) to help developers resolve issues. We provide deployment information to help services build successful production configurations. We help do root cause analysis on field problems and feed that into support for workarounds (and developers for fixes). For all of these things we're trying to come up with scenarios that might occur in the real world. They may be extreme or rare, but our goal is to provide information that is predictive of how the software might behave in the field.
Sometimes we're testing for a different purpose, though. Sometimes we're looking for tests that help marketing or sales; these are what I call "testing for marketing." These are tests you run not just because they're valid but also because you need them to help you look good. For example, marketing may be producing a white paper about your system's performance. They want the fastest performance numbers that are reasonably possible. This doesn't mean you should fake anything; it means you need to build your tests appropriately. You may use data that is "real" (e.g., non-generated) so that it's understandable to an analyst when you explain it. For example, "We get 60 MB/s on real office data; we took a couple of backups from our file server" sounds better than "We get 60 MB/s on data that we created." (This is true even if you created data with real-world characteristics.)
As you're creating your tests and your test data, remember all your audiences, and you'll be better able to provide usable information.