This is one of the benefits of a good test automation stack: it becomes fairly easy to run lots of variations on the same thing, and that can teach you a lot about how system behavior changes as various things flex. For example, I recently took a performance test we run and tweaked the setup so it ran with different memory allocations - and before long I had a much better understanding of how system throughput changes as that particular resource is added or constrained.
This isn't the kind of test I need to run frequently. I run it every once in a while to make sure the system's behavior in this area hasn't changed, but it's overkill to run it every single night alongside the rest of our automation.
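As a sketch of what that kind of parameter sweep can look like (the workload, budgets, and function names here are all made up for illustration - the real test would drive the actual system under different memory allocations):

```python
import time

# Hypothetical workload: build and discard lists whose size is capped by a
# per-run "memory budget". In a real test, this would be the system under
# test, started with a constrained memory allocation.
def run_workload(memory_budget_items):
    start = time.perf_counter()
    processed = 0
    deadline = start + 0.05  # short, fixed measurement window
    while time.perf_counter() < deadline:
        buffer = list(range(memory_budget_items))  # bounded by the budget
        processed += len(buffer)
    return processed / (time.perf_counter() - start)  # items/sec

# Sweep the same test across several memory budgets and record throughput,
# so results are directly comparable from one run to the next.
results = {budget: run_workload(budget) for budget in (1_000, 10_000, 100_000)}
for budget, throughput in sorted(results.items()):
    print(f"budget={budget:>7}: {throughput:,.0f} items/sec")
```

The value is in keeping everything but the one varied parameter fixed: any change in throughput between budgets can then be attributed to that resource.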
So here's the dilemma: Do I check in the test with my modified configuration?
Pros to checking in:
- Next time I run it, I'll be sure I have the same setup, so my results are comparable
- Someone else could run the same test and gather their own information from it (or a variation on it)
Cons to checking in:
- Checked-in code that doesn't run is likely to get stale and stop working as other code changes around it. This goes for test code, too!
- Someone might try to run it more frequently, and that will tie up a lot of machines for very little benefit
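One middle ground that blunts the second con is to check the test in but gate it behind an explicit opt-in, so a default nightly run skips it. A minimal sketch, assuming a made-up `RUN_RESOURCE_SWEEPS` environment flag:

```python
import os

# Hypothetical opt-in flag: a nightly run leaves RUN_RESOURCE_SWEEPS unset,
# so the checked-in test stays in the repo (and stays buildable) without
# tying up machines every night.
def should_run_sweep(environ=os.environ):
    return environ.get("RUN_RESOURCE_SWEEPS") == "1"

def test_memory_sweep(environ=os.environ):
    if not should_run_sweep(environ):
        return "skipped"  # nightly default: skip the expensive sweep
    # ... the expensive memory-allocation sweep would go here ...
    return "ran"
```

This doesn't solve the staleness problem by itself - skipped code can still rot - but it keeps the setup versioned and discoverable while making the cost an explicit choice.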
There's no particular right answer here. Which one you choose will depend on how hard the setup was, how much consistency matters, how likely the test really is to stop working, and how frequently you do want to run it. Pick the one that works for you; just make sure it's a choice you make, not something you fall into.