Consistency is a hallmark of QA. Consistent behaviors are comforting, consistent results are good, and consistent steps to reproduce a problem so we can understand it are the goal of most bug reports.
But how much consistency is too much?
Consistent production environments are generally common. Consistent test environments are also common. Consistent development environments are often a goal. Consistent requirements-generation environments are... well, funny, that doesn't usually come up!
So let's walk through the development cycle (backwards). Assuming an enterprise deployment (not a consumer application):
- Consistency in Production Environments: GOOD. Assuming you control this, more consistency is useful; the only changes you want are those you control, and they usually fall into the configuration category.
- Consistency in Test Environments: GOOD. These should match production with certain known exceptions (usually test tools like Wireshark).
- Consistency in Development Environments: LIMITED. Good developers have a setup that works for them. Within limits it shouldn't matter if one person likes Eclipse and another strongly prefers TextMate. Consistency enforced here is generally for common tools (source control, unit test frameworks), consistent product (matching code styles) and for ease of working together. Beyond that, the product is more important than the environment in which it's created.
- Consistency in Requirements Creation Environments: NO SUCH LUCK. This is one of those things that doesn't get asked about. As long as the product (spec, story, PRD, whatever) is consistent and correct, who cares how it gets there?
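The development-environment bullet above draws a line: enforce consistency on the shared product, not on each person's tools. One common way to do that for code style is a checked-in, editor-agnostic config such as EditorConfig, which most editors (including Eclipse, and TextMate via plugin) can read. A minimal sketch, with placeholder style values rather than recommendations:

```ini
# .editorconfig -- illustrative sketch; the values are placeholders.
# Lives in the repo root, so style travels with the code,
# not with any one developer's editor setup.
root = true

[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true

[*.java]
indent_style = space
indent_size = 4
```

This keeps consistency where it matters (the code everyone checks in) while leaving each developer's choice of editor alone.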
Be consistent enough to make your life easy, but not so much that you make your life hard.