Generally, when someone asks us to run a test, we hear the words "we should".
"We should measure performance across 15 nodes."
"We should try multiple Active Directory servers."
"We should put faster processors in this node and see what happens."
What is it going to tell us?
If you can't describe what you're going to get out of a test, you haven't really defined it. This doesn't have to be complex. It could be as simple as "We should put faster processors in this node and see if our ingest performance improves in single-client or multi-client scenarios". Other times it may be more specific to a potential customer need, to a direction marketing wants to take the product, or to some external requirement (a regulation or the like).
In any case, make sure you know what you're going to attempt to show. That will tell you what to measure and what to look at.
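One way to picture this is a minimal sketch (hypothetical structure, not from the original) of a test definition that refuses to count as "defined" until it names both a hypothesis and the metrics to measure:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: capture a test's purpose before running it.
@dataclass
class TestDefinition:
    change: str                                  # what we're varying
    hypothesis: str                              # what we expect it to show
    metrics: list = field(default_factory=list)  # what to measure

    def is_defined(self) -> bool:
        # A test is "defined" only if we know what it should tell us
        # and what we'd look at to find out.
        return bool(self.hypothesis) and bool(self.metrics)

# "We should put faster processors in this node" -- but to show what?
vague = TestDefinition(change="faster processors in this node",
                       hypothesis="", metrics=[])

# The same idea, with the expected outcome and measurements spelled out.
concrete = TestDefinition(
    change="faster processors in this node",
    hypothesis="ingest performance improves",
    metrics=["single-client ingest rate", "multi-client ingest rate"],
)

print(vague.is_defined())     # False
print(concrete.is_defined())  # True
```

The names here are illustrative; the point is only that writing the hypothesis and metrics down, in whatever form, is the difference between a test and a vague "we should".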
Running tests is great, and having other people suggest tests is a good way to broaden your coverage beyond what you'd thought of yourself. Just make sure you all agree on what you expect to get out of each one, or you may be wasting everyone's time.