All of these things would be interesting. For most of those things, you can design a test to show you the answer, too. That doesn't mean we're going to run the test.
For it to be worth running, a test has to be both interesting and useful. Knowledge for the sake of knowledge is great, but ultimately that's not what we're here for. We're here to ship software and to help make good decisions. Our tests need to further those goals, not just provide data.
When you come up with an interesting test idea, before you design and run the test, ask yourself, "What am I going to do with the results?" Until you can answer that, you're not ready to run the test. If you can't describe what you'll do with the outcome, you likely don't know what kind of outcome you're looking for. And if you can't describe it, you're not likely to recognize it when the test runs. You may measure widgets fitting into frobbles larger than you'd ever actually ship, or in a configuration that your customers don't have. You may wind up measuring performance on a connection that your customers would never even consider using.
Identifying the usefulness of your test results will help you define a more precise test that is still interesting. Interesting is great. Useful is even better. Interesting and useful? Now you have a test worth doing.