And how do we pick those tests? By the quantity of bugs found, and by the worth of the bugs found.
Quantity of bugs found is fairly straightforward. If a set of tests flushes out a lot of your bugs, running those early helps. The earlier you get those bugs in front of development (and management), the earlier you can react to them. So test the target-rich areas of the application first.
Worth of bugs is a bit more complex. Some bugs are simply things that your customer doesn't care about, or would never see. Other bugs are "kill your business" kinds of issues - crashes, data loss, that kind of thing. We want to find high-worth bugs - those that matter most to your customer - early. So run the tests that produce high-worth bugs first.
Reconciling the two priorities is always interesting. Before every release, we have to validate which tests and techniques in our arsenal are likely to produce many bugs, which will produce high-worth bugs, and which will produce both. This changes from release to release. If an area of the product doesn't change between releases, a technique that found lots of bugs there in the previous release is unlikely to find lots of bugs there again. So we go hunting in another area of the product, or with another technique. If there's one lesson, it's this:
Repetition is unlikely to produce equally valuable results. To maintain test effectiveness, change either the software under test or the technique used.
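The quantity-versus-worth reconciliation can be sketched as a simple scoring heuristic. This is a hypothetical illustration, not a real tool: the technique names, estimates, and the 0.25 discount for unchanged areas are all invented for the example.

```python
# Hypothetical sketch: ranking test techniques for a release by expected
# bug quantity and bug worth. All names and weights are illustrative.
from dataclasses import dataclass

@dataclass
class Technique:
    name: str
    expected_bug_count: int   # rough estimate from past releases
    expected_bug_worth: int   # 1 (cosmetic) .. 10 (kill-your-business)
    area_changed: bool        # did the target area change this release?

def priority(t: Technique) -> float:
    score = float(t.expected_bug_count * t.expected_bug_worth)
    # Repetition against unchanged code is unlikely to pay off again,
    # so heavily discount techniques aimed at unchanged areas.
    return score if t.area_changed else score * 0.25

techniques = [
    Technique("boundary values on order form", 12, 4, area_changed=False),
    Technique("fault injection on importer", 5, 9, area_changed=True),
    Technique("exploratory pass on new reports", 8, 6, area_changed=True),
]

# Run the highest-scoring techniques first.
for t in sorted(techniques, key=priority, reverse=True):
    print(f"{priority(t):6.1f}  {t.name}")
```

The exact weights matter far less than the shape of the idea: expected count times expected worth, discounted when you'd be re-treading unchanged ground.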
For example, if I ran boundary value analysis against a form in the last release, and that form hasn't changed in this release, I won't reach for that technique first. I'll attack that particular form with a different technique. I may return to boundary value analysis later, but I'm not likely to find as many bugs, or bugs as valuable, with it.
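For readers unfamiliar with the technique, boundary value analysis probes the values at and just beyond a field's limits, where off-by-one mistakes cluster. A minimal sketch, assuming a hypothetical form field that accepts quantities from 1 to 100 inclusive (`validate_quantity` is a stand-in for whatever the form actually does):

```python
# Hypothetical validator for a quantity field accepting 1..100 inclusive.
def validate_quantity(qty: int) -> bool:
    return 1 <= qty <= 100

# Classic boundary value analysis: test at each boundary and one step
# on either side of it, rather than sampling the middle of the range.
boundary_cases = {
    0: False,    # just below the lower bound
    1: True,     # the lower bound itself
    2: True,     # just above the lower bound
    99: True,    # just below the upper bound
    100: True,   # the upper bound itself
    101: False,  # just beyond the upper bound
}

for value, expected in boundary_cases.items():
    assert validate_quantity(value) == expected, f"failed at {value}"
print("all boundary cases pass")
```

Six targeted values cover the spots where an accidental `<` instead of `<=` would hide, which is why the technique is so productive the first time through - and why it pays off less on a form that hasn't changed since.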
Don't solve the same problems with the same methods every time. Vary your problems and vary your methods, and you'll find things you never expected.