Let's break it down:
Usually for us, sales or support is asking. Sometimes sales has a client who wants to use a program and wants a guarantee it will work. Other times sales has a client who hasn't picked a client program yet and wants to know what we recommend. Alternatively, support might have a client who's attempting to use a program and finds something they don't like about it (it doesn't work, it works too slowly, etc.).
Let's say you're not in a storage company (some of us aren't!). The client program could be a browser, if you build a web app. It could be a reporting or monitoring tool (anyone else ever had a client ask to point Crystal Reports directly at your database?).
Either way, someone's now looking for a guarantee that a client program will work with our software.
I'm usually afraid of the word "guarantee". You can do all the testing in the world, and a new patch of the client program will come out and break things in a truly spectacular manner. Or the customer will use an obscure undocumented flag you didn't test and... kablooey! (tm Calvin and Hobbes). Or the client will install it on some totally unsupported hardware and scratch their head when it doesn't work. "Guarantee" is a very strong word that means "it's totally my problem to fix".
I usually get around this by saying, "here's what we've tested" rather than "we guarantee this".
There are a number of different things you can do and call it certification.
- The standards approach. This is where you point to some external standard and say, "we conform to this. Any client that works with this will work with us." By external standard, you should make sure you choose a public standard: NFS v3, or W3C compliance, or whatever's appropriate for you. In this case, you don't actually have to test the client. However, you'd better be darn sure you conform to the standard, or this one may eventually bite you.
- The "we test this" approach. This is where you offer up the version and configuration you test, and you say that has been tested and will work. Any deviation from that configuration or version may work but isn't guaranteed.
- The "certification program" approach. This is where you turn it around on the client application, and offer a certification program. The idea is that they conform to you, rather than the other way around. You offer a set of criteria, test systems (or a lab for people to come test in), possibly scripts and reporting mechanisms, and you let people run your tests. Then you analyze their results, and either put your stamp of approval on or not (think "runs on Vista", etc). If you're large enough and important enough, people will do the compatibility testing for you. This doesn't work so well if you're kind of a tiny nobody in your industry. I've not done this one personally.
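A certification program like the one above is, at its core, a published set of pass/fail criteria plus a way to collect results. A minimal sketch of that idea, where all the names (the criteria, the check functions, the `client_info` fields) are hypothetical stand-ins for whatever your real product interfaces would exercise:

```python
# Hypothetical self-service certification harness: the vendor of a client
# program runs these checks and submits the report for your approval.

def check_protocol_version(client_info):
    # Only certify clients speaking a protocol version we support.
    return client_info.get("protocol_version") in ("3.0", "3.1")

def check_connection_limit(client_info):
    # Only certify clients that stay within our tested connection limit.
    return client_info.get("max_connections", 0) <= 128

CRITERIA = [
    ("protocol version", check_protocol_version),
    ("connection limit", check_connection_limit),
]

def certify(client_info):
    """Run every criterion; return (certified, per-criterion report)."""
    report = {name: check(client_info) for name, check in CRITERIA}
    return all(report.values()), report

certified, report = certify(
    {"protocol_version": "3.1", "max_connections": 64}
)
```

The point of the data-driven criteria list is that the stamp of approval is mechanical: either every criterion passed or it didn't, which keeps the "analyze their results" step cheap even at volume.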
So What to Do?
In the end what you do is driven by how much team, time and sensitivity you have. The real goal here is customer (or potential customer) comfort. So you do what you have to do to achieve that customer comfort, within the bounds of the worth of that customer.
My first approach generally is to do a test for that client. If this is an important client, we can get from them (or create, if they don't know) a configuration that will work for their situation. Then we test this (and retest it on new versions of our code). It's client-specific, but it gives us the comfort to go to the client and say, "follow our advice and this will work."
My next approach, if this starts to get to be too much volume, is to publish a "known good" configuration (including version) of the client software. We test that client config with every release we do. We tell clients what works, and then let them experiment from there, if they need to.
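One way to make the "known good" configuration concrete is to publish it as data and compare incoming client configurations against it, so both your release tests and your support staff check against the same baseline. A hypothetical sketch (the client name, version, and options are all invented):

```python
# The published "known good" client configuration -- the one we retest
# with every release. Values here are illustrative, not a real product.
KNOWN_GOOD = {
    "client": "AcmeBackup",
    "version": "4.2.1",
    "options": {"compression": "on", "threads": 4},
}

def matches_known_good(config):
    """True only if a client's config exactly matches the baseline."""
    return (config.get("client") == KNOWN_GOOD["client"]
            and config.get("version") == KNOWN_GOOD["version"]
            and config.get("options") == KNOWN_GOOD["options"])
```

Anything that fails the exact match falls into the "may work but isn't guaranteed" bucket, which is precisely the distinction this approach is meant to draw.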
These two approaches have gotten me far enough, so far. In the end, there's no substitute for trying it at the customer, but short of that, you can give them comfort. And all "certification" really means, at least in this sense, is comfort.