Monday, October 5, 2009

Always a Requirement

When you ask a customer or potential customer what their requirements are for your system, they've generally got a list. It needs to support 50 concurrent client connections. It needs to not lose their data (hey, we are a storage company!). It needs to support ingest and reads from their client application of choice. And so on.

But that's not all. That's just the ones they've thought about.

Most sales teams will poke at this a little bit. They'll go through the potential system configuration with the customer and identify some more requirements, like required replication, or limits on the amount of power draw. They will also likely identify some areas where the customer doesn't feel there are requirements. In particular, it's not uncommon to hear the phrase: "There are no performance requirements."

This is a trap.

Even when there are no performance requirements, there is always a performance requirement.

The customer has expectations about the performance of any system. They may not expect it to be fast, but they expect something. To be fair, the customer probably doesn't actually know what their performance requirement is. But that doesn't absolve you of responsibility. When it takes 2 minutes to open a file off the system, that's going to be a problem. The customer probably doesn't expect 2 seconds, but 2 minutes... nope. Same thing goes for ingest, replication, page load times: pick your relevant metric.

So now that you know you have a performance requirement, your job is to suss out what that requirement is. You have to back into the customer's performance needs. To do this, look at two things:
  • Any mention of times in the project. Consider backup windows, data flows that include your system (e.g., data lives on primary for 30 days, then on your backup product for 60, then on tape forever; that means you have to take in one day of generated data every day, because that's what comes off primary to keep primary at 30 days), the speed of the current solution (if any), any number you can get. From these, you can calculate performance requirements: the number of messages passed over an interface divided by the hours available for batch feed processing equals the required messages per hour for you. (A rough sketch of this arithmetic follows the list.)
  • Find comparables. Most things, particularly performance, are relative. If a customer is opening a file off your system, look at how fast the customer can open a file off another network drive. That's your implied performance requirement. If a customer can open his web-based EMR page in 3 seconds, then your web-based scheduling system probably ought to load pages in about 3 seconds. You can be off from comparables a bit, but you want to be pretty close to keep your customer happy.
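
To make the first bullet concrete, here's a quick sketch of that arithmetic. Every number in it is made up; the variables are just placeholders for whatever figures you manage to pull out of the customer.

    # Backing into implied performance requirements from numbers the customer
    # already has. All figures below are hypothetical placeholders.

    daily_data_generated_tb = 2.0   # TB/day the customer's primary system produces
    backup_window_hours = 8         # hours per night available for ingest

    # Keeping primary at 30 days of retention means one day of generated data
    # has to land on your system every day, inside the backup window.
    required_ingest_tb_per_hour = daily_data_generated_tb / backup_window_hours
    print(f"Implied ingest requirement: {required_ingest_tb_per_hour:.2f} TB/hour")

    messages_per_batch_feed = 1_200_000  # messages crossing the interface per feed
    batch_processing_hours = 4           # hours available for batch feed processing

    required_messages_per_hour = messages_per_batch_feed / batch_processing_hours
    print(f"Implied throughput requirement: {required_messages_per_hour:,.0f} messages/hour")
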
I'll note that you may not meet these performance requirements. Your potential customer's expectations may be completely unrealistic. The point is not to blindly meet the implied requirement; the point is to figure out what it is. If you can meet the requirement, great. If you can't, you need to reset your customer's expectations appropriately. It'll save angst later.

Any other tricks up your sleeve for implied or unexpressed requirements?


  1. Performance requirements are unique because:

    1. They are typically non-functional (or para-functional), so people tend to attend to them only after the application works "functionally".

    2. Performance requirements are mostly tested only and directly, without being built or engineered into the software product.

    3. Performance testing, as it happens, is mostly a last phase in the SDLC; until that changes, we will continue to crib about performance.

    Shrini Kulkarni

  2. Bug reports. Hey, it's ugly, and it's not kind to devs, but desperate times call for desperate measures.

    Example: Function Zeta has no discussion of performance requirements or expectations, nor even a hint of them. You time it, and the response time feels... adequate.

    File a bug. Explain that Function Zeta is taking too long. The developer will come back with his justification of why Zeta takes as long as it does, which provides you with what you were really after all along: some sense of how long the task should take, and why.

  3. Shrini, I'm not sure I understand your second point. Can you elaborate?

    For your third point, I don't think that has to be true. Where I work, for example, we run performance tests every night on whatever currently is checked in to head. Occasionally they bomb out due to some functional issue, but that's not particularly common. More often we'll check out the graphs and go, "hey, last night was a lot slower. What did we check in?" We've caught a number of things early that way. Conversely, "hey, that was a whole lot faster!" has uncovered a couple of functional bugs as well as some actual desirable performance improvements.

    I'd urge you to work with your team to try to keep the checked-in code stable enough to run at least basic performance tests. Your BVT (or whatever smoke test you're using on builds) should be sufficient to let you run some performance tests, at least on the happy paths. (There's a rough sketch of that nightly comparison after these comments.)

  4. I touch on a few thoughts here:

  5. In my view, software performance can only be measured by building the software and testing it, then updating the requirements as you describe in your blog.
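
Here's a rough sketch of the nightly "did we get slower?" comparison described in the reply above. Everything in it is invented for illustration: the test names, the timings, and the 15% threshold.

    # Hypothetical sketch: compare last night's performance run against the
    # average of recent nights and flag anything that moved noticeably.

    nightly_runtimes = {
        # test name -> runtimes in seconds, oldest run first, last night last
        "ingest_happy_path": [41.2, 40.8, 41.5, 40.9, 55.3],
        "read_happy_path":   [12.1, 12.3, 11.9, 12.2, 12.0],
    }

    THRESHOLD = 1.15  # flag anything 15% slower (or faster) than the baseline

    for test, runs in nightly_runtimes.items():
        *history, last_night = runs
        baseline = sum(history) / len(history)
        if last_night > baseline * THRESHOLD:
            print(f"{test}: {last_night:.1f}s vs {baseline:.1f}s baseline -- what did we check in?")
        elif last_night < baseline / THRESHOLD:
            print(f"{test}: {last_night:.1f}s vs {baseline:.1f}s baseline -- faster; improvement or broken test?")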