- client sends in data
- our client services team processes the data using internal libraries and scripts
- our client services team creates a report
- our QA team checks the report
- report goes to the client
On the surface, it sort of seems like a reasonable process.
In practice, 90% or more of the reports were rejected by the QA team. Almost all of those rejections were for obvious errors: misspellings, completely missing data, wildly improbable conclusions (e.g., "factor F went up 700000%!"). Two iterations on a report was the norm; three happened on occasion.
Clearly, if it's almost never right the first time, there's a problem.
So what do we do about it?
First, we have to figure out why on earth we're failing so consistently. Is this a people problem, a data problem, or a process problem? Is the client services team suffering from script blindness: an unwarranted faith that whatever the scripts produce must be right? Is the client services team just plain not looking at their reports? Is QA rejecting things incorrectly? Is the inbound data really just terribly dirty? Are client services and QA working from different specs? The right way to find out is to review each incident and figure out where it went wrong. Hopefully, a pattern will emerge. Until we understand the problem, we can't fix it. It's also sadly possible that there is more than one problem.
Once you know what's wrong, you can fix it. Maybe it's as simple as telling client services to look at their reports before sending them off. Maybe it's fixing the libraries and scripts to avoid errors, or at least yell loudly about them. Maybe it's fixing where specs are kept and how they're understood. Maybe it's a combination.
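The "yell loudly" fix can be as small as a validation pass the report script runs before anything reaches QA. Here's a minimal sketch; the field names (`metric`, `value`, `pct_change`) and the 1000% threshold are hypothetical, not anything from our actual pipeline, but the idea is to fail fast on exactly the obvious errors QA kept catching:

```python
def validate_report(rows):
    """Return a list of human-readable problems; an empty list means the
    report passed these basic sanity checks."""
    problems = []
    if not rows:
        problems.append("report contains no data at all")
    for i, row in enumerate(rows):
        # Catch data that is missing completely.
        for field in ("metric", "value", "pct_change"):
            if row.get(field) is None:
                problems.append(f"row {i}: missing field '{field}'")
        # Catch wildly improbable swings (the "up 700000%!" class of error).
        pct = row.get("pct_change")
        if pct is not None and abs(pct) > 1000:
            problems.append(f"row {i}: improbable change of {pct}%")
    return problems

# Example report with both kinds of obvious error:
report = [
    {"metric": "factor F", "value": 7.0, "pct_change": 700000},
    {"metric": "factor G", "value": None, "pct_change": 2.5},
]
for problem in validate_report(report):
    print("REJECT:", problem)
```

A check like this doesn't replace QA; it just moves the cheap, mechanical rejections upstream so QA spends its time on real review.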
Lastly, give yourself time to see the results of your changes. The first report we did after making some process and people changes failed. So did the second. After about the fifth one, though - and a few more tweaks - QA reject rates went way down.
In any case, if it's never right, it's time to fix it.