Friday, May 29, 2009

Work Around the Unreproducible

It happens to the best of us. Some issue comes up, and we see it once or twice, but darned if we can pin it down. Maybe it's hard to reproduce; maybe it's a case of "that shouldn't happen!". When this happens and a release is coming up, the question is likely to arise:

What do we do about this issue we don't understand?

Well, you've got choices:
  • Delay the release until you do understand the issue. This could be days, weeks, months. Generally this is untenable in practice.
  • Ship without fixing it. We call this the "hope" method!
  • Deal with the effects. Find a workaround or a way to handle the fallout so that the issue is still there but has less impact on the customer. This can be done in code or in policy.
Let's look at an example (this is made up, by the way): users who don't have passwords can't use a new "remote logon" feature because that feature depends on SSH and passwordless SSH isn't working for some reason. We don't know why it's not working.

So step back, think of another angle. What is the effect of this bug? Well, some users don't get to use a new feature. There are a few things we could do here: (1) we could just say, "okay, sorry, create a password if you want to use the feature"; or (2) we could force all users to set passwords on their first login after upgrade. Neither of these fixes passwordless ssh. Both of them work around the problem (and don't cause account corruption or anything nasty). In other words, we are working around the unreproducible.
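
A minimal sketch of what option (2) might look like in code, assuming a hypothetical login handler and user object (every name here is made up for illustration):

    # Hypothetical sketch of workaround (2): force password creation at first
    # login after the upgrade, so the broken passwordless-SSH path never gets
    # exercised. The user and session objects are invented for illustration.
    def handle_login(user, session):
        if user.password_hash is None:
            # Don't let a passwordless user reach features that depend on SSH;
            # send them through password setup first.
            session.redirect("/account/set-password")
            return
        session.proceed_to_home()

The point isn't the code itself; it's that the fix lives nowhere near the SSH path we don't understand.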

Sometimes the answer to a bug is not to change that portion of code. Keep in mind that your ultimate goal is to make the problem go away without negative repercussions. It's okay to be creative in how you do that, as long as you're sure it's complete.

Thursday, May 28, 2009

Amusement

There are three background facts you need to know:
  1. I'm on triage this week. That means that it's my responsibility to come in every morning, analyze the output of the nightly automated test suite, and log or update bugs as appropriate.
  2. I checked in some code this weekend. It isn't product code, but it's part of the infrastructure we use to log bugs and handle our lab.
  3. We test our infrastructure and triage the output just like we do our product.

Here's where it gets amusing: There was a bug in my code (whoops!). And the automated test suites caught it.

So now I have:
  • written some code
  • been the QA type finding a bug in the code (with the help of the automated test infrastructure)
  • fixed my bug
  • ...
There's definitely something wrong here. In general, it's a really bad idea for someone to both fix an issue and verify that it's fixed. Fortunately, someone else on the QA team will be doing the verification.

That's pretty funny, though, that the situation even came up.

Wednesday, May 27, 2009

Inflicting

There's what you do, and there's what effect it has on others. This happens all over:

You're in a bad mood. You can snap at everyone and take your funk out on them... Or you can put your head down and work extra-hard on being polite when you do have to interact with people.

You want to start gathering information about what kinds of errors you're seeing in the field, and you've come up with a brilliant tagging mechanism. You can walk around asking people to go back and tag their old bugs.... Or you can write something that attempts to tag them automatically and just fix the ones it gets wrong.
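
For instance, here's a minimal sketch of that second approach, assuming the old bugs can be exported to CSV with a summary column and that a handful of keywords is enough to guess the tag (the file names, column names, and keyword map are all hypothetical):

    # Hypothetical sketch: auto-tag old bugs from a CSV export instead of
    # asking everyone to re-tag their bugs by hand. Names are made up.
    import csv

    KEYWORD_TAGS = {
        "timeout": "error:timeout",
        "permission denied": "error:permissions",
        "out of memory": "error:resources",
    }

    def guess_tags(summary):
        summary = summary.lower()
        return [tag for keyword, tag in KEYWORD_TAGS.items() if keyword in summary]

    with open("old_bugs.csv") as infile, open("tagged_bugs.csv", "w", newline="") as outfile:
        reader = csv.DictReader(infile)
        writer = csv.DictWriter(outfile, fieldnames=reader.fieldnames + ["tags"])
        writer.writeheader()
        for row in reader:
            row["tags"] = ";".join(guess_tags(row["summary"]))
            writer.writerow(row)

Nobody has to re-visit their old bugs by hand; at worst they correct the occasional mis-tag.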

The point is that there are two sides to most things: what's going on, and what the effect is on other people. In general, what's going on is internal to you, or to your team, or to your group, or to your company. The effect it has is something that you can control. You can do things to impact people in a positive way, or you can do things that inflict your changes on them (that would be bad).

So think about what's going on, yes. Also think about how you're reacting to other people, and the experiences you're asking them to have. Affect, don't inflict.

Tuesday, May 26, 2009

The Data Move

Over the weekend we moved from one defect tracking system to another. As with all projects, by the end I was getting impatient - I just wanted it done already! So what took so long? Two things, really:
  • This is nobody's day job. So at the end of the day, testing the actual software we ship generally trumped working on this defect tracking migration project.
  • We had to migrate all the old data.
The second bit is quite important. Did we really have to migrate the old data? Well, no. We could have worked out of both systems. New stuff in the new system, and finish out the old stuff in the old system. But...

That kind of behavior adds up quickly. Your defect tracking system, then your wiki, then your project planning location, etc. All of a sudden you wind up like a friend of mine, who has to look for everything in:
  • old wiki
  • basecamp pages
  • file server
  • defect tracking system
  • cross-project tracking spreadsheet
  • email
  • IM
It's amazing anything ever gets found, much less that he manages to figure out which information is correct.

So even though it delayed us, we moved all of our data from the old defect tracking system to the new one. Better to move the data than to leave partially-useful detritus scattered behind you.


Friday, May 22, 2009

Break Out Of The Rut

There's a life cycle to jobs:

For a while, everything's new and exciting. You're totally lost, but darn it, you're having a blast!

Then you get your feet under you a bit, and your contribution goes from questions to code/tests/ideas/etc. It's still cool stuff, and life is good. Ideally, this goes on for months or years.

And then things start to drop off a little bit. You've been around a while now. You know the product and the projects. You know the people. Let's face it, you're getting into a bit of a rut.

There are three choices from the rut. You can stay, you can go, or you can break out of the rut. Choosing to stay is fine, as long as you're not getting bitter and kind of rude about the boredom. Choosing to leave starts the cycle over again. Choosing to break out of the rut, though, that's an interesting option. How, after all, can you make your job different?

There are a lot of ways to address this, some of which will be available to you and some of which won't, depending on your work environment. But if you want to break out of a rut, consider other things you can do:
  • Write a tool that does something way outside your normal job. Maybe it's a custom meeting reminder that finds the email inviting you and shows the agenda. Maybe it's a report using your SCM system that creates a "most lines of code checked in" leaderboard (a rough sketch appears below). Maybe it's something else. Make it random and fun.
  • Audit another team. Are you a dev? Try spending a couple hours sitting with support. Are you a tester? Try your hand at fixing a low priority bug (and get it well reviewed before you check in). Looking to stretch out farther? Do some research and write a fact sheet about a competitor for product management, including technology comparisons.
  • Start a semi-related side project. Take a couple days off and do something on the side. I test storage software, so for me it might be developing tests for a webapp. Distinguish it from a vacation by doing something that somewhat relates to your job skills; that way the things you do all day are still supporting you, just with a bit of a twist.
The trick is to use your skills to move just a bit beyond what you do all day. Just jump out of the ruts a bit; it's a lot of fun sometimes!
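
For the leaderboard idea, a rough sketch might look like the following. It assumes your SCM is git and that shelling out to git log from a working copy is acceptable; swap in the equivalent commands for whatever system you actually use:

    # Rough sketch of a "most lines of code checked in" leaderboard, assuming git.
    # Run from inside a working copy; it parses `git log --numstat` output.
    import subprocess
    from collections import Counter

    log = subprocess.check_output(
        ["git", "log", "--numstat", "--pretty=format:author\t%an"], text=True
    )

    lines_by_author = Counter()
    current_author = None
    for line in log.splitlines():
        if line.startswith("author\t"):
            current_author = line.split("\t", 1)[1]
        elif line.strip() and current_author:
            added = line.split("\t", 2)[0]
            if added.isdigit():  # binary files report "-" instead of a count
                lines_by_author[current_author] += int(added)

    for author, lines in lines_by_author.most_common(10):
        print("%8d  %s" % (lines, author))

Lines of code is a terrible productivity metric, of course, which is exactly why it makes a fun leaderboard and a poor performance review.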

Thursday, May 21, 2009

Plausible Scenario

Some days I spend a lot of time doing what I think of as plausible scenario imagination. It goes something like this:

Given X, what could have happened to make that occur?

For example:

Given that replication failed, what could have happened to make that occur?

The link could have broken between sites. The replication could have finished and thrown an erroneous failure. Maybe someone deleted the thing being replicated mid-replication. Etc... 

This kind of imagination is one of the first steps to figuring out what may have actually happened. Particularly when you don't have a clear error showing exactly what happened, you're positing scenarios, then hopefully testing them.

Over time, it becomes a habit. You hear "we would like a shorter commute", and your mind immediately goes to plausible scenarios that would shorten the commute. But habits aside, don't be afraid of the internal speculation. Somewhere in your plausible scenario list is a kernel of truth.

Before you go wading in too far, think about what might have happened. Be prepared to discard it, but having a scenario (or 5 or 10) can save you time over the long run.

Wednesday, May 20, 2009

What Is Tomorrow's Revolution?

I'm amazed sometimes when I hear people talk about the incredible things that they've seen or that they've done. People who built a little website that now serves millions of people. People who just wanted to make their lives a bit easier and wound up inventing entire new processes. Heck, the guy who first used the terms "white-box test" and "black-box test".

In retrospect, these were big things. In retrospect we talk about the Rise of Ruby and Rails, or the messy explosion that is Agile and Scrum, or the Gang of Four and their design patterns. At the time, though, we didn't always know these were going to be revolutionary. Sometimes we did (the guys who were launching an Apollo moon mission were pretty darn sure it was new and different), but sometimes we didn't.

Sometimes revolutionary is only visible in retrospect.

I wonder, sometimes, what we're doing now that is revolutionary. What things are we doing today that we'll look back on in 20 years and say, "this was the start of something big"?

So I turn it over to you: what are we doing now that is revolutionary?

Tuesday, May 19, 2009

Blockers

This is the tale of Yet Another Meeting(tm). But it's a good one!

During the development and release cycle, we hold a meeting that we call the blockers meeting, and it's been a really helpful tool.

Purpose
This is a meeting to discuss new knowledge about the software to be released, and to go over progress on the items that are currently blocking the release. "Blockers" are anything that would cause you to not choose to ship the product. A blocker can be an unfinished feature, a bug, a regression in test coverage, a performance change, hardware supply chain problems, etc.

Recurrence
We do this meeting quite often:
  • during the initial development cycle, once a week
  • between feature freeze and code freeze, three times a week
  • after code freeze and before release, daily (weekdays only!)
I know it sounds like a lot. Bear with me, it's not that bad.

Attendees
You need the people who are materially affecting the release here. For us that's the dev leads, the QA lead, someone from support, and someone from product management. If the release has a customer beta we also include someone from the sales engineering team.

Agenda and Duration
This meeting happens often, so it had darn well better be quick. We have work to do! We do this:
  • All new items discovered (5-10 minutes). We're here to decide whether they should be blockers or not; that's all. In addition, it's okay to propose an alternative route that may affect whether this is a blocker (e.g., "Let's document our way around this."). This is not the place to talk about how we're going to fix them.
  • All items we couldn't decide on before (7-10 minutes). Sometimes we have an issue we don't understand well enough to say whether it will block a release. We revisit those, figure out what we need to know to determine whether it's a blocker, and hopefully resolve it one way or the other at this point.
  • Update on any in-queue items (5-10 minutes). We don't go through every issue, but anyone who wants to bring up an issue can here. Usually this is for something that's transitioning state (e.g., "We fixed this and are handing it to QA"). Sometimes it's a request for help with the problem. A few times changed understanding will change whether it's a blocker or not, and that's discussed here.
Total elapsed time is 30 minutes or less. We usually run about 15 minutes, and sometimes toward the end we'll have nothing new to discuss, so we wind up with 0.5 minute meetings ("Hi, I've got nothing. Anyone else? ... Cool. Have a good one!").

Sounds a lot like a standup, and it is, really (yes, we stand up). It's just a standup with a special release-oriented purpose. And it's amazing how much insight we get just from having to stand up often and say, "here's what we've found and here's what we're doing about it". It's a great way to prevent problems from festering in a corner, and to make sure that everyone is working with as much information as possible. Give it a shot!

Monday, May 18, 2009

No Opinion

Anyone who knows me quickly realizes that... well... I usually have an opinion. 

Should we slip a release to get a bug fix in? I have an opinion. Is that really a bug? I have an opinion.
How expensive is this feature? I probably have an opinion. Of course, my opinion isn't always the way we wind up going, but that rarely stops me from having one!

Sometimes, though, I really don't have an opinion. Sometimes there isn't one option better than another. Sometimes I'm simply not informed enough to have an opinion. Sometimes I don't even know there's a choice to have an opinion about. In all of those situations, I could form an opinion, but it would just be more likely to be wrong. Better to not have an opinion than to jump to a conclusion.

It's okay to not have an opinion, as long as you know why you don't have an opinion. It lets you know what pieces you're missing.

Of course, having no opinion only works for so long. At some point a decision has to be made and that's probably based, at least in part, on your opinion. The trick, then, is to figure out why you don't have an opinion, get the data you're lacking, and arrive at a well-formed opinion.

Therein lies the root of a good decision.

Friday, May 15, 2009

Listen First

It's not uncommon for me to get pulled into a discussion halfway through. It starts easily enough: a forwarded email or a cc on a reply... with a long scrollbar. Congratulations, you're in the middle of something!

You were probably cc'd because someone wants to ask you a question. Or they want you to do something. Don't. Just for a minute, don't.

Listen first.

It's critical to get some context so you can answer not just the immediate question, but the underlying thing that is needed. If someone asks about, for example, Windows 2003 Server compliance, what they probably really care about is whether they can use some application that only runs on Windows 2003 Server with your product. That's great - and your answer is only right if your product both complies with Windows 2003 Server and works with that application.

Most of the time your answer won't change because you have the context behind the question. But sometimes - and these are the real doozies - your answer will be different once you understand what's going on around the question. Those times, you'll be really glad you listened.

Thursday, May 14, 2009

Hope

I've been reading Beautiful Teams, and some of it has me thinking. The book is full of stories of problems solved, and of teams taken from bottom-of-the-barrel to rockstars.

When things are going badly, it can take some time to turn around. You're not going to fix a bad team dynamic overnight. You're not going to clean up a bunch of spaghetti code in a day. You're not going to take your code coverage from 5% to 90% in a few days. All of that takes time - weeks, months, even years.

But... there is one thing you can change right away.

You can create hope.

You can help a team believe that the dynamic can be fixed, and that everyone on the team actually wants it to be fixed. Trust will come later, but now you have hope that trust could one day be achieved. It's not about trust falls or exercises that demonstrate why trust is useful. It's about getting the team together, acknowledging the dynamic is bad, and getting public commitment from every member to try (by the way, this takes a lot of advance preparation). You haven't actually fixed anything, but you've provided hope.

You can show a team that the spaghetti code will be cleaned up. Take an hour - just one hour - and clean something with the team. Then reveal your plan for how you've sold product management on allotting time in every release cycle to clean up code, or how you've added that to your feature estimates. You've barely touched anything. But you've provided hope.

In the end, you don't have to finish something to make it better. You just have to:
  • Notice a problem
  • Define the problem publicly
  • Fix a tiny tiny portion of it
  • Be specific about how you're going to fix the rest
All those things together create hope. And hope, ultimately, is how you get things to change. People with hope are people who will commit to making things better, and people who will work toward a goal... people who can become an effective team.

Wednesday, May 13, 2009

I Like X

There are two classes of arguments: winnable and unwinnable.

Winnable arguments are backed by reasons. You can tell these because they say why. For example, "We need to make the text a different shade because the colorblind population we're trying to address can't read it against that background." Here we have an argument that points out the need of the users and makes a case why the proposed change will meet that need better.

Unwinnable arguments are backed by "because I said so". You can tell these because they don't show why something is better, merely that it is different. For example, "I'd like the text to be navy instead of grey. It just looks better." Here nothing is better or worse, just different. We need more than "different" to justify spending the effort.

If you want to be persuasive, always be prepared to explain why. It gives you a much better chance of success.

Tuesday, May 12, 2009

The Cost

Sitting in QA we get a lot of demands on our time. Requests come from everywhere: support wants a new procedure, dev has a feature that needs to be looked at right now, a customer is having a problem in the field and we need to reproduce it to fix it. Oh yeah, we're also trying to ship a release.

You plan for this, of course. In your schedule you've left 30% of your time for "other stuff" that comes up on top of your main task. Maybe you've left 10% of your time because you have a separate maintenance team who handles customer and support requests, or maybe you've left 70% of your time because planning in your organization leaves something to be desired. Anyway, you've left some time for all the other things that get in the way of doing your current primary task. Great!

But...

That doesn't mean you can just do the other stuff while ignoring the cost. It still behooves you to be efficient about it. For example, if you need to reproduce a customer issue, you have to set up a test environment with the version that customer is running. Since you've already paid the overhead of configuring that environment, is there something else you should try to reproduce while you're in there? Any performance data you wanted to gather? A behavior you thought had changed that you wanted to check? Do it now; pay the overhead once, not twice.

Do you have to look for efficiencies? No. You can just handle the "oh wow, we forgot X, we need it now!" requests as they come in. It's a good way to wind up tearing out your hair. The trick is to help all your requestors understand that you're trying to meet their needs as well as the needs of everyone else who wants or needs something. You're not saying no; you're saying, "what is the most efficient way we can accommodate all of it?" Sometimes that means dropping everything to meet a need because it really is critical. More often, it means waiting a week to pick up three things in one effort instead of one (which means your requestors are going to need to think ahead a little bit).

Fair warning: this will not make you popular.

After all, you're the tester. You ride to the rescue in a lot of situations. And being told "not now" is a rather different kettle of fish. You need to do it anyway. Part of your job is helping the organization think ahead a little more. Your contract with development, sales, support, even customers is to meet their needs in a sustainable manner. Firefighting isn't sustainable, so you need to help construct an environment where urgency is balanced with correctness and with efficiency. Testing, like every other aspect of a business, is about the cost and the benefit. For every request, meet the need and minimize the cost.

This is your job - both need and cost. Consider both, then proceed.

Monday, May 11, 2009

If A Bug Exists....

You can tell I'm into cliches.  And today I have another one for you:

If a tree falls in the forest and no one's there to hear it, does it make a sound?

Or to put it in a more tester-oriented way:

If a bug exists and no one finds it, do we care?

My testing team has been together a while now, and we've settled into a lot of patterns: how we work with the automated test suites, how we approach testing in general, how we verify bugs, etc. Now it's time to start thinking about the impact of our work.

When we think about impact, there are several types of bugs:
  • Bugs we find that also hit (or would hit) the field
  • Bugs we don't find that hit the field
  • Bugs we find that no one hits in the field
  • Bugs we don't find that no one hits in the field
The first of these - bugs we find that would be found in the field - is pretty much the normal scenario. We found something (and hopefully got it fixed) before we shipped it and before it affected customers.

The second of these - bugs that we don't find that are found in the field - are leaked defects.

The last two are sort of interesting. If a bug isn't going to be found in the field, what value is there in finding it internally? And who cares if you miss them?

On the one hand, a bug that won't affect your customers doesn't matter. If they're not going to hit it, well, who cares? On the other hand, just because your customers haven't hit it doesn't mean they won't hit it in the future.

The more subtle thing is what you can learn about the usage model from what your customers find. If you find a bug where, for example, the WebDAV interface doesn't work at all, and none of your customers find that bug, you can guess they don't use the WebDAV interface. At least, not currently. That's useful information.

Personally, I'm a data junkie. I'd rather know, pretty much always, even if the bugs aren't found by customers. That being said, if I know customers don't use a feature, there's value in testing that later or cutting corners there rather than in your most used feature. I think of it as a way of setting priorities: first I'm going to do the real basics; then I'm going to do the things my customers do; then I'm going to hit the other stuff.

It seems I do care if a bug exists and no one finds it! I suppose that means that a tree falling in the forest might actually be darn noisy, too.

Friday, May 8, 2009

Coded Phrases

It's amazing how much can be said with a few simple words. Take this exchange, for example:

"Hi. How're you?"
"Good! It's Friday."

The number of things contained in just two words - "It's Friday" - is huge. "It's Friday" and I'm really looking forward to sleeping in tomorrow. "It's Friday" and that means I don't have any meetings, which pleases me. "It's Friday" and I like the casual jeans day.

It's not quite jargon, really, but certain words and phrases acquire meaning. "It's Friday" is probably pretty darn common. Others will be specific to your workplace and your job or product. These phrases and words are shorthand for a lot of context and allow you to communicate quickly and effectively... as long as you're in the know.

For example, one of our coded phrases is "large directory." Now, if you google "large directory", you get a whole lot of nothing. But for us, it has a specific meaning: a single directory containing many inodes, where "many" is typically over 2,000 but in context may be over 20,000. So when we're working with services to size a deployment, or helping support with a problem at a client site, we can simply ask, "Are there large directories?" and proceed based on that information. It's a fast, easy way to say something that would otherwise take a while to clarify.
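
If you ever need to answer "are there large directories?" on an actual filesystem, a quick sketch might look like this, using the working definition above (the 2,000-entry default threshold comes straight from it; the path is whatever tree you're inspecting):

    # Quick sketch: flag "large directories" under a given root, using the
    # working definition above (more than 2,000 entries by default).
    import os
    import sys

    def find_large_directories(root, threshold=2000):
        for dirpath, dirnames, filenames in os.walk(root):
            entries = len(dirnames) + len(filenames)
            if entries > threshold:
                yield dirpath, entries

    if __name__ == "__main__":
        root = sys.argv[1] if len(sys.argv) > 1 else "."
        for path, count in find_large_directories(root):
            print("%7d  %s" % (count, path))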

Over time, this accumulation of phrases and shortcuts becomes part of your company's institutional memory. Spread it wide enough and you've got jargon.  Don't shy away from the shorthand, though. Just make sure you can explain it to others, and then embrace the efficiency!

Thursday, May 7, 2009

Test Roulette

I'm a big fan of fun in the workplace. Let's face it, sometimes our work can be a bit dull (I have to do another upgrade?! Oh, man!). I figure sometimes anyone's work gets a bit dull. Having fun at work helps keep things fresh and entertaining.

In a lot of ways, we need to be sure to extend this to the work itself. Sure, naming conference rooms or goofing off with some toys is a great diversion, but there are things we can do to keep our actual tasks fun, too. (Yes, of course, they're inherently fun, but keeping them that way is more what I'm talking about here.)

One of the things I get into personally is Test Roulette. When we're in a release cycle in particular, and to a lesser extent during the development cycle, we're faced with choices about what to test. In large part these choices are driven by the perceived risk of the feature, the stability of the feature, etc. Often, however, within those categorizations there is a set of things where it doesn't really matter in which order you do them. In the end, who really cares if you test LDAP integration before NIS integration or after, as long as both are simple regression tests and not code under change? At least as a manager, I don't really care what gets done first as long as they both get done.

Enter Test Roulette: right now I do this manually. I put each of my choices on a slip of paper, shake 'em up, and pull one at random out of a hat. I'd love to do this in software. I'd want something that had all my test requirements in it (features, missions, cases, however you're breaking it down), and I'd want to enter a constraint. Then push a button and out pops a test case! 
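
A bare-bones sketch of that button, assuming the test requirements live in something as simple as a list of names with tags, and that a "constraint" is just a set of tag values to match (the tests and tags below are made up):

    # Bare-bones Test Roulette: pick a random test that satisfies a constraint.
    # The test list and tag scheme are made up for illustration.
    import random

    TESTS = [
        ("LDAP integration regression", {"area": "auth", "risk": "low"}),
        ("NIS integration regression", {"area": "auth", "risk": "low"}),
        ("Upgrade from previous release", {"area": "install", "risk": "high"}),
        ("WebDAV basic operations", {"area": "protocols", "risk": "medium"}),
    ]

    def spin(constraint=None):
        """Return a random test whose tags match every key/value in constraint."""
        constraint = constraint or {}
        candidates = [
            name for name, tags in TESTS
            if all(tags.get(key) == value for key, value in constraint.items())
        ]
        return random.choice(candidates) if candidates else None

    print(spin({"risk": "low"}))  # e.g. "NIS integration regression"

Wire that up to your real test requirements and an actual button, and you've got roulette.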

It's a tiny, silly little thing, but it's a lot more fun than just staring at my list and feeling my eyes glaze over. Just that little bit of anticipation makes a big difference. It keeps me out of a rut - I'm not doing 42 Active Directory tests in a row - and helps keep my brain fresh and thinking across the system as a whole.

What do you do to keep your testing activities fun?

Wednesday, May 6, 2009

Meant To Be a Compliment

I said this to a developer this morning:

It's the one thing I like about testing your code.... it's generally something of a challenge.

That was meant to be a compliment. A testing challenge means you haven't left me any of the easy, "silly mistake" style bugs. At the heart of it, a good tester likes the interesting problems, too. Finding "yet another non-validated input" is kind of like a developer implementing "yet another login dialog".... boring.

I'm all for more challenges!

Tuesday, May 5, 2009

We Found That NOW?

We're going into another release cycle, and I thought it was prudent to remind myself that every release has two scares.

My ultimate goal is predictable releases: predictable quality, predictable timelines, predictable experiences. I don't want a perfect release (I'd be holding my breath a long time for that one!); I want incremental improvement and I want all the customers - dev, marketing, support, our clients - to feel safe with releases. Basically: "No surprises, please!"

Big bugs found in the release cycle - whether real or not - are surprises. We can (and should!) work to avoid them, but in the meantime, let's plan for them so they're not as surprising.

How do we handle these release scares?
  • Allow some time for them to happen. Yes, pad your schedule a bit.
  • Create a reporting structure that allows for rapid handling of bugs late in the process. Standups, automatic reporting (can your defect tracking system send emails?), and proactively talking to affected groups can make handling these things a lot easier and - yes - more reliable. (A minimal sketch of the automatic-reporting idea follows this list.)
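The automatic-reporting bullet can be as simple as a cron job that diffs two exports and mails out anything new. A minimal sketch, assuming the tracker can export open blockers to CSV with an id column (the file names, columns, and addresses are all hypothetical):

    # Minimal sketch of "automatic reporting": mail out any new blockers found
    # since the last run. Assumes the tracker can export open blockers to CSV;
    # file names, columns, and addresses are all hypothetical.
    import csv
    import smtplib
    from email.message import EmailMessage

    def read_ids(path):
        try:
            with open(path) as f:
                return {row["id"] for row in csv.DictReader(f)}
        except FileNotFoundError:
            return set()

    new_ids = read_ids("blockers_today.csv") - read_ids("blockers_yesterday.csv")
    if new_ids:
        msg = EmailMessage()
        msg["Subject"] = "New release blockers: %s" % ", ".join(sorted(new_ids))
        msg["From"] = "qa-reports@example.com"
        msg["To"] = "release-team@example.com"
        msg.set_content("New blockers since yesterday: %s" % ", ".join(sorted(new_ids)))
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)
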
How do we avoid these release scares?
  • Test early. The earlier we test, the earlier we will find the major issues that we presume are there.
  • Measure and improve coverage. This is a whole other blog post that I haven't thought through well enough yet.
  • Release carefully. Don't put your release on your biggest, weirdest customer first. Put it on the customer site that has usage that closely matches your test usage, and on the customer that doesn't regularly overload the system. You want a "friendly use case" first.  Once you've seen it work there you can start expanding who gets it.

Release scares will happen. If we can't prevent them, we can at least handle them. And that's better than constantly being surprised.

Monday, May 4, 2009

Estimating Verification

One of the things I've been looking at recently is how long it takes us to verify a defect is fixed. Specifically, I'm trying to estimate defect verification for various purposes.

Why Bother Looking?
For any effort greater than zero, I like to first figure out if it's going to be worth the effort. Otherwise I'm sure I can come up with something else to do! So, why do we care how long it takes us to verify a defect?

There are a couple of reasons I'm looking at this:
  • Going into a release, we have a target for what defects we're going to fix. How long it takes us to verify them directly affects how long it takes to get the release out the door.
  • All of dev has a goal of getting under 25 bugs in all the queues. This means QA has to verify as dev bugs come in, and we have to figure out how much time to allot to this.
Consider Defect Types
So now that we know why we're looking, let's figure out what we're looking at. The most obvious thing about the bugs we have to verify is that some of them are going to take far longer than others. Verifying that the domain controller is running Windows 2003 instead of Windows 2000 is quick and easy. Verifying that some obscure race condition is fixed is going to take rather longer.

Unfortunately, that doesn't get me off the hook. I still have to estimate the release testing, including bug verification. Fortunately, I'm doing an overall estimate for "verifying defects", not one for each defect (which would take forever). So, what do we have to consider here?
  • How much can we verify at various development stages? The more bugs we can verify during the development cycle, the fewer we have to verify during the later stages of the release. This will cut down the amount of time needed for release testing overall.
  • How stable are the dependencies? If the installer, for example, doesn't work, it's going to be hard to verify any defects on an installed system. 
  • What percentage are automated test failures? If a lot of your bugs are found by automated tests, it's a good guess they can be verified by automated tests. This often means it's relatively simple to verify (look that the test passed, basically), but it means that you have to wait for the test to run again before you can verify. This affects your estimate by increasing the duration but reducing the effort required during that duration.
  • How many need install and upgrade testing? I find this easiest to break down by component, and to note that there are X bugs in the system management code that will need to be separately tested. These we have to count twice - once for install and once for upgrade.
Don't Forget the Little Stuff
Presumably you have some variant of a bug verification checklist. Glance over it and see how much overhead surrounds verifying each bug - 5 minutes to read the ticket and 2 minutes to update it, times X tickets, adds up.

Consider the Past
While "past results are not indicative of future performance", it's a good guide. Mine your defect tracking system: how long did defects sit in the Ready for QA state on average? Unless you've made major process or system changes, that's probably still pretty close to accurate. For this one, I average all bugs over the previous release and use that as my starting number.

Estimate in Two Parts
There are two parts to this estimate: one is how long it will actually take to do the verification work; the other is how much time must elapse before verification can be complete. You can't ship until you've got both done. Some of verification is waiting - waiting for systems to have more data in them, waiting for automated tests to run, etc. Count up that time (this is where you use the information you got from your defect tracking system). Then figure out how much actual work is involved. This is where you use the average work time you derived based on defect types, and overhead for each defect. Add up all that time, too.

The last step is to take your two numbers - say, 3 weeks elapsed time, and 30 hours work time - and reconcile them. For this to work, you have to devote 10 hours a week to defect verification, on average. If you can only spend, say, 6 hours a week work time, then you'll take 5 weeks, and that's your estimate. If you could spend 20 hours a week, well, your estimate is still 3 weeks because of the elapsed time required.
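
The reconciliation step boils down to taking the larger of the two constraints. A worked sketch with the numbers from the example above:

    # Worked version of the reconciliation above: the estimate is whichever is
    # longer, the required elapsed time or the work time at your available pace.
    import math

    def verification_estimate_weeks(elapsed_weeks, work_hours, hours_per_week):
        weeks_for_work = math.ceil(work_hours / hours_per_week)
        return max(elapsed_weeks, weeks_for_work)

    print(verification_estimate_weeks(3, 30, 10))  # 3 weeks: 10 hrs/week keeps pace
    print(verification_estimate_weeks(3, 30, 6))   # 5 weeks: work time dominates
    print(verification_estimate_weeks(3, 30, 20))  # 3 weeks: elapsed time dominates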


Estimates are a little bit of black magic, and you'll be wrong the first time. Basing your estimates on actual data and following an estimation process every time will help make your estimates better and better over time. Happy estimating!

Friday, May 1, 2009

Friday Conversations

You know it's Friday when.....

the casual conversation of the day is, "What would our company song be?"

A few Fridays ago it was, "What should we name our conference rooms?"

The winner of that particular one was Monsters of Greek Myth.
  • Hydra for QA (many heads!)
  • Cyclops for the Storage team (these guys are the classic cave-dweller developers who don't like light)
  • Cerberus for the File System team (their portion "guards the gates" of the store)
  • etc..

Is it work-related? Nah.
Is it fun? Yup.

I think this is what we call a "team-building exercise". It's not the most productive thing ever, but banking this kind of friendly conversation makes the harder conversations easier to have. When you need it, at least you have a friendly base to work from.

Sure, it's easier to put your head down and work, but being part of a team means being part of the silly stuff as well as the software stuff.

It's work. Don't forget to have fun, too!