Friday, July 31, 2009

Marketing Numbers

Someone walks up to you and says, "how does it perform?"

Don't answer. Not yet. Hang on just a second.

Consider Your Definition
"Perform" has a lot of possible meanings. It could mean response time for a single request. It could mean how many users you can handle. It could be a request to describe latency. It could be an interest in the performance characteristics of the system, including a discussion of potential and measured bottlenecks.

If you say, "50 users", you may not be answering the question. Whoops.

Once you've clarified what they're asking for, then it's time to consider what to tell them.

Consider Your Questioner
Different people have different needs for answers. For example:
  • Your marketing guy needs the best answers he can have to blow competitors out of the water
  • Your sales guy needs the fastest numbers on realistic data sets
  • Your support engineers need numbers for a long-running semi-loaded system, more average than fastest. They can then use these to determine if customers are running slow.
  • Your developers need consistent numbers that match field averages and peak, so they can see changes over time.
And then open your mouth and answer the question... "how does it perform?"

Thursday, July 30, 2009

A Different Test

Sometimes the test you do.... is not the test you intended to do.

Case in point:
Today I went to do a simple upgrade test. On a given platform in a given configuration, upgrade from the most recent released version to the new version that's coming out. Simple.

I completely forgot that we'd tested a "power event" on that system earlier this morning. (Note: for "power event" read "a tester went in and pulled the plug on the power strip for that entire system"). The system handled the "power event" just fine, but I hadn't yet gone in and completed the product's startup validation process.

So my system was not in a good state when it started the upgrade. And the upgrade failed exactly like it was supposed to.

Not exactly the test I intended to do, but it was a perfectly valid test. So what's the moral of the story?

If you're attempting a test, and it does something unexpected, take a quick look. You may have done a different test.

Now off to do the test I meant to do originally!

Wednesday, July 29, 2009

Software Is Not All You Deliver

When a customer buys your software product, they buy much more than a piece of software.

Think of all they get, express and implied:
  • software (of course!)
  • a recommended deployment configuration
  • support
  • an upgrade path they can accommodate (number of releases and upgrade procedures)
  • all the benefits your sales guys tout: cost savings, improved operational efficiency, etc.
As you develop software, and as you define your processes for release, support, sales, etc., consider what this will do to your customer's overall experience. Your customer thinks of the entirety of his experience with your software; so should you.

Tuesday, July 28, 2009

Quiet Place

Working in an open plan office is generally a really useful thing. Having the QA team sitting all together, and sitting right next to part of the dev team, really increases the level of communication all around. It's very easy for someone to be puzzling on a problem and just ask the room at large, or for a conversation between two dev teams to expand to involve a third team - thus making sure a crucial change in the third team's code gets made.

But...

It's important to have a quiet place as well.

As a manager, some conversations have to be held in private. Would you hold an interview in the middle of a team work area? Unlikely. Even if you're not a manager, sometimes you need peace and quiet. Maybe you have to schedule a doctor's appointment, maybe you need to rant and rail for a while.

Despite the open plan, not everyone needs to know everything. So make sure your open plan has a quiet place.

Monday, July 27, 2009

When To Make a Change

We recently migrated from RT to Jira for defect tracking. At the point where we did the final cutover, we were approximately 5 weeks from shipping a release. We had completed feature development and were in a stabilization phase concentrating on bug fixes and integration testing.

RT is not the best defect tracking system for viewing progress. It's hard to pull summaries out, and it's a bit intimidating for people who don't use it constantly, simply because it plunges you into detail. Because of this, we had been maintaining a page on the wiki that listed "must fix" bugs for the release ("showstoppers", "blockers", pick a name, any name). It had extremely basic information about each: a quick description, whether it was a "must fix" or not, and whether it was open or closed. It took effort to maintain the page by hand, but it was a convenient place for anyone on the project team to see where we were and what the status of their pet bugs was.

But then we moved to Jira. We were able to set up a shared dashboard that showed the open "must fix" issues, a table of issues found that weren't considered release stoppers, and a graph showing the number of closed versus open issues targeted for the release. In short, it did everything the wiki page did, and with more colors. Time to ditch that hand-maintained issues page!

Err... no. Not yet.

Software development cycles are really about routine and consistency. The act of building (and testing and releasing and marketing) software itself is rife with uncertainty, so having a stable process underneath it can provide us with the tracking and planning mechanisms we need. And while the SDLC should absolutely be matured, we have to be careful about when and how we make changes.

In this case, we had the whole team - from the VP Engineering and the product manager to the support engineers and developers - already used to using the wiki issues page. Changing it was certainly possible, but habits die hard, and there was an opportunity for missed communication. ("I didn't see it on the wiki!" "It's on the Jira dashboard" "Yeah, but I forgot we were using that, and I know it's kind of late now but this really is a big problem".) So we decided to maintain the issues page through the end of the release. This gave us the opportunity in development to refine the Jira dashboard and work out kinks around handling new issues as they came in ("untriaged", we call them), etc. It also made it easier to educate people, since we had the entire next release (which was just starting development at the time) to get the larger team used to looking someplace new for status.
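(A side note for the curious: a dashboard like that is mostly saved filters, and the same numbers are easy to pull programmatically if you ever want them elsewhere. Here's a rough sketch against Jira's REST search API; the server URL, credentials, project key, and label are all made up for the example.)

```python
import requests

JIRA_URL = "https://jira.example.com"  # hypothetical server
AUTH = ("report-bot", "secret")        # hypothetical credentials

def count(jql):
    """Return the number of issues matching a JQL query without fetching the issues."""
    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={"jql": jql, "maxResults": 0},
        auth=AUTH,
    )
    resp.raise_for_status()
    return resp.json()["total"]

# Hypothetical project key and label; adjust to your own triage conventions.
open_must_fix = count('project = PROD AND labels = "must-fix" AND resolution is EMPTY')
fixed_must_fix = count('project = PROD AND labels = "must-fix" AND resolution is not EMPTY')

print(f"must fix: {open_must_fix} open, {fixed_must_fix} resolved")
```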

In other words, just because we could switch didn't mean we had to switch. Better to make a change later and more smoothly than to make a change as soon as you can simply because you can. So please, make changes in your process. Just make them when the time is right.

Friday, July 24, 2009

Heard In Passing

I heard this in passing today while I was out getting some lunch:

"That's not really a limitation, that's just something the product doesn't offer."

I had to turn away so they wouldn't hear me giggle.

Thursday, July 23, 2009

Who Do You Tell When?

You've found something. Something that's a bit scary. Maybe you're late in a release, maybe it's something that on the surface looks like it ought to affect a whole lot of your customers, maybe it's a potentially huge flaw in that feature that marketing's out there touting right now. This isn't your run-of-the-mill bug.

You've got to get the word out, and the sooner the better. So let's go running through the halls! Or... wait.. maybe not.

Let's step back for a second and look at the groups of people you, as a tester, interact with:


Mostly, you work within QA - working with, talking with other testers. A bit less often, but still a lot, you work with the developers on your project. Still less often, but regularly, you'll be in touch with the project team as a whole; think documentation, project managers, support, etc. And least often you interact at the whole company level - executive management, other project teams, etc.

Information flow works the same way. The closer to your inner circle, the more you share knowledge about the meaning and implication of what you do. The farther out, the less context you share.

Take, for example, the bug you just found (say, the product core dumps when more than 20 users try to change their passwords simultaneously):
  • Your fellow QA types will understand how long a change password operation takes (and therefore how likely collision is), will know that this is an area of code that is usually pretty shaky and that fixing the bug is fairly likely to introduce something else. They can also help figure out how reliably this happens.
  • The developers you work with will understand that this is an area of code they've been desperate to refactor and that touching this is a bit scary. They won't have all the details, but they'll know enough to walk around it when possible. They'll also want to know how often it happens.
  • The project team will know that this is a fairly rare operation and that marketing is planning to announce the release on the ship date at a huge conference, so slipping would be Very Bad. They'll also know that dev is a bit gun shy about fixing this one at this point in the release.
  • The company will know... well, probably nothing. This one would most likely stop at the project team.
In general, your inner circle is best equipped to handle a lot of details and a lot of uncertainty. The further out you go, the fewer details and the less uncertainty are plausible. The project team, and especially the company, aren't in a position to truly evaluate the implications of "we don't know yet". The closer someone is to you, the more uncertainty your conversation can have and still have meaning. As you tell people who are less and less involved with all the details of what you're working on, the more you have to know and the more you have to explain.

So who do you tell about a bug and when?
  • Tell your inner circle as soon as you find it. Let them help you track it down more.
  • Tell your next circle as soon as you have a handle on it and know what the unknowns are. You don't have to have all the answers yet.
  • Tell the project team as soon as you can summarize it eloquently, including effects and frequency of occurrence.
  • Tell the company only when you know what is going on, what the effects on the project as a whole are, and what you're going to do about it.

Long story short, tell people about an issue when you have codified it enough that it will have meaning to them.

Wednesday, July 22, 2009

Dream Dashboard

Okay, so this has turned into a mini series about keeping dashboards and what some of the dashboards I use look like. Unfortunately, none of these dashboards or home pages is perfect.

What are we really looking for in a project dashboard or home page?
  • Overall project status. This is the kind of thing that should take no more than 1 minute to communicate verbally (though of course it's displayed in writing, graphs, etc.).
  • Recent events and immediately upcoming events. Everyone working on the project should have the ability to see what just happened, and what's coming up real soon now. This kind of immediacy helps put people in context of the project. Thus, a person can quickly understand what he needs to do or react to right now.
  • Table of contents. A project of any size will generate more data than can possibly be held on a dashboard or home page. However, that home page needs to point to the locations of the other data; it needs to be the place people go to find information about the project, even if the information is actually stored elsewhere. This can be one or more of: documents, wiki pages, bug lists, etc.
  • Ease of update. Hand updating and maintaining things leads to (gasp!) human error. So things that auto-update, or can be updated from scripts (say, for example, the list of current open bugs), are good. (A quick sketch of what that can look like follows this list.)
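Here's a minimal sketch of the kind of script I mean by that last bullet. The bug list is hard-coded and entirely made up; in real life it would come from your tracker's API or a saved query. The point is just that the "current open bugs" section gets regenerated rather than hand-edited:

```python
from datetime import date

# Made-up bug data; in practice this would come from your bug tracker.
open_bugs = [
    ("BUG-101", "Installer fails on upgraded systems", "Must Fix"),
    ("BUG-117", "Password change crash under load", "Must Fix"),
    ("BUG-123", "Typo on login page", "Nice to Fix"),
]

def dashboard_fragment(bugs):
    """Render the 'current open bugs' section as a MediaWiki table."""
    lines = [
        f"== Open bugs as of {date.today().isoformat()} ==",
        '{| class="wikitable"',
        "! ID !! Summary !! Priority",
    ]
    for bug_id, summary, priority in bugs:
        lines.append("|-")
        lines.append(f"| {bug_id} || {summary} || {priority}")
    lines.append("|}")
    return "\n".join(lines)

# Write the fragment somewhere the dashboard can pick it up (or paste it in).
with open("open_bugs.wiki", "w") as f:
    f.write(dashboard_fragment(open_bugs))
```
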
Basecamp and wikis (we happen to use mediawiki, but I've used Confluence elsewhere) are the two best things I've seen for this.

My dream tool, and I don't think this exists, has a top level overview, and clicking on each item displays more detail. Sort of a Google Maps for a project.

What do you want? And what do you use?

Tuesday, July 21, 2009

Some Dashboards

I wrote yesterday about keeping project statuses up to date. I heard from a few people asking what I use for a dashboard. The short answer is that it varies a bit by project. So I thought I'd put a few up and talk about them.

In all cases I've redacted names and sensitive text.

Here's one where we basically have a project dashboard in Jira:


Project Notes:
This is a late-stage project that is in bug fix and release mode. The people working on this are pretty much all technical - engineers and technical management.

What Works:
  • Quick access to project details
  • Pretty graphs!
What Doesn't:
  • This isn't a good way to handle development. It handles bugs really well but really gets you in the weeds of details.
  • Some of the less technical people involved have some difficulties constructing filters, etc. There's a learning curve there.

Here's one where we're using Basecamp for overall project views:

Project Notes:
This is a greenfield project that is accessed by the software builders and a client. Logins range from developers to the executive sponsoring the project.

What Works:
  • Good overview of a project that includes several different things - documentation, code, UI designs, etc.
  • I love the "what's coming in the next 14 days" right at the top.
  • Not intimidating for non-technical types.
  • It emails you digests of what changed and/or specific changes as you request. Can't get away from email as a notification tool, so it's nice to embrace it.
What Doesn't:
  • I don't know that I'd do bugs here. The general bug workflow (log, fix, verify, deploy) doesn't map well to the done/not done simplicity of Basecamp.
  • This works better for smaller projects; a project with dozens of people involved tends to get very chatty and need different UI metaphors - the list view can get very long very quickly.
Here's a project home page on a wiki:

Project Notes:
This is an internal project that is being worked on by sales, dev, marketing, exec management, and basically a smattering of people across the whole company. It's a fast moving project that's in prototype phase, so it's got a lot of possibilities and not a lot of certainties yet.

What Works:
  • Everyone's used to a wiki, so it's a really friendly format
  • Wiki watching lets email updates occur whenever the page changes
  • It's really really flexible
  • It grows well - as it gets larger you simply spawn things off to more and more pages and keep the "home page" light.
What Doesn't:
  • Hand maintaining this is something of a pain
  • Lack of a defined structure means consistency is difficult to maintain, and it can be hard to find things
So... have I found the perfect project dashboard? Nope. And I've certainly discovered (again) that different tools are useful for different contexts. So this is what I do, and I'm sure I'll change it as I discover new tricks.

What are your tricks?

Monday, July 20, 2009

Un-Stale

I just started a new project fairly recently. And I love the feel of a fresh project. I set things up for communication and transparency:
  • A project home page or dashboard
  • The task list (or card board or project plan) all updated
  • My code all nice and checked out onto a freshly installed system
  • Any documentation - requirements, UI designs, etc - all referenced from the home page to their storage location
It's great.

Then I just try to keep things from getting stale. Keeping project communication up to date is really about just a few things:
  1. Add new data
  2. Eliminate old data that is no longer correct or relevant
  3. Modify data that's changed
So in the end that's all I try to do. Keep documents updated as new versions come out. Update the home page when we pass a milestone. Mention that new requirement when it comes up.

I know that when I go looking for information, I want to know that I'm looking at the right thing. I figure it's incumbent on me to help make sure that the project lets me do that easily.

So don't think of keeping a project up to date as a huge, high-overhead thing. It's not that bad... just a matter of getting rid of the stale stuff and adding the fresh stuff.

Friday, July 17, 2009

Customer's Oracle

In testing, we're attempting to state what a product does in various scenarios, and often to compare that to the behavior we expect in those scenarios. The differences generally turn into bugs, altered expectations, documentation, etc.

In a few cases there are specific requirements. Maybe you have a screenshot of what the product is supposed to look like, for example. Perhaps you have the exact algorithm that is supposed to be used to calculate the projectile's trajectory.

In many cases, however, we don't have an exact indication of what this thing is supposed to do. So we search out oracles. There are a lot of oracles out there, including:
  • Another piece of software you'd like to emulate ("do it like Windows does")
  • The sales guy or dev who's been around the block (and presumably knows what the customer does or wants)
  • A competitive product (even if you'd specifically like to not do what they do)
  • The spec for a standard you're claiming to implement
  • Etc...
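In code terms, using an oracle usually just means comparing your product's answer to the oracle's answer for the same inputs. A tiny made-up sketch (the rounding behavior here is invented purely for illustration):

```python
def product_round(value):
    """The behavior we're shipping (invented for this example): round half up, non-negative inputs."""
    return int(value + 0.5)

def oracle_round(value):
    """The oracle we picked: Python's built-in round(), which rounds halves to even."""
    return round(value)

# The mismatches are where the interesting conversations start.
for v in [0.5, 1.5, 2.5, 2.6]:
    ours, theirs = product_round(v), oracle_round(v)
    note = "" if ours == theirs else "  <-- mismatch: which one does the customer expect?"
    print(f"{v}: product={ours} oracle={theirs}{note}")
```
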
Finding and using oracles is a supremely helpful thing. But.....

Is your oracle the same as your customer's oracle?

If you decide that you're going to do something "just like Windows", and your customer thinks the Mac way of doing that thing is right, you're going to have some disappointment.

If you decide that you really really hate how many steps it takes to create a user in your competitor's product and you're going to do it in half that many, and your customer has trained some people to do that workflow efficiently, they're going to be really annoyed that they can't apply that training to your product. (This one is particularly true of products where admins and the like are highly used to keyboard layout and shortcuts.)

In short, you can (and should) pick an oracle, but make sure it's the same oracle your customer is using, or at least an oracle that agrees with your customer's oracle.

Thursday, July 16, 2009

Definitive Source

I got this question yesterday:

"Do you have a good resource defining standard defect severities?"

Let's parse that for a moment. There are three interesting points:
  • "good resource"
  • "defect severities"
  • "standard"
"Good resource"
Do I have a good resource for this question? Absolutely. I have my own experience. I have a community of testers who are more than happy to answer any question I have - all it takes is a communication to a couple of mailing lists and/or forums. In this case there has been a lot written about the topic in blogs, books, mailing lists, classes and training materials. I can also look at "reference implementations" - in this case the default severities available in a variety of defect tracking systems.

I have copious resources here, and many of them are trustworthy.

"Defect severities"
When I get asked about severity, the first thing I consider is whether the person means severity, priority, some mix of the concepts, or something else entirely. Context of course matters here - who's asking and why. Sometimes my first answer needs to be a follow-up question about what information they're seeking.

In this case the questioner was an engineer who I know can differentiate between severity and priority, and has the same basic understanding of the meaning of those terms that I do. So I can skip the "do you mean severity or priority" follow up.

"Standard"
Here's where it gets tricky. There isn't exactly a single governing authority for testing. There's not even a generally accepted association or membership group. Lawyers have the Bar Association, physicians have the American Medical Association (or equivalent in your country), testers have... well... I don't know of one.

So we have to proxy standards with experience, or with the concept that "a lot of people" think X. We find important sounding groups - I eventually dug up an IEEE spec to answer this question in a way that gave him the ammunition he needed - or we fall back on "general practice", or we quote someone we respect in the field who has talked about this. The same problem holds true for software workers of all stripes, I think. There's no "official standard" for Perl style, for example; people just make it up for themselves based on what they've seen, or they point to respected developers and groups. There's no standards body for DBAs, no near-universal association of UI designers.


I wonder sometimes if the lack of an "official standard" for so many of these concepts hurts us, even if many of us basically agree on the concept. Having a "standard" has problems, but so does not having a standard.

Hmm...

Wednesday, July 15, 2009

A Little Bit Nostalgic

We're nearing the end of a release cycle. I was in the lab today running an install and as I was waiting for the installer to run I cleaned off the QA desk a bit. And I found a bit of nostalgia....

A stack of CDs of older builds of this release. All these were builds we'd tested and weren't shipping for one reason or another (not feature complete, contains must fix bugs, etc).

It was a fun moment of thinking back over the release and where we'd come.

Pre-Build 04/30: Not even the actual software, just a build to show we could install the base packages
Build 8: the first build we'd been able to install and actually form into a running system
Build 25: some really wacky configuration bugs in this one
Build 64: our last good installer for a little while - we used this one a lot while an installer bug was being fixed
Build 86: all the pieces worked, this build was about finding the ways in which they didn't play well together
Build 115: getting close here; we used this build to verify a lot of bugs
Build 122: pretty solid so far....

Release cycles can be pretty intense, and they're definitely forward-looking; we want to see and spend most of our time thinking about the next build and the current build and what's coming and what's changing. It's interesting, though, to take a few minutes and think about how you got to where you are.

That stack of CDs and the memories of the builds on them is almost a mini postmortem. It's a reflection of the path we took, with all its twists and turns, all its steps forward and steps back.

And then my install completed. Onward and upward!

Tuesday, July 14, 2009

Close-Hauling

In sailing, we sometimes find ourselves close-hauling. This is where we want to sail as close to the direction the wind is coming from as possible. We keep our sails in very close to the boat and get as close to the wind as we can. The problem is, if you push it too far and get too close to the wind, your sails start flapping and then you're not going anywhere (plus it's really loud).

So the fastest way to get to a point that's upwind is not to sail directly at the wind, but to sail just off the wind.

I've come to the conclusion that this is true in software development, too. We have a goal: shipping something on X date, or with Y features. And we want to run straight at it. But dropping everything and working on those features or that date - the equivalent of sailing straight into the wind - is actually slower in the end.

You see, much as we like to call them distractions, we need other things, too, to work effectively. There are things we have to do beyond the immediate goal:
  • planning work for what comes after we hit this goal
  • oversight and calibration to confirm that our goal is still a good one and doesn't need to shift a bit
  • water cooler talk and friendly conversation (after all, we are still humans, not software-creation machines)
  • helping support the prior release, if there is one
  • training new hires and expanding the skills of the current team
Concentration is great, and working toward a goal is a wonderful thing. Just remember that in order to get where you're ultimately going, you have to take care of the past and the future as well as just the goal that's right in front of you.

Monday, July 13, 2009

When It Messes Up

Messing around with infrastructure is sometimes a bit scary. Making a config change to the defect tracking system; updating (or patching) the mail server; changing the way a feature works in the core of your test infrastructure: all these are very public failures, if you don't do them right.

Of course, you try it beforehand. You create a test system, or you have a backup mail server you upgrade first. But it's still a bit scary. After all, you might mess up. And here's a hint: if you work on this kind of thing, one day it will go wrong. It might or might not even be your fault, but it will go wrong.

How you deal with it after it's messed up is what's important.

So there are two things you need to do:
  1. Have a backup plan before you start.
  2. Communicate well.
First of all, did you notice that neither of those was "don't try"? That's deliberate - you need to change and you need to try. A stagnant system will eventually stop fitting your needs, whether it's a mail server or a test infrastructure.

So, then what do you do?

The backup plan is very simple. The show must go on. So have a way to back out if you need to. Make sure you know how to back out your changes. Install a backup mail server and migrate all the data to it before you attempt changes on your primary mail server. Back up your config files so you can get the system back to the way it was before. That way, if you get in real trouble you can back out.
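To make the "back up your config files" part concrete, here's a minimal sketch; the file paths and backup directory are hypothetical, and the only real point is that the copy happens before you touch anything:

```python
import shutil
import time
from pathlib import Path

# Hypothetical config files for whatever system you're about to change.
CONFIG_FILES = [Path("/etc/myservice/main.conf"), Path("/etc/myservice/users.conf")]

def snapshot_configs(files, backup_dir=Path("/var/backups/myservice")):
    """Copy each config file aside with a timestamp so the change can be backed out."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    backup_dir.mkdir(parents=True, exist_ok=True)
    copies = []
    for f in files:
        target = backup_dir / f"{f.name}.{stamp}"
        shutil.copy2(f, target)  # copy2 preserves timestamps and permissions
        copies.append(target)
    return copies

# Take the snapshot *before* making the change; restoring is just copying back.
saved = snapshot_configs(CONFIG_FILES)
print("Backed up:", *saved, sep="\n  ")
```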

Second, and perhaps more important, is communicating what's going on. Before you start, make sure you tell anyone who might be affected that you're going to be making a change. This could be a downtime notification, or just a "heads up" that we're changing X Y Z to improve A B C. Then, if it goes wrong, tell people. Don't hide it and leave them wondering why the downtime is extending. Acknowledge that there are some issues with the change and you're working on them. Keep the updates frequent until the problem is resolved (and you've finished or you've backed out).

So please. Make infrastructure changes - keep it up to date, make it better, do what you need to do. Just remember, when you make a change, to have a plan for it to go wrong. And talk about it. It'll be better in the end than if you didn't do it at all.

Friday, July 10, 2009

Go Crazy Day

Ruts are a hazard of our profession. It's not hugely difficult to wind up doing the same thing day after day, release after release. It's easy. It's comfortable. It's dangerous.

Sometimes we need a go crazy day. This is a day where you can do whatever you want, as long as it's something you've never tried before.

It's kind of amazing what lengths you go to when you're specifically trying to do something you haven't done before. Sometimes, for the record, this is a total bust. But a lot of the time you find something really interesting or really scary or really good.

And that's why it's good to go crazy... for a day.

Thursday, July 9, 2009

Feedback In Time

How many times have you sent an email like this?

"Here's a draft of X. Please review and provide your feedback by next Tuesday. Thanks."

The recipients will, of course, vary based on exactly what it is, but almost all the time, you really do want input. However, as usual, you're on deadline. So you have to get feedback in what people call "a timely manner". Hence, a deadline. A lot of times you'll get some feedback in time; some of your audience is both sufficiently deadline-oriented and interested in helping achieve the goal of getting a good X together.

Others, well, not so much. Deadlines come and deadlines go, and you hear back quite late or not at all. (By the way, this is rude unless you warn the person requesting feedback and let them know when you really can get it to them.) So what do you do about the people who missed the feedback deadline?
  • Ignore them. You can simply not incorporate their feedback and chalk it up to a lost chance to vote. This is generally viable in cases where the person's feedback is likely to be covered by someone else's, or where the feedback is tangential. If this is your boss, or someone who brings a unique and important view, this may not be something you can do in practice.
  • Set a new deadline. Send a follow up mail basically saying, "okay, this time it really is important. Please review by this Friday." Most of the time this doesn't work. The person is no more or less incentivized to do it now. The exception is if your reviewer was simply unavailable (on vacation or something) and is now available; then you might get a response.
  • Send a preemptive reminder. A day or so before the initial deadline you set, send out a reminder to those whose feedback you have not yet received. This works best if you send it directly to that person without a cc list; that way you start to overcome any "well, someone else will get to it" mental block. This one actually works rather well; it lends immediacy and personal relevance to the request.
  • Whine. This one I really don't recommend, but it is a possible approach. Basically, go to that person's boss (or publicly to the project manager), and try to convince their boss to go make sure they provide feedback. This has a very "whining to mommy" feel to it, and isn't likely to help you get thought-out feedback.
In the end, those are really your choices. Which path you take will depend in large part on the people providing feedback, and the individual(s) from whom you haven't yet seen a response. No magic here, unfortunately, just make your request, follow up on it, and understand that no matter how often or how politely you ask, some people are just not going to bother.

Wednesday, July 8, 2009

Don't Care

There are a lot of things at work that I care about:
  • whether my team is happy and productive and learning, and how to help them be more so
  • how testing for the current release is going
  • how the releases in the field are faring and what we're learning from customers
  • planning for the next release to at least try to not make it a sprint
  • looking out for duplicate rates, reopen rates, and bugs bouncing between groups - all signs of inefficiency
  • monitoring and looking to improve what a "fully implemented" feature means so we spend less time thinking of things late in the cycle
There's more, but you get the idea.

However, there's another group of things that, frankly, aren't worth worrying about. These are the things where you're welcome to ask, but my response is going to be something along the lines of, "Look, I just can't bring myself to care deeply about that. It's not worth arguing over." I know some people may be hugely invested in them, but I have better places to put my energy. Examples of this include:
  • Ensuring that every team uses the "new" and "opened" bug states in the exact same way. (Hey, as long as y'all get it, it really doesn't affect the testers - neither of those states means "go look at this")
  • What order someone chooses to do his test mission, as long as it all gets done on time. (Okay, maybe it doesn't match what I would have done, but we are two different human beings)
  • What time of day we tag the end of an iteration. (No skin off my nose whether it's 10am or 10pm as long as the code is of the stability level we expect when it's tagged)

Your list is probably a bit different than mine, and that's okay. The point is that there are things you should be worrying about, and things where worrying about them really doesn't gain you anything. We all have only 24 hours in a day, and we all have only a certain amount of energy. Deciding that you're not going to worry about something is an action that gives you the freedom to work on what you really care about.

Deciding what to not care about is as valuable as deciding what to care about.

Tuesday, July 7, 2009

Start Early

Upfront disclaimer: this is not strictly about testing. Sure, it applies to testing, but it's also about dev, testing, requirements elicitation, deployments, a lot of things. I'm going to use the word "release" throughout, but you can consider that to mean release or deployment or PRD or whatever applies in your situation.

Let's say we have a release. And we've decided that we need to do 10 things in that release. For simplicity's sake, we're going to call them things 1-10. Further, let's say you have a team with all the right stuff to do these 10 things. Congratulations.

Now, when we start the release, we have a decent idea of what things 1-10 are, and we have an estimate for each of them. Depending on how much you know, exactly, those estimates are what I think of as 30% estimates, or even 10% estimates. In other words, you're close (within +/- 30% or even +/- 10%, depending). But... you're not quite there. So you have 10 items, and each of them could be 30% short. That's a pretty significant schedule overrun. Maybe you were wrong the other way, but let's be pessimistic for a minute.
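To put rough numbers on "pretty significant", here's a trivial worked example with made-up figures: ten features at ten days each, where every estimate could be off by 30% in either direction:

```python
# Ten made-up feature estimates, in days, purely for illustration.
estimates = [10] * 10
error = 0.30  # a "30% estimate"

nominal = sum(estimates)                               # 100 days as planned
pessimistic = sum(e * (1 + error) for e in estimates)  # every feature runs 30% over
optimistic = sum(e * (1 - error) for e in estimates)   # every feature runs 30% under

print(f"planned: {nominal} days, worst case: {pessimistic:.0f}, best case: {optimistic:.0f}")
# planned: 100 days, worst case: 130, best case: 70
```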

Basically, at the point when you start the release, you have schedule risk. The risk is based on all the things you don't know about each of the features - that third-party dependency you have that really isn't as easy as the documentation makes it look; the poor colleague who gets the stomach flu and is out for a week; the algorithm change that should have been simple but doesn't actually work so we have to rethink it. How do we reduce risk? By knowing more. By developing. A feature is lowest risk after it's done.

Fortunately, we're all smart people and we know this. So we look at the perceived risk of each feature and we try to do the higher risk features first. And usually it works to a degree, but sometimes that "easy" "low risk" feature that you left until the end just blows you and your delivery date out of the water. Whoops.

So what can we do to mitigate this?

Start early.

There are several distinct points at which you discover that something is harder than you anticipated, and they cluster. They're right at the beginning, when you dig in and immediately figure out that this is hairier than you'd anticipated. They're about 2/3 of the way through, when you start to actually run this thing you've started to create and discover that it just isn't hanging together. And they're at the end, when you're working through the edge cases and discover that there is a major flaw you missed. What's interesting is that if you look at the frequency of those occurrences, the beginning and 2/3 through problems are much more common than the ones you find right at the end. (Note: Not scientific; this is just my own observation.)

So the trick is to get through as much of the total overall risk as quickly as possible. Which means you have to get about 2/3 through each feature as soon as possible. As you're planning your release, plan to reduce your risk as quickly as possible by doing your high risk things, and the high risk portions of all features as early as possible.


A note on risk points: risk at the beginning of implementation and risk at the very end are pretty universal in my experience. The third risk point is 2/3 through implementation for where I'm at, but you'll have to discover/confirm that point in your situation. You can find this by looking at the story and discovering where your burndown chart for that feature changed, or by looking at where the bugs start to show up, or by looking at where in the story the status goes from "making progress!" to "well, still working on it". In short, ask your project manager (or equivalent); he/she will have a good idea.

Monday, July 6, 2009

AGAIN!?

Sometimes you have a problem (not good). And sometimes that problem goes on for a while (even more not good). When that happens, it's easy to get frustrated. For everybody.

"You want MORE logs?!"
"Okay, you really want me to reproduce this AGAIN?!"
"Reopened!? Again!?! No change?! Yikes!"

It all comes down to a collective cry of "Aren't we getting ANYWHERE?!".

Yeah, that point. We've all been there.

Here's the thing. Frustration is just like any escalating behavior. It's perfectly legitimate to be frustrated. Generally, you're not the only one who is feeling that way. And it's inevitable that it will come out. You'll start to see the signs: extremely precise parsing of sentences; the formation of "we" and "they" (with "we" being the injured party); intimations that "I'm not going to do anything until someone else does some darn work around here!". None of this is fun, but it's all pretty normal.

The nasty thing about this spiral is that someone else's frustration will only increase yours, and venting your frustration will only increase theirs. You have to stop the spiral.

So how do we do this?
  • Do not let their frustration make yours worse. Take a deep breath before you reply to anything.
  • Acknowledge the frustration out loud, and ask for the venting to stop. Note that you're frustrated, too, and that you're all working together to try to help. Venting isn't going to help anyone.
  • Explain why. If you're asking for work (more logs, a new debug build, reproducing this again), describe what you're hoping to get out of it. It gives the action purpose instead of simply making it look like you don't have a clue.
  • Solve it. Ultimately, frustration levels will decrease when the problem goes away.
Frustration is part of daily life. Systems are complex enough and generally finicky enough that there will be hard problems, and problems that take a while to solve, and that's frustrating. How you handle that frustration - yours and someone else's - will help determine how quickly and well you get to a solution.

Wednesday, July 1, 2009

Show Stopper

We talk sometimes about "show stopper" bugs. These are the things that you look at and say, "Oh my. We couldn't POSSIBLY ship with that."

Oddly, a "show stopper" was originally something (a song, dance, monologue) that caused the entire show to stop due to prolonged applause (and cheers, etc).

Somehow I can't imagine applause for a show stopper bug. At least, not usually. Relief you found it, maybe, but not applause.

We call 'em "must fix" bugs or "blockers" instead.