Tuesday, June 30, 2009

What Makes a Good Tester

James Bach's comments on my post yesterday considering how a tester can align his perception of himself with reality got me to thinking. What makes a good tester?

The short answer is that I don't fully know. It will vary a lot based on the project, task, and my current team makeup. However, there are a few things I always look for, and a few red flags. The rest - and there is more - will depend on your project and your team.

So what do I think makes a good tester?
  • Curiosity. Testing is ultimately seeking information day in and day out. Someone who enjoys seeking out information is going to last a lot longer than someone who doesn't.
  • Detail sifting. A vaguely-worded bug, or feature, or overview of a release's current status and risk points is worse than useless; it looks like it provides information but it really masks some level of ignorance. Conversely, too much detail creates a problem where you can't see the actual information because there's just too much data in the way. A good tester can sift details for relevance and communicate them.
  • Translation abilities. Usually testers will wind up working with people having very different understandings of the system, different technical facility, and different needs out of a communication event (n.b., fancy speak for "conversation"). A good tester can quickly discern the other party's focus and skill set, and tailor the conversation to that.
  • Good memory. I can't count the number of bugs that I've seen found because, "hey, didn't we try that and see something funny last week?" A good tester will remember what I call the "niggling things": all those little details that weren't important at the time, or that we haven't gotten back to investigating yet. Those niggling details are where your most interesting digging lies.
  • Logical thinking. If I have to explain every detail of a system to someone, I'm going to go insane (and may take you with me!). I want someone who, given a pattern or a basic understanding, can go figure things out. Further, this tester has the ability to say "if X and Y, then possibly Z", even without the system actually being in place. This is invaluable in considering the ramifications of a feature change, or thinking about what may have changed in a system due to a configuration change, or considering where we may find bug clusters.
You'll notice I haven't said, "Finds a lot of bugs", or "Has a low duplicate find rate", or even "Can find the scary issues". Those things are side effects, not attributes of a person.

So there are lots of wonderful things to look for. What about the red flags?

There are certainly things that scare me:
  • "I always wanted to do this". Someone who "always" wanted to be a tester is already prone to exaggeration (at 5 you wanted to be a tester, not, say, a firefighter?). In rare cases that may be true, but it's more likely that you've got someone who's just trying to appear eager. And I'd rather have truth without exaggeration - what are you going to say when you talk about the system?
  • "I'm good at finding bugs". Okay, so you can pick things apart. Finding bugs is part of the job. What about the rest of it? Tell me about that, too.
  • Automation engineers. I expect all my testers to be able to approach a system with a test mindset. That includes those who will spend most of their time working on test infrastructure/scripts/automation/etc. You still have to be a tester. Note that for this one, I could imagine a situation where this wasn't a negative, but only in cases where I'm looking to fill a very specific position on a team that has the coverage in other areas.
This is, I suspect, a touchy area. I know that the things I value in a tester are not necessarily the only things - or even the same things - someone else might value in a tester. So let's hear it: what do you think makes a good tester?

Monday, June 29, 2009

How Good You Say You Are

So here's the thing. Some people are not so good at their jobs. Some people are good at their jobs. Some people are great. And there are a few who are just out-of-this-world at their jobs. These are the testers you get to work with once in a career, or the developers who blow your mind and write maintainable code while doing it.

And then there's an entire separate piece - the self-assessment. How good does this person think they are? And again this is all over the map. There are people who are convinced they're a mistake waiting to happen, and people who think they're probably pretty good. Then there's the cohort that can be heard saying at cocktail parties, "Well, because I'm very good at what I do." And lastly there's the gang that you hopefully only ever hear rumor of, who are convinced they are the best (whatever profession they are) that the world has ever seen.

The correlation between actual quality of work and perceived quality of work is weak.

So two questions:
  1. Who cares?
  2. How do we overcome this, assuming someone cares?
The short answer to the "who cares" question is: whoever is judging themselves. If my self-assessment is wrong, I will either be underestimating my contributions, or I'll be setting expectations higher than anything I can possibly meet. Underestimating means I don't get that job or that plum assignment I want. Overestimating... well... we call those "bad hires". It's okay to be a little wrong - we're probably all a little wrong - but when you get way off there's a problem. And you'll start to see it: you'll be passed over for things you could do easily, or you'll be hit with a string of disappointing performance reviews.

So how do we avoid this erroneous self-assessment trap?

First listen. You'll hear about how you're doing. Do you keep getting the hard bugs thrown your way? Do you get "good find" comments? Got a raise or a title bump? That's a sign you're probably not half bad. Go with it. Alternatively, are your performance reviews not as good as you would expect? Do you keep getting asked to redo work or explain your assertions? You may not be as good as you think you are.

Avoid extreme pitches. Are you billing yourself as "tester extraordinaire"? That's an awfully high bar to clear. You might want to back off that a bit.

Ask someone you trust. Find someone you trust - a colleague or a boss - and ask them how good you really are. It'll take a little convincing to get them to be blunt, but it's invaluable.

Your face on the world is first your self-assessment and second your actual work. Make sure your self-assessment gives you the opportunities you deserve and sets you up for success.

Friday, June 26, 2009

Unpredictable

In general, I think that one of the characteristics of testing is seeking predictability. We find a bug, and we want to find a way to reproduce it every time. We look for inconsistencies in a product, seeking to normalize the user interaction. We seek to reduce customer surprise by providing knowledge.

Unfortunately for our seeking, we live in an unpredictable world. And thus we find ourselves attempting to normalize, or at least predict, the unpredictable. These are the things we find in the field that you never would have dreamed of. There's the drive that's failing but doesn't actually have any bad sectors, so it wreaks havoc without anyone getting notified by the bad sector detector. There's the narrow race condition that you don't fix because it's so narrow... and then it happens in the field. There's the customs official who decides to hold your shipment for two weeks' inspection, when the client needed it in three days and you shipped it overnight. There's the potential deal you really shouldn't win but you do.

In the end, we can't predict the unpredictable (it is, after all, unpredictable!). But we can predict that something unpredictable will occur. We can look at history and say that unpredictable things happen about X often, or about Y times a release. And that's something we can plan for. Sometimes the project plan needs to include "major unpredicted event (2 weeks)". We're not sure what it is, but something is sure to come up.

It's not complete predictability, but the unpredictable deserves your consideration. Don't waste time trying to imagine what it might be. Just understand how you're going to deal with an event dropped right in the middle of your carefully constructed plan and your carefully constructed product.
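
If you want a rough number to put in the plan, a minimal sketch along these lines can help. This is just an illustration in Python; the incident counts and the two-weeks-per-event cost are made-up placeholders, so substitute your own history:

# Rough planning-buffer estimate from past releases.
# The incident counts and the weeks-per-event cost below are hypothetical;
# pull the real numbers from your own release history.

unpredictable_events_per_release = [1, 3, 0, 2, 2]  # last five releases
weeks_lost_per_event = 2.0                          # rough historical average

expected_events = sum(unpredictable_events_per_release) / len(
    unpredictable_events_per_release
)
buffer_weeks = expected_events * weeks_lost_per_event

print(f"Expect ~{expected_events:.1f} unpredictable events; "
      f"plan roughly {buffer_weeks:.1f} weeks of buffer.")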

Thursday, June 25, 2009

You Can Have Some Of What You Want Easily

Over time as you work somewhere you build up systems. You have an automated test infrastructure and a way you interact with it. You have a way you use your test systems for manual (or semi-manual) testing. You probably even have QA types who typically use the same systems ("Hey, Bob, can I try something non-destructive on your test box?").

Over time, though, your needs change. That automated test suite is large and getting larger... and now you have a whole new project that needs some time on the infrastructure. Your system configuration is now a two-machine config, and you have to share test systems a bit more. In other words, life happened. Time to catch up.

There's good news and bad news here. The bad news is that you are probably looking at a fairly large project to make major changes to accommodate your needs. To rejigger your test estimates to account for having fewer net test systems will have an effect measured in weeks. To change your automated tests and the infrastructure in which they run to be more efficient will take weeks or even months.

The good news is that you can have about 80% of what you want with a lot less effort. Sure, it'll be a bit cruddy, but it's probably a viable internal solution, and you can find a way. You can reorder your test plan to do non-conflicting things in parallel. You can run just a small piece of the new stuff alongside the existing automated tests.

Some jobs are big jobs. And eventually you're going to have to do the big job. But in the meantime, look to see if you can get part of it with a lot less effort. It'll save you a lot of interim pain.

Wednesday, June 24, 2009

Over-Reproducing and Under-Reproducing

There's a fine line between helpful and obnoxious sometimes. Take reproducing a bug, for example. There are two exchanges you commonly hear:

Exchange 1:
QA: "I found a bug"
Dev: "Well, hmm... that shouldn't happen. And it didn't happen when I tried it. Can you reproduce it?"

Exchange 2:
QA: "I found a bug"
Dev: "Ok"
QA: "The bug still exists"
Dev: "We haven't fixed it yet"
QA: "Look! After 5 more builds the bug still exists"

These two exchanges show opposite problems.

In the first exchange, you have under-reproduced the bug. You didn't try it again from far enough back, and you may have missed something in your writeup. It may really be a bug, but there's more subtlety to it than you actually wrote up.

In the second exchange, you're over-reproducing the bug. If no one's tried fixing it yet, of course it's still going to be a problem. While it's true that sometimes dev will look at a bug and think it might be fixed, that's not really a common case. So what are you really accomplishing by looking for a bug that has likely not changed state? In practice, not much. You're pretty much wasting your time re-covering old (and unchanged) ground, and you're being obnoxious. So don't.

It's occasionally a bit difficult to know whether you should spend time and effort reproducing a bug that hasn't been fixed yet, but in general, if your initial writeup is good (and you reproduced it then), it's probably not worth the effort unless something has changed that is likely to affect the bug.

Easy lesson for us all today: Reproduce a bug until you know how to make it happen and you understand it, then stop until something material changes.

Tuesday, June 23, 2009

Too Big Task List

One of the things that's fun about prototypes is that they're fast and they're small (at least, usually). It's very much a case of "let's see if we can do this". And then the big moment comes. It was a good prototype; let's make it a product!

Did you notice your task list?

It just went from this:
|--|

To this:

|---------------------------------------------------------------------|

Whoo boy.

Your task list has a lot of things on it now: market research, API development, product development (hint: it's quite likely that you're throwing out the code you've written in the prototype), documentation, testing, performance evaluation, design, etc etc etc.

Okay. Slow down. Yes, you have a lot to do. Now, do just two things:
  1. get a high-level list of what you have to do between now and release
  2. figure out what you have to do this week
It's most important when you're starting to productize something to simply start. If you've shipped product before you kind of mostly know where you have to end up. So get that big picture out of your head and on paper (or virtual paper of your choice). Then get the details that you need right now. That's it.

One of the best things you can do to keep that excitement and productivity from the prototype process going is to simply keep going. Keep coding, keep designing, keep coming up with use cases. Don't let the analysis you need to do for the product stop that entirely. It's important to do the analysis, but it's okay to do other things as well. Do the fun and the analysis - you really can have both.

Monday, June 22, 2009

I'm Going to Teach You That

We, like most companies, use a process that's, frankly, "our own". It's most like XP, with some elements lifted from SCRUM and a few other places. We've definitely adapted, though. Even if you've worked in an XP shop before, you'll probably still find some of our practices a bit different from what you're used to. Most people haven't worked in an XP shop, so they're going to find it even weirder than what they've done before.

No matter what your background, I expect to teach you our software development process.

You do have to bring some background to the table. You're probably temperamentally inclined to work the way we work in general. And you may have experience with a lot of the techniques we use, from the concept of automated tests to the use of continuous integration to the understanding of how to review code.

But I don't expect you to know that every checkin has a reviewer, and how to ask for a review. I don't expect you to magically know how much pairing we expect, or when we write a test for something and when we don't. You probably don't know our branching and merging techniques, or what it means when the orb turns red (although that one is pretty obvious!). I expect to teach you all of that.

And that's why I don't really care too much about your process background. I care about your temperament - can you work in our environment? - but not that you've studied the details and memorized the Extreme Programming and Pragmatic Programmer books. You bring the right attitude; I'll bring the details.

Friday, June 19, 2009

Verifying a Bug

You found a bug, someone fixed the bug, and now it's sitting back here in your queue awaiting verification. A build containing the fix has rolled off the build system and you're all configured. Awesome! It's verify time...

One of a few things is going to happen here:

It all works great, and nothing else broke.
This is ideal. It's well and truly fixed, and your product is now just a bit better. Great. Mark the bug as verified and move on.

It works great, but something else broke.
This happens to the best of us. Sometimes the fix itself broke something else, and sometimes fixing this bug merely exposed one that had been hidden before. For example, before you couldn't collect logs at all, and now you can (fixed!), but it collects the wrong thing (whoops!). Either way, you're not quite there yet. However, the bug as reported is fixed. So mark the original bug as verified and open a new bug for the new (or newly exposed) issue. You get bonus points for linking the two issues, or at least putting a comment in both bugs that fixing defect X resulted in logging defect Y. Don't go reopening the bug; the behavior of the system has changed, and your defect tracking system should reflect that.

It's useful to keep track of when this occurs, because a true fix should be more complete. Particularly if fixes are actually breaking other things, that points to brittle code that could probably use some cleanup, or to a developer who maybe isn't as careful as he should be. Both of those are problems you want to find sooner rather than later.

It does the same thing it did before.
This also happens, but it's more unusual than a fix with unintended consequences. In this case, the first thing to check is that you really have the fix. Did you take a build from the right branch, and are you sure it contains the fix? After all, for someone to attempt a change and have exactly zero effect on the behavior of the system is rather unusual.

Assuming you do have the right build, take it back to the developer. Don't just bounce the bug if you can avoid it; this is a time for conversation. You may find that the developer didn't understand your initial bug report and made a different fix, or that there's a difference between the developer's setup and yours that's causing the problem to recur, or that a later checkin accidentally reverted the change. After you've talked the bug will probably go back to the developer, but it's worth the conversation first.

It does something different but still not right.
This falls into the "well, we tried" category. The behavior changed, so you know you're looking at a change, but it's just not there yet. This is very similar to the above category. Unless you're working with a truly disengaged developer, it's not too likely that they just didn't care enough to finish the fix. Generally, any engineer worth their salt is going to at least run through your steps in their own environment and make sure that works. It's more likely that you're looking at another issue, or a conflicting checkin, or a vague writeup in the first place (did you really put ALL the steps in to reproduce it?), or a difference in environment. Here, too, have the conversation before you just go playing ping pong with bugs.


Note that there's also a special case of defects found in automated tests, but there's a bit of a different heuristic for that.

Regardless of what happens, when you verify a bug, it's useful to consider both the system behavior and also the nature of the change in system behavior.

Thursday, June 18, 2009

Range of Benefits

In general, when we change things, we hope to derive benefit from the change (Shocking!). Some changes bring greater benefits than others, but that doesn't mean we shouldn't at least consider making them. After all, your potential benefits when trying to solve a problem cover quite a range:
  • Do no harm. Okay, so you're not getting anything, really, but at least you didn't make it worse. For an engineer working with a system, as for a doctor, it's kind of the least you can go for. Sometimes this is the unintended effect of a change ("but we really thought it would do something!").
  • It's not much, but it's easy. This is the entry to the land of non-zero benefits; you're getting something. As long as the cost isn't much, it's not necessarily bad if the benefit is low, too.
  • It won't fix it, but it'll help. This change is intended to be partial. Diagnostic changes generally fit in here. You're not really trying to fix the problem, but it might help the user and it will help you get to the bottom of whatever's going on.
  • It hides the symptom. Here we're trying to alleviate pain, even if we don't resolve the underlying problem. In some cases, this is good enough; it really depends on what the underlying problem is.
  • Fixed! This is usually what you start out aiming for. Sometimes you can't get here, because fixing the problem might have unacceptable side effects or untenable requirements, but often you can achieve resolution.
One of the classic ways to make a problem more frustrating is to keep making changes and not fix the problem. A lot of times this frustration happens because you're not clear on the intended effect of your changes. So slow down for a second, think about what the change is going to do, and communicate it to the person (or client or team) having the problem; it'll help keep frustration down. And that's good for everybody.

Wednesday, June 17, 2009

Balancing Coverage and Verification

Okay, you've achieved feature completeness. Hooray! Further, you've mostly got things integrated. Now it's time to start stabilizing for release. Time to fix some bugs, and testers are probably going to find an increased number of bugs (hey, it's stable enough to actually find problems - that's a good thing). Depending on your product, this phase is going to last somewhere between a few hours and several months.

Things start off simply enough: testers set up a system (or several) with the candidate code running on it, and go through finding stuff. Combine that with the bugs that just didn't quite get fixed during feature development, and dev's busy knocking down their bug queues. All well and good so far.

Shortly after that, though, we start to have a bit of a testing dilemma. The list of bugs awaiting verification is growing, and the list of areas of the product you still have to cover is shrinking but still definitely far from zero. What do you do first? Prove the fixes, or go looking at areas of the product you haven't touched yet?

In other words, how do you balance coverage and verification?

As with most things we talk about, the answer is "it depends" and "probably not 100% of either". There are good arguments for both actions.

Benefits of increasing coverage:
  • If there's a scary issue you haven't yet found because you haven't looked in that area, well, better to find it sooner rather than later.
  • The less that is untested, the lower your risk overall (I tend to believe that minimizing unknowns generally minimizes risk).
Benefits of verifying bugs:
  • It closes out the lifecycle nicely - you really know it's done.
  • If there are any bugs hiding behind the bugs you're verifying, you're likely to catch them here. You are still improving coverage by increasing your depth of coverage in that area.
Both increasing your breadth of coverage and verifying bugs give you good things, and doing 100% of either of them is not a recipe for success. So how do we balance these things?

Typically, I'm going to start off a release really worrying about coverage, and over time we'll increase the amount of verification we do, finishing with a coverage-oriented pass. The schedule looks something like this:
  • Beginning of the cycle: 80% of the team's testing time is spent on extending coverage. Until we've done one pass through pretty much everything, I'll err on getting that one pass done. 
  • About 35% of the way through: Now that we've hit the basics, we start worrying more about finishing out some of this work - say 50% on verification and 50% on more general coverage increases. 
  • About 60% of the way through: We've flipped and are spending most of our time verifying defects; we do get some incidental increased coverage but it's less of a focus.
  • About 90% of the way through: There's a big flip at the end of the release cycle. By now you should have verified pretty much everything that's coming in, and your find rate is probably pretty darn low, so we're just looking at coverage again. 95% coverage increase, 5% rounding out those last few bugs.
This is really just a general guideline. Based on what we're finding and what's getting fixed we'll change it up a little. If our reopen rate is pretty high we'll err more on the side of defect verification. If this release has touched some deep underlying parts of the code, we may err on the side of increasing coverage sooner to pick up side effects. The point is more to introduce some predictability - depending on where you are it's okay that bugs are piling up a little bit in QA, or it's okay that you haven't touched an area yet - and to make sure that you're setting expectations correctly.
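
To make that guideline concrete, here's a minimal sketch in Python that maps release progress to a coverage/verification split. The breakpoints and percentages just restate the rough schedule above (the 20/80 split in the later stretch is my own filler number, since I only said "most of our time" there); tune them for your own team and release:

def testing_split(progress):
    """Return (coverage_pct, verification_pct) for a point in the release.

    progress is 0.0 at the start of stabilization and 1.0 at ship.
    The breakpoints and percentages mirror the rough guideline above;
    adjust them based on reopen rates and how deep the release's changes go.
    """
    if progress < 0.35:      # beginning: mostly extending coverage
        return 80, 20
    elif progress < 0.60:    # middle: even split
        return 50, 50
    elif progress < 0.90:    # later: mostly verifying fixes (assumed 20/80)
        return 20, 80
    else:                    # final pass: back to coverage
        return 95, 5


for point in (0.1, 0.4, 0.7, 0.95):
    coverage, verification = testing_split(point)
    print(f"{point:.0%} through: {coverage}% coverage, {verification}% verification")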

In the end, you're going to need both defect verification and some level of coverage before you're through with a release cycle. Think in advance about how you're going to achieve both, and you're much more likely to actually do so in a manner that gives you the feedback you need as early as possible.

Tuesday, June 16, 2009

What Really Is Nice?

I got a lot of comments on my post about being nice, and it occurred to me that there were a couple of things I should follow up on. Specifically, I didn't address what the actual act of being nice is, just that I think it's the difference between being good at what you do and being great.

And I suspect we may have several different definitions of nice floating around. So here's my definition of being nice at work...

Being nice is:
  • Reaching out to people. Say "hi, so and so" in the morning instead of the nod-and-smile.
  • Not attacking people, but discussing ideas. It's perfectly fine to disagree with something, but any disagreements should be of the form, "Here's a hole I think I found in this idea, let's figure out what's wonky in that area" rather than "Who on earth would think that is a good way to do X?"
  • Giving credit publicly. Someone else came up with that cool idea? Say, "yeah, that's a good idea" out loud in public.
  • Letting a bad day slide. Everyone has bad days, frustrating times when they resemble a snapping turtle more than a human. On days like that, not escalating is the best way you can help anyone - including you - get through it.
  • Remembering the little details. Every Tuesday one of the teams here brings in bagels. Being nice is saving the last whole wheat bagel for the guy who really likes them and taking an egg bagel instead because you're really indifferent between those two flavors.
  • Really listening. Close the laptop, stop mentally preparing your response, and actually listen to the person speaking.
Being nice is NOT:
  • Letting people walk all over you. You still get to have opinions (of course!); you just need to express them respectfully.
  • Hiding bad news. If a release has to slip, or a major defect leaked to the field, or if you have an employee with a performance problem, it's not nice to hide it. Better to confront it, but make sure you're confronting the problem, not the person. And make sure you're ready to look for a solution, not just start moping about this major issue.
  • Telling people what they want to hear. In the long term, and even in the short term, the truth is the nicest thing you can say. Sugar coating something (or worse: lying!) is a recipe for disaster.

To me, being nice is really about attitude and phrasing more than it is about the basic underlying message. It's about taking the information you have to convey and making sure you convey it in a way that is respectful (Jeroen, that word's for you!) and considerate of your audience.

Monday, June 15, 2009

Okay, But Where Are We?

Your typical daily standup - in SCRUM or otherwise - consists of each person talking about three things:
  • What did I do?
  • What am I going to do?
  • Anything stopping me?
This is all good stuff. I walk out of there sometimes, though, wondering if we're actually on track. I feel like I have a lot of trees, but I'm not at all clear on the forest.

This kind of overall progress is supposed to be charted on your burndown chart, if you're keeping one. But that doesn't always measure everything. It can only tell you if what you've done so far is on expectations (or behind or ahead). It can't tell you if there's something on the horizon.

I kind of want to add a fourth question to our daily standup:
  • How do you feel about the release overall?
This is a chance for the person to say, "I know I just started this story, but it's looking like it's going to be really hairy", or "Now that I'm into this feature I was scared about, I think it'll actually be pretty straightforward." I think of it as the early warning system for what your burndown chart will tell you in a few days. And the earlier the warning, the more you can do about it.

That extra 15 seconds is worth it to me. So ask what happened and what you're doing, but also look at your place in the overall effort by simply asking about it explicitly.

Friday, June 12, 2009

Be Nice

Your attitude is one of the most powerful tools in your professional arsenal. How you choose to interact with people will ultimately govern how well you can do your job. And there are two things you control there: the circle of people with whom you interact, and how you talk (write, etc) to them.

I don't care how good you are at programming, finding bugs, whatever. If you're rude, or if you speak poorly to people who don't understand your... quirks.... you will wind up being shunted to the side. No one wants to work with someone who makes them feel beat down all the time, or someone who they simply can't understand, or someone whose reaction to every issue is to start wailing about the end of the world.

There are two circles you work with. First there are those who you work with constantly and who understand that when you say, "what moron would check in something like that?!", you really mean, "there might be an issue in this code" and further that there's a good chance the moron was you. You can let your hair down around this group a bit. There's still a line, but it's further away.

Then there's the circle of people who you work with less often. This group isn't exposed to you every day and doesn't necessarily appreciate that your passionate attack is about the work rather than the person.

So what's the secret? Easy. Figure out your circle, and then:

Be nice.

Be relentlessly nice.

It's a high bar to clear. You have to be informed. You have to be able to consider ideas quickly and effectively. You have to be able to translate concepts into language sales, marketing, and development can all understand. You really do have to be good... and you have to be nice.

You'll feel provoked sometimes (not everyone is as good as you are about considering their coworkers); be nice and refuse to engage. You'll be frustrated sometimes; be nice. In the end you'll get pretty good at it. (Good sales guys, by the way, are amazing at this in front of clients.)

Being very good at what you do makes you just that: very good. Being very good and being nice: that makes you great.

UPDATE 2009-06-16: It occurred to me that I failed to define "nice", so I posted an update attempting to clarify my thoughts.

Thursday, June 11, 2009

What If Your Customers Aren't Agile?

Without getting into the whole "agile" "Agile" "Agile(tm)" "postAgile" "flexible but not, you know, Agile" debate, here's something I've been trying to work through in my head.

Let's posit for the moment that you want changes in your customers' hands as soon as possible. Let's further posit that you've decided the best way to do this is to do lots of little releases. If you're using SCRUM, you're trying to release after every iteration... or maybe every other iteration.

Question: Are your customers prepared to handle your release schedule?

The answer to this may very well be yes. If you're the stereotypical webapp, your customers get updates automatically and as long as it continues to work, great! More features? Bonus!

Other times your customers may not be so flexible. If your customer is a large enterprise, they probably have policies around upgrade frequency, not taking .0 releases, etc. Plus, making any change probably requires the approval of three other departments and takes several weeks. Unless it's an emergency, "fast" is a bit of a relative term.

When your customer has a six month upgrade window, having a monthly release cycle doesn't make a whole lot of sense. You may wish to go through the motions, but if you actually go trumpeting things to these conservative bi-annual updaters, well, they'll start to laugh at you.

If you want to be agile, that's your business. Please feel free to do so. But you don't get to force your customers into agility. If they want to, they will. If they don't, well, you have to work with them.



(Note for those agile proponents: Yes, this is just one part of being a flexible development team. There's a whole lot more there that I'm ignoring for the purposes of this example.)

Wednesday, June 10, 2009

Two Part Fix

There are times you have a problem in the field. These times suck. While it's going on, you're doing everything you can to fix the problem, whatever it is. And this is good. But keep in mind there are two parts to fixing the problem:
  • alleviate the customer's pain
  • resolve the underlying issue
Particularly when the issue is non-trivial or will take some time to fix, it can be a while before you are able to resolve the underlying issue. In this case, consider alleviating the customer's pain in the meantime. Ask yourself the following questions:
  • Is there a workaround?
  • Can we reduce the frequency of the issue?
  • Can we reduce the effect of the issue when it occurs?
If the answer to any of those is yes, you can make your customer happier while you go in and fix the problem once and for all. Generally, unless the fix is going to happen really soon now, alleviating the customer's pain is going to be worth your time.

When you have a customer issue, don't forget to look at both halves of the fix: (1) making the problem hurt less; and (2) fixing the actual issue.

Tuesday, June 9, 2009

Meetings as Prod

Meetings have several purposes (no, really! Sometimes a meeting isn't a bad thing). Sharing of information, getting consensus and making decisions, etc. can all happen in a meeting.

But sometimes a meeting is just there to force you to actually work on something.

Need to prepare a presentation and know you're a procrastinator? Schedule a review meeting a good amount of time before the presentation. You'll have to do the presentation for the review meeting. Procrastination moved!

Dread the "let's come up with some goals" process that happens every year or every six months? Don't worry about it... yet. Just do the prep work before your review meeting. That's your real deadline.

It's not that you don't have to do things - you do. And it's not that you have to enjoy everything you do - you don't. It's just that sometimes you need a bit of a nudge to get them done, and you can let a meeting be that nudge. 

In the end, I'm basically doing the work to avoid being embarrassed that it isn't done. And when it's work I don't take pleasure in inherently, well, that's okay. It still gets done. Thanks, meetings!

Monday, June 8, 2009

What You Meant

Humans are human, even at work. Emotions will get involved. Sometimes that's good - pride in a job well done, seeking of praise and approval, etc. But there are about a million ways to insult a human, intentionally or unintentionally. This is why we watch not only what we say but how we say it.

For the most part, this is straightforward. Other times, you can say something with the best of intent, but what the other person is likely to hear is totally different. For example:

"I know you're too busy to review this presentation"
  • What you meant: I'm trying to save you some time.
  • What they heard: You can't handle your current work, much less something else. Oh, and I don't really value your work or opinion anyway.
  • What you should have said: "It would be great if you could review this presentation, but I understand if you can't, or if someone else should. Just let me know."
"I couldn't ask you to work this weekend"
  • What you meant: I'm trying to respect your time.
  • What they heard: It's not like you're going to step up, anyway. OR You're looking run down from how much you've been working lately, and I don't think you can handle it any more. (NOTE: In certain situations, where the person has, say, their wedding, this does not apply - what you mean is what they'll hear)
  • What you should have said: "The team needs to accomplish X this weekend, and Bob and Sue have been stepping up lately. Do we have any other volunteers to come in and help out?" (This is known as the attempt-to-guilt-trip-the-rest-of-the-team approach.)
"Way X is the only logical way."
  • What you meant: Way X makes more sense, so it's what we'll do.
  • What they heard: What kind of illogical moron would come up with way Y?
  • What you should have said: "Way X gives us A, B, and C, with the downside of D, which we will mitigate by E. Way Y has the downsides of way X, but we don't get C and we can't really mitigate D. Way X is the way we'll go." (Translation: Explain why, and attack the idea on the merits, not by simply calling it stupid)
There are other phrases you have to watch, of course. These are just some of the nefarious ones. The point is that although we'd all like to think we're egoless engineers, and although in some environments knocking people down is an acceptable thing, most of the time, we need to try to not insult other people. And you know the things that secretly hurt you even though you kind of know they didn't mean to? Everyone has those, that secret little cringe when they feel just a bit insulted, even if they know it wasn't meant as an insult.

So mind your phrases, make sure you say what you actually mean, and that you're not taking shortcuts that will only hurt someone's feelings.

Friday, June 5, 2009

Everything Is a Reaction

"Reactionary" is a denigrating term, at least in the places I've worked. To say that something (or someone) is reactionary implies that it is incompletely considered or totally unconsidered, that it has a diminished chance of success, and that we really could have planned ahead better here. "Proactive" is the opposite of reactionary, and it's generally a point of pride.

I don't know that it has to be so black and white, though. In some sense, everything we do in a business is a reaction, from starting a business, to adding a feature, to introducing a new product, to exiting the business.

Founding a business: "I started WidgetCo because I really needed a Widget and no one made one that lasted more than two months." You started a whole company as a reaction... to a need. And now it's a thriving business.

Adding a feature: "Our customers are really asking for CIFS support. That's coming in version now + 2." You're reacting here... to customer needs. If your customers and potential customers really do all (or most) need that feature, you've expanded your possible footprint. This is how you achieve success.

Introducing a new product: "Sales of our primary product were down and the market was saturated, so we're coming out with a new product". You're reacting here... to your sales pipeline and market conditions.

That's not to say that reacting is always good. Far from it. Adding a feature just because one customer asked for it doesn't mean it will be useful to anyone else (and you may never recoup the cost of building it). Making your product "Web 2.0" just because you read that term in a lot of magazines lately might not make any sense (what does it even mean for your mobile-platform based synchronization product?).

So don't automatically follow your immediate reactions. But don't condemn them either. Give them due consideration as you would any idea, no matter where it came from.

Thursday, June 4, 2009

Summations

Quick! For every person in your group, provide a one-sentence description of that person as a team member and contributor.

(For the record, I'm totally making these examples up).

Frank: Really smart, but has trouble focusing on work.
Melvin: Can't explain things well, but is a wizard with the highly intermittent, hard-to-track-down problems.
Claire: All around good, but not as fast as she thinks she is.

When I ask people to do this, they'll generally come up with a good and bad balance. "Bad, but good", or "Good, good, but bad". This applies to either individuals or entire teams. If you have a situation where you can't think of a good thing about a person, or if you can't think of a bad thing about a person, then there's a problem. People and teams are, well, human. There are always good things and bad things about their work. If you can't see both sides, ask yourself if you really have someone who is that extreme. More likely, you're biased in some way, for or against. It's these types of biases that can blunt our effectiveness as managers - we're not looking at the whole person or team. And yet we do summations all the time, explicitly (during review season, for example), and implicitly (when deciding to whom a task should go).

Overcoming the snap judgements about someone's effectiveness is hard, but most of it comes down to committing yourself to a balance. For someone you think is all bad, find something that they do really effectively. For someone who can do no wrong, look closely and don't stop until you find something that they're not as good at (that mythical perfect team doesn't actually exist, unfortunately). Use a written pros and cons sheet, and remind yourself of the whole picture when you need to come to judgement on someone or some team.

Be mindful of how you're considering people and teams, and make sure that you're not hurting yourself and the team by letting yourself see only one side. There are two sides to every person, and to every team. You owe it to them to look at the whole picture.

Wednesday, June 3, 2009

Event Modeling Over Time

I had to haul this one out again today: time-based event modeling. In a nutshell, I was looking for patterns, and I was looking for some sort of variation in the patterns to figure out what on earth might have changed. It's useful for a case where no one thing went wrong; the system just slowly churned itself into the ground. And so I figured out how this slow spiral started by looking at the heartbeat of the system, and following it down.

By the way, this is hard work. Make sure you have some time to focus when you go to do it.
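
For the curious, here's a minimal sketch in Python of the sort of bucketing I mean. The log format and event name are hypothetical stand-ins for whatever your system actually emits; the point is just counting events per hour so you can eyeball where the rhythm changed:

from collections import Counter
from datetime import datetime

# Hypothetical log lines of the form "2009-06-03 14:05:12 retry_storage_write".
# Substitute your own parsing; the technique is bucketing events by time
# and looking for where the rate starts to drift.

def hourly_event_counts(lines):
    counts = Counter()
    for line in lines:
        timestamp, event = line.rsplit(" ", 1)
        hour = datetime.strptime(timestamp, "%Y-%m-%d %H:%M:%S").replace(
            minute=0, second=0
        )
        counts[(hour, event)] += 1
    return counts

sample = [
    "2009-06-03 14:05:12 retry_storage_write",
    "2009-06-03 14:45:02 retry_storage_write",
    "2009-06-03 15:10:44 retry_storage_write",
]
for (hour, event), count in sorted(hourly_event_counts(sample).items()):
    print(hour.isoformat(), event, count)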

Tuesday, June 2, 2009

Verify Or Cover

So here's an interesting release scenario....

You branch for release.
You start testing.
You find a bunch of bugs.
You keep testing as people fix the bugs.

Now you have choices: you can verify the bugs, or you can continue to expand your test coverage.

Which do you do?

Arguments for increasing coverage:
  • You found bugs in the first stuff you looked at; you have to assume there are major bugs in the stuff you haven't looked at. Looking earlier will help you find them earlier.
  • Bugs tend to be found in clusters. Emphasizing verification only wears the roads in that cluster deeper - providing less new information.
Arguments for verifying bugs:
  • Fixing bugs may cause bugs - you need to find the collateral damage. It helps to find it earlier while the code is still fresh in everyone's mind.
  • Verifying bugs lets you finish out a cycle (in this case, the test -> find -> diagnose -> fix -> verify bug cycle), and that leaves fewer loose ends. It's satisfying for most people involved to complete something like that.

The real answer, of course, is that you don't do either exclusively. You spend some time on verification (and finding collateral damage), and some time expanding your coverage to areas of the product you haven't hit. As you balance the two, though, consider where your risks are. If you have a high rate of collateral damage, weigh toward bug verification. If you haven't hit much of the product yet, or if you have teams sitting idle, weigh toward increasing coverage. Take a look at your team and your product, and change your testing strategy accordingly.

Monday, June 1, 2009

Weirdest Bug of the Day

I sat down at my desk this morning and noticed a dark pixel. And then the dark pixel started moving. I had a bug on my monitor.

Except then I tried to brush it off and I couldn't feel it at all.

I have a bug in my monitor.

I can't decide if that's cool or gross.