Tuesday, February 24, 2009

If It Hurts...

"Doctor, it hurts when I do this!"

Never mind. We all know where that one's going.

Seriously, though. Sometimes a problem doesn't have to be a problem. Sometimes the problem is really a choice, and it's possible to choose something else. So when you're banging your head against something, find out if you're solving a problem or making a choice. Maybe it's easier to simply make a different choice.

For example, in a product I'm currently working on, you can choose to use TCP or UDP. These are different layer 4 protocols, both of which are capable of carrying NFS traffic. Our product simply wants the NFS traffic; which protocol it uses is up to the client mounting the NFS export.

Is there a difference between choosing to use TCP and choosing to use UDP? Of course. UDP has lower overhead; TCP guarantees reliable, in-order delivery. Is one protocol inherently better than the other? Nope.
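To make the choice concrete, here's a minimal sketch of what it looks like from the client side (assuming a Linux NFS client, where the proto= mount option selects the transport; the server name, export path, and mount point are invented for illustration):

    # Minimal sketch: the transport is just a mount option chosen by the NFS
    # client, not a property of the export itself. Names and paths are made up.
    import subprocess

    def mount_nfs(server: str, export: str, mountpoint: str, proto: str = "tcp") -> None:
        """Mount an NFS export over TCP or UDP; the export is the same either way."""
        if proto not in ("tcp", "udp"):
            raise ValueError("proto must be 'tcp' or 'udp'")
        subprocess.run(
            ["mount", "-t", "nfs", "-o", f"proto={proto}",
             f"{server}:{export}", mountpoint],
            check=True,
        )

    # Same export, different choice - neither call is "wrong":
    # mount_nfs("archive01", "/exports/data", "/mnt/data", proto="tcp")
    # mount_nfs("archive01", "/exports/data", "/mnt/data", proto="udp")

Either call gets us our NFS traffic; the product doesn't care which one the client picks.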

This is a choice.

Sure, there are good reasons to make one choice over another. But that doesn't mean that everything guiding you to one option is a problem with the other option.

Worry about the problems.

Don't sweat the choices.

Monday, February 23, 2009

Must Be a Writer

If you're going to work for me, you must be able to write.

Wait, what?

I mean you must be able to clearly communicate a variety of technical and business concepts, in such a way that your audience both understands the information and is satisfied with it.

Just this past week, my team has had to produce the following:
  • several status reports for a release that goes to support and manufacturing Monday
  • logged bugs based on automated tests that failed
  • a summary of all the issues with one area of our product, including bugs, ideas for improvement, and "gotchas" that are related to customer configurations
  • an explanation for a client about why an Active Directory user that our system uses must have certain privileges
  • a script and accompanying documentation to handle diagnosis and cleanup of machines left in a bad state by nightly tests ("handling" here is logging a bug with appropriate information and logs)
  • written results of a log analysis of a third-party product that is a client to our product and was having some problems interfacing to us; this was sent to the support team of the third-party product
There's no way one person can write all of this on their own. Producing a report of what you're doing is an essential part of being a good tester. I don't care how good you are at doing things; if you can't communicate them clearly to a variety of audiences, you are less effective.

So next time you're bringing a new person on to your team, think about their testing skills and their ability to integrate with your group. Just don't forget to consider their writing skills as well.

Friday, February 20, 2009

Completionism

Our product has an upgrader (shocking, I know!) and we of course have to test the upgrade. There are a number of different scenarios, so we wind up with a matrix built from three variables: the version we're upgrading from, the hardware type (Alewife or Davis), and the object store type (M or EC). The specifics don't really matter; these just happen to be ours.

So the first thing a tester is going to do is to make up a complete matrix and then start running down the list, right? Err.... let's think about that for a second. We've kept this matrix tight, so it's not too long if we did want to do it all, but that completionism is not the best use of our time.

We have an enterprise product, and one of the side-effects of that is that we know what our customers are running in the field. So let's use that: for a given combination of variables, if no customer is currently or will ever run it, we can skip it. For example, no customer is running 3.3EFS1 on all Davises and we know that no customer would be upgraded to that release while they're running Davises (they would go directly to 4.x), so we don't have to do any testing in that scenario.
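As a rough sketch of what that pruning looks like in practice (the version list and the skip rule below are invented placeholders, not our real support matrix):

    # Sketch: enumerate the full upgrade matrix, then prune combinations no
    # customer runs now or ever will. Versions and the rule are placeholders.
    from itertools import product

    from_versions = ["3.3EFS1", "4.0", "4.1"]   # hypothetical "upgrading from" list
    hardware = ["Alewife", "Davis"]
    object_store = ["M", "EC"]

    def customer_could_run(version: str, hw: str, store: str) -> bool:
        # The rule from the example above: nobody runs 3.3EFS1 on all-Davis
        # hardware, and nobody would be upgraded to it on Davises either.
        if version == "3.3EFS1" and hw == "Davis":
            return False
        return True

    full_matrix = list(product(from_versions, hardware, object_store))
    to_test = [combo for combo in full_matrix if customer_could_run(*combo)]

    print(f"{len(full_matrix)} combinations in the full matrix, "
          f"{len(to_test)} actually worth running")

The matrix is still there; we've just been honest about which cells deserve our time.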

Completionism is attractive, but it may not be the most efficient thing to do. Sure, set up your matrix, but think about it as you go through it - don't be a slave to the boxes!

Thursday, February 19, 2009

Frazzled

We all have days when there is way way too much going on and it's all urgent. Sometimes that way too much includes personal things, and your coworkers may have no idea something's happening. Sometimes it's all work stuff, but for different groups, and each group needs something now now now!

You're frazzled.

One thing I sometimes forget is that it's okay to tell people that.

You can say, "I need 5 minutes to gather my thoughts," or "there's a lot going on right now and I will get back to you, but please have some patience."

So when you're frazzled, go ahead and tell people. Just make sure you:
  • acknowledge their need and assure them you won't forget them
  • tell them that you have a lot going on (no specifics necessary!)
  • let them know how long you need
  • don't add to your own frazzle by trying to do it all at once - get to a stopping spot with each task before you move on to the next one, even if it means you have to ignore your phone for a bit.
We all can sympathize with frazzled days, but no one will know unless you tell them. So be calm and firm and just let all the people needing things from you know that you will get it done, but today is a busy day.

And then get through it.

Wednesday, February 18, 2009

Scratch the Biggest Itch

Take any sufficiently perfect system, add a bit of familiarity - knowledge plus usage -  and it will develop cracks. Call this Catherine's Law of Closer Looks.*

So now we've got a system that we'd like to change. Maybe we're talking a little spackle and paint to smooth out the edges, maybe we're talking tearing down walls. Doesn't matter. We've got ideas!

Now, in school they taught us just what to do next.... brainstorm. Except I think this sort of sucks in practice. You get lots of ideas, true... and then what? Life starts to interfere - a customer problem, a new release that has to go out, additional problem areas, etc. And all you've got is a list that might be prioritized.

So skip the list. Instead of holding an afternoon-long brainstorming session, stop. Say to yourself, "what hurts me the most?". Spend an hour or two (no more!) figuring out what single thing causes you the most pain. Then fix that.

That way, before other things have had a chance to swoop in and pre-empt you, you've made one change that actually helps. It's still not perfect, but it's better. And with any system, better is a pretty darn good goal. Over time, each little better adds up to a huge improvement in your system.

Find your biggest itch and scratch it. Repeat. Eventually you won't itch any more.



* For the record, I'm sure someone has said this before; Google's just not helping here. If you can find it, I'll be happy to credit appropriately!

Tuesday, February 17, 2009

Small Bugs

Sometimes we'll hit a bug that's so small it hardly seems worth bothering to log it. Maybe it's a missing icon in an uncommon user type in an obscure area of the application. Maybe it's a message that erroneously shows up in a log and is never visible to the customer. Maybe it's something else. In any case, it sure doesn't seem worth the 2 minutes it'll take you to write it up (including a screenshot or logs).

Doesn't matter. Write it up.

Test is not for the lazy. This bug may never become important enough to fix, or it may be really quick and a developer just knocks it out when he's looking for an easy thing to do on a summer Friday at 4:30 pm. But if you don't log it, that bug will always be there.

In the end, one small bug doesn't really matter. When you have several dozen or several hundred small bugs, though, your customers start to lose confidence in your product. It just looks sloppy and adds up to something your customer doesn't trust because it doesn't "feel solid".

No matter how small the bug, it's worth putting it on a path to being fixed.

Monday, February 16, 2009

Mondays Are For Ambition

Mondays are great days. You have the whole week in front of you - 5 whole days in which to accomplish many good things.

I love Mondays.

Mondays are when you can tackle that project that's going to take several days because, hey, no weekend to derail your thought. You can put the petty things off a couple days because you can do that and it'll still get done this week. Mondays are for ambition.

The trick, then, is how to preserve that ambition. How do you get in and actually do all these things you think you can do?

I don't claim to have all the answers, but here's what works for me:
  • Take Monday morning to clear the decks. I happen to have a Monday morning meeting, so I get that out of the way. I look at automated tests from the weekend. I put things that came up over the weekend into the appropriate buckets.* All this takes me through noonish.
  • Start on Monday. Monday afternoon I block a big chunk of time out on my calendar for projects. For four hours on Monday afternoons I tell everyone there are to be no interruptions unless the building is burning down. Email goes off, IM goes off, I put my headphones in. This is project time - let's do something with that ambition.
  • Wrap up the day with little stuff. I reserve an hour at the end of the day for little stuff that has accumulated. This last hour is when I'll answer emails, deal with crises not involving burning buildings, etc. If I don't take this hour I find I spend the previous four hours worrying that I'm going to miss something and make Tuesday suck. Giving myself time to deal with it on Monday frees me to focus before I have to deal with whatever it is.
There's no rocket science here, but I do love it when you can start a week by getting something substantial done. Use the ambitious Monday!



* My buckets for prioritizing tasks are labelled "now!", "should happen soon", "that's nice" and "eh".

Friday, February 13, 2009

On Decision Making and Influencing

Michael Hunter (aka the somewhat-less-Braidy-than-before Tester) wrote a few days ago about the fallacy of the tester holding a release. His point, if I'm understanding correctly, is that somewhere our tester heads get filled with the illusion that we can go running down a corridor shouting "don't ship!" and the product will actually not ship. (He put it more eloquently than that; I'm paraphrasing.)

First and foremost, I agree with Michael if the tester we're talking about here is a member of the test team and not the manager or lead or whatever you call the most senior tester in the group. If you're a tester, you have a test manager for a reason, and foisting this kind of political hot potato off onto him/her is part of that reason. Take advantage of it. And if your test manager doesn't understand that shipping is not your decision to make, you have my sympathies and I hope the job hunt goes well.

However, if you're the test manager, does that illusion still hold? Do you still really get to believe you can decide whether to ship or not? And do you really still not get to make that decision?

I would argue that the problem still occurs. How many times have we heard the questions:
"Is that bug a blocker?"
"So, are we ready to ship?"
"When will you be done testing?"
"Is this build ready for [client]?"

There are a couple of things you can do here:
  • Leave. This has happened at every test job I've ever had, though, so I'm not sure leaving will fix it.
  • Refuse to play the game. Simply answer something like, "I don't know. I do not possess sufficient knowledge of the business circumstances to be able to make an effective decision." This stonewalling is not likely to make you friends.
  • Change the circumstances. Accept that you're going to be asked and spread your risk by making your voice just one among several contributing to (but not making) the ship/no ship decision.
I generally vote for changing the circumstances. Which is nice, but how?

Recognize why your knowledge looks special
It helps to understand why you're being asked these questions. At this point in the product life cycle you the test group are the ones with the most knowledge about the actual product. Product management can talk about what it ought to do, development can talk about what they tried to do, support can think about how to interact with it, but test is the group that can say what it actually does (at least in some cases). You have first hand knowledge about the product and that gives you a loud voice. It would be stupid of the major stakeholders to not at least seek the opinion of the group that has more experience than anyone else with the product, even if that experience is not yet sufficient to make an informed decision.

Make the decision communal
This is the single biggest thing you can do to alleviate the "testers pick what ships" fallacy. You can call it a release board, or a product team, or a release management committee. This should include all major release stakeholders: sales, marketing, product management, support, and development (with QA). And this group is what says yes or no on a release (or in some cases, makes a single ship/no ship recommendation that upper management then signs off on).

Getting this started is always fun, and you'll need your boss's help to make it more formal, but it's eminently doable, generally because it sounds reasonable. Plus, getting all the stakeholders in a room to make a decision lets you say "I don't know" without it ending the conversation. After all, you're not just saying, "I don't know." You're saying, "I don't know, but here's how we can find out," and that's a much more actionable response.

Discuss likelihood and workaround
When you go to talk about untested areas in a release, lack of knowledge is not a very compelling argument. Instead, talk about likelihood of there being customer-affecting problems in untested areas, and what kinds of workarounds can be done. For example, if we haven't tested writing using WebDAV, can we ship the product to our CIFS-only customers first and ship a bit later to those using WebDAV?

Similarly, when you talk about a bug, all anyone really cares about is how likely it is to happen, how the customer will be affected (are we talking total meltdown or a small error message in a remote area of the product?), and what can be done to avoid or fix it. So talk in those terms. Saying, "We will see this at [customer], they will lose data, and we don't know of a way to recover it" makes your issue really likely to get voted a stop-ship problem.

Educate yourself
Remember, you're one of the loudest voices in the room even if the decision isn't yours. Your analysis needs to be educated and fact-based. Talk with sales and with marketing to minimize surprises. When the schedule is first created, ask why those dates were chosen. Is the schedule feature-based, or is there a major trade show or some revenue we really want in that quarter riding on shipping this software? Criteria for shipping can definitely change based on what's waiting for the shipped product and how urgently. This will also allow you to identify reasonable workarounds and be prepared for questions and concerns.

Don't involve the whole test team
In general it's not good to have each tester (or each developer, for that matter) running around talking about whether a release is shippable. This is what you have a test lead or a test manager for, so use that. Testers should make their case to their manager, and let the manager speak for the collective team. This minimizes the political game of playing people off against each other, which mostly takes a lot of time and gets teams upset.

Rely on your reputation
Being asked to make or strongly influence a decision is not a good position to be in, despite the frequency with which you the test manager will end up there. You need to have a good strong reputation so when you talk about risks you get listened to. If you've been right quite often (regardless of whether the ultimate decision went the way you wanted it to), then you will be listened to, and when you say you need more time to reduce risk you're more likely to get it.

Work offline
Don't save these kinds of decision requests for meetings. In a public forum like that it's easy to back someone into a corner and being reasonable flies right out of the window. A quiet word of warning about the state of the release can turn a public disagreement into a private convincing - and the private convincing is much more likely to be effective.

Thursday, February 12, 2009

Ask Why

One of the more valuable tools in a tester's arsenal is a single question:

"What are you hoping to accomplish with that?"

This is particularly true when working with people who have a tangential relationship with the product. They have a problem, they think they know how to solve it, they start, and then they find a step they can't do. So you'll get a question about that step. Answering the question is great, but that's not their real goal.

So to be helpful, answer questions.

To be really helpful, understand the goal and then give a complete answer, which may or may not include answering the specific question.

Wednesday, February 11, 2009

Translator

The test group often finds itself the gateway between dev and other groups. Support, for example, comes to QA first when they can't figure something out. Sales comes to QA with a scenario and the question, "Can we handle this?" Product management comes to QA to ask about the state of a feature that marketing wants to demo.

As a consequence, we find ourselves translating a lot. 

Sometimes it is just different terms:

Marketing: "Permabit Enterprise Archive"
Dev: "clique" or "object store"
Support: "object store"

(Marketing changes names sometimes; dev just goes with what we've generally called it.)

Sometimes it's precision of ideas:

Sales: "When the object store is busy"
Support: "During an some authority transfers resulting from server removal"
Dev: "In phases 4 -7  of an uncontrolled leave"

Sometimes it's other things, like avoiding sloppy use of the word "crash" because that's a scary term to users and to sales.

A good test team will handle this translation seamlessly, using terms and phrasing based on the audience. Much like we are polyglots in the languages we use to code, we need to be polyglots in the language we use to describe features and problems. So look at your "to" list, or at the other attendees in your meeting, and choose your words considerately.

Tuesday, February 10, 2009

Sins of Omission

Ogden Nash wrote about two kinds of sin: the sin of commission, and the sin of omission:
"It is the sin of omission, the second kind of sin,
That lays eggs under your skin.
The way you really get painfully bitten..."
Problems have a tendency to loom. And when you created the problem, avoiding it is a sin of omission. I think we've all done it, too. Of course, it doesn't actually work very well.

Let's say you screwed up reporting some performance numbers - you meant to say 12 MB/s write rates, and you accidentally said 21 MB/s. No one reads those performance reports anyway, right?

Wrong.

In a month, the sales engineer is going to be doing a quote for a client, and he'll look up some recent performance numbers... and all of a sudden your customer is expecting to be able to fit in his backup window just fine because he only needs 18 MB/s. Only he's not going to get 18 MB/s.
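Run the numbers and it's easy to see why (the backup size and window here are made up, but the shape of the problem is real):

    # Illustrative arithmetic only: the backup size and window are invented.
    backup_gb = 500
    window_hours = 8

    def hours_needed(rate_mb_per_s: float) -> float:
        return backup_gb * 1024 / rate_mb_per_s / 3600

    for rate in (18, 12):   # what the customer planned around vs. what they'll get
        fits = "fits" if hours_needed(rate) <= window_hours else "blows"
        print(f"{rate} MB/s -> {hours_needed(rate):.1f} hours ({fits} the {window_hours}-hour window)")

At 18 MB/s the backups squeak in under the window; at 12 MB/s they don't, and now it's a support call.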

That small reporting problem you didn't fix because it was embarrassing is now a client issue that has support, development, QA, and the account manager involved. Before it was "whoops". Now it's "how are we the company going to deal with this?". "Whoops" is a much better place to be.

So, when you find yourself picking up the corner of the proverbial rug and getting ready to sweep your mistake under, stop. Fix your mistake. Be forthright but not loud about it. And then move on. In a week you'll be the only one who remembers.




Monday, February 9, 2009

Just One Hour

The week is generally hectic - get up go to the gym go to work rush rush through work go home stop by the store make dinner eat do dishes go to bed (phew!). Weekends are different. Sometimes they're hectic, sometimes they're computerless (thank you, rural Vermont!). But man weekends are quieter than the weeks. Those are the times where I can stop and breathe. And during those times, I ask myself:

What is one thing - just one thing - that I can do that will make tomorrow easier?

It makes a great start to the week if I can go in feeling caught up or even just a little ahead on the game. Something that takes me 30-45 minutes on a quiet Sunday will take me an hour or more while I'm in the office, by the time I do it in between all the other things that need to be done. So I can take the 30-45 minutes on a Sunday, and I can walk into the office Monday morning knowing that one item on my list is already checked off.

It's not for every weekend, and it's not for every person, but when you know you're going into a busy week, taking a little time on Sunday can be the difference that tips you from "overwhelmed" into "busy but can handle it." Enjoy your weekend. And use your weekend to help you enjoy your week.

Friday, February 6, 2009

So Many Ways to See Things

There are so many ways to see something. Take something as standard as a map of the world.

You can look at it in the "normal" way....



You can look at it in an old-fashioned way (Check out California. The understanding of California is definitely outdated)...


You can look at it overlaid with other information (this one happens to size countries based on population)...

You can turn it on its side and look....
You can look at it with a different center of the world (this is an Al-idrisi map)...

You never know what you'll see!

So when you think you understand something or see something, rearrange it. Turn it upside down, move your vantage point, try looking at the way you used to understand it. Just change it around and see what ideas a new perspective can generate.




P.S. Yes, I'm a little bit map obsessed.

Thursday, February 5, 2009

On Not Lashing Out

No matter how professional we are, when a really bad day comes around sometimes it just kills you. Frustration upon frustration, nothing comes out right, everyone wants something, you screwed up but good and there's no space to fix it, and you just don't even have a moment to yourself to think! If something doesn't give real soon now you're gonna blow.

There's standard advice here: It's just a job. Take a deep breath, go for a walk, and come back refreshed. (Or if you're like me, grab some flour and yeast and water, and pound away at some bread dough for 20 minutes or so.)

There's some validity in that. The real trick, though, is to not say or do anything stupid until you can get out of there and blow off some steam. And the first part of that is recognizing that you're starting to lash out.

So, how do we know things are going downhill fast and it's time to get out?
  • You don't know of one thing that's gone right today (and you certainly don't expect that to change!).
  • You get some emails and boy do you feel attacked (is it paranoia if they might actually be lashing out at you?).
  • You're flushed and no one else thinks it's hot in here.
  • Most of the things you want to say to your coworkers are to show how wrong they are.
  • Someone asks if you're okay (generally this means you're already giving off signs you're not okay).
Conflict and difficulties are part of working, and of course we're all professionals here. But no matter how bad one day is, it's not worth damaging the relationship you have with your coworkers and your customers. Your first priority is not making your world worse. Your second priority is fixing the problem. Getting the second one right won't undo getting the first one wrong.

Don't turn a bad day into a bad environment. Get out before you lash out.


And I hope tomorrow is better.

Wednesday, February 4, 2009

Look Even When You Know

I was working on accepting a story earlier today. The upshot of it was that nothing actually had to be changed - the story was as specified without any code change at all. So I read the story and thought about it, and said, "Sure this makes sense." Story accepted!

No.

Let's just go take a look at the related area of code. In this case, the story had to do with the way we were configuring Samba, so I opened up smb.conf in a couple of different scenarios (after an upgrade, after a clean install, etc.). And then I got confused about the story.

You see, when I actually looked at this, I saw some details that I had simply glossed over when reading the story. Nothing was too major, just questions about how it behaved across upgrades, and what types of changes would induce the desired behavior. And in the end there was a bug in there - we really did need to do some work and we hadn't thought of it.
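What "actually looking" amounted to was roughly this shape of check - I did it by hand, but a sketch makes the idea clear (the paths and scenario names below are invented):

    # Sketch of the "actually look" step: compare the generated smb.conf across
    # scenarios instead of trusting that nothing needed to change. Paths and
    # scenario labels are invented; the real check was done by hand.
    import difflib
    from pathlib import Path

    scenarios = {
        "clean-install": Path("scenarios/clean-install/etc/samba/smb.conf"),
        "upgrade":       Path("scenarios/upgrade/etc/samba/smb.conf"),
    }

    baseline_name, baseline_path = next(iter(scenarios.items()))
    baseline = baseline_path.read_text().splitlines()

    for name, path in scenarios.items():
        if name == baseline_name:
            continue
        diff = list(difflib.unified_diff(
            baseline, path.read_text().splitlines(),
            fromfile=baseline_name, tofile=name, lineterm=""))
        if diff:
            print("\n".join(diff))   # every line here is a question to ask about the story
        else:
            print(f"{name}: identical to {baseline_name}")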

The moral of the story is this:
Look at something, even if you already know what's going on. Some detail may surprise you.

Tuesday, February 3, 2009

Test Estimates

Usually when I work with estimates, I'm very careful to include estimates for the entire effort: design, implementation, testing, bugfixing, shipping. After all, just part of that is no good to our customer; only the whole feature is worthwhile, and ultimately an estimate is an attempt to answer the question "When will our customer be able to have and use feature X?".

Within that, however, I'm ultimately responsible for testing. I'm not going to implement it, I don't run the team that designs it, and no one in their right mind would put me in charge of marketing launch! So how do we do test estimates? How do we estimate the part that we are ultimately directly responsible for?

There are a number of things to consider:
  • How we will approach this feature or change
  • How long the actual tests will take
  • How likely we are to find issues and how much of our test it will delay
  • How much testing we're likely to have to redo
  • The level of acceptable risk to the company for this feature
  • Other pressures, in particular around dates
Let's break each of these down a little bit:

How we will approach this feature
We have to design our tests for a feature. Will we need a new tool? Are we doing exploratory testing? Is this a regulated API with a compliance program already provided? Is there a spec we have to meet or is this something with a little bit of give in the requirements?

Here you're looking at a relatively coarse-grained level for large items and for long lead times. New tools, new techniques, or major changes to your overall test strategy will extend your estimate.

How long the actual tests will take
At some point you're going to sit down to do or run your tests. How long will this take? Is this an installer that takes 20 min to run? How many missions do you have and at what durations? If it's a test prerequisite to have a full system, how long will it take you to fill a system?

This is where you add up your design and see how long just the doing will take.

How likely we are to find issues and how much of our test it will delay
Here's where things get fuzzy. Are you going to find problems? Probably. Will some of those problems prevent you from doing tests you'd like to do? Probably. How many problems and how much delay? Is development going to fix the problems immediately or will there be a delay before dev gets to it?

This is very organization-dependent. Sometimes we simply don't include this in the test estimate at all (it goes into the implementation estimate). Other times we take a stab at an estimate for this. If you're including it in your test estimate, I'd strongly suggest calling it out as a separate line item and talking to dev about it before you present your estimate. You're likely to get a lot of pushback in this area, either due to a belief that this time the software will be better/less buggy, or because they think you've overestimated how quickly development will react to bug reports. Let history be your guide here as much as possible; it's a lot harder for reasonable people to push back when faced with data.

How much testing we're likely to have to redo
This comes directly from the point above. Since the software is likely to change, some of our testing is likely to have to be redone. However much this is, you have to add in that one-hour (or one-day or one-week) test again.
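Pulled together, the quantitative pieces so far often reduce to a small sum like this (all numbers below are placeholders; the rework fraction and bug-delay figure are exactly the organization-specific guesses discussed above):

    # Sketch of rolling the pieces above into one number. Every input is a
    # placeholder; rework fraction and bug delay are the fuzzy, org-specific parts.
    def test_estimate(design_days: float,
                      execution_days: float,
                      rework_fraction: float,
                      bug_delay_days: float) -> float:
        rerun_days = execution_days * rework_fraction
        return design_days + execution_days + bug_delay_days + rerun_days

    # e.g. 2 days of design, 5 days of execution, expect to redo ~40% of it,
    # and budget 3 days of waiting on fixes (call that line item out separately!)
    print(test_estimate(design_days=2, execution_days=5,
                        rework_fraction=0.4, bug_delay_days=3))   # -> 12.0

The formula isn't the point; writing down which pieces you counted is.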

The level of acceptable risk to the company for this feature
Testing can, of course, go on pretty much forever. At some point the benefits of shipping outweigh the risk that something will go wrong. To provide a good test estimate you need some knowledge of that point. It's no good including time to check every comment for spelling errors if, for example, you don't license out your source code and therefore the risk that such a defect will matter to a customer is very low. At the same time, if you're testing the space shuttle, you probably don't want to skip a whole lot of tests.

Other pressures, in particular around dates
It's not uncommon to find that a feature has been promised to a customer on a date, and that date is essentially immovable; maybe it's due to revenue recognition, or some compliance date, or whatever. In these situations, go ahead and do a test estimate, but if it comes out longer than the fixed date, well, then you've got a dilemma. In extreme cases, when the date doesn't move and all preceding items are either already done or also immovable, consider simply not providing an estimate. After all, if everything else is fixed - inputs, ship date, and thing to be shipped - then risk is the moving variable. In these cases, better to just note the situation and start testing.

Tying it all together
In the end a test estimate is just part of an overall feature estimate. There are a number of variables that are very company specific; unfortunately there simply isn't a universal formula for a test estimate. The main point is consistency - make sure your estimates account for the same variables every time, and then consider that your task is to be more and more accurate at hitting your estimates. After all, what goes in doesn't matter nearly as much as whether the outcome is consistent and accurate.

Monday, February 2, 2009

Uncertain of Your Uncertainty

When I was younger my family went to Versailles; this was 15 years ago or so. I remember that we were walking up the path to the entrance and there were trees overarching the pathway and a little moat on either side. The little moat was solid green and my brother and I just knew that it was painted concrete. Then my brother threw a rock onto the concrete moat, and it sank. That concrete moat was algae on top of water.

Whoops.

On a mostly unrelated note, I was writing about the classification of risk and of scheduling based on that. And Jeroen wrote in to ask what we do when we don't know what we don't know. This got connected to Versailles in my head (wacky, I know!). Here's the thing: my brother and I sure thought we knew something, and we were completely wrong.

We didn't know what we didn't know.

Okay, finally back to the software!

It's all well and good to give high-risk items more time and low-risk items something pretty close to the estimate. But when you don't know the risk, that gets a lot harder to do. And when you think you know the risk but you're wrong, you're basically in the same boat. So what do we do then?

I don't really think there's any magic here. You cannot give a good estimate until you know how much you don't know. So you start working on reducing your unknowns and hold off on giving an estimate. Do the work, and make sure you promise to "give an estimate on X date" rather than guessing at an estimate early on.

Does anyone else have ideas?