Monday, February 28, 2011

Between the Bullet Points

I sat down this morning to write a slide set for a client. The first slide looked something like this:

Endpoint Download:
- Download from endpoint foo
- Route to endpoints bar, bat, baz
- Physician registration required for routing
- Admin registration required for download

Sounds great, right?

Here's the trouble:
I really don't understand this project at all. I know there are physicians, and there are admins, and there's a ball of goo that gets downloaded, and somehow some or all of it gets routed to physicians. I have no idea what's in the ball of goo, how routing decisions get made, or what the relationship is between physicians and admins. If you asked me to build this, I wouldn't know where to start.

That's the danger of bullet point writing.

If you can write a bullet point, you're demonstrating that you can write a phrase or even a whole sentence about this thing. That's all you've shown. Now, if you can keep going and write more, you may truly understand it. If you can't get beyond the bullet points, though, then there's a problem. You don't really get it, and in particular you've demonstrated that you don't understand how the bullet points relate.

Writing bullet points shows you are aware of your subject and possibly even different aspects of your subject. It does not show you understand how the subject and aspects of the subject relate to each other.

Bullet points provide awareness. Understanding is in the space between the bullets.

Next time you write some bullet points, go ahead and do it. Then write a paragraph explaining the relationship between the bullet points. How does bullet point A affect bullet point B? What about bullet point C? How does that fit in? It doesn't matter if you throw the paragraph away when you're done; the simple act of writing it will tell you whether you understand.

Bullets are great. They're short and easy to read. Just make sure you understand the whole story.

Thursday, February 17, 2011

Choosing the Highway

Some decisions when building a product are outside of your control. Some are fully in your control. Today let's take a driving trip. We don't get to pick our destination; that decision is out of our control. We do get to choose how we get there.

As we pick our route, we make choices. Some choices are small, like what radio station to listen to, or which side street to take. Some choices, however, are large, like which highway to take. They have a vast effect on our project, I mean, road trip.

Throughout a project, we make small decisions and we make highway decisions. Small decisions are things like a variable naming convention that don't have much effect on the end use of the product. Highway decisions are things like which language to use or whether to write object-oriented code. Small decisions are the ones you should make quickly and easily because their effect is small. Highway decisions should be taken more seriously; they're something the team should decide together and explicitly.

A few factors go into choosing the right highway. Sometimes both highways get you there, but one is faster than the other. I used to commute from Cupertino, California to San Francisco every day. I could choose from two highways that would get me there: highway 280 or highway 101. Both got me to the same city, but 280 was a lot faster and prettier, too. Choosing a language is often like this: both will get you there, but one will do so more quickly and will be prettier along the way.

Other times, choosing the wrong highway will prevent you from reaching your destination; you'll have to go back almost to the start and pick a different highway to get there. Imagine if you accidentally got on an east-west highway in Nevada when you really wanted to go north - it's a long way between exits!

I worked on a product once that was an installable Windows application. We'd chosen our highway - the thick client PC program highway. A few years later, the big boss came in and announced that we were now a Web 2.0 company. The problem, of course, is that the web-based application highway was waaaaaaaayyy back in our rear-view mirror. We had to go all the way back to the beginning before we could even start trying to build a web client. It was ugly code, a buggy program, and in the end the company didn't make it.

So how do you know when you're about to make a highway decision? Well, some entrance ramps are pretty clearly marked. Your language, your overall architecture: those are pretty obviously highway decisions. Other times you don't know you've gotten on a highway until you're already on it - and the same goes for decisions. Choosing a library doesn't always feel like a highway decision, until you're locked in. To see if you're getting on a highway, ask yourself:
  • How easily can I change this decision if I want to?
  • Will the end user feel an effect as a result of this decision?
If the answers are that it's not easy to change and that the end user will feel an effect, then you've got a highway decision.

As I think back over software projects, most of my choices were small and didn't matter much to the product as a whole. Left lane versus right lane; camel case variables versus underscore-delimited variables - no big deal. Some of my choices were highway choices: Java versus Ruby, web-based versus thick client, deploy on Windows versus Linux. Those choices still mattered when I thought back on them three years later.

For most decisions, just make them and go. But when it's a highway decision, take a look at where you're going and how you want to get there - and only then make a decision.

Tuesday, February 15, 2011

Oracles and Interpretation

In testing we talk about oracles. Oracles are things that define the "should" of the application. The oracle (or oracles) tell us what an application should do, how the UI ought to look, how the application ought to perform, etc.

I like to think that the word "oracle" is based on the Oracle at Delphi (I don't know this for sure, but it's plausible). And like the Oracle at Delphi, how we interpret an oracle can be at least as important as what the oracle actually says.

Emperor Nero - the guy who fiddled while Rome burned, killed his mother, and generally wasn't the most civil rights oriented of emperors - went to the Oracle at Delphi. He was told:

Your presence here outrages the god you seek. Go back, matricide! The number 73 marks the hour of your downfall!

Nero seems to have thought this was great. He was 30 at the time, so he figured he'd have 43 more years ruling Rome and then die at the ripe old age of 73. Not too shabby!

Nero died only a short time later, brought down by a revolt led by a 73-year-old man. The oracle was right. The interpretation was completely wrong.

Now, the Oracle at Delphi was notorious for this kind of behavior: they'd say something, someone would think, "oh no problem" and then get his comeuppance through an interpretation he hadn't thought of. See Shakespeare's Macbeth and the witches for another example.

We need to be careful about our modern oracles for the same reason. There's what the oracle says, and there's how we interpret it, and those may not be the same thing. Please, use oracles. Just be careful to check your interpretation occasionally.
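
A small code analogy may help. Here's a minimal sketch in Python (the function is a made-up stand-in for a system under test, not anything from a real project). The oracle says "adding 0.1 and 0.2 should give 0.3" - but that statement still has to be interpreted:

    import math

    def add_prices(a, b):
        # Hypothetical stand-in for the system under test.
        return a + b

    actual = add_prices(0.1, 0.2)

    # Interpretation 1: the oracle means literal equality.
    # Floating-point arithmetic yields 0.30000000000000004, so this prints False.
    print(actual == 0.3)

    # Interpretation 2: the oracle means "equal within a tolerance".
    # This prints True.
    print(math.isclose(actual, 0.3))

Same oracle, two verdicts. Which one is "right" depends on what the oracle's author actually meant.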

Monday, February 14, 2011

What.... and What Not

I've been working on a project with a group recently, and part of it involves writing a test strategy. As we were working on this, we started discussing the things we would do:

I would do some unit testing because XYZ. I would do risk identification using method MNO.

As interesting as what we would do is what we considered and decided we would not do:
I wouldn't bother stress testing because ABC.

It's become clear that for any list of things we would do, there are actually three separate lists:
  • the things we would do
  • the things we would NOT do
  • the things we hadn't considered
If we write down only the things that we would do, then we can't distinguish between ideas we considered and rejected - things we would not do - and ideas we simply didn't think of. Sometimes we really did think of everything, but that's pretty rare. Better to include a brief summary of the things we rejected as well as a list of things we will do. That allows us to explain our reasoning, making our assumptions visible for validation. It also allows others to suggest things that we didn't think of.

When you're making a list of what you will do, make a list of what you will not do as well. It'll help you make sure your lists are more complete.

Friday, February 11, 2011

(Avoiding) Consensus

I work with a lot of teams and a lot of different types of teams. One of those teams is what I call the "consensus driven team". This is a team with laudable goals: they want the entire team to feel ownership and to buy in to what is being done. (Joining hands and singing Kumbaya is optional.)

It sounds awesome. After all, someone who understands what the team is building and feels personally invested in that product is going to take care to work cleanly, is going to help out fellow team members, and is going to frankly work really hard toward the goal, which he completely understands.

But beware.

These teams can easily go to an extreme, and then production simply stops. The team goes completely adrift, and spends a whole lot of time talking and considering without actually doing anything. Why?

Because consensus doesn't scale.

Now, consensus can be a wonderful thing for all the reasons I outlined above, but it has to be used judiciously. Not all decisions can be made by consensus, and the more people involved in the process, the longer it will take to get consensus. Getting two people to agree on something (where to go for lunch, for example) is a lot faster than getting ten people to agree on that same thing.
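
One rough way to see why: treat every pair of people as a channel that has to reach agreement. The number of pairs grows as n(n-1)/2, so the cost climbs much faster than the headcount. A quick sketch (plain arithmetic, nothing more):

    # Pairwise "agreement channels" among n people: n * (n - 1) / 2.
    # 2 people share 1 channel; 10 people share 45.
    for n in (2, 5, 8, 10):
        print(n, "people:", n * (n - 1) // 2, "channels")

Real consensus isn't purely pairwise, of course, but the shape is right: somewhere around 8 people, the overhead of getting everyone to agree starts to dominate the decision itself.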

Please, use consensus-making as much as possible when it's appropriate. But don't try to use it for everything; you'll never actually make any progress.

Consensus is a great technique if:
  • the team is small (under about 5 people)
  • the decision is important to the whole team (a major architecture decision, for example)
  • the decision has ramifications across several areas of expertise and could use input from different viewpoints
Avoid using consensus to make decisions if:
  • the team is large (anything over about 8 people and consensus gets dramatically harder)
  • the decision is small (what to name a variable, for example)
  • the decision is urgent (consensus almost always takes longer)
  • the decision doesn't matter to other team members (what IDE you use)
  • the decision is subject to group think
Consensus sounds great. But like ice cream, consensus isn't the answer to everything. Sometimes we need to eschew consensus and just make a decision already.

Thursday, February 10, 2011

Definition Tests

When I get a new requirement, it's easy to read it and say, "sure, makes sense" and move on. Sometimes I later discover that I didn't really understand it at all. I just thought I did until I really sat down to think about it.

So to encourage myself to think about it, I do what I know as a "definition test" on the requirement. Basically, I go through the requirement twice. The first time, I replace every key word with a synonym. The second time, I write a definition (like a dictionary-style definition) for every key word in the requirement.

For example, I recently got this requirement:
Update documentation to explain variability in change rates due to probabilistic data ordering.

First, I identify the keywords: "documentation", "variability", "change rates", "probabilistic", "data ordering".

Replacing them all with synonyms, I get this:
Update the Integration Guide to explain inconsistency in "rate %" due to non-deterministic sequence of chunk ingest.

These synonyms are specific to my client. "Integration Guide" is a piece of documentation. Change rates are presented in our statistics, logs, and other outputs as "rate %", and data ordering refers to the specific sequence of chunks of data as it passes through our system. I show my synonym requirement to the product manager and to the head developer, and they think it matches. So far, I think I get it.

Then I take the requirement and write definitions for my keywords:
  • Documentation = one or more pieces of material intended for reading and use by customers and potential customers, who understand the high-level architecture of the system but who do not have access to implementation details.
  • Variability = inconsistent output given the same inputs, where inputs include configuration parameters, data set and order in which that data set is presented to the system, and hardware resources (RAM, CPU, disk) allocated to the system
  • Change Rates = a specific statistic in our system as represented in outputs as "rate %"
  • Data ordering = the sequence of chunks of data and specifically that sequence as it changes or does not change at various points entering and inside the system (e.g., a queue passing between two system components would be a point at which chunk sequence could be measured; a multi-threaded operation would be a logical point at which chunk sequence might be altered)
I take the definitions I wrote to the product manager and to the engineer implementing this requirement. If we still all agree that's correct, then we all agree on the requirement in much more detail.
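
If it helps to see the mechanics, here's a minimal sketch of pass one as code (Python; the keywords and synonyms are just the example above, and the substitute helper is mine for illustration, not anything my client uses):

    requirement = ("Update documentation to explain variability in change "
                   "rates due to probabilistic data ordering.")

    # Pass 1: client-specific synonyms for each keyword.
    synonyms = {
        "documentation": "the Integration Guide",
        "variability": "inconsistency",
        "change rates": '"rate %"',
        "probabilistic": "non-deterministic",
        "data ordering": "sequence of chunk ingest",
    }

    def substitute(text, mapping):
        # Rewrite the requirement using the mapping, for review.
        for keyword, replacement in mapping.items():
            text = text.replace(keyword, replacement)
        return text

    print(substitute(requirement, synonyms))

Pass two works the same way, with dictionary-style definitions standing in for the synonyms. The code isn't the point - the reviewable, rewritten requirement is.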

If I still think I understand the requirement after I do a definition test on it, then I probably do. If product management, engineering, and I agree on all forms of the requirement, then we probably understand it in pretty much the same way. Try a definition test; it's a cheap way to make sure we all really do get it right.

Tuesday, February 8, 2011

Back to Basics

I've been a test professional for about 10 years now. In that time I've gone from having no idea what I'm doing through to test manager with a track record I'm pretty proud of.

I just started taking the BBST Foundations class. It's very much an introduction to software testing course. So why am I taking a basic software testing course that frankly I could have passed years ago?

Because it's helpful to go back to basics sometimes.

Look at it this way: the guy who hit the most home runs in the entire baseball league last year.... still goes to batting practice. He's really really good at batting, but he shows up and works on the basics of his swing and his stance and his.... well, whatever else goes into batting.

Back to basics.

I go back to basics because everything I've learned since colors and deepens my understanding of those basics. I'm good at taking a system with little to no specs and figuring out what's going on, identifying oracles, and so on. (See my earlier posts about oracles here and here.) Participating in a class that includes a section about oracles, I can apply what I already know to the lessons, more easily see real-world applications and real-world trip-ups (see, for example, the second link above, where I learned that oracles are only good if your customer agrees with them!), and pick up some new ideas.

Even if you're an experienced tester, consider taking a survey course or other testing course that looks "too easy" for you. Going back to basics can help you refine your technique and help bring in new ideas for things that you thought you already knew. Use a class to avoid a rut; it sure can't hurt!

Friday, February 4, 2011

The Checklist Is Not the Purpose

Lists dominate project management. They take many forms: the classic Gantt chart, a checklist, a backlog, a task list, a to-do list, a work board, and more. Now, don't get me wrong, I love my lists and couldn't survive without them!

But.

The checklist is not the purpose. It's a communication tool; a side effect and a record of the real work.

This is a really common thing to see on an agenda:
"Go through the checklist for project foo"

And it's wrong. No one cares about the checklist; they care about the work being done. Getting through the checklist for the sake of walking through the checklist is a sign you have a meeting that isn't actually helping anyone. It's just rearranging deck chairs.

So if you write an agenda that says, "go through the checklist", stop and ask yourself what you really want to accomplish. Then keep the checklist handy as a reference and a data repository, and really listen to the overall status. Don't wander through your checklist with your head down; you'll miss all the other important information.

Similarly, if you're invited to a meeting with an agenda item that says, "go through the checklist", you're in danger of wasting your time. Go ahead and ask the meeting organizer to make sure there's time allotted to discuss things not on the checklist (this is the polite way of saying, "hey, buddy, the checklist isn't everything!").

No matter how accomplished and informed lists make us feel, they aren't the point. We'd all do well to remember that.

The checklist is not your focus. The checklist is a side effect of the focus.

Wednesday, February 2, 2011

Advisor, Not Interrogator

Testers are questioners, for the most part. They ask questions of a system: "what happens when I do this?" They ask questions of product management: "what should it do when...?" They ask questions of developers: "I'm seeing this funny thing; any ideas on what I might do to track it down?"

For the most part, this is a good thing. The testing role is ultimately about communication between the system and the humans involved in the creation and use of that system, and about communication among those humans about the system.

But.

Questioning plus aggressiveness equals antagonism.

And that's not helpful to anyone.

Asking questions is about achieving clarity. If that's not your purpose, then you're over the line and all you're doing is interrogating someone. For example, if I'm asking questions to:
  • show someone they haven't thought through a problem
  • demonstrate in front of our boss that my co-worker is making unsubstantiated claims
  • prove that only a moron would ship the software with such large risks
  • paint a doom and gloom picture of the current state of affairs relative to our intended release date
  • make sure someone knows that dev is making dumb mistakes.... again
then I'm not helping. All I'm doing is making my relationship with the rest of the team antagonistic. I'm priming them to be defensive the next time I open my mouth. In short, I'm being a truly terrible team member.

So don't.
Just stop.

You don't have to be the smartest guy in the room. You don't have to be the best informed. You don't even have to be the one to tease out all the problems. No, instead you have to be a good team member.

Now, asking questions is perfectly fine. It's a large part of how we get our jobs done. Frequently a question will trigger identification of a new risk, or an understanding that maybe your co-worker is leaping to conclusions rather than pulling his weight and doing the work to get to those conclusions. But don't force it. Forcing it makes you the bad guy, and a tester is most effective when he is a trusted advisor.

So, how do we ask questions nicely? There's no universal answer, but consider these:
  • Be polite. Start with thanking the person for their time and for providing information. If your questions are in response to something (a requirements doc, for example), then start by acknowledging the work they put into this.
  • Lay off. Don't respond to every email with a barrage of questions.
  • "Me" not "you" phrasing. Using the word "you" in a question increases your chances of being seen as antagonistic, especially if you're also using a negative word. Avoid "Didn't you think that ___" in favor of "What about ____". Same question, but less attacking.
  • Shrink the audience. If you are at all worried that this might come across as negative, send it to as few people as possible. Talk to the developer privately rather than standing in the meeting and asking. Drop a bunch of people off a cc list and just email the developer privately.
  • Make yourself a partner. Put yourself out there with statements with the questions. "Looking at your performance graph, I see a dip at 10000. I wonder if that's the cache filling up. What do you think?" is better than "Why the dip at 10000?". It doesn't matter if you're right or wrong; what matters is that you've shown you thought about this and that you're willing to do some work, too.
So before you go asking a lot of questions, pause and see if you really want to know the answer, or if you have an ulterior motive. You can only be effective if you're the good guy. So be the good guy.

Tuesday, February 1, 2011

Little Things and Big Things

Whenever I work on large projects, other things simply don't get done. I let them slide so that I can focus on the project that I'm working on. But when I'm cleaning up all the little things I need to do, well, I'm not exactly making progress on my ongoing project.

Well, shoot. Neither of those statements is any good. I really do need to be making progress on my big projects. They're my bread-and-butter in the end; the projects are what people really want from me. But if I don't do the little things, then I will have a lot of people a little annoyed with me, and also no socks (laundry counts as a little thing!).

I can't do little things all the time. And I can't do big projects all the time. Shoot.

My personal project plan looks something like this, for the most part:
- Day 1: set up, figure out what I'm going to do
- Day 2-5: project
- Day 6: little things
- Day 7-11: more on the project
- Day 12: little things
- Day 13: little things
- Day 14-18: finish project!
- Day 19: little things
- Day 20: little things
- Day 21: start next project

So what's interesting?

First, I work best in days. I get all day to focus on a project. Or I get all day to clear out a swath of little things. This is my personal bias. Second, I don't put it in here, but I do an email sweep three times a day or so. On project days I don't respond to anything that doesn't sound urgent, but I still check. Third, I spend a lot of time on little things.

I'm spending about 60-70% of my time on projects, and the other 30-40% on little things. That seems too biased toward little things, but I find that doing little things helps me continue to be highly responsive to my clients. Being highly responsive to our clients is one of the things that makes Abakas successful, and I'm very proud of that. Frankly, our clients would rather the project take a few days longer if it means that they can ask questions, plan the next project, bring up a crisis, or simply gloat (or vent) and know they'll get a response quickly.

Everyone has their own personal balance between the big projects and all the little things that pile up. When you're figuring out your balance, consider not only how you work best, but also how you want to work with other people and how they want to work with you. And good luck.... balance is always a tricky thing.