Tuesday, June 28, 2011

What Is Work?

I've been doing an agile training session with a team, and they mentioned that (1) they never finished everything in the iteration; (2) all stories were coding stories; and (3) they had set themselves up at 6 hours per day each of "actual working time".

Can you spot the assumptions in those statements?

Assumption 1: "actual work" is writing code. Nothing else is work, just overhead.
Assumption 2: it's possible and sustainable to write code for 6 hours per day. Each.
Assumption 3: it's not a do-able piece of work unless you're writing code.

This isn't an uncommon philosophy. This team happens to be all developers, but you hear other engineers - testers or support engineers - say similar things, just substituting "testing" or "working on customer issues" for "writing code".

It's also completely wrong.

You see, we don't ship code. We ship a product.

We ship a solution to other people's problems, something that enables them to do things faster, better, stronger than before. And to do that, we have to do a lot more than write code.

All of these things - and others - are work:
  • writing code
  • going to a meeting (sprint planning, anyone? the company all-hands?)
  • answering emails, even from HR about when you're going on vacation
  • composing a design proposal for that nasty scaling problem
  • walking through UI mockups with the designer
  • fixing the build system
  • reviewing - or writing - product documentation
  • reading a white paper about your competitor's implementation
  • prepping a paper to present your company's solution at a conference
  • creating a patent application (and these take a really long time!)
In short, we do a whole lot! Most of it doesn't fall into the category of "the main thing that defines my job" (writing code, or testing, or helping customers, or whatever).

So, let's look at those statements again: (1) they never finished everything in the iteration; (2) all stories were coding stories; and (3) they had set themselves up at 6 hours per day each of "actual working time".

Well, of course they didn't finish. They were tracking one part of their job - the coding part - and assuming they'd spend 6 hours per day on it. That leaves 2 hours per day for all the other stuff, and that's simply not enough. So there are two choices: extend the day, or stop assuming so much of the day is spent on one part of the job.

Now, in agile planning, there are two things you can affect: how much total time you think you have, and the stuff you count toward that time. Some things aren't ever going to show up in agile planning: answering emails, for example. Other things are more flexible; many teams choose to make "write white paper" a story, for example. Because not everything shows up, we have to accept that part of our time goes to things that are in the sprint and part goes to things that aren't. I've seen teams that put very little in their sprint, and they ended up with about 2 hours per day per person scheduled. I've seen other teams that put almost everything in their sprint, and for them 6 hours per day per person was about right. Most teams I've worked with are at about 50%; they spend about 4 hours per day per person on tasks that are part of the sprint.
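If it helps to see that as arithmetic, here's a minimal sketch of the capacity math. The team size, sprint length, and hours are illustrative assumptions, not numbers from the team above:

  # Sketch of sprint capacity arithmetic (all numbers are made up for illustration).
  team_size     = 5    # people
  sprint_days   = 10   # working days in the sprint
  hours_per_day = 8    # total hours at work per person per day
  focus_factor  = 0.5  # fraction of the day spent on sprint-tracked tasks

  capacity = team_size * sprint_days * hours_per_day * focus_factor
  puts "Plan against #{capacity} hours of sprint work"
  # prints 200.0 hours -- not the 300 that a 6-hour-per-day assumption implies

The point isn't the script; it's that the focus factor becomes an explicit number you can argue about, instead of an assumption nobody wrote down.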

This sounds low: "What do you mean the team is working only 4 hours per day!??!" It's not really low, though. It just means that the team is doing non-sprint tasks for the rest of the day. They're still working, just not on the things that are tracked in the sprint.

Respect the totality of the work, even the work that you didn't plan for. You'll be more sustainable and more accurate because you were honest about it.

Friday, June 24, 2011

Ask Him

I've been doing some agile training, and one of the first things that I did was an exercise I call Information Sources. Each member of the team fills in a questionnaire that describes various situations and asks, "Where do you go for information when this happens?". For example, one of the questions is "Congratulations. You've hired a new developer, and now he's standing by your desk wanting to learn about the system. Where do you tell him to go and what do you tell him to do?". Another is, "You just got a call from the implementation engineer, he's at a customer site, and he really needs to know how these two options interact with each other so he can finish the configuration. Where do you look to get him the information... before he's finished that 9th cup of coffee?"

Yesterday I did this exercise with a team, and for the first time ever, I got this response to one of the questions:

"Ask John"

Most of the time, people say things like, "I look on the wiki", or "I dig into the source code". This was the first time I'd heard someone say they'd ask another person.

I'm not really sure what to think about that. There are certainly many conclusions we could draw - from a refreshing honesty to someone who is really team-oriented to someone who doesn't know the group's tools. I did appreciate the response, though. After all, agile training is almost entirely about emphasizing human interactions.

I love asking questions: I never really know what the answer will be!

Wednesday, June 22, 2011

Practice Debt

We spend a lot of time thinking about beginnings. We kick off a project. We launch a product. We plan and prioritize and create backlogs and work through them. We're good at beginnings, and we're good at progress.

But what about endings?

We're decent at pre-defined endings. The retrospective after an iteration is a well-defined entity. The postmortem after a release or a big crisis is also something that's been around for decades. We miss a lot of endings, though; we simply don't give consideration to stopping things. And that accumulates practice debt. Practice debt is very similar to technical debt; instead of software getting crufty, it's our procedures and practices and traditions and policies. It's the stuff we do because we've always done it, even if it no longer has value.

We tend not to think about how to end support for a release.... until we're ready for it to be done right now because it's actively bothersome. We tend not to think about how to stop a tradition.... even when it's obsolete. We accumulate practice debt just like we accumulate technical debt.

I challenge you to stop something, to take one step toward paying down your practice debt.

In my case, today I am going to skip our weekly team meeting. We all sit together, and share a chat channel all day. We stand up daily and talk about progress. Do we really need a team meeting every week? No. Sometimes we do need a meeting, but usually for a specific purpose (like planning a big lab move). But that doesn't happen every week. Sometimes it happens twice in a week, and sometimes it doesn't happen for a month. So rather than interrupt the day to spend 15 minutes chatting about things we all already know, we're just going to keep working. We'll have a meeting when we need a meeting. But starting today, we're going to stop having a meeting just because we always meet on Wednesdays at 2.

Look around and ask yourself why you do the things you do. Ask yourself what you do that really isn't helping. And then stop. Pick one thing and pay down that practice debt - by simply stopping.

Monday, June 20, 2011

On Working With Marketing

Working with marketing is one of the more interesting parts of my job. It turns out that most marketers don't think anything like I do. Their ideas, thought processes, and working styles are pretty much nothing like mine. That's actually really educational - I get insight into entirely new ways to look at the product and even the company.

I'm sometimes surprised when something I think is pretty irrelevant turns out to be, in their eyes, a great product attribute and a huge differentiator. Or the thing we worked on for ages, about which they say, "well, yes, the product needs that, but you can't go making a big deal of it because it's just not interesting except that if you don't have it then that's bad. Having it is not interesting. Not having it is a real problem." It's a genuinely different viewpoint.

But.

We have to work together, which means we have to find a way to communicate effectively. Now, I'm an engineer. I'm used to communicating with other engineers, and we're generally able to be fairly precise about our requirements. We can say, "create this feature, and be sure you use a singleton for that bit of it because it'll be used in this scenario." We speak the same language.

When I'm working with marketing, we speak less of the same language. I speak some marketing, and they speak some engineering, but the alignment isn't quite as close. I need to do some translation and some more active communication to make sure that I'm getting marketing what they need.

For example, I recently got a request from marketing: they want to do a press release showing some "even more impressive performance". The good news is, we can do that. We've increased performance in several dimensions since the last "we're fast" press release went out. But we're not done communicating yet. I don't yet know what marketing needs.

So I ask: Tell me your headline.

I'm trying to figure out what I need to provide. Does marketing need the biggest single number I can provide? Do they need a chart (up and to the right is a theme of these)? Do they need to show a number that directly addresses a competitor's numbers in a particular configuration? It could be any one of these. By asking what marketing wants to use as a headline, I'll understand which number they're looking for.

If they say, "Solution is 10X faster!" then I know they want a single big number.
If they say, "Blows [Competitor] out of the water!" then I'll figure out which configuration and get that (or tell them that they need to rethink that press release).
If they say, "Now scales better than before!" then we probably need a graph.

How you work effectively with marketing depends on you and depends on your marketing team. Working from the conclusion - the headline - is one way to help ensure that you're getting them the information they need, and not providing the wrong thing or wasting your time gathering data that isn't helpful.

So ask.... "what's your headline?"

Thursday, June 16, 2011

Not a Team Player

What follows is a rant directed at the "engineering is not the business" types. Consider yourselves warned!

There is a strong school of thought running through engineering and particularly through testing that goes like this:
"Engineering is not the business. Engineering cannot possibly know what the business knows. Therefore engineering should not make any decision that could possibly be considered a business decision. This includes any and all market-related decisions, such as whether a product is ready for release, whether a defect is bad enough to warrant fixing, or whether a feature as described is sufficient to provide value."

There is a kernel of truth here. It's true that creating a product is a team effort and that each part of the team - marketing, sales, finance, engineering, support, etc. - understands only part of the picture. No one team member can make a fully-informed decision about whether a product is ready for release, or how important a defect truly is.

The problem comes when this concept is taken too far. When it morphs from "I cannot make this decision alone" to "I cannot make this decision alone so I will not express any opinion," then we have a real problem. We have a failure to be a useful member of the team. For you see, just as you, the engineer, cannot make the decision alone, the product manager also cannot make the decision alone. The marketing manager cannot make the decision alone. The CEO cannot make the decision alone. Every member of the team needs input and opinions from every other member of the team.

Statements like these exemplify the abdication of responsibility:
  • "I can't estimate this test effort. It's entirely dependent on what I find and what you want to do with the information."
  • "I don't know what the priority of this bug is. That's a business decision."
  • "Well, I know the trade show is in a week, but I don't know if we need to show the new version for it. Engineering should say if it's ready."
Every one of these statements is an explicit refusal to provide an opinion. It is a refusal to contribute to a decision. It's a refusal to be an active, contributing member of the team, and it creates a false dichotomy, a we/they relationship that is completely unnecessary.

There is a large gulf between refusing to provide an opinion and being the sole decision maker, and the proper place for a team member - a tester, a developer, a marketing manager, a product manager - is in the middle of that gulf. No, the developer should not willy-nilly refuse to provide a build for release because "it's not ready yet" when the rest of the team thinks it is ready. But neither should the tester refuse to say, "I think this bug is really bad and we should hold the release for it because of X, Y, and Z," generally replacing that with, "Here's what I found. You decide." Both extremes are a discredit to their professions.

So be a member of the team who actually provides value. Don't be afraid to offer opinions. It's completely acceptable to characterize them as opinions, but use your voice. You're being paid to be the best-informed person on the team in some area, and part of what you're being paid for is to use that expertise to express opinions and explain circumstances and consequences.

No more excuses. Time to step up and embrace your membership in a team - the purpose, the responsibility, and yes, even the decisions.

End rant.

Tuesday, June 14, 2011

Definitive

Did you ever have one of those conversations that somehow just never ended? They trickle on over days or weeks, when they should have been over ages ago.

For example:
An email came in from a customer saying, "Hi, I'd like to do X, but I'd like to do it with Y turned off in Z configuration. Is that feasible?"
A newbie support guy responded with, "I'm not sure. I'll do some research and get back to you."
The support guy went to development and explained what the customer wanted.
The developer said, "well, that won't work, but what's he trying to do? Maybe there's another way."
The support guy took that info to the customer, who described what he was trying to get out of it.
The support guy went back to the developer, who said, "Hmm.... yeah, I don't think that's possible in this build."
Support guy went to the customer and said, "We're not sure if we can do it. I'm doing more research."

You can see where this is going. Or, more precisely, you can see where this is not going. Everyone involved is trying to do the right thing. The customer wants to accomplish something, and the developer and support guy are trying to make it happen, even though the product doesn't accomplish that in any way anyone can think of.

The problem is that until someone along the line says, "No, that's not possible," then the conversation won't end. There will be more research, and ideas, and thinking, and trying. And that's great, if it's really feasible. But if the product really doesn't support it (and in this case, the product really didn't support it), then it's better to end the conversation with a simple no. Offer alternatives (outside the system), and put the customer into the feature request queue, if that's appropriate, but be definitive.

When the answer is "yes, that totally works and here's how," then it's easy to be definitive. When the answer is "no," however, it's a lot harder. No one wants to hear no, and no one wants to say no. However, if the answer is eventually going to be "no," then it's better to get it early on. Better a "no" than a dragging on the issue and then a "no".

Wherever possible, be definitive. A quick and accurate no hurts a lot less than a dragged-out no. Get in, get out, and move on. Your customers will have more confidence in you, and you won't have wasted anyone's time.

Monday, June 13, 2011

An Object Lesson in Heat

Last Thursday night there was a power outage in the office. It took out the circuit that ran the air conditioners in the lab, but it didn't affect the circuit that powers the lab machines. And so the lab ran that way. All night.

In the morning, we walked in to a very hot lab, and started turning off everything we could get our hands on until the air conditioning (which had come back, thankfully!) caught up. So how hot was it overnight?

This hot.

[Photo: the melted ducks]

These (now ex-) ducks were on a table in the lab, too. Tragically, there were no duck survivors.

Consider this your friendly local reminder that servers put out a whole lot of heat.

Friday, June 10, 2011

Strong Versus Weak Process

I do a fair amount of agile consulting, and one of the things I see over and over again is the notion of a strong process. The "strong process" is one in which the software development process is a set of rules and requirements to be followed (or else!). The "weak process", by contrast, is the one in which the process basically amounts to guidelines rather than rules.

The strength of the process is orthogonal to the process type. It's possible to have a weak waterfall process, or a strong agile process, for example. The type of process describes what the team does: the steps, checkpoints, and guidelines for creating software. The strength of the process describes how much freedom the team has in deciding when to follow the process and when to deviate from it.

Briefly, a strong process would look like this:
  • All backlog items for the next sprint will be provided to development for estimation three days before the end of the previous sprint. Any new items provided after that will not have estimates when the next sprint starts, and therefore will be estimated at the highest point level.
A weak process would look like this:
  • Backlog items for the next sprint will be provided three days before the end of the sprint. This helps make sure that items are estimated before the sprint starts, which helps make sure that velocity is more stable and therefore predictable (which makes for easier release planning!).
The strong process is procedural: it describes what to do and the consequences of failure in detail, including alternate procedures. The weak process describes the desired end benefit, and leaves the procedures and exceptions vague. The weak process tells you where you're going, and leaves it up to the team to find a good route.

So why do we care?

Because the process should be the servant of the team, not the master.

Please note that I'm talking almost entirely about smaller teams. With larger teams (over about 30 people) or with many teams, consistency of expectation and work environment gets more important, and processes have to be a bit stronger to prevent total chaos from emerging.

So, why do strong processes ever exist on smaller teams? Fear.

A strong process is usually the result of a team that has been or is afraid of being jerked around. Perhaps they have been put through a death march of ever-changing requirements. Perhaps they are reacting to a product manager they don't respect, and are using the process to point out that he's really far behind and vague to the point of hindering the product. Perhaps they are not confident in their abilities and are looking for a scapegoat. In any case, they are usually hiding behind the process as a shield against a real underlying problem.

A strong process makes a terrible shield, however. All it's really good for on a small team is providing a method for identifying blame. So get rid of the underlying problem, and then take the strong process shield away. It's time to make the process work for you, not make you work for the process.

Wednesday, June 8, 2011

Use the Official Tests

I've been working on some tests of a system, using the NIST PIX PDQ Preconnectathon tests. In many ways, these tests have been simple. They all stand up a server (the system under test), send it SOAP messages, and assert on the responses the server sends back. The NIST tests have made it very simple to see what to assert on. Each test shows the body of the message it sends and lists the assertions it makes on the response.

Simple, right?

I implemented these as cucumber tests, basically reimplementing the NIST tests so that they're a lot easier and faster to run. I use the same message bodies the NIST tests use, and I assert on everything the NIST test asserts on. Done!

Right?

Wrong.

I still run the tests against the NIST server periodically. Not because the cucumber tests I wrote are wrong or bad, but because they're incomplete. There are hidden assertions - things that the NIST tests look for that aren't spelled out in the official test description.

For example, the first time I ran our system against the NIST server, our system threw an exception and failed, because the NIST messages had elements in the SOAP envelope that our own messages didn't include, and that our system couldn't handle.

Wherever possible, use tests that are fast and convenient for you (in our example, the cucumber tests). But check your work against the official tests as well (in our example, the NIST tests). They provide different value, and they're both useful - so don't forget about one or the other.
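For the curious, here's a rough sketch of what one of those cucumber steps might look like. The endpoint, fixture layout, and XPath below are illustrative assumptions for this post, not the actual NIST messages or assertions, and it assumes cucumber with rspec-expectations:

  # features/step_definitions/pix_steps.rb -- illustrative sketch only
  require 'net/http'
  require 'uri'
  require 'nokogiri'

  PIX_ENDPOINT = URI(ENV.fetch('PIX_ENDPOINT', 'http://localhost:8080/pix'))

  When /^I send the "([^"]*)" test message$/ do |test_id|
    # Reuse the same message body the NIST test sends.
    soap_body = File.read("features/fixtures/#{test_id}.xml")
    request = Net::HTTP::Post.new(PIX_ENDPOINT.request_uri,
                                  'Content-Type' => 'application/soap+xml; charset=utf-8')
    request.body = soap_body
    @response = Net::HTTP.start(PIX_ENDPOINT.host, PIX_ENDPOINT.port) do |http|
      http.request(request)
    end
  end

  Then /^the acknowledgement code is "([^"]*)"$/ do |code|
    doc = Nokogiri::XML(@response.body)
    doc.remove_namespaces! # keeps the XPath short for this sketch
    ack = doc.at_xpath('//acknowledgement/typeCode/@code')
    expect(ack).not_to be_nil
    expect(ack.value).to eq(code)
  end

The scenario itself is then just a few Given/When/Then lines in the feature file, which is a big part of what makes these so much faster to run than the full NIST harness.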

Monday, June 6, 2011

A Technology Vacation

I spent last weekend at StartupWeekend Boston, and found it fascinating. The basic dynamic is this: people with ideas show up and pitch; others form teams around the pitches they like; the teams spend the weekend on implementation and business planning; and on Sunday night each team pitches its idea to VCs and tech-savvy folks.

So if your idea doesn't get picked, or if you don't show up with an idea? What on earth are you doing there?

For me, at least, the answer is that it's refreshing.

I spent the weekend building a Twitter hashtag stock trading game. With a team of six people, we did market research, built the game (I did all the CSS, and IE users, I'm sorry I didn't get time to make it look right in your browser!), created Twitter and Facebook accounts and started promoting it, and put together a pitch.

It was nothing like my day job, and that's refreshing. It was fun, and we worked hard, and now I'm back in the office and I'm energized. I spent the weekend writing CSS and Rails and using the jQuery UI widgets, and now I'm looking at our Perl test infrastructure and our C-based product with fresh eyes.

There are many articles and books that will tell you about the power of taking a vacation. Take the time to take a technology vacation, too. Put down the technologies you normally use, pick a small project, and use a new technology for a few days. It'll leave you energized and ready to go.

Friday, June 3, 2011

What Can You Do About It?

One of the... umm.... joys of management is that the more you manage, the more you're involved in angsty meetings, phone calls, and decisions. For example:

Sales guy: "They're not gonna meet the release date, and we're gonna lose the deal"
or
Support manager: "The customer's really angry and their data center just isn't ready to receive our stuff."
or
Product manager: "It's just not fast enough for this analyst testing we're doing! Don't they know we need a faster number than the competitor?!"

Most of the angst centers on a real problem: a schedule is slipping and we really are at risk of being late; or something isn't working the way it should; or it isn't fast enough yet.

What can you personally do about it?

If the answer is nothing, then you have three obligations:
1. express your concern
2. get agreement on a frequency of updates (and be reasonable about it - don't forget there has to be some work between status reports!)
3. calm down

Calling incessant meetings, sending constant emails reiterating the importance of meeting a deadline, hosting conference calls among everyone affected to discuss how it's important: none of that helps. All it does is waste time and increase everyone's anxiety level.

Work helps.

So let the team work.

And you go to work: if the risk is that high, it's time to put a plan B in place. Plan B should be something other than "It's gotta work". Figuring out plan B is your work, so go do it.

Worrying is helpful only as long as it's contained. Identify the risk, describe the risk, ensure the team understands the risk and the consequences of failure. Then move on to the next important steps: containing and minimizing the risk. Don't sit around and be anxious about it; it's not helpful. Be helpful.

Wednesday, June 1, 2011

The Meeting Jerk

Many of us run meetings from time to time. Almost all of us go to meetings from time to time. Depending on the environment, culture, and reason for the meeting, these can be normal parts of the day, or they can resemble angst-driven white-collar battlefields. In any case, meetings happen. Frequently.

Because meetings happen so much and because we all take so much pleasure in complaining about those meetings, there are huge numbers of articles and massive amounts of advice about how to make meetings more effective. These cover everything from "stand up the whole time!" to "keep them small!" to "lead from the back of the room!".

Beware.

Read too many articles, and you'll turn into a meeting jerk. The meeting jerk is the person who didn't call the meeting, but who is absolutely determined that this will be an effective meeting if he has to make it effective all by himself. Dangit.

The meeting jerk reads perfectly reasonable articles like this one: How to Improve Meetings When You're Not in Charge. He then proceeds to decide that the person running the meeting is ineffective and to follow every point in the article repeatedly and loudly. Every meeting invitation is met with a request for an agenda (including the daily standup) and with pre-meeting thoughts and comments about every item sent to the entire invitation list. In the meeting itself, every point of discussion is concluded with the meeting jerk looking around the room and asking each person if they agree and if they have any more to add. There is an unannounced agenda item added to the beginning of every meeting - "who's going to take notes?" - and to the end - "are we sending notes out? when? to how much of the team?". In short, the meeting jerk does all the right things, to excess and with all the finesse of an elephant doing water ballet.

How do you know there's a meeting jerk in the room?
  • Look for the person who has Robert's Rules of Order memorized and frequently brings up "meta-meeting" guidelines and etiquette.
  • See if there's someone who frequently causes most of the meeting attendees' eyes to roll.
  • Follow the pre-meeting email trail about the agenda and the post-meeting email trail about the notes. Some discussion is normal; responding to every single email with point-by-point questions and rebuttals is antagonistic.
Let's say you've identified the meeting jerk, and she exists. Here's how to handle her:
  • Call the meeting jerk on her behavior. Let her know that you appreciate the effort but that she's causing a bad meeting environment. Include examples of situations where it's been a problem, and then ask her what she's trying to accomplish. Typically, there are some legitimate complaints and it's simply gone too far. Now's the time to address the complaints directly.
  • Do not deal with the meeting jerk in the meeting. That's a public place and egos get involved very quickly. Have a private discussion with the meeting jerk that involves as few people as possible.
  • Step back and look at your meeting facilitation skills. There are likely to be some problems that triggered this behavior in the first place. Check your agendas, meeting invitations, length, timeliness, and other factors.
  • Agree with the meeting jerk on a code you can use in the meeting to indicate you're handling things. Maybe it's a word or phrase, or a movement of some sort that tells the meeting jerk, "yes, I know this discussion has gone on too long and I'm going to stop it now" so that the jerk doesn't feel the need to step in. Over time, this will help build trust.