Friday, July 29, 2011

Learning

I was reading an article this morning about how testers get and stay "technical". Go read it, and then come back. I'll wait.

There are a lot of possible rants here:
  • How is it that anyone can be involved in writing software and not know how to use basic tools of the trade? I would think any tester would learn a job's technology space just like they learn its business space.
  • Learning a tool is not the same as becoming more "technical". It's roughly equivalent to a pastry chef learning how to make a dacquoise. You're not more "pastryish"; you just know how to make a dacquoise now.
  • Eclipse is not actually required to learn Java. You can learn Java without Eclipse if you like. You're also welcome to use Eclipse for other languages. Please don't conflate the tools, the language, and the principles.
But we're going to skip the rants for now.

The thing to pull out of the thread is not to learn X language, or to learn Y tool, or that you can always go to a programmer to code something. It's this:

You should learn the things that help you do your job better.

Some things you learn are more commonly useful than others. For example, I currently work on a C library and have learned some tricks with gdb. It's pretty unlikely that I'll use that one again any time soon, and that's okay: it's still useful for now. Other things - like programming skills or the ability to have a discussion via email with a customer - I use over and over again.
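For a sense of what those tricks look like, here's a sketch of the kind of gdb session I mean (the file, struct, and variable names are all invented):

    # stop only when the interesting case actually happens
    (gdb) break parse.c:142 if len > 4096
    # stop whenever a struct field changes, no matter who changes it
    (gdb) watch ctx->state
    # full backtrace, including each frame's local variables
    (gdb) bt full
    # inspect a struct through its pointer
    (gdb) print *ctx

None of that makes me more "technical" in the abstract; it just helps me do this particular job.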

One of the hard things about learning through tools is that it's sometimes difficult to separate the tool from the underlying technique. For example, if you learn QTP as a tool, does that mean you can work with Selenium? They're two different tools, sure. But some of the underlying concepts are the same. In both cases, for example, the record/playback function is not sufficient for creating maintainable test cases. So when I'm looking at a new tool, I'll frequently play around with multiple tools that do similar things (e.g., QTP and Selenium). That way I start to learn what's common between them and what's specific to the tool. Then when I'm faced with another similar tool I don't know, I can honestly say to myself, "well, I don't know this tool, but I know the principles that underlie the problem space, so I should be able to figure it out." That's learning.
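To make that shared principle concrete, here's a minimal sketch of the alternative to record/playback, using Selenium's Python bindings (the page, its locators, and the URL are all hypothetical, and you'd need a browser driver installed). A recorded script repeats the locators in every test; a small page object keeps them in one place, and that's what makes the suite maintainable - the same idea holds in QTP:

    # A tiny page-object sketch. LoginPage and its locators are invented.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    class LoginPage:
        """Owns the login page's locators so tests don't repeat them."""
        def __init__(self, driver):
            self.driver = driver

        def log_in(self, user, password):
            self.driver.find_element(By.ID, "username").send_keys(user)
            self.driver.find_element(By.ID, "password").send_keys(password)
            self.driver.find_element(By.ID, "log-in").click()

    driver = webdriver.Firefox()
    driver.get("http://example.com/login")  # hypothetical URL
    LoginPage(driver).log_in("tester", "not-a-real-password")
    driver.quit()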

So don't worry about whether something is "technical" or not. Worry about whether learning something will help you. If it'll help, go for it. It never hurts to know more.

Wednesday, July 27, 2011

On Performance

I was doing some performance work on an app last week, and we were making progress. We'd done a number of things: added an index, reworked an n+1 query problem on the main page, and made a few other tweaks. I also emailed the person who controls production and asked him to increase the resources available to our application (threads, basically) so that we could get better concurrency. He made the increase, and along with our code changes, the overall application experience was noticeably better (hooray!). Overall, a successful performance effort.
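For the curious, the n+1 rework is easy to sketch (Python with sqlite3 here; the schema and names are invented and far simpler than the real app). The "before" shape issues one query for the page plus one more per row it shows; the "after" shape gets the same data in a single join, and the added index covers the join column:

    # A minimal sketch of an n+1 query problem and its fix.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
        CREATE INDEX idx_orders_customer ON orders(customer_id);  -- the added index
        INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
        INSERT INTO orders VALUES (10, 1), (11, 2), (12, 1);
    """)

    # n+1: one query for the page, then one more query per row it shows
    orders = conn.execute("SELECT id, customer_id FROM orders").fetchall()
    for order_id, customer_id in orders:
        conn.execute("SELECT name FROM customers WHERE id = ?", (customer_id,))

    # reworked: a single join fetches the same data in one round trip
    rows = conn.execute(
        "SELECT o.id, c.name FROM orders o JOIN customers c ON c.id = o.customer_id"
    ).fetchall()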

Then on Monday I got an email from the person who controls production, and he said, "I see that increasing threads really helped the overall user performance. Is that it? Can we just do that again?"

That's a complicated question. Performance, you see, is almost always a multifactor problem. Changing one thing rarely helps as much as changing several things to work together better. There's also the fun question of whether you should be working on improving performance.

Can I improve performance? Almost always, yes.
Should I improve performance? Maybe.

Improving an application's performance is a matter of balancing the benefit against the cost.

Benefits:
  • happier customers!
  • more efficient use of resources
  • potential scalability improvements
Cost:
  • Adding resources - a common and valid performance improvement technique - costs real dollars
  • You don't get as many features (since engineering is busy making it faster, not making it do more stuff)
  • Highly optimized code and systems may be less flexible and harder to use (translation: slower development time)
That's the dirty little secret of performance: you can almost always be faster if you're willing to pay more. At some point, you need to decide that more performance isn't worth the cost, and stop. So have fun with performance. Make it faster. Make it better. Look in several places and get some more speed (or scalability or load) out of your app. But watch your costs, and when you're paying more than the benefit you're getting, stop.

Monday, July 25, 2011

Why Do You Question

I had a very interesting conversation with a woman over the weekend about questioning. She told me a story...

"One of the main things I learned in college was not to simply accept other people's statements. Instead, I learned to question things and create my own wisdom rather than simply receiving the wisdom of others. It sounds good, this questioning and thinking. Then I got my first job. And my second job. And my third. I realized that I was so busy questioning that I wasn't able to learn from anyone. Instead, I questioned everything and made myself so obnoxious that no one wanted to work with me; I had no one left to learn from."

She went on to talk about how she had to learn to differentiate between other people's experiences and the conclusions that they drew from those experiences. How questioning wasn't about rejecting the experiences of others, but about rejecting conclusions that didn't have a solid basis.

And that got me to thinking. This woman doesn't work in the software space, but there is still something we can take from that.

Too many times I've been taking a class, or talking with colleagues, or teaching a group, and The Questioner shows up. The Questioner is the one who can't allow any statement to be made without starting to question it.

Now, asking questions can be a hugely valuable tactic. Questions let us increase our understanding of a topic. Questions help us lead a student down a path of learning (we call these leading questions for a reason!). Questions can help transition from evidence to conclusions.

But questioning can also be highly disruptive. It can stop a class in its tracks. It can derail a conversation. It can force the group to walk a known path again only to reach the same conclusion, wasting time and exasperating listeners.

The difference between good questions and bad questioning is subtle. Good questions are designed to gather information for the betterment of all the parties involved in a conversation. Bad questioning is intended to make someone look smart or grand at the expense of the other parties involved in the conversation.

Asking questions is actually quite different from questioning things, although both use the same syntax and both make use of interrogation points (also known as question marks). Asking questions is about learning. Questioning things is a method of expressing doubt in someone or something.

Sometimes questioning is a very good thing. After all, if we didn't question conclusions, we'd still believe that illnesses were caused by humors and that the planet didn't spin. Just make sure that questioning is based on understanding and evidence, not on ego and a desire to be smarter/bigger/stronger/more famous than someone else.

(Dear person who gets kicked out of multiple classes for being argumentative and disruptive: you are likely questioning everything for the attention and because you are a jerk, not because you're actually learning anything or providing value to anyone. Please stop or go away.)

So next time you ask a question, first ask yourself whether you're asking it to help the group as a whole, or because you think you're smarter and want to show it. Then put your ego in check, open your ears, and understand.

Friday, July 22, 2011

Bookkeeping

It feels so good to finish something. I commit the code, deploy the build, finish the test, whatever. Then I move on. Hooray!

Well, actually, first there's the bookkeeping to do. Bookkeeping is all the stuff outside the actual change that you have to do in order to inform the project that you're done. Typical bookkeeping includes one or more of:
  • marking a bug or story finished in a tracking system (paper or electronic). This could be moving an index card, or updating a status in Jira, or clicking the "deliver" button in Pivotal Tracker.
  • notifying the recipients of your work. This could be emailing or IMing the test team, or sending a notice to customers, or marking a bug as verified in a defect tracking system.
  • cleaning up the environment. This could be deleting a feature branch, reverting the database to clean out test data, or re-imaging the machine you were using as your victim (err.... test machine).
Some projects have more bookkeeping than others. After all, bookkeeping frequently leads to statistics that are interesting and potentially useful. Think of all the things we can track! We can track time spent and compare it to estimates, to help ourselves be more accurate. We can track how many issues come from the code versus the test infrastructure versus the configuration versus merges, and help ourselves not make the same mistakes. We can track who fixes what kind of bugs, or who does what kind of implementation tasks, so we can balance better in the future. We can, we can, we can....

But all of that bookkeeping costs. It costs time for people to record the information. It costs time to set up the record tracking. It costs time for people to data mine the information and get value out of it. It costs in morale: when a team feels tracked and monitored they spend time, effort, and cycles gaming the system and worrying about looking good for the tracking, when they could be working to make the product better.

That's not to say there should be no bookkeeping. On the contrary, a team that doesn't monitor itself at least a little has no way to show it's improving, or to catch problems early.

Just make sure that when you add an element of bookkeeping to your process, that you know what value that element is going to provide. If you can't articulate the value and how you're going to get that value from that bookkeeping item, then you're not ready to start tracking it. And if that value is no longer there, stop the bookkeeping. Gather as much data as you can usefully use. No more, no less.

Wednesday, July 20, 2011

Trust by Alignment

I've been thinking recently about different forms of trust. Trust is essential to any teamwork - a team that trusts each other will be more functional and get more done than a team that doesn't.

But there are different kinds of trust. Sometimes we trust out of necessity. This is the kind of trust I have for my doctor; I trust that he can set my arm correctly after I break it, or diagnose a condition. Sometimes we trust out of respect and love. This is how families trust each other, and is the trust behind "blood is thicker than water". And sometimes we trust by alignment. This is the trust between colleagues.

Trust by alignment is about respect, but more importantly it's about shared values. For example, I work with Matt Heusser fairly frequently. When Matt emails me and says, "Here's something we should do. Are you in?", I almost always say yes. I don't ask too many questions; I just trust that it will work out. Why? Because Matt and I share many values. He's as careful with his reputation as I am with mine, so if he thinks this will enhance - or at least not harm - his reputation, then I can feel safe it won't hurt mine. Matt and I also share the idea that engineering advances when thinking people put aside their egos and grandiose pronouncements in favor of sharing experiences and ideas. So if he says, "let's do an article together", then we're going to wind up with an article that I can be proud of, because I know that he's not going to put in content that I completely disagree with. We have alignment of professional values, and that means we can trust each other.

So ask yourself whether you can trust someone, sure. And recognize that trust comes in many forms. Trusting by alignment, or necessity, is perfectly acceptable. Know that you trust and know why you trust. That will tell you when to trust.

Monday, July 18, 2011

"Make Me"

Adam Goucher said this on Twitter today: "how to fail at anything 'my boss is making me use java for this'. equally true for any language w/ 'making me'. (just more so with java)"

Now, I'm not going to say anything about Java; that's not today's point. Bash it or love it, it really doesn't matter to me.

But Adam's right about "My boss is making me use..." being a huge red flag. This person - whoever it is - will almost certainly fail at whatever task he's being asked to accomplish.

Why?

Because he's already decided to fail. You can tell because he said, "making me". When we say, "making me", it means several things:
  1. a decision has been made
  2. we didn't agree with it
  3. we're angry about it
This is a classic setup that almost always leads to "proving" that the decision maker "got it wrong" by failing.

Let's take a step back and look at this from two directions. First, let's talk about how technical decisions get made. Second, let's look at how we can help ensure people don't want to undermine those decisions - how we can avoid the "make me" moment.

Making technical decisions generally goes something like this:
  • Identify that there is a decision to be made
  • Identify (some of) the possibilities
  • (Maybe) do research to ensure the feasibility of those possibilities
  • Identify a decision maker
  • (Maybe) consider the pros and cons of each choice
  • Make a decision
In some cases, this can take seconds and be done by one person. For example, choosing whether to use a for loop or a while loop is a technical decision that is usually made by one person alone, and that's completely fine. In other cases, this is a huge decision made by a large team of people. What technology stack to use for a new platform is an example. Another example is the creation of a design that will be the heart of a next generation product. These decisions take a lot longer (days, weeks, months, even), and frequently involve research, prototyping, etc.

With these large decisions, the human factor becomes very important. After all, none of us is actually psychic, so we're all just guessing. Some of us are just better guessers than others due to experience, research, or simple skill. Similarly, there often isn't one clear winner. It's perfectly acceptable, for example, to write a web application in Ruby on Rails, Java, or .NET. Any of those are great technologies with various pros and cons, and any one of them might be right for this particular project. It's a decision that will be made on human factors and circumstances, rather than on technical feasibility. These are the decisions that are more likely to lead to the "make me" moment.

So now we know we're at risk of the "make me" moment. We're at a point where someone may decide they don't like the decision and decide that it's going to fail, consciously or not. (By the way, someone who has decided this won't work is a lot more likely to fail than someone who thinks it will work!) We need to head off that problem before it starts. And once again, it's the human side of things rather than the technology decision itself that will determine this. So, how do we make sure that the recipients of a technical decision are at least willing to give it a chance?

Try these:
  • Identify a decision maker before starting the process. This helps reduce surprise at the point of decision. It also tells the team who they have to convince about their views, and this is actually helpful since at least they know their voice was heard by the right people.
  • Make sure the decision maker has the technical prowess to make a decision intelligently. Being handed a decision by someone that the technical team respects makes a big difference in how likely they are to accept it. For many engineers, this means that the decision maker needs to be an architect or engineer, not a product manager or the like.
  • Give the team a voice. Let team members who care about the decision describe what they would choose and why. They probably have some good insights, and can at least make sure that the decision maker is addressing the relevant concerns, no matter what the decision ends up being.
  • Explain the decision. Describing why a decision went the way it did makes it look less arbitrary, and helps cut off a lot of dissent.
  • Provide a feedback loop. Sometimes a decision really does turn out to be wrong. You need the freedom to unmake the decision if it's ultimately harmful, and the earliest way to know that is to listen to the feedback from the people most involved with the effects of the decision. Provide explicit feedback mechanisms and timelines, and use those to rethink the decision if it's going unexpectedly poorly.
Making decisions is hard, and when the decision isn't clear there will frequently be someone who is angry about it. Understand and defuse the anger, and avoid the "make me" moment. You'll increase your chances of success.

Thursday, July 14, 2011

Balance Self-Driven Processes

At one client site, the engineering team has a policy of doing peer code reviews for every change set. It goes something like this:
  • I make a change and test it (and change and test until it's right)
  • I package up the change
  • I send a note to the group chat to ask for a review
  • Someone reviews the code
  • I make any suggested changes, get another review if it was a big set of changes
  • I check in (done!)
Simple enough. It works... as long as there are always willing reviewers.

For any sort of casual peer review workflow like this to work - whether it's code reviews, idea jams, design reviews, or whatever - all the peers involved have to be willing to balance reviewing and doing. That is, sometimes they have to be the one who is reviewing changes, and sometimes they have to be the one who is doing changes. If you're not in balance, then your casual review process is dividing your team into doers and monitors - and that's a bad vibe long term.

If you have a peer review process in place, and you're getting negative feedback, you might be out of balance. Listen for comments like:
"Well, I finished it yesterday but I haven't gotten a review yet."
"Nope, I didn't do that task yet. I was reviewing all day."
"Jacob will review it. (HAHAHAHHAHA!) Yeah, right!"

If you're out of balance, it's time to figure out how to restore balance as a team, or to decide that the casual self-directed process isn't working. Generally the former is preferable unless the practice is completely useless. Typically, you can restore balance by formalizing the process for a while (e.g., designate a "reviewer of the day" and rotate the role strictly).

Self-driven processes can be highly effective process techniques, but only if the workload is balanced across the whole team. As an individual team member, make sure you're doing your part. And as a team, make sure things still feel balanced. As with anything, how things feel is an early indicator of success or problems. You already know it in your team - just listen.

Tuesday, July 12, 2011

Top Overall Priority

Many of us have a single overarching goal.... and a whole lot of other stuff we have to do. For example, yesterday I had lunch with someone who is the (non-technical) founder of a very early stage startup. They have two people - the business guy and the tech guy - a product, and a few customers. They're getting ready to go look for a round of funding but don't need it just yet.

The business guy has one overall priority: sell more.
The tech guy has one overall priority: make the product more saleable (to existing and to future customers).

That's it. Sounds simple. Except for all the other stuff:
  • meeting potential investors and keeping them interested
  • forward-looking product features that won't help get today's customers but will help keep ahead of their competitors
  • hiring and managing sales guys and developers
  • writing blog posts and twitter updates to keep the company's public face fresh
  • analyzing logs to see if there are any lurking problems
  • budgeting - just how long before they really do need a cash infusion?
  • And a whole lot more...
Entire days go by when the business guy doesn't make a single sales call to a current client about expansion or to a potential new client. Whole days go by when the tech guy doesn't create a product feature or fix an annoying bug. They're caught up in the other stuff.

Now, I'm not going to tell you the other stuff is unnecessary. The fact is that most of it is part of the job, too. But here's the catch: if you do all the other stuff but you don't do your single highest priority, you will ultimately fail.

So make sure your absolute top priority gets addressed frequently. Keep it, well, at the top.

Friday, July 8, 2011

Doesn't Matter How You Get There

I was doing some work with a team that had recently adopted Scrum, and they were talking about their last planning meeting. We had met the week before, and we had practiced doing a planning session. When they got to the actual planning meeting, though, they did something different. Here's (the relevant part of) what they did:
  • brought a backlog with estimates to the meeting
  • calculated how many total iteration days they'd have (subtracting days for people on vacation)
  • multiplied the number of person-days by the number of effective hours per day to get the total number of hours they could do in the iteration (the sketch after this list shows the math)
  • discussed the backlog, let the product owner reorder a bit, changed an estimate or two
  • walked down the backlog and stopped when they couldn't fit any more in their total iteration hours
  • (This is the new part.) Collectively said, "eek!" and dropped the last item because the iteration felt too full.
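Their capacity math is simple enough to sketch in a few lines of Python (every number below is invented for illustration):

    # Back-of-the-envelope iteration capacity, as the team calculated it.
    # All of these numbers are made up.
    team_size = 5
    iteration_days = 10            # a two-week iteration
    vacation_days = 3              # total vacation days across the team
    effective_hours_per_day = 5.5  # real project hours in a person-day

    person_days = team_size * iteration_days - vacation_days   # 47
    capacity_hours = person_days * effective_hours_per_day     # 258.5
    print(f"capacity this iteration: {capacity_hours} hours")

They then walked the backlog, summing estimates until they hit that capacity number - and then trusted their gut and dropped one more item.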
So we sat down this week, and they told me what they did in the planning meeting, and said: "Did we do it wrong?" and "So how bad was that?"

And here's the answer.

It doesn't matter how you get to your iteration. What matters is that you all agree to it.

Estimating backlog items and counting how much time you think you're going to have are useful exercises, but they're really only tools for the team. No one outside the team needs to care how you fill your iteration. You can use tools like estimation and capacity calculation to figure out whether you're going to have more or less time than normal (in fact, I recommend that you do). But if you need to adjust the results, go ahead! As long as everyone on the team agrees with where you end up, you can make whatever adjustments you like along the way.

As far as I'm concerned, there are only two rules for what must happen in or come out of a planning meeting:
  1. The entire team must agree on what they're signing up for. This includes the product owner and the team.
  2. The entire team communicates throughout the planning meeting. No bait and switch tactics, please, since that erodes trust.
That's it. Everything else is just a guideline - you can follow it, you can follow it and adjust, or you can throw it out the window (normally you don't, but you could). So don't sweat it. Just get to an iteration, and do it in a way that your whole team is okay with (and that includes the product owner!). How you get there is up to you.

Tuesday, July 5, 2011

Not On the List

I do almost all of my work out of OmniFocus. At any given time, I'm working on a range of things, from projects for clients to maintenance tasks for old clients (yum update foo!) to articles, etc. There's no way I can keep it all in my head, so I let OmniFocus take care of it for me.

So everything I do is there, right?!

Well, no. (Yes, hardcore GTD people can laugh at me now.)

I also do things that I get from:
  • my email
  • meetings and casual discussions
  • my IM
  • habit (blogging is an example of this - I never write it down but I know I need to write content for it)
This is pretty much identical to what happens to teams during sprints. There is a great list of things that they intend to do (just like my OmniFocus items). And then there's everything else. Some of it even comes from the same places - meetings (with customers or management), email, even IM (ever had a support rep IM you with an in-progress customer issue? I have.).

Just like life, sprints never really fit on a single list. Don't forget to give yourself a little breathing room for all the stuff that's not on the list.

Friday, July 1, 2011

Euphemisms

Many people object to various terms in software, claiming they sound negative. And they do sound negative. Unless you're an entomologist, "bug", for example, doesn't sound too good.

So people create euphemisms. I was at an event a few nights ago with a bunch of software folks, and we got to discussing euphemisms, specifically for bug review meetings.

It's not a Bug Review, it's a:
  • Improvement Opportunity Session
  • Challenge Validation Meeting
  • Analysis Meeting
  • Likely Sprint Addition Review

Seriously. Sometimes negative things happen; don't be afraid of them, deal with them. Do enjoy the euphemisms, though - they're completely absurd!