Friday, August 31, 2012

Consider the Message

A couple of days ago I was at the butcher picking up some meat for supper (burgers!). My card got declined. And here's the thinking: "Oh wow, embarrassing! I come here all the time! I'm sooo not that person. I'm fiscally responsible, darn it! Besides, I'm nowhere near the limit. How annoying!" It was about a 5-second panic, but let's be honest, it's not a good feeling. It's embarrassing and annoying for both parties - for the cashier and for me.

So I paid with cash, and as I was going out, the cashier handed me the receipt from when it got declined. Here it is:

[Photo of the declined receipt]
Seriously?! Seriously!?!? That 5-second panic and the annoyance for me and for the cashier - and check out that decline reason:

"Could not make Ssl c"

I'm going to assume that means "Could not make an SSL connection to ". Bonus points for the truncated message and for the interesting capitalization.

That's why error messages matter. The error shouldn't have been "Declined". It should have been "Error". That would have saved us all the embarrassment, at least! (Yeah, it still would have been annoying.)
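
A minimal sketch of the distinction, with entirely made-up names (this is not any real payment library, just an illustration): a connection failure means the card was never evaluated at all, so report it as an error rather than a decline.

    # Hypothetical sketch: separate "we couldn't ask" from "the issuer said no".
    def attempt_charge(card_number, amount):
        """Stand-in for the processor call; raises ConnectionError if unreachable."""
        raise ConnectionError("Could not make SSL connection to processor")

    def receipt_message(card_number, amount):
        try:
            approved = attempt_charge(card_number, amount)
        except ConnectionError:
            # The card was never evaluated, so don't print "Declined".
            return "ERROR: could not reach the payment processor. Please retry."
        return "APPROVED" if approved else "DECLINED"

    print(receipt_message("4111111111111111", 12.50))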

So please, consider your error messages. They matter.

Wednesday, August 29, 2012

Autonomous and Dependent Teams

Almost all of my clients are currently doing some variation on an agile methodology. Most of them are doing Scrum or some version of it. And yet, they are very different teams. In particular, lately I've noticed that there are two kinds of teams: those that encourage autonomy and those that encourage dependence.

To be clear, I'm speaking entirely about what goes on within the development team itself. Within a development team there are invariably leaders, usually the ones who have been around for a while and show the most ability to interact on a technical level with the rest of the team. The character of those leaders - and how much autonomy non-leader team members have - says a lot about the team.

Teams that encourage autonomy take the attitude that team members should help themselves. If a member of the team needs something - a new git branch, a test class, a method on an object in another area of the code - then that person is responsible for getting it. How the team member accomplishes that is, in descending order of preference: (1) doing it; (2) pairing with someone to do it (and learning in the process); (3) asking someone else to do it; (4) throwing a request out to the team at large.

Teams that encourage dependence have a very different attitude. These are teams where each person has a specialty and anything outside that specialty should be done by a leader. If a team member needs something - a new git branch, a test class, a method in another layer of the code - then that person should ask a team leader, who will provide it. Sometimes the leader passes the request off to another team member, and sometimes the leader simply does it.

Let's look a little deeper at what happens with these teams.

Autonomous Teams

  • Emphasize a leaderless culture. These are the teams that will say, "we're all equals" or "there's no leader." There are people who know more about a given area or technology, but the team considers them question answerers more than doers in that particular area.
  • Can better withstand the loss of one person. Whether it's vacation, maternity leave, or leaving the company, the absent person is less likely to have specialized knowledge no one else on the team has. It's a loss that's easier to recover from.
  • Tend to have more tooling. Because there's no dedicated "tools person", everyone introduces tooling as it's needed, from a continuous integration system to deployment scripts to test infrastructure to design diagramming. Over time this actually results in more tools in active use than on a team with a dedicated tools engineer.
  • Produce more well-rounded engineers. "I don't do CSS" is not an excuse on this team. If the thing you're working on needs it, well, you do CSS now!
  • Work together more. Because each team member touches a larger area of the code base, there's more to learn and team members wind up working together frequently, either as a training exercise, or to avoid bumping into each other's features, or just because they enjoy it.
  • Tend toward spaghetti code. With everyone touching many parts of the code, there is some duplication. Coding standards, a strong refactoring policy, and static code analysis can help keep this under control.
  • Have less sense of the current status. Because each team member is off doing their own work, they don't always know the overall status of a project. This is what daily standups and burndown charts are supposed to help with, and they can if they're done carefully.

Dependent Teams

  • Have a command and control culture. These are the teams that say, "we'd be dead without so-and-so" or "Blah tells me what to do." They look to the leader (or leaders) and do what that person says, frequently waiting for that person's opinion before acting.
  • Can quickly replace non-leaders but have a huge dependence on leaders. When a leader is missing - on vacation, in a meeting, or gone from the company - the team gets very little done, and you hear phrases like, "I don't know. So-and-so would normally tell me, but he's not around."
  • Have a good sense of overall status. The leaders tend to know exactly where things stand. Individual team members often do not.
  • Do standup as an "update the manager" period. The leader usually leads standup, and members will speak directly to that person (watch the body language - they usually face the person). Standup often takes place in shorthand, and not all team members could describe each task being worked on.
  • Tend to work alone or with a leader. Because individual team members tend not to work on similar things or to know what everyone is doing, they'll often work alone or with a team leader.
  • Tend to wait. You'll hear phrases like, "well, I need a Git branch; has one been created yet?" Instead of attempting to solve the problem - for example, by looking for a branch and creating one - the team member will note the problem and wait for a leader to fix it or to direct the fix. 


Overall, I vastly prefer working with teams that encourage autonomy. There are moments of chaos, and times when you find yourself doing some rework, but overall the teams get a lot more done and they produce better engineers. I understand the appeal of dependent teams to those who want to be essential (they'd like to be the leaders) or to those who just want to do what they're told and go home, but it's not for me. Viva the autonomous team!

Friday, August 24, 2012

"Who Was That Masked Man?"

When we were kids, we'd occasionally get to watch the Lone Ranger. We watched the fifties version, with the Lone Ranger and Tonto riding into town in black and white and saving the day. Inevitably, it would end with someone staring into the camera and asking, "Who was that masked man?" as the Lone Ranger rode off to his next town.
"Here I come to save the day!" (Wrong TV show, right sentiment)

I watched the show again not too long ago with a friend's son, and got to thinking that there really was something to this idea of someone riding into town, clearing out the bad guys, and riding off into the distance. It's really rather like being a consultant in some ways. You ride in, find the bad parts, fix them, and ride out. Fortunately for me, there is a lot less horseback riding and gunplay in software than there ever was in the Lone Ranger!

But let's look at each of those steps:

  1. You come in
  2. You find the bad part(s)
  3. You fix them
  4. You leave
All of those steps are important. If you leave without finishing every single step, well, you're no Lone Ranger.

Come In
Coming in doesn't have to mean going to the client's offices. It just means that you need to show up. You have to interact with the client - and not just the project sponsor. This is how you'll learn the full extent of the problem and start to build the trust you'll need to fix it. This means you have to be around and be available for casual conversations. This might be in person, by phone, in chat rooms, or over IM.

Find the Bad Part(s)
You're here because there's a problem the client can't or won't solve internally. Understanding that problem gives you the bounds of your engagement. Sure, there are probably other things you could optimize or improve, but don't lose sight of the thing you're actually here to fix!

Fix Them
You have to actually fix the bad part(s). Don't offer a proposal; don't outline the problem and note how it could be fixed. Do the work and fix it. This is what differentiates the Lone Ranger from the stereotypical management consultant.

Leave
This part is surprisingly hard sometimes. The problem will be fixed and you just keep coming around, or start working on a new problem, or keep maintaining the fixes. That's all well and good while you're making sure the fix is stable, or while there's another problem worth solving. When it's just maintenance, though, it's time to leave. Don't forget to actually do that part.

And that is how 24 minutes with the Lone Ranger turned into a rant about consulting as a software engineer. Now, back to the fake Wild West for me!

Monday, August 20, 2012

The Dreaded Feature List

We've all seen them: the lists of features. Whether they show up in a backlog, or a test plan, or a competitive matrix, or an RFP, feature lists are everywhere. In many ways, they're the driving force of software development. "How many features have you built?" "When will that feature be ready?" "Does it have X feature?"

There's one big problem with that feature focus, though: it's not what customers actually want.

There are only two times I can think of where a customer actually explicitly cares about features:

  1. When comparing different products (RFP, evaluation, etc.)
  2. When they're using the presence or absence of a feature as a proof point in an argument about your product
The rest of the time, they care only that they can solve their problem with your product. Having a "print current inventory" feature is, by itself, useless. Being able to take a hard copy of the inventory report to the warehouse and scribble all over it while doing a count - that's what the customer actually wants. "Print current inventory" is just a way to get to the actual desire. These stories - tales of things the customer does that involve your software - are the heart and soul of the solution.

So - with the exception of RFPs and bakeoffs - ignore features. Start focusing on the customer and their stories.

Thursday, August 16, 2012

Good Software Makes Decisions

A lot of software is complex. Sometimes it's not even the software that's complex; it's the landscape of the problem that the software is solving. There are data elements and workflows and variations and options. Requirements frequently start with, "Normally, we foo the bar in the bat, with baz. On alternate Tuesdays in the summer, though, we pluto the mars in the saturn. Our biggest customer does something a little different with that bit, so we'll need an option to jupiter the mars in the saturn, too." Add in a product manager who answers every question with, "well, let's make that an option", and you end up with a user interface that looks like this:
[Screenshot of an option-overloaded user interface]
No one wants to use that.

It's software that has taken all of the complexity of the problem space and barfed it out onto the user. That's not what they wanted. I promise. Even if it's what they asked for, it's not what they wanted.

Building software is about making decisions. Good software limits options; it doesn't add them. You start with a massive problem space - "software can do anything!" - and limit from there. First, you decide that you're going to solve an accounting problem. Then you decide what kinds of inputs and outputs go into this accounting problem, and how the workflows go. You do this by consulting with people who know the problem space really well. Define what you will do and - just as important - what you won't do. Exposing a decision says, "I don't understand this problem; you decide." Making a decision builds customer confidence; it says, "I know what I'm doing." All the while, you're creating power by limiting choices.
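
Here's a tiny sketch of the difference, with made-up names (an illustration, not a real API): the first function exposes every decision as an option; the second makes the decisions.

    # Hypothetical sketch. Version one pushes every decision onto the caller:
    def export_report_v1(data, fmt, delimiter, encoding, include_header,
                         date_format, rounding, page_size):
        ...  # eight knobs the user now has to understand

    # Version two decides, based on knowing how the report is actually used:
    def export_report(data):
        """Export a CSV laid out the way the accountants actually read it."""
        lines = ["date,amount"]
        lines += ["%s,%.2f" % (day, amount) for day, amount in data]
        return "\n".join(lines)

    print(export_report([("2012-08-16", 12.5)]))

The second version can always grow an option later, once a real need shows up; the first can never take its knobs away without breaking someone.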

It's okay to make decisions in software. Take a stand; your software will be better for it.

Tuesday, August 14, 2012

Don't Claim It If You Only Read It

I've been hiring on behalf of a client lately, looking for a head of QA. One of the trends I noticed as I was reviewing resumes was that a lot of candidates had a list of languages on their resumes: C++, Perl, Ruby, Java, etc. "How cool!", I thought. It's great to see the testing profession attracting people who are capable of writing code but for whatever reason find testing a better fit. Maybe the profession is finally overcoming the stigma of "not good enough for dev" and "tester == manual tester".

And then I started the phone screens.

I've screened 12 people so far who listed languages on their resumes, and not one of them has actually been capable of writing code. Nope, that language list? That's the languages they can read. As in, "I can follow along with code written in C++ or Java."

Ouch. Now I'm disappointed in all of them. Here I thought they had some skills they don't have.

I understand what the candidates were going for. They're "technical testers" rather than testers who work purely on the business level and don't understand how software works. I get it. I really do. That such a distinction has meaning is sad but true.

But don't claim languages if you mean you can read them! You're an engineer, darnit! Claim a language if you can write it. There are enough engineers - and enough testers - who can write code that claiming a language without qualification means people will assume you can write it.

If you're trying to say, "hey, I'm a technical tester", then please do so in your resume. Just do so without unintentionally telling people that you're more technical than you are. The right way to say, "I can read but not write C++" is to say: "Languages: C++ (reading), Perl (read/scripting)" or something similar. That way hiring managers don't start with higher expectations than you can meet... And that's good for candidates, too.

Tuesday, August 7, 2012

The Why Behind "He Said So"

In software, we ask "why" a lot. "Why" is a seeking question. If we understand the cause or the reason for something then we can make an intelligent decision about changing it or going around it (and let's be honest - we usually ask why because we want to change it or work around it).

The answers to "why" are many and varied, but possibly the worst answer is: "Because so-and-so said so."

That's not a reason; that's an excuse. Presumably, whoever said so had a reason for saying so, and that's the actual why. There are two reasons we get into this situation:

  • Because that person is almost always right (or in a position of authority that effectively makes them correct)
  • Because that person is a convenient target for blame
It doesn't have to be this way. "Because so-and-so said so" is a learning opportunity. It's a chance to understand better, both this specific scenario and how so-and-so got to be such an expert. Think of it as a chance to check your work.

"Why is there a limit of 255 virtual machines on this box?"
"Because John said so."

Well, John's a really smart guy, and he knows the product really, really well, so presumably he has a reason for saying so. So why did he say so? "Because we use a /24 network, which allows 256 hosts, and we reserve one address for the physical box." See? Now you actually know something more about the product than you did before. You also know that there's potentially a problem there (a /24 actually allows only 254 usable hosts, since the .0 network and .255 broadcast addresses are reserved).
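
If you want to check that yourself, the arithmetic is easy to verify. Here's a quick sketch using Python's ipaddress module (the network address is an arbitrary documentation range, not the real one):

    # A /24 has 2**8 = 256 addresses, but .0 (network) and .255 (broadcast)
    # are reserved, leaving 254 usable hosts. Reserve one more for the
    # physical box and the limit looks more like 253 VMs, not 255.
    import ipaddress

    net = ipaddress.ip_network("192.0.2.0/24")
    usable = len(list(net.hosts()))   # 254
    print(usable, "usable hosts ->", usable - 1, "VMs after the physical box")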

Find out why someone said so... it's illuminating.