Monday, January 30, 2012

Always Plan?

No matter what process you use, there's almost always the idea of planning. RUP has its Inception Phase. SCRUM has its sprint kickoff and grooming. Even Kanban has a backlog and planning in the form of creating item cards. Apparently, we all really think planning matters! We may disagree about timing, how much we plan, etc., but we all like planning.

Planning gives us a lot of things:

  • An explicitly-expressed understanding of what we're doing
  • An explicit communication of how hard that is (points, hours, whatever)
  • Accountability for how well we did (whether you call this velocity, signup, whatever)
  • Insight into trends for macro planning (i.e., we get roughly X done every week, so this feature will be done in approximately Y weeks, give or take)
I'd like to make a counterargument: sometimes we shouldn't plan.

There are a lot of things that can be planned - features, data processing, even performance work. There are a lot of things, though, where planning is just guessing. Bug finding/fixing is a good example of this. "How long is it going to take you to fix all the issues we're going to find in this integration effort?" True answer: "Gosh, I really don't know. Depends on what issues we find." Planning answer: "Umm... about 4 hours". With a long-term concerted effort, you can figure out very rough estimates based on how many bugs you usually find in similar features and how long they typically take to fix. You'll frequently be off by a bit, but statistically you can make it work out - again, with a lot of effort.
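A back-of-the-envelope sketch of that "with a lot of effort" estimate might look something like this - the numbers are invented, and the real work is having the historical data to plug in at all:

# Invented historical averages; in practice these come from mining your
# bug tracker over a long stretch of time.
bugs_per_similar_feature = 12
hours_to_fix_per_bug     = 3.5
features_in_integration  = 4

estimate = bugs_per_similar_feature * hours_to_fix_per_bug * features_in_integration
puts "Roughly #{estimate} hours of bug fixing, give or take a lot"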

With some things and in some time periods, the effort of planning isn't worth the benefit. If reasonable accuracy is only possible with a lot of research and tracking, and you don't have that data, then you can't plan with reasonable accuracy. Rather than just guess, consider skipping the planning.

For example, I worked with a company that was doing a big Christmas Day launch (yes, it was Christmas related). It didn't matter how good or bad it was; that was the date. We finished all the features with about two weeks to go. It was shaky, of course. If you did everything right, it worked, but we hadn't tried anything other than the happy path. We had two weeks for testing and bug fixing and any tweaks. So we needed a planning session! Right? Nope. No way. We didn't know what bugs there were; we didn't know how long they would take to fix. Planning would have been a big guessing game. Better to get started, spend a few days learning just how brittle things were, and THEN plan.

Most of the time, we should plan. Most of the time we do plan. Every once in a while, we get to a point where planning doesn't make sense, and it's okay to skip it, at least for a while.

Friday, January 27, 2012

Why I Haven't Been Hiring Testers


As many of you know, my engineering career started in software testing. I'm in engineering management now, and spend most of my time building and managing teams, writing code (product code and test code), and handling technology strategy for various clients. I feel like that's important to mention up front: I am not anti-test.

But.

Well, here's the thing. I haven't hired a straight up manual tester since 2007. I haven't hired anyone to a test position since 2009. I've hired developers, contracted designers, even hired project managers and contracted documentation experts. But no testers. I know what kind of value a tester can bring to an organization... so why haven't I hired one?

Well, partly because the testers that we did hire mostly stayed. That guy I hired in 2007 is still with the company and still doing testing. The guy I hired in 2009 is still with the company, although he has moved into development. We're still getting the testing function.

There's another part, too. I've basically stopped hiring testers as an early team component.

The testing function is hugely important. We do want to know that what we built works and that it fits our customers' needs. So why no testers?

Increasingly, I have other people who are embracing the tasks that used to be reserved for a dedicated tester:
- developers have survived - thrived even - while writing test automation
- product owners have been eager to use the product in a structured way
- customer service reps and implementation managers have been able to provide additional context about application usage and behavior
- improved monitoring points out problems in dev, QA and production, and makes the patterns behind them apparent so debugging is a lot easier

Now, for some applications I'd still hire a tester. I'd make that hire when I needed that kind of specialized knowledge. But that knowledge is becoming less and less specialized as more people start listening to the mantra that "quality is everyone's responsibility". We're seeing more people who test, even as I'm around fewer testers.

Test is not dead. Test is more pervasive and discussed than it has been for most of my career. The dedicated tester, however, is becoming more rare. And I think that's actually good.

Thursday, January 26, 2012

The Guess the Language Game

As I'm surfing the net or riding the subway, I notice ads that contain code. Usually they're ads for job sites or degree programs. Sitting on the subway is actually pretty dull, so I find myself playing that "what language is that?" game.

For example, this is an ad I saw today:



This one was for a job board (job boards play this game a lot). Other common advertisers are training or college programs, and tech or biotech companies. It's a fun mix of languages and styles. Here are some recent ones:

  • a for-profit college: Java
  • biotech company: C++? (I think?)
  • a large college: Java
  • the same large college: JavaScript
  • a job board: that was totally not a language. We'll be polite and say pseudocode
  • another job board: JavaScript
  • a technology company: Perl
The use of actual recognizable languages is surprisingly high!

It's also interesting to watch how languages change. When I first started noticing these ads four or five years ago, the language was predominantly Java or Perl. These days I see a whole lot of JavaScript. I don't necessarily think that it says anything about languages you need to know, but it sure makes a subway ride more fun!

Monday, January 23, 2012

The Hard Way Gets Easier

There are two ways software development and deployment can go: the easy way and the hard way. When it's the easy way, code works, deployments are one-click (or one command), and unicorns dance through rainbows during each build. This way doesn't happen very often outside of tutorials and extremely small projects. No, all too often, it's the hard way.

The hard way is characterized by messiness. Maybe the code is sloppy. Maybe the code is great but the deployment process is manual and error prone. Maybe the code and deployment ideas are great but the different environments are all hand-built, and none quite the same way. Maybe everything's fine, but getting approval takes forever so releases are slow. Maybe the development process requires things that don't get done, so you're stuck waiting on a code review... for days. Whatever the cause, nothing ever goes as smoothly as we'd hope. To some degree, "hard" characterizes most software development shops I've seen.

(Tangential note: I'm a consulting engineering manager. I probably see a higher proportion of software shops doing things the hard way. If it's already easy then my services are generally not needed!)

Building software the hard way isn't fun. It's frustrating for engineering, most of whom know better but don't have a clear path (or don't have time) to get from current to easy. It's frustrating for management, who would like to see new features more quickly and more reliably. I know very few people who actually enjoy doing things the hard way.

So how do we break the cycle? How do we make it less hard? We fix it. But we do so slowly. The first mistake most people make when they realize that developing and delivering software has gotten hard is that they try to fix it. All of it. Right now. That's a recipe for failure. If they can't fix it all at once - and they can't - then they don't fix any of it.

The trick is to make small steps toward a fix. Management isn't going to approve stopping all new work for three to six months and completely rebuilding production just to make it easy. So we do things that management can approve. Remember, it's in management's interests to see this problem solved, too; they want to approve changes that will make things easier. They just have to balance those changes with meeting all the other obligations. Hence: baby steps.

Start with finding something small you can change. It doesn't have to solve the hardest thing. It doesn't have to address the biggest problem. It doesn't have to fix things all the way. Our only two criteria are: (1) we can do it; and (2) it makes things a little bit better. Then do that one small thing.

You can guess what comes next. Yup, do it again!

Doing things the hard way is an uncomfortable reality for many of us. We can make it easier, though. Think small, think doable, and eventually what's hard will get easy.

Wednesday, January 18, 2012

The "Hardening" Myth

The purpose of software changes as it ages. When we first write something, we're doing it quick and dirty, trying to validate that the core idea is good. We might call it a prototype, a proof of concept, an early alpha, a beta, or even 1.0, but the commonality remains: the software has been written for speed and for getting the core of the offering right. This is true whether the core is an algorithm, business process, market hold, whatever.

Once we get through that first phase, though, it's time to build "real software". This is the time you build the software you can grow on. All those things that got skipped in the first phase - error handling, monitoring, administration - need to get put into the product.

With product life cycles being what they are, that frequently turns into "hardening" the prototype. Basically, engineering is going to go off and add administration, and configuration, and error management, and scaling capabilities. Engineering probably says they want to rewrite at least part of the system, but they get pushed (or walk themselves) into the idea of hardening.

Bad idea alert!

"Hardening" is building something that looks like this:



Looks nice on top. Pretty shaky down below.

"Hardening", you see, is a myth. Way back when the team first built the prototype, they either "built it right" or they didn't. They either set up robust deployment, included monitoring, got their configuration and logs into a good location, or they didn't. If they did, then you're not having the hardening versus rewrite conversation at all. If they didn't, then you need to do a rewrite.

"Hardening" is a compromise that leaves you with baggage, slowing you down over time. The symptoms vary. Maybe you have a production system that you can't shut down because no one knows how it was deployed but it's functioning (for now). Maybe your releases are high risk because you have to change configurations in code. Maybe you don't have monitoring, so you only know about downtimes when your customers call (whoops!). Maybe the system has a lot of errors, and consumers are confused, creating a burden on your support or dev team. Whatever your particular symptoms, you're seeing the effect of trying to take a prototype (or alpha) and make it into software that will serve you at scale.

There's no shame in rewriting software. It's not a bad thing to throw out software once it has accomplished its purpose. If you wrote a prototype and it showed that your core algorithm worked, great! It has done what it needed to do. Time to retire it and write the software that will implement your core algorithm at scale.

So don't "harden".  If you can proceed with what you've got, then great. If you can't, bite the bullet and rewrite. It'll slow you down in the immediate term, but it's the right path to long-term velocity.

Monday, January 16, 2012

Test Intent or Implementation

Writing unit tests for existing code can be very simple. You pull up the method (or API or class or whatever), look at the implementation, and start writing tests to make sure you hit it all. Read method, write test, read method, write test. Repeat until desired test coverage is achieved.

Yay! We're done!

Well, no, we're not.

We've tested what was implemented, that's true.

But did we test what was meant to happen? Did we test the intent?

This is a much trickier question. If we're testing a method that's entirely internal - a helper method, say - then the intent and the implementation are probably identical. After all, the consumer of the method is the guy who wrote it, and if it doesn't do everything he wants, then he's probably noticed it by now! If, however, we're testing something that is consumed externally - an API, say, or a library - then the intent may be quite different from what the programmer understood.

For example, let's say we have an API that provides some information. The programmer may have implemented that as a GET with url parameters for each data point (e.g., http://mycoolapi?id=1&name=bob&text=hi). The implementation may be perfectly correct. But the intent may be quite different. The product manager may have meant for the API to be a POST with a multi-part form upload, since the next data point is going to be a picture that can be uploaded. Implementation - perfect! Intent - oooops.
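As a concrete (and entirely hypothetical) sketch, here's what testing the intent rather than the implementation might look like using rack-test with the old should syntax. MyCoolApi, the route, and the picture parameter are all made up for illustration:

require 'rack/test'

describe "the profile API" do
  include Rack::Test::Methods

  # Hypothetical Rack app that serves the API; swap in whatever you have.
  def app
    MyCoolApi
  end

  # Testing the implementation: mirrors the GET-with-url-parameters code.
  it "returns data for a GET with url parameters" do
    get "/", :id => 1, :name => "bob", :text => "hi"
    last_response.should be_ok
  end

  # Testing the intent: the product manager wanted a multipart POST,
  # because a picture upload is coming next. This is the test that would
  # catch the mismatch. (Assumes avatar.png exists next to the spec.)
  it "accepts a multipart POST with a picture" do
    post "/", :id => 1, :name => "bob", :text => "hi",
         :picture => Rack::Test::UploadedFile.new("avatar.png", "image/png")
    last_response.should be_ok
  end
end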

So when you're testing something, go ahead and look at the implementation. Just don't forget to look at the intent, too!

Friday, January 13, 2012

The Myth of the Passionate Employee

I've had conversations with two separate people in the past week, and in each case the person I was talking to said, "No, I really need an employee. I want someone who's going to be passionate!"

It piqued my interest. When you really sit down and parse those two simple sentences, there are several assumptions and definitions in there that are fascinating.

First there's the notion that an employee - rather than a contractor - is needed to display passion. I'm not sure I buy this, although I can see where they're coming from. After all, if you pay a contractor to build something, they're going to do the job and expect to be paid for it, and that's all. If they're good at what they do, they will do a good job and do what it takes to deliver quality work on time. An employee is theoretically more closely tied to the future of the company and therefore will display more passion about it. Unfortunately, this is a loose correlation at best. There are passionless employees and there are contractors who are highly passionate about each of their clients.

Second there's the definition of "passion." I do not think this word means what my conversation partners think it means. After all, passion merely means "strong emotion". What they think it means is more along the lines of "cares deeply about the future and vision of the company and will work as hard as it takes to see that vision come true." And that is a very different idea. Both of the people I was talking to are company founders. They live and breathe these companies: they stand on the sidelines of their sons' soccer games thinking about closing the next client; they send emails - and reply to them - at all hours of the day and night; and they've sacrificed normal jobs for this. They want everyone to want their companies to succeed as much as they do, and to work as hard. That's what they call "passion".

Here's a little hint: It ain't gonna happen.

There are people who work very hard for dreams. They do it because they personally are getting something out of it. Maybe it's a chance to be rich ("I'm gonna be the next Google millionaire because I'm employee number 2!"). Maybe it's advancement in their career. Maybe it's fame (the guy who came up with Amazon AWS is a rock star in some circles). Maybe it's just because they're really interested in the work.

There are other people who won't work that hard. Your dreams and blood and sweat and tears are their job. They want to do their job, get compensated fairly for it, and go do something else.

Specifying an employee or a contractor won't guarantee you passion or lack thereof. The financial arrangements of the work - employment or contracting - really don't have anything to do with how hard they work and what they want for it.

Working hard or long is also no guarantee of quality. Ask yourself what you really need. Do you really want someone who will be there all the time responding to your emails and putting in the hours - regardless of how good the output is? Or do you want someone who is going to help you build your product well, even if that someone will only do it in 40 hours a week?

So the next time you say you want a passionate employee to work for you, don't. Step back and think about what you really mean. Do you want someone who will build good things? Or do you want someone who is always around? Me, I'll take building things.

Your choice.

Wednesday, January 11, 2012

A Rant On Learned Specialization

I've written a few times about specialization in various forms. It's a large point of consideration on small software teams. On the one hand, having a specialist on hand means that the work can get done generally more quickly and better. On the other hand, having a specialist means that when he's not available that kind of work just plain doesn't happen.

Ultimately, we as engineering teams generally have a few fairly concrete goals:
  • deliver functionality to our customers as fast as possible
  • don't kill ourselves - we make mistakes when we work too fast
  • create solid software so that it can survive the onslaught of usage (yay success!)
  • provide management with predictable production rates - they need to know when things are likely to happen, even if it's just an estimate. Most of the time we call this velocity.
None of that says anything about specialist or generalist, at least on the surface. If we have specialists, some parts of our goal become easier; we are more likely to create solid software, and we'll probably do it a bit faster. If we have generalists, though, we're more likely to be predictable; one person being gone doesn't stop everyone else from finishing things, and we have fewer bottlenecks since anyone can jump in to help.

Specialization is occasionally necessary. If you're doing something truly specialized - a new data storage technique, or high volume (read: millions per second) data processing - then you need a specialist. Or two or three. For most of us, though, we're not doing anything that specialized. We're building web applications, or apps for various devices, or installable software. We have to be good software engineers, but the problems aren't really that esoteric. Even in cases where the product is hugely specialized, there is still a large part of it that's just a solid program to do the more commonly needed parts (administration, logging, UI, etc.).

It's a lot like physicians. Most of the time what you need is a general practitioner who is going to know how to solve lots of different kinds of problems that many people need solved (chicken pox, colds, appendicitis). You really only need a specialist when something truly weird is going on - Wilson's Disease, for example.

Here's the rant part:
There is another, more insidious, form of specialization that I like to call "learned specialization". This is where a person or a team who could be a generalist decides to become a specialist. This is what's happening when you hear statements like, "I'm the architect. Therefore I have to do all of the design and you can't start that feature yet because I haven't designed it." Or you hear things like, "Well, Jeff should do that because he knows the frobble module." Or "I'd really rather not tackle that story because I haven't worked in the admin before." The team - or one person on the team - is creating unnecessary specialization. There's no reason that John can't learn the frobble module, even if Jeff already knows it. There's no reason that Jorge can't produce a design for a feature and get the architect to take a look (must assuage that ego!).

The worst part of learned specialization is that it's self-reinforcing. The longer that Jeff works on the frobble module, the better he'll know it and the more other team members will just avoid those tasks because Jeff is slowly getting faster and better at working with it. The more Janet avoids the admin module because she doesn't know it, the larger it will get and the more there will be that she doesn't know. If the architect does all of the design, no other team member is going to learn how to design software effectively.

Be vigilant against learned specialization. Any time you see incipient specialization occurring, take it as a training opportunity and get someone else in there. Ultimately, long term software success is about breaking down silos, creating an environment in which there are multiple points of redundancy, and encouraging an atmosphere of training rather than avoidance and burnout. You'll only get there with generalization.

Monday, January 9, 2012

Positive Negative Assertions

I've been working with someone learning how to test, and we ran across our first negative test case. The method we were testing looked something like this:

new_car_color = paint_car(color)

You feed it the color you want the car to be, and it returns the new color of the car. If you feed it nothing, it returns the current color.
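(For the sake of following along, here's a minimal sketch of a paint_car that behaves the way this post assumes - an invented palette and starting color, not the real implementation:)

VALID_COLORS = ["red", "blue", "green", "white"]   # invented palette
$car_color = "white"                               # invented starting color

def paint_car(color = nil)
  # only repaint when given a color we recognize
  $car_color = color if VALID_COLORS.include?(color)
  $car_color
end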

We'd been writing positive tests that looked something like this:

def test_paint_new_color
  color_i_want = "red"
  color_i_got = paint_car("red")
  color_i_want.should == color_i_got
end

def test_dont_paint
  color_i_want = "blue"
  color_i_got = paint_car("blue")
  color_i_want.should == color_i_got
  color_i_got = paint_car()
  color_i_want.should == color_i_got
end

So when we got to the first negative test, we talked about it and came up with this: "We want to test what happens when we try to paint a car a color that doesn't exist".

The junior tester then wrote up the following test:

def test_paint_no_color
  color_i_want = "not a color"
  color_i_got = paint_car("not a color")
  color_i_want.should_not == color_i_got
end

Well, it's a start. But our car could be any color as long as it's a color. Is that really what we want? What if we started red and the method painted the car blue whenever it got a color it didn't understand? We wouldn't have a clue!

Instead, we write the test like this:


def test_paint_no_color
  current_color = paint_car()
  color_i_got = paint_car("not a color")
  color_i_got.should == current_color
end

We make a positive assertion and confirm that the color of the car didn't change.


Even when we're doing negative tests, our assertions should be as positive as possible. We should assert that something is rather than asserting that something is not. After all, there is a large range of things that fit inside "is not" - and we might not like all of them! There is a much smaller range of things that fit inside "is". So wherever possible, we should make positive assertions, even when we're testing a negative case.

Friday, January 6, 2012

The Game Fad

There is a fad wandering around the software community these days: games. In particular, logic games like SET are becoming very popular in training exercises, interviews, conferences, etc. The basic idea is that playing these games can teach you to be a better tester, or developer, or product manager. Someone who is good at SET can readily see patterns, can show flexibility (use one card in multiple sets), and can make logical deductions. These are all characteristics of a good software engineer. So does this work? Or is this just the next version of the "how many piano tuners are in the U.S.?" questions that were so popular in the 1990s and early 2000s?

In the spirit of fair disclosure, I should mention that SET Enterprises is a client. So in my particular case, understanding SET is actually helpful for my job. For most people, though, games are a proxy for the actual work. They're an analogy come to life. Most of us don't build, test, and maintain games.

The trouble with software is that it is highly abstract, and so it's hard to really know that any two people understand things the same way. It's also hard to distinguish familiarity with something from "read an article about it", simply because usage of the same thing varies so much across different environments. For example, someone might say, "yes, we used factories" and be referring to test object creation with Factory Girl, and someone else might make the same statement and mean that they implemented the factory pattern. In addition, software engineering is context-heavy, and context is less shared than in some other industries. If a bunch of lawyers or CPAs are talking, they can all know that they went through the same base knowledge - the bar or the CPA exam - and so they can assume some commonalities. There is no software equivalent of Generally Accepted Accounting Principles (GAAP).

So in order for us to share ideas, we need to establish a baseline of knowledge and skills. When time is relatively short, say, in an interview, we seek out quicker proxies that highlight the skills we feel are relevant. Some of the best testers you will ever see are useless on their first day; they require in-depth product knowledge for their particular skills to shine. We can't do that in an interview or at a one-hour talk, so we find analogies that we believe highlight those strengths. We play games.

So is a game a useful analogy? If I'm good at SET, will I be a great tester? If I play a mean game of chess, does that say anything about my skills as a developer? Yes, but only if the skills are actually relevant to the kind of work I'm doing. The visual pattern matching in SET is great if I'm testing a UI-centric application, or something where design aesthetic and precision are important. The logic and sequencing that a good chess player displays are great if the product I'm developing is, say, a finite state machine. However, if I'm testing an API, the visual patterns aren't so useful. If I'm writing a simple login module, then advanced logic skills are less important than other skills.

Playing games is a lot of fun. So were those logic questions about why manholes are round. But before you put too much emphasis on them, ask yourself if they're really showing characteristics that are important for your particular situation. Then go play some games, if only for the fun of it.

Wednesday, January 4, 2012

Quick: RSpec::Core::Configuration::MustBeConfiguredBeforeExampleGroupsError

I upgraded my RSpec version to 2.7 in an old project today and immediately got this error:

arete:textaurant catherine$ rspec spec/*
/Users/catherine/.rvm/gems/ruby-1.8.7-p352/gems/rspec-core-2.7.1/lib/rspec/core/configuration.rb:471:in `assert_no_example_groups_defined': RSpec's mock_framework configuration option must be configured before any example groups are defined, but you have already defined a group. (RSpec::Core::Configuration::MustBeConfiguredBeforeExampleGroupsError)
from /Users/catherine/.rvm/gems/ruby-1.8.7-p352/gems/rspec-core-2.7.1/lib/rspec/core/configuration.rb:168:in `mock_framework='
from /Users/catherine/.rvm/gems/ruby-1.8.7-p352/gems/rspec-core-2.7.1/lib/rspec/core/configuration.rb:142:in `mock_with'
from /Users/catherine/Documents/turnstar/textaurant/spec/spec_helper.rb:25
from /Users/catherine/.rvm/gems/ruby-1.8.7-p352/gems/rspec-core-2.7.1/lib/rspec/core.rb:71:in `configure'
from /Users/catherine/Documents/turnstar/textaurant/spec/spec_helper.rb:17
from /Users/catherine/.rvm/gems/ruby-1.8.7-p352/gems/activesupport-3.0.10/lib/active_support/dependencies.rb:235:in `load'
from /Users/catherine/.rvm/gems/ruby-1.8.7-p352/gems/activesupport-3.0.10/lib/active_support/dependencies.rb:235:in `load'
from /Users/catherine/.rvm/gems/ruby-1.8.7-p352/gems/activesupport-3.0.10/lib/active_support/dependencies.rb:227:in `load_dependency'
from /Users/catherine/.rvm/gems/ruby-1.8.7-p352/gems/activesupport-3.0.10/lib/active_support/dependencies.rb:235:in `load'
from /Users/catherine/.rvm/gems/ruby-1.8.7-p352/gems/rspec-core-2.7.1/lib/rspec/core/configuration.rb:459:in `load_spec_files'
from /Users/catherine/.rvm/gems/ruby-1.8.7-p352/gems/rspec-core-2.7.1/lib/rspec/core/configuration.rb:459:in `map'
from /Users/catherine/.rvm/gems/ruby-1.8.7-p352/gems/rspec-core-2.7.1/lib/rspec/core/configuration.rb:459:in `load_spec_files'
from /Users/catherine/.rvm/gems/ruby-1.8.7-p352/gems/rspec-core-2.7.1/lib/rspec/core/command_line.rb:18:in `run'
from /Users/catherine/.rvm/gems/ruby-1.8.7-p352/gems/rspec-core-2.7.1/lib/rspec/core/runner.rb:80:in `run_in_process'
from /Users/catherine/.rvm/gems/ruby-1.8.7-p352/gems/rspec-core-2.7.1/lib/rspec/core/runner.rb:66:in `run'
from /Users/catherine/.rvm/gems/ruby-1.8.7-p352/gems/rspec-core-2.7.1/lib/rspec/core/runner.rb:10:in `autorun'
from /Users/catherine/.rvm/gems/ruby-1.8.7-p352/bin/rspec:19

The solution is twofold:
  1. Convert all old-style rspec includes (require File.dirname(__FILE__) + '/../spec_helper') to new-style includes (require 'spec_helper') - see the sketch below.
  2. Make sure that all your spec files have "require 'spec_helper'", even the ones that don't have any specs in them. I had one spec file where all the specs were commented out (I know, I know! I'm fixing it!); it happened to be loading first and was causing the above error.
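For reference, the top of each spec file ends up looking roughly like this after the change. This is just a sketch - the "Widget" group is a placeholder, not anything from the real project:

# Old style - delete requires that look like this:
# require File.dirname(__FILE__) + '/../spec_helper'

# New style - a plain require, which works as long as the spec directory
# is on the load path (the rspec command normally takes care of that):
require 'spec_helper'

describe "Widget" do           # placeholder example group
  it "does something" do
    true.should == true
  end
end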

Tuesday, January 3, 2012

Improve, Then Brag

One of the hallmarks of the new year is ambition. This year, after all, will be different! This year we will be better, stronger, more efficient, and just all around great. Plus, we've all just come off a relatively slow time, maybe even had a vacation. It's time to get back to normal and to get back to work.

There's a temptation to take the zeal a bit too far. After all, the velocity points you got in December were dragged down by vacations, holidays, time off, etc. So we can increase the amount we think we'll do!

Just be careful that everyone on your team agrees.

It's fine to think you'll be better, stronger, faster; we all want to improve. But wait for some actual evidence of improvement before you go making claims based on it. See your velocity go up before you start bragging about those features that will be ready earlier than planned, or the extra features that will make it in. After all, not all improvements happen as fast as we want.

It's simple:

Improve first. Talk about it second.

Get those two backwards and you'll just depress your team and anger your customers by setting another set of unrealistic goals. So avoid unrealistic goals, don't make promises you aren't sure you can keep, and look for improvements before you assume they're out there.