I just took a quick online quiz, and I got 7 out of 10. Go me! I'm awesome!
If the average score is 5 out of 10, then yeah, I'm pretty awesome.
If the average score is 9 out of 10, well, I'm maybe not so awesome. (At least in this one way!)
I have a number - in this case a score: 7
I even have a scale: out of 10
But without knowing about how other people did, I don't know very much about how I did. Let's generalize:
Numbers, even on a scale, are generally pretty meaningless without comparators.
If you find yourself looking at a number - whether it's a score, a latency figure, a max users figure, or whatever - ask yourself how this number compares with similar numbers. Only then will you really know what you're looking at.
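To make that concrete, here's a toy sketch (the score lists are invented for illustration) showing how the very same 7 out of 10 reads completely differently against two different populations:

```python
# Toy illustration with made-up scores: the same 7/10 means different
# things depending on what everyone else scored.
def beats(score, others):
    """Fraction of the other scores this score beats."""
    return sum(s < score for s in others) / len(others)

hard_quiz = [4, 5, 6, 5, 4, 5, 6]    # average around 5
easy_quiz = [9, 8, 10, 9, 9, 8, 10]  # average around 9

print(beats(7, hard_quiz))  # 1.0 -- a 7 beats everyone here
print(beats(7, easy_quiz))  # 0.0 -- a 7 beats no one here
```

Same number, same scale, opposite conclusions - the comparator is what carries the meaning.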
I've been doing a fair amount of work lately with people who are remote. We're not in the same city or even in the same time zone, but we're working on the same project and using each other's code. It's not always easy, but we're making progress.
In order to do this successfully, we have two simple rules:
Write a README
Show me what you see
Let's talk about a README for a minute. I know that when I'm done writing some piece of code, it's intuitive and obvious how to use it.... to me. That doesn't mean it's obvious to anyone else on the planet. So I write a README that tells people how to use this thing I've created.

The more background I share with someone, the shorter the README can be. If I'm adding a small feature to a Rails application and my colleague is a Rails engineer who also knows the application, the README can probably just say, "I added a spec to show it - check out the commit." He'll know to go look at my latest commit, where to do that, how to understand the RSpec test I wrote, and so on.

With less shared background, the README gets more detailed. For example, I wrote a Python script recently and handed it off to an engineer to run. The catch: the engineer doesn't know Python. So my README was very detailed, including how to set up Python, how to check out the project, the exact command line he needed to run, and an example of a successful result. Why? There's not much shared background, so I can't assume he would do the same things I did, have the same setup I do, or read output the same way. This is no denigration of the engineer I was working with; he simply has a different background. (You should have seen the README he sent me for the .NET utility he wrote - I had none of that background!)
All in all, it's pretty simple. Write a README telling someone what you did. The less they know about how you think and how you do things, the more detailed the README should be.
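For the Python-script handoff I described, the detailed end of the spectrum looked roughly like this (the commands, names, and sample output here are invented for illustration, not the actual README):

```text
README
------
1. Install Python (https://www.python.org/downloads/) and confirm that
   `python --version` works in a terminal.
2. Check out the project:
   git clone <repo-url>
   cd <project-dir>
3. Run the script with the input file:
   python the_script.py --input data.csv
4. A successful run ends with a summary line, something like:
   Processed N rows, 0 errors.
```

Note that it spells out not just the command but what success looks like - with no shared background, "it ran" isn't something the reader can judge on their own.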
There's some bad news, though. No matter how good your README, someone's going to have trouble following it one day. This is where the second rule comes in: always show me what you see. It's a classic user call for help: "I tried it and I got an error!" We engineers do it, too, and it really doesn't help solve the problem.
So if we run into a problem and need help, we have to show each other what we're seeing. On a command line, that means running a few basic, helpful commands and sending the console output to whoever's helping. Let's take that Python script (which didn't go right for the user the first time, despite the great README). To help me figure out why it was working for me and not for him, he sent me a console transcript that showed this (names changed to protect the innocent):
$ python --version
$ git status
# On branch master
nothing to commit (working directory clean)
$ git pull
$ command -to -run the script
ERROR WAS HERE
This way I have a lot of information. I can already tell:
what version of python he's running
the exact error he got
where his source tree is located and if there's anything funny about it
that he's on a Mac, not Windows
that he doesn't have any interfering local changes
how he ran the script, including arguments and options
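The transcript above can even be automated. Here's a minimal sketch of the idea - the command list and helper name are my own invention, not from the actual handoff:

```python
# Hedged sketch: run each diagnostic command, capture stdout and stderr
# together, and produce one transcript to paste into an email or chat.
import subprocess

def gather_diagnostics(commands):
    """Return a single '$ cmd' + output transcript for all commands."""
    sections = []
    for cmd in commands:
        # shell=True so the command string runs exactly as typed
        result = subprocess.run(cmd, capture_output=True, text=True, shell=True)
        sections.append(f"$ {cmd}\n{result.stdout}{result.stderr}")
    return "\n".join(sections)

# These commands are illustrative; use whatever your project needs,
# ending with the exact command that failed.
report = gather_diagnostics(["python --version", "git status"])
print(report)
```

Even a crude script like this beats "I got an error!" - it guarantees the helper sees the versions, the state, and the exact invocation every time.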
When we're remote from each other, communication is high latency; there are minutes between messages at a minimum, and sometimes more. We don't want to go back and forth many times; fewer round trips is faster. So he shows me what he sees, and that makes it much more likely I'll figure out what's different in fewer tries. And then we can both get back to work faster!
For the first time in a long time I'm working on testing a system in which I can't get at the logs. All I can see is what the end user sees - what data went in, and what the results output are.
It's immensely frustrating.
It's also hugely educational.
I hate being blind to what's going on under the covers because it makes me very dependent on the developers who can get at the logs. I'm unable to be as precise as I would like to be in reporting results, simply because I can't distinguish different behaviors. For example, if I get a response from the API that contains no results, then I don't know what happened. It could be any one of the following:
I fed it data that shouldn't have gotten results
I called the API incorrectly
it wasn't done yet, and I just needed to be a bit more patient
an error or bug occurred
I have no way to tell whether there's a problem and what kind of problem it might be, which means I'm running to the developer for every test that doesn't return exactly what I expect. Codependence is not normally my style!
On the other hand....
What I see is what external consumers of the API see. The customers using the product don't get to look at logs, either. So if there's not enough information to figure out what's going on, then customers may have the same problems that I'm having. I've learned a lot about the usability of the API, which it turns out was maybe not so good. And we've been fixing it, making it more transparent what's going on - whether it's bad calls, or data that doesn't end up with results, or a bug in the underlying system.
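One way to make those states distinguishable from the outside - sketched here with field names that are mine, not the real API's - is an explicit status in every response:

```python
# Hypothetical response envelope: a bare empty result set is ambiguous,
# but "complete with zero results" vs "pending" vs "error" is not.
def build_response(status, results=None, error=None):
    return {"status": status, "results": results or [], "error": error}

done_but_empty = build_response("complete")                      # nothing matched
still_running  = build_response("pending")                       # be patient
bad_call       = build_response("error", error="unknown field")  # caller's fault
```

With an envelope like this, a blind tester (or a customer) can tell "your query matched nothing" apart from "try again later" and "you called it wrong" without ever seeing a log.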
So even if you're not blind, like I am, spend some time pretending you are. Ignore your logs and your code traces. Being blind is sometimes as illuminating as seeing.
There's an adage that they teach you in manager school:
Hire slowly. Fire fast.
This is a polite way of saying that a bad hire costs a whole lot of time and potentially money, so be careful about bringing them on and don't hesitate to ditch someone who "isn't working out" (translation: "He WHAT?! AGAIN!?" followed by optional weeping).
On the surface of it, this is sage advice and I completely agree with it. In practice, though, it's darn hard.
Looking at the hiring side, the safest hire is no hire at all. Obviously, that's not going to work; you need to hire someone because the current team can't do all the work all the time. You need a skillset, or at least another pair of clueful hands.
Looking at the firing side, getting someone to leave is emotionally wrenching for everyone involved: the employee, the manager, the rest of the team. Occasionally things turn truly nasty, and threats of litigation erupt. Do it too often and you'll also give your company a reputation for being a bad place to work. Firing is really not something you want to do.
So what's a manager to do?
There's no one answer. There are, however, partial answers:
Listen to the "hire slowly" part. It's better to be understaffed than to have to fire too often.
Use contracting. When a contractor leaves, that's just the end of a contract; it's not firing. It's much less wrenching on the team and doesn't lead to the same hard feelings.
Contract-to-hire is a valid way to go. It's harder to hire, but it gives everyone an out if things don't work (that means the potential employee can leave, too, so make sure you have a good work environment so he'll want to stay!)
It's best to hire a great person who will complement and enhance the team, but it's impossible to really know that until you've gotten into it. In the meantime, be careful, and give yourself and the candidate as many chances as possible to decide this isn't working.... until that happy day when you both decide it is!
This is a quick one for all the Ruby types out there. I installed the heroku gem onto a fresh Amazon Linux box today, and then this happened:
[ec2-user@newbox ~]$ heroku
:29:in `require': no such file to load -- readline (LoadError)
	from :29:in `require'
	from /usr/local/lib/ruby/gems/1.9.1/gems/heroku-2.9.0/lib/heroku/command/run.rb:1:in `'
	from :29:in `require'
	from :29:in `require'
	from /usr/local/lib/ruby/gems/1.9.1/gems/heroku-2.9.0/lib/heroku/command.rb:17:in `block in load'
	from /usr/local/lib/ruby/gems/1.9.1/gems/heroku-2.9.0/lib/heroku/command.rb:16:in `each'
	from /usr/local/lib/ruby/gems/1.9.1/gems/heroku-2.9.0/lib/heroku/command.rb:16:in `load'
	from /usr/local/lib/ruby/gems/1.9.1/gems/heroku-2.9.0/bin/heroku:13:in `'
	from /usr/local/bin/heroku:19:in `load'
	from /usr/local/bin/heroku:19:in `'
The gem install missed a dependency. This fixes it:
gem install rb-readline
That was easy, but took a good 10 minutes to figure out. Here's hoping you don't stumble over the same thing!
Recently I was discussing a problem with an engineer I work with, and we came down to two ways we could solve it. They were basically equal; each had benefits and drawbacks, but neither was obviously the better choice.
There is, however, one big difference: one of the ideas followed standard conventions. The other violated them.
Let's talk about conventions for a second. Conventions are the well-worn paths in software. They're the habits that engineers pick up. Creating a branch per major feature and merging into master (thus keeping master pretty stable) is a convention of a lot of developers using Git. Giving a gag (read: really ugly) desk decoration to the developer who most recently broke the build is a convention followed by many engineering teams.
Conventions are a feedback loop. Tools help create conventions, and then conventions dictate what the tools do and how they work. Over time this makes it a lot easier to follow conventions. The tools support it better (fewer workarounds needed!). New developers joining the team can get up to speed more quickly.
So if our choices are equal in every other way, then choose the one that follows convention.
I've been interviewing devs and testers frequently lately, and I noticed I was doing something that in retrospect seemed kind of odd:
I almost always asked about superlatives.
"What's your favorite bug?"
"Tell me the thing you most disliked about the test framework you built."
"Describe the coolest project you ever led, and who you worked with on it."
Most, biggest, best, worst, least. All of it is extremes.
Extremes are memorable, but most of life is NOT an extreme. Most of life is somewhere in the middle. You're only going to have one best and one worst at any point in time. You're going to have lots of in between.
So why don't I ask about that? After all, most of the time working with this person will be spent on the stuff in the middle.
Let's talk about the best and the worst, but let's also talk about the things that happen the most.... all the stuff in between. There's no adrenaline rush, no despair, no exultation and cheering. I want to know that you and I can work effectively together when all those extremes are absent. What do we do on an average Thursday? Now THAT's interesting.
As a happy homeowner, I occasionally get to do fun things like hire contractors to come in and make the place look better. (Note: Sarcasm present. This is not actually fun. It's both expensive and truly frustrating.) Our last project just finished last Friday, and involved replacing damaged sheetrock and flooring from a water leak in the upstairs, followed by painting, restaining the floors, and hanging a new light fixture. Overall we worked with a good group of people, and I'm pleased with the end result.
But so much of it was like a software project.
We basically did this:
demolition (okay, this part was kind of fun)
testing for water damage
new sheetrock
painting
refinishing the floors
new light fixture
Fairly straightforward and almost totally linear. If this were a software project, it would be the easiest thing to plan, ever. But then life intervenes.
For each step, we'd get a call from the project manager: "I can have a guy there tomorrow, but my best guy's on another job tomorrow and he can't come until the day after tomorrow." Translation: Do you want it done fast? Or done right?
Or I'd be talking to a worker and he'd say, "I can try the patch with this [weird-looking thing that's what would happen if a drill and a blow dryer had a baby], and it'll be okay but it might warp a little, or I can just let it air dry and come back tomorrow." Translation: Do you want it done fast? Or done right?
(And no, I couldn't pick both. See? Just like software!)
Now the easy thing to say would be: "Do it right! What's another day?" That's easier to say when you're not living in a construction zone. It's also easy to say when you don't have guests coming.... real soon now!
Is this sounding like software yet? Compromises to meet a hard date, check!
So we chose. We can do demolition fast. Testing we didn't get to control. Sheetrock had to be right; that's hard to change. Painting wasn't hard, so we chose the guys who were available (and who did a perfectly acceptable job). The floor had to be right all the way - we walk on it all the time, and any variations in color or finish are both very expensive to fix and really noticeable.
In short, we did some fast, and we did some right. Just like software.
Keep in mind you don't have to make a single universal choice. You get to pick every time you have a choice. So pick frequently and pick what's right for that decision. You can't have fast and right, but you can control what's fast and what's right - and that will give you the closest to your ideal.