Wednesday, November 30, 2011

On Tone

I was reading a blog post the other day that included a number of very similar comments. The blog post itself wasn't hugely important (feel free to read it here). The important part was that the blog post author was showing an implementation of something, and that implementation could have been simplified by using a different API call. Several of the comments pointed out the existence of this other API call.

This is all pretty simple, and quite common, but let's look at HOW they pointed out that the author's implementation could have been improved. Here are three comments that all say the same thing:



They take quite different tones, even though they're saying the same thing. Comment 1 is very short and neutral in tone. Comment 2 is quite aggressive, even belittling (or teasing, depending on how thick your skin is) the author. Comment 3 is much more gentle, posing feedback in the form of a question, even though the question almost certainly presumes an answer of "no advantage; I should use DictReader." Comment 3 is also the only one that provides a link to the referenced API call.

None of these comments is inherently better or worse than the others. Using the tone of comment 2 risks making others think you're kind of a jerk. Using the brevity of comment 1 probably works best when it's safe to assume some level of knowledge (e.g., that the reader can go find the docs for csv.DictReader). Comment 3 is the least likely to offend but the most likely to make the speaker look tentative or soft. The point is that you can express the same information many different ways. Take into account your relationship with the recipient and the type of comment, and use that to find an appropriate tone.
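For context, the simplification the commenters were pointing at looks something like this. The CSV data here is made up for illustration; the point is just that csv.DictReader does the header-to-field mapping you'd otherwise do by hand:

```python
import csv
import io

# Made-up CSV payload standing in for the blog post's data.
data = "name,language\nAda,Python\nGrace,COBOL\n"

# Manual approach: read the header row, then zip each row against it.
rows = list(csv.reader(io.StringIO(data)))
header, body = rows[0], rows[1:]
manual = [dict(zip(header, row)) for row in body]

# DictReader does the same mapping for you.
simpler = list(csv.DictReader(io.StringIO(data)))

print(manual == [dict(r) for r in simpler])  # True
```

Same result either way; DictReader just saves you the bookkeeping.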

Monday, November 28, 2011

Cute On Occasion

There is a trend in consumer software - and to some extent in business software - to be cute. That can be a cute logo, or fun field naming in forms, or humorous error pages. It's all in good fun, and can frequently help personalize software. After all, software is kind of a remote, sometimes dull thing. Why not have some fun with it!?

I'm all for it. I'm even all for fun with errors or error pages (see the Twitter fail whale, for example).


Be careful not to take cute too far. Cute is only fun when it's occasional. When it's frequent, cute just becomes frustrating.

So when you're going to do something cute and fun, great. Before you go for it, though, ask yourself: "How often will my users see this?" If the answer is "not very often," then go for it! If the answer is "kind of a lot" or "every day," then don't be cute.

After all, it's all fun, until it isn't. Keep it fun.

Monday, November 21, 2011

Watching the Logs Go By

I was sitting at a client site the other day, watching our production logs scroll by. And then the client boss came by:
"You're just sitting there!"
"I'm watching the logs go by."
"Yeah, just sitting there!"

Not exactly. I'm learning from the system in a normal state. Understanding what's normal is the first step to figuring out what's wrong... when there's something wrong. For example, knowing that the widget didn't frobble and that must mean the frobbler crashed... well, I can only know that if I know that the widget normally frobbles, and specifically if I know that's in the log. If I didn't know it was usually there, I wouldn't notice its absence. To take another example, if I'm looking at normal logs and noticing that third party API calls are taking about 3-4 seconds, then there won't be any errors in the logs, just the usual timestamps and info messages. However, that might be a problem - maybe those API calls should be taking 1-2 seconds - even though the system is behaving "normally".
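That second example can be made concrete with a few lines of code. This is just a sketch: the log format and the duration_ms marker are invented for illustration, but the idea is that a latency baseline comes from ordinary, error-free log lines:

```python
import re
from statistics import median

# Invented sample of "normal" log lines; note that nothing here is an error.
log = """\
2011-11-21 09:00:01 INFO api_call vendor=acme duration_ms=3200
2011-11-21 09:00:05 INFO api_call vendor=acme duration_ms=3900
2011-11-21 09:00:09 INFO api_call vendor=acme duration_ms=3400
"""

durations = [int(m.group(1))
             for m in re.finditer(r"duration_ms=(\d+)", log)]

# The logs are clean, but the baseline itself may be the problem:
# a median of ~3.4 seconds is worth questioning if you expected 1-2.
print(median(durations))  # 3400
```

Nothing in those logs would ever trip an alert; only a human who knows what "normal" should look like would flag it.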

Take some time to watch the system as it behaves normally. Only by understanding what normally happens can you then figure out what is abnormal in a problem scenario.

Friday, November 18, 2011

Jammit Lessons

I just put Jammit into production in one of my Rails applications. We had... well, kind of a lot... of assets, mostly JS and CSS, and it was getting both hard to work with and rather taxing on my servers. When one page load takes 30 requests, it's time to get some asset management in place. We're still on Rails 3.0 (a couple of gems we depend on aren't available for 3.1 yet), so I went with Jammit for now.

Overall, the move was easier than I expected. However, there are a few things that tripped me up. That's what today's blog post is about: the lessons I learned implementing Jammit.

Lesson 1: LOVE
First of all, I would like to compliment the Jammit guys. It's supremely easy to use, and it just plain works, once you figure out what you're looking for.

Lesson 2: Use external URLs for compiling
Many of my requests are for images, almost all of them under 10K. I embedded these images in my css files. Jammit makes this easy; you just set embed_assets to on, then make sure your images are in your CSS (i.e., background images instead of image tags in the source). Then you just type:
jammit --base-url "http://my-server"

I got tripped up, though: the base-url must be an external URL or IP. If you specify localhost, the images don't embed. Solution: use an external IP.
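For reference, here's roughly what the relevant slice of Jammit's config file looks like; the package name and asset paths are placeholders, not my actual setup:

```yaml
# config/assets.yml (Jammit's default config location)
embed_assets: on

stylesheets:
  common:
    - public/stylesheets/*.css

javascripts:
  common:
    - public/javascripts/*.js
```

With embed_assets on, any sufficiently small images referenced from those stylesheets get inlined as data URIs when you run the jammit command.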

Lesson 3: Embedded images look like requests
When I did get the images embedded, they still looked like they might be requests in my browser. Here's a screenshot (note: this is a work in progress).

Note that all of the "data:image/png" lines look like requests. They're not. These images are successfully embedded. I was able to confirm this using a network packet tracer. Solution: don't panic!

Lesson 4: Don't use what you don't need
This isn't related to Jammit, but as I was doing this exercise, I noticed some assets we were importing but not using. Solution: take the opportunity to clean up a little bit!

Jammit isn't the be-all and end-all; tragically, it has not yet cured world hunger. But if you're on Rails 3.0, it's a great tool, and my hat goes off to those who built and maintain it. Quick, easy, and with only a few quirks - thanks, guys!

Wednesday, November 16, 2011

The Power of Plain Text

Like it or not, email is still a common way to communicate. I get all sorts of emails, from newsletters to personal mail to diatribes (those are fun) to emails from clients or people I'm working with. Because I work with software, a decent amount of that email contains code of some sort: a method; a stack trace; an error message; a soap or json object serialized and printed. So what do I do with it? Frequently, I copy it. For example:

  • copy an id into a command prompt so I can find it in a log
  • copy a method into a class so I can try to run it
  • copy a serialized object to a buffer so I can diff it with something else

Now, we can argue about whether email is the most appropriate form for all of this, but that's a bit academic; I don't really control what other people do or send. Rather, let's look at one useful point:

This is all a lot easier if the email is plain text.

With plain text email, I don't get weird formatting issues. I don't accidentally grep for HTML. I copy-paste instead of copy-paste-delete. I can get you whatever information you need a whole lot faster.

So go ahead and send whatever you want in email. Just send it in plain text, please.

Monday, November 14, 2011

Your Test Code Should be Defensive

One of the things you learn when you first start to write code that will be used by others or with other code is this: "write defensive code." It's shorthand for not trusting inputs or external dependencies, but checking them before you try to use them, and it involves things like validating inputs, calling external services asynchronously, etc. Well, for all you test automation engineers out there:

Test code should be very defensive.

After all, we're running test code because we want to make sure that an external system (the system under test) does what we expect. If we knew beyond the shadow of a doubt that it worked, then we wouldn't bother running the test. So, we're only running the test because we don't know with 100% confidence that it will work. And that's when we have to be defensive in our coding. We're already looking for things "that should never happen", so let's write our test programs so they can handle them.

Make sure you at least consider whether you might need:

  • Asynchrony and/or timeouts. Does the system not returning mean your test will hang?
  • Null checks. "That should never be null" is a lot easier to trace if you check when you first see it rather than waiting for it to blow up somewhere later in your code.
  • Try/catches. "The program should never throw an exception" - key word "should".
  • Results inspection. Just because the program returned something and didn't error doesn't mean that return matches what you thought.
Basically, your goal is to produce either a "pass" or a legible failure. Keep in mind that automated tests tend to (1) have a long life; and (2) be run mostly by people or systems other than the ones who wrote them. Just because you, the test author, know what a result means, that doesn't mean it'll be obvious to the QA engineer evaluating the output a year from now (even if that QA engineer IS the test author!).
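Pulling those four points together, here's a minimal sketch of a defensive test. The system under test, call_api, is a stand-in; a real version would be a network call, which is why the timeout parameter exists even though the stub can't hang:

```python
def call_api():
    # Stand-in for the system under test; imagine a network call here.
    return {"status": "ok", "count": 3}

def run_test(api=call_api, timeout_s=5.0):
    """Return 'pass' or a legible failure message, never a raw traceback."""
    try:
        result = api()                # real code would enforce timeout_s here
    except Exception as e:            # "should never throw" - key word "should"
        return f"FAIL: api raised {e!r}"
    if result is None:                # "should never be null" - check anyway
        return "FAIL: api returned None"
    if result.get("status") != "ok":  # inspect the result, don't just trust it
        return f"FAIL: unexpected status {result.get('status')!r}"
    return "pass"

print(run_test())  # pass
```

Note that every failure path names what went wrong, so the QA engineer reading the output a year from now doesn't have to reverse-engineer the test to interpret it.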

Keep your tests defensive, keep your error messages useful, and let your tests have a good long life.

Wednesday, November 9, 2011

Trusting the Code

I'm working on a project now that involves writing some automated system tests. These are pretty simple, in the grand scheme of things - functional smoke tests and small load tests. I write them, run them against QA until they pass (fixing the test and/or the system along the way), and then hand them off to the QA team for future use. Pretty straightforward.

And then.... building trust begins.

You see, after the tests have been handed off to QA, they don't always work every time. Sometimes they fail because a new build broke the system. Sometimes they fail because we're pointing them at a different environment that isn't configured correctly. And occasionally they fail because there's a bug in the test.

Regardless of why the test fails, when it fails, the first question is: "Is there a problem, or is the test just wrong?" This is where trusting the test enters the picture. It takes time and repeated usage to build trust in any code base - whether that's a system or a test.

So when something happens that's unexpected or undesirable - a test failure, a system response - don't be offended if the first thing you, the author, hears is, "Is it your code?". That's just trust that's not quite there yet. Figure out what's going on, and over time the test (or the system) will get better, and trust will come. Eventually, you'll stop getting asked, "Is it your code?" and start getting asked, "What went wrong?" - that's trust.

Monday, November 7, 2011

If You Had Nothing To Do Today...

If you had nothing to do today.... what would you do? If all of your assigned tasks magically disappeared, or if you couldn't get to your source tree/test system/primary work environment.... what would you do?

Give yourself 30 seconds to think about it.

This is your subconscious talking to you. If you can't come up with anything, that's a sign that you're either in the most focused job ever, or you're bored and kind of disengaged. After all, there's the stuff you're supposed to do, sure. Those are external requirements and needs - things that "management" has decided are important.

There are also your internal requirements, needs, and ideas. These are the things that don't make it onto the backlog, or that you haven't told anyone about because they're not really bothering anybody else. Those are the things you would do if you didn't have any external requirements or needs. And if there's nothing - nothing at all - that you want to do better, no itch you want to scratch, no idea you want to try out, then you are in a big rut.

The external requirements are the world talking to you about this thing you're doing. The internal requirements are the whispers that say, "I care about this thing I'm doing." They're the things that turn you from hands on the keyboard to an engaged team member. And an engaged team member is almost always a happier team member.

So get engaged. And if you're not, then if you had nothing to do today..... you should find something interesting, either where you are now or on another project.

Wednesday, November 2, 2011


Last week I went to several events, including a testing conference, a networking/demo event, an accelerator event, and a symphony performance. One thing I heard at each of them was the concept of "professional".

At the symphony: "She [the soloist] looks like a cake topper! So unprofessional!"
At the testing conference: "Well, I really think that the onus is on us to be professional, even in ethical grey areas."
At the networking event: "So, are you a professional entrepreneur, or is this a side thing?"

There is a whole lot of drama wrapped up in the term "professional." The word simply means "paid to do something," but it carries a lot of underlying meaning that basically amounts to "conforms to my expectations of what a good ____ would do or say." To call someone professional or unprofessional is to judge them.

So before you go using this "professional" shorthand, ask yourself what you're really trying to say. There are more precise words out there. Use them.

Tuesday, November 1, 2011

Agent of Disruption

It sucks, but it happens sometimes: your client loses confidence in you. Often this is the result of an event - a production outage, or a big bug, or an embarrassing demo, or a completely blown deadline. Sometimes it's an accumulation of little things. Regardless of how you got here, you've got a client who no longer trusts you.

You don't want to lose the customer, so what do you do? How do you win back their confidence?

Slowly. By doing trustworthy things. By not making the same mistake.

And how do you buy the time you need to restore confidence?

Enter the Agent of Disruption.

This can be someone external or internal. Most of the time it's a person, but it can also be a tool if the situation warrants it. In the end, the client has lost confidence not only in the particular situation but in the environment and the team that produced that situation. Therefore, to regain confidence, the environment and/or the team needs to change. The introduction of a disruptive agent shows recognition of that need for systemic change, and is the first step toward restoring confidence. It is a non-verbal indicator to the client that you really understand their concern and are aligned with the breadth of change needed. Actual change takes time, and it takes even more time to see the effects of change - which is what rebuilds confidence. A disruptive agent buys you time to make that change.

Of course, you have to let the disruptive agent actually disrupt. Simply introducing the agent doesn't solve the underlying issue; it just buys you time to do so.

If you have a client who has lost confidence, then you need only three steps:
1. Bring in a disruptive agent to show commitment to fixing the problem and restoring confidence
2. Actually fix the problem
3. Be patient. Confidence is business speak for trust, and trust is far harder to earn the second time around

None of us likes to be in this situation, but if it happens, we can earn back our client's confidence. We just need to show we want to do so, and then do so.