Friday, June 29, 2012

Engineering Words

I've written before about words that have special meaning in engineering. But wait, there are more! There's a whole set of words that engineers use with each other all the time. Then I talk to my coworkers in marketing... and they look at me like I just pulled out a dictionary and picked the most obscure words in it!

It's jargon. Pure and simple. Jargon doesn't have to be words that have no meaning outside the profession. Sometimes the words are just uncommon outside your profession, or have different meanings!

So, what's some engineering jargon?

Canonical
In general usage, canonical means "accepted" or "authorized". In engineering, canonical means authoritative: the single best or most archetypal instance of something. Think CNAME ("canonical name"), or the canonical source of some information (the one that's guaranteed to be right).
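
To make the CNAME example concrete, here's a minimal sketch using Python's standard library; the hostname is just a placeholder.

    # gethostbyname_ex returns (canonical_hostname, alias_list,
    # ip_addresses). If the name you ask about is an alias (a CNAME),
    # the first element is the authoritative name it points to.
    import socket

    canonical, aliases, addresses = socket.gethostbyname_ex("www.example.com")
    print("canonical name:", canonical)
    print("aliases:", aliases)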

Idempotent
This one shares its mathematical meaning, but I've heard it a lot more in engineering and math than in general usage. Formally, something is idempotent if applying it twice gives the same result as applying it once. In practice, that means calling the same function again with the same inputs has no additional effect.
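
Here's a minimal sketch in Python (the names are made up for illustration):

    # An idempotent operation: calling it once or a hundred times with
    # the same input leaves the system in exactly the same state.

    subscribers = set()

    def subscribe(email):
        """Add an email address to the subscriber list (idempotent)."""
        subscribers.add(email)  # adding an existing element changes nothing

    subscribe("pat@example.com")
    subscribe("pat@example.com")  # the second call has no additional effect
    assert subscribers == {"pat@example.com"}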

Recursive
This is solving a problem by having a function call itself on a smaller version of the same problem. Solving a recursion problem is a really common interview test for developers. Surprisingly, when I talk with people in marketing or in finance, they use the same trick - applying something repeatedly to get to an ultimate solution. They just call it things like, "lather, rinse, repeat" or "do it again".
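
The classic interview version looks something like this minimal Python sketch:

    # Recursion: a function that calls itself on a smaller version of
    # the same problem until it hits a base case.

    def factorial(n):
        """Return n! computed recursively."""
        if n <= 1:                   # base case: nothing left to do
            return 1
        return n * factorial(n - 1)  # recursive case: shrink the problem

    assert factorial(5) == 120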

Trivial
Here's another word that has slightly different meanings to different people. For many people I talk to, trivial means "quick". For a lot of engineers I know, though, trivial means "I know how to do this and there aren't any minefields." It doesn't necessarily mean fast, although that can certainly happen, too!

What engineering jargon do you use?

Monday, June 25, 2012

A Good Working Day

One of the awkward things about summer is the lure of the outside. When it's a beautiful day then reading in the sun or taking a walk in the Arboretum sure sounds better than working. (And I love my work!)

Days like today, though, are great working days. Here's roughly what it looks like outside right now.

[Photo: the rainy view outside.]

Somehow the idea of taking my lunch to the park and feeding the baby ducklings just isn't enticing.

Happy productive rain day, everyone!

Tuesday, June 19, 2012

Better Prep, Faster Meetings

What's the minimum meeting length? Not for standup or other ritual meetings, but for an actual meeting with someone you don't speak with often? 30 minutes? 45?

Sometimes you don't have that kind of time. Sometimes you just have to put your head down and get what you need out of the meeting. This is particularly true of "starter" meetings - introductions, tutorials, etc.

For example, I have a meeting scheduled tomorrow with a very simple agenda. Here's the whole thing, copied from the meeting invite:
"Understand if product meets our needs"

That's it. It's not even a particularly complex product. The vendor asked for an hour, but due to scheduling issues we're going to have twenty minutes at most. So, can we get what we need in a 20-minute meeting? I say yes, as long as we're all very prepared.

What do I want out of the meeting? I want to know if the product has various features, and I want to check for a few common gotchas to see if there are any extensibility problems.

What does the vendor want out of the meeting? A sale. Or at least a step closer to a sale.

So the real agenda is much less vague. It's going to go like this:

  • (3 minutes) Hellos and who are we
  • (2 minutes) Overview of project
  • (12 minutes) I ask questions and the vendor answers immediately or notes which ones require more research to answer.
  • (3 minutes) Thank you and next steps

That's it. You'll notice that with detail comes responsibility. Before the meeting I have to have a very crisp list of questions to ask. (They'll take the form "Does your product support print to PDF?" "What are the formatting options for that kind of printing?") The vendor knows we're tight on time, and is bringing his senior sales engineer to the meeting so they can answer as many questions as possible on the spot.

Because I'm prepared, and because the vendor knows to be prepared, we'll get through the meeting. Anyone calling into the meeting without preparation is going to be completely lost. And that's okay - we don't have time to "catch people up". This is about accomplishing things. Keep up.

So if you're in a hurry, go ahead and do the meeting. Just be prepared.

Friday, June 15, 2012

Is "Acceptance" a Good Term?

Most of my clients use some variation on agile processes. Most of them use something like stories to define work. They call it different things - tasks, features, work items - but they're all basically the same thing: units of work that need to get done and that provide value to the customers.

In any event, one of the things we do with stories is figure out how we will know when we're done with them. What are the criteria that we'll use to figure that out?

The Acceptance Criteria

The problem is that acceptance implies judgement. The terminology reflects this: "what are the acceptance criteria?" or "does QA accept this?" or "does the customer accept this?"


[Image caption: This is not your QA team]

There's a strong implication of judgement and scrutiny going on. Or, if you have kind of a doofus QA team (or product management team), then acceptance is at best a costume parade with a rubber stamp.

[Image caption: This is not your QA team, either.]

And neither of those is accurate. When the software development process is working, there's no team or team member sitting in judgement of another. There's no rubber stamp. Instead, there is a collective idea that something is done or not.

Don't accept "acceptance". Let's stop talking about accepting and start talking about being done.

Wednesday, June 13, 2012

Talk To Your User

Yesterday I was talking with a friend - also a software engineer - and we got to talking about the weird things users ask for and the even weirder things they do on their own.

For example: I have a user who decided that the best response to any error while creating an order in the system was to delete the entire order and start over from scratch. He would do this even if the error was something like "NZ is not a valid state. Please enter a US state abbreviation."

Another example, from my friend: he noticed from some logs that his system was rebuilding its inventory very often. This was a process that was intended to happen once a day, and it was happening six or seven times a day. After spending most of a day diagnosing the "bug", he discovered that it was being triggered manually by a user, and the user was doing it early and often.

In both cases, the users were doing things they thought were logical. My user had heard "cancel the order and start over" from support frequently enough that he just stopped calling and started canceling. My friend's user thought he understood how the system worked and was making sure it was right, even though the system worked a different way. They were both completely logical actions on the users' part. They were not very nuanced, but they made sense once we talked to the users and understood what was going on.

But to get there, you have to know your user. You have to know what they're trying to accomplish. You have to know what they think your software does - even if it doesn't actually work that way. You have to learn what parts of your software are kind of scary and mysterious. Knowing all that will help you understand what they're doing, and how you can help them accomplish their purpose better.

So riddle me this: when's the last time you talked to your user?

Wednesday, June 6, 2012

Chatty

A number of my clients use chat systems. Most of them use Campfire, but a few use Skype or IRC. Whichever tool it is, it's basically a place where some or all of the engineering team hangs out. As with most hangouts, it can be really helpful or a complete waste of time.

A few simple rules increase your chances of successful chat:

  • Don't expect immediate responses. Sometimes people are working and ignoring chat for an hour or two or four. That has to be okay - chatting isn't about interrupting work, it's about having a resource.
  • Log chats. Chat history is a good resource for remembering conversations, including helpful things like where the "make it better" command lives, or which commit introduced a massive memory leak. Make sure that history is available and searchable.
  • Everyone should log in. Chat's not useful if only half the people are there. Not everyone has to be there all the time (see above about immediate responses), but everyone should log in and scroll through the history periodically - at least once or twice a day, usually.
  • Use alerts. Pick a chat client that allows alerting, and then set up alerts on things like your name. That way, when someone says "catherine, how'd you do that neat trick?", you get a more active alert and don't have to scan through the whole chat history. (There's a sketch of the idea after this list.)
  • Keep it mostly work. Just like meetings or work conversations, there can (and should) be a little fun. Most of the conversation should be work-related, though.  A good ratio is roughly 90% work, 10% goofing off... err... camaraderie.
  • Use it for questions and non-urgent notifications. "How do I blah?" or "Where's that change deployed?" or "Hey everybody, I'm going to redeploy our shared database server at noon unless someone tells me otherwise" are good things to put in a chat room. This gets back to the idea of non-immediate responses.
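
To make the alert idea concrete, here's a minimal sketch of what a "mention alert" does under the hood. Real clients (Campfire, most IRC clients) have this built in; the word list here is made up for illustration.

    # A chat "mention alert": flag any message that contains your name
    # or a hot keyword. Real chat clients do this for you; this just
    # shows the idea.
    import re

    ALERT_WORDS = ["catherine", "deploy"]  # your name, plus hot topics

    def should_alert(message, words=ALERT_WORDS):
        """Return True if the message mentions any alert word."""
        return any(re.search(r"\b" + re.escape(word) + r"\b",
                             message, re.IGNORECASE)
                   for word in words)

    assert should_alert("catherine, how'd you do that neat trick?")
    assert not should_alert("anyone up for lunch?")
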
Effectively used, a chat room can be a great resource for using the group's collective knowledge. It provides all the benefits of talking with the entire group without the downside of interrupting team members who are concentrating. It takes interrupting activities - questions, notifications, discussions and debates - and makes them less disruptive. Chat can be your friend or your enemy - make it your friend.

Monday, June 4, 2012

Welcome, Interns


In the annual technology calendar, summer heralds the arrival of a new and curious species: interns. The intern - Latin name collegius eagerus - generally arrives in May or June and stays until August or September. A few particularly ambitious members of the species stay for an extra semester or even a year.

Distinguishing Intern Subspecies
There are a few distinct subspecies of interns:
  • Internus Know-it-all-us: The internus know-it-all-us usually comes from a school known for a good computer science program. He has passed an algorithms class or two and probably knows his way around a compiler. This intern is here to apply his wisdom to the real world. This intern is best handled gently, since the ego is very large.
  • Internus Know-nothingus: This subspecies is often the youngest of the bunch. He knows mostly that he doesn't know much at all. Half the difficulty interacting with this subspecies is getting him to stop wallowing in not knowing and just start doing. Extremely simple tasks are a good starter, and a lot of encouragement is needed.
  • Internus Eagerus: This subspecies is here to learn. He wants to know everything about everything. The trick to handling this intern is helping him accomplish something and not get lost simply gathering knowledge.

Managing Interns
On the face of it, an intern is a productivity killer. After all, this is a person who needs training in almost every aspect of the job. However, properly managed, an intern can bring a lot of value to a team, even in the few months they are there.

To properly manage an intern, recognize what they know and what they don't know. Most interns know something about coding and about things like algorithms and patterns. However, most interns have worked mostly alone or at most in teams of three or four people. That's the part they have to learn - how to write and maintain software as a team member.

Focus on working with the intern on software team dynamics:

  • Using source control, including branching, merging, and conflict management
  • Refactoring and creating reusable methods and classes
  • Recognizing and following style standards
  • Code structure and layout


Benefiting from Interns
Interns take a lot of time. They make mistakes. They write code that's sometimes utterly unusable.

They're also a huge benefit to a team.

You see, they don't know any of the history, so they don't know what can't be done. Explaining something to an intern helps you think through a problem or through a process - and that helps deepen your understanding of it. Sometimes they also show you what corners you've been cutting that you shouldn't, or what corners you can cut. Fresh eyes on your product will help you see it as your customers do - totally and completely without your assumptions.

If you get the chance to work with an intern, leap at it. They're frustrating and barely there long enough to be productive. They're also refreshing and exciting and a breath of life to a stable team.

Friday, June 1, 2012

If It's Never Right...

It's been a few years, so I feel like I can write about this, but I'm still not naming names. I had a client who had a workflow that went something like this:

  • client sends in data
  • our client services team processes the data using internal libraries and scripts
  • our client services team creates a report
  • our QA team checks the report
  • report goes to the client

On the surface, it sort of seems like a reasonable process.

In practice, 90% or more of the reports were rejected by the QA team. Almost all of those were for obvious errors: misspellings, data that was missing completely, wildly improbable conclusions (e.g., "factor F went up 700000%!"). Two iterations on a report were most common; three iterations happened on occasion.

Clearly, if it's almost never right the first time, there's a problem.

So what do we do about it?

First, we have to figure out why on earth we're failing so consistently. Is this a people problem, a data problem, or a process problem? Is the client services team suffering from script blindness: an unwarranted faith that what the scripts produce must be right? Is the client services team just plain not looking at its reports? Is QA rejecting things incorrectly? Is the inbound data really just terribly dirty? Are client services and QA looking at different specs? The right way to find out is to review each incident and figure out where it went wrong. Hopefully, a pattern will emerge. Until we understand the problem, we can't fix it. It's also sadly possible that there is more than one problem.

Once you know what's wrong, you can fix it. Maybe it's as simple as telling client services to look at their reports before sending them off. Maybe it's fixing the libraries and scripts to avoid errors, or at least yell loudly about them. Maybe it's fixing where specs are kept and how they're understood. Maybe it's a combination.
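
As a sketch of what "yell loudly" might look like - with made-up field names and a made-up threshold - a script can refuse to produce a report that contains obviously implausible numbers:

    # A script that yells loudly about errors instead of silently
    # producing a bad report. The field names and the plausibility
    # threshold are made up for illustration.

    def check_report(percent_changes, max_plausible=500.0):
        """Raise if any factor changed by a wildly improbable amount."""
        problems = [
            f"{name} changed {change:+.0f}%"
            for name, change in percent_changes.items()
            if abs(change) > max_plausible
        ]
        if problems:
            raise ValueError("Implausible report values: " + "; ".join(problems))

    check_report({"factor_f": 12.0})        # plausible: passes silently
    # check_report({"factor_f": 700000.0})  # raises ValueError, loudly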

Lastly, give yourself time to see the results of your changes. The first report we did after making some process and people changes failed. The second one did, too. After about the fifth one, though - and a few more tweaks - we were seeing QA reject rates go way down.

In any case, if it's never right, it's time to fix it.