Monday, November 12, 2012

A Few CSS Pet Peeves

I just started a greenfield project, and remembered again that one of the things I love about greenfield projects is that you get a chance to Do It Right(tm). I'm also working with a team, which means we have a few different people helping decide what Do It Right means, exactly. In particular, we've been having discussions through the pull requests about how to handle CSS and markup. In doing so, I found another benefit - it's really making me crystallize some of the things that I kind of knew but hadn't articulated.

I wanted to share the CSS tips and bugaboos I've articulated for myself (and my team) over the past few weeks:

  • Use a dynamic stylesheet language. I don't care which one - sass, less or something else - but use a stylesheet language. CSS is much easier when you can use variables, mixins, and other things we programmers take for granted elsewhere. Plus, who really likes doing search and replace on hex codes?
  • No bare text. Don't use text that's not in a container of some sort. It will be affected by any styling changes that affect the body text, which tends to lead to unintended consequences. In addition, bare text is a lot harder to place appropriately on the page. Put it in a span or a div or a p or something - anything!
  • No br tags. I'm not a fan of br tags. That's mixing layout and content, which I think is a bad idea, even on a static page. If you really want a line break, it's almost certainly because you're changing to a separate type of content - another paragraph, or another item in a list. That means you should make it a paragraph or a list. Oh, and if you really must use the br tag, at least close it!
  • html_safe is a minor code smell. The project is a Rails project, but other languages have equivalents. Using "FooBar".html_safe is an indication that you're doing something odd. This can open up a security hole if any of the contents are user input. And if you're not using user content, then you're probably putting html tags inside a single element - again mixing content and layout, which is not a good idea (see the sketch after this list).
  • Lists are your friend. Not all lists have bullets or numbers. They're really useful for any layout that is sequential, horizontally or vertically. That means navigation menus, sequential results, etc.
  • Deep is bad. Particularly when you're using a CSS framework (e.g., Bootstrap) and/or a lot of different backgrounds, it gets really easy to nest a lot of divs to achieve a layout. As the application grows, it will start to show in the render time of the page. Use as few divs as possible for positioning, and look for ways to flatten the nesting.
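
For the html_safe point, here's a minimal ERB sketch (name is a hypothetical variable standing in for user-supplied text, not from any real project):

<%# Smelly: markup buried in a Ruby string and force-marked as safe. %>
<%# If name ever comes from user input, this is an XSS hole. %>
<%= "<strong>#{name}</strong> joined the project".html_safe %>

<%# Better: keep the tags in the template; Rails escapes name automatically. %>
<strong><%= name %></strong> joined the project
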
A lot of these thoughts tie back to a separation of concerns. Content, layout/markup and styles are three different things and should be handled in three different places. You create a layout in your views, inject content from the controller (using an MVC framework or similar), and style it with your stylesheet language (ultimately CSS). Don't mix them. 

What are your CSS-related bugaboos?

Wednesday, October 31, 2012

How I Do Pull Requests

I've been working with a couple new developers lately and we've been talking about how we actually write, review and check in code. Early on, we all agreed that we would use a shared repository and pull requests. That way we get good peer code reviews, and we can look at each other's work-in-progress code.

Great!

One of the developers approached me after that conversation and said, "umm, so how exactly do I do pull requests?" On Github, here's how I do it:


# CD into the project
$ cd Documents/project/project_repo/

# Check what branch I'm on (almost always should be master when I'm starting a new feature)
$ git branch
* master

# Pull to make sure I have the latest
$ git pull
Already up-to-date.

# Make a new branch off master. This will be my feature branch and can be named whatever I want. I prefix all my feature branches with cmp_ so people know who did most of the coding on it.
$ git branch cmp_NEWFEATURE

# Switch to my new branch
$ git checkout cmp_NEWFEATURE
Switched to branch 'cmp_NEWFEATURE'

#####
# DEVELOP.
# Commit my changes because they're awesome
# Develop more stuff. Make changes, etc.
# Commit my changes, because they're awesome.
# Pull from master and merge to make sure my stuff still works.
# NOW I'm ready to make a pull request.
#####

# push my branch to origin
$ git push origin cmp_NEWFEATURE

# Create the pull request
# I do this on github. See https://help.github.com/articles/using-pull-requests
# Set the base branch to master and the head branch to cmp_NEWFEATURE

# Twiddle my thumbs waiting for the pull request to be approved.
# When it's approved, huzzah!

# If the reviewer said, "go ahead and merge", then the merge is on me to do.

# Check out the master branch. This is what I'll be merging into
$ git checkout master

# Merge from my pull request branch into master. I use --squash so the whole merge is one commit, no matter how many commits it took on the branch to get there.
$ git merge --squash cmp_NEWFEATURE

# If the merge isn't clean, go ahead and fix the conflicts.

# --squash stages the changes but doesn't commit them, so commit the squashed result.
$ git commit

# Push my newly-merged master up to origin. The pull request is done!
$ git push

# Go into github and close the pull request. I do this in a browser.

# Delete the feature branch from origin. Tidiness is awesome.
$ git push origin :cmp_NEWFEATURE

# Delete my local copy of the feature branch.
$ git branch -D cmp_NEWFEATURE


More information can be found here: https://help.github.com/articles/using-pull-requests.

Happy pulling!

Friday, October 26, 2012

Use Your Own API

One of the early milestones for an engineering team is the development of the first API. Someone wants to use our stuff! Maybe it's another team in the company, or another company altogether. Either way, someone will be consuming your API.

That's truly awesome news. It's always fun to see the great things teams can accomplish with your work.

So how do we create an API? Most of this is fairly straightforward:

  • Figure out what people want to do
  • Create an appropriate security model
  • Settle on an API model (RESTful JSON API? XML-RPC? Choose your technologies.)
  • Define the granularity and the API itself.
  • Document it: training materials, PyDoc/rdoc/javadoc, implementation guide, etc.
The trouble is that most people stop there. There's actually one more step to building a good API:

Use your own API.

All the theory in the world doesn't replace building an application with an API. Things that seem like great ideas on paper turn out to be very painful in the real world - like a too-granular API resulting in hundreds of calls, or hidden workflows enforced by convention only. Things that seemed like they were going to be hard are easy. Data elements show up in unexpected places. This is all stuff you figure out not through examination, but through trying it.
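As a tiny illustration of the granularity problem, here's a sketch in Ruby (the endpoints and host are hypothetical, purely for illustration):

require "net/http"
require "json"

# Too granular: one call for the ids, then one call per order.
# A list page ends up making hundreds of round trips.
base = URI("https://api.example.com")
ids = JSON.parse(Net::HTTP.get(base + "/orders/ids"))
orders = ids.map { |id| JSON.parse(Net::HTTP.get(base + "/orders/#{id}")) }

# Coarser: one call returns the whole collection. You only feel
# this difference when you actually build an app against the API.
orders = JSON.parse(Net::HTTP.get(URI("https://api.example.com/orders")))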

So try it. Create an API, then create a simple app that uses your API. It's amazing what you'll learn.

Friday, October 19, 2012

Mind Your Words

When you're surrounded by other engineers all day, it's easy to forget that some words have other meanings. Take "CRUD", for example. In engineering, that's shorthand for Create Retrieve Update Delete - your standard data operations. In real life, it's a derogatory term.

At several of my client sites, we use CRUD as engineers do. "Yeah, I've just implemented the CRUD operations; it's not pretty yet." "That's just a CRUD workflow? Yup!" "CRUD looks good."

When you get in front of non-technical folks, though, that exact conversation takes on a whole new tone. All of a sudden, you're a coarse, derogatory engineer. Oops!

Long story short, mind your words. Context changes meaning!

Tuesday, October 16, 2012

Management by Monopoly Money

I work with a lot of different teams, and almost all of them have some concept of a backlog. Basically, it's a list in some approximation of priority that the team works from. How they pull off the backlog and how stuff moves around the backlog varies a bit, but that's the basic idea.

One of the frustrations with this kind of backlog management is that certain things get short shrift. The person who owns the backlog has - through no fault of their own - certain biases. Often, priority goes to whoever shouts loudest or most recently, or to things that relate to the owner's background (e.g., an ex-IT guy will prioritize technical debt and deployment/monitoring items, while an ex-sales person will prioritize customer and sales requests). This leads to fun conversations and ideas like, "well, 10% of our stories should be paying down technical debt," or "every fifth story should be a customer request." In theory this works well. In practice, things aren't nearly that smooth. Sometimes things come up and we skip our customer story in favor of a strategic priority, or we don't hit our technical debt goal.

There's another way.


Yup, Monopoly money!

It sounds silly, but bear with me. When we make statements like, "10% of our time should be on technical debt", we're talking about a budget. So let's use budgeting tools - money.

Here's how it works:

  1. The product owner and management work to set a budget. It's usually as simple as agreeing that sales gets 25%, product management gets 50%, engineering gets 25%.
  2. Set a total budget amount that reflects the total amount of work you expect the team to accomplish.
  3. The representative from each group gets Monopoly money according to their proportion of the budget. Usually it's a dollar a point (or $10 a point, if you're feeling rich). With a 100-point budget at a dollar a point, sales starts with $25, product management with $50, and engineering with $25.
  4. As you create the backlog, each group spends their money. When they're out, they're out.
Try to keep your budget period fairly short - a few sprints, or no more than a quarter. That way there's a reset opportunity before anything gets too skewed.

This gets fun very quickly. By providing money, you're creating an economy. The IT guy will trade away some of his money in exchange for getting it back from marketing next quarter, for example. Or two groups will band together to outspend everyone and push a feature the others don't want onto the backlog. It also makes the process very concrete. There's no stretching percentages or arguing from memory. There's just how much money each group has left.

Making abstract work concrete and sticking to a budget.... now that's worth a shot. Give it a try.

Thursday, October 11, 2012

Installing Umbraco Packages

Just a quick tech note today, for anyone using Umbraco.

I was attempting to install a package from the Umbraco package repository and kept getting an error: ERR_NAME_NOT_RESOLVED. It looked like a DNS error, but I couldn't find any DNS problems!

I finally found the right answer: it's the world's most poorly formed permissions problem. Give your IIS user read, write and execute permissions on c:\Windows\Temp, and the problem will go away.
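
If you want to script the fix, something like this should do it (a sketch - IIS_IUSRS is an assumption; grant to whatever identity your application pool actually runs as):

icacls C:\Windows\Temp /grant "IIS_IUSRS:(OI)(CI)(RX,W)"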


Tuesday, October 9, 2012

The Overloaded Tester

As most of you know, I came up through software testing, before branching out into development and management. I still like to keep in touch with my roots, and spend a lot of time working with my clients on the test function in software.

I've noticed a trend recently toward overloading the tester. It's no longer fashionable to simply have testers. That's "old school", "retro" thinking that produces script monkeys and poor manual testers. The bright shiny way now is to have test automation gurus - basically developers who specialize in developing tests and test frameworks - and a customer or customer proxy who helps define what to test.

All in all, I don't have a problem with the thinking behind the trend. Yes, script monkeys give testing a bad name. Yes, test automation can be a hugely valuable tool. Yes, listening to the customer is great. Like many trends, it went too far, but it's starting to swing back. And that's where things are getting interesting.

You see, we have all these manual testers, and some of them are very good. Some of them know the business and the use cases like the backs of their hands, and we're starting to recognize how valuable that knowledge is. Others know the attributes of testable, maintainable, scalable software and can be great testers in the code review or architecture stage. What we used to call a manual tester can now be a great future proofer.

We're really starting to overload the tester. Now instead of hiring a senior manual tester, we see job reqs for a senior tester that include phrases like:

  • "owns the quality process"
  • "evangelist for good software development and delivery practices"
  • "works across the organization to instill a culture of quality"
Your most senior testers are becoming - at least in some places - process gurus and influencers more than testers. They may never actually test a piece of software.

This actually isn't inherently a good or a bad thing. Some companies have experimented with similar ideas on the development side for years, often under the title "Office of the CTO". It's a kind of roving expert consultant who works exclusively within one company but works across the organization. Seeing this start to come up in testing.... well, it's a career path for testers who don't want to get into managing people.

I think overall I'm for this trend of testers taking on a broader role in the organization. Just be careful you know what you're getting into!

Wednesday, October 3, 2012

Team Neutral

I have a client who has outgrown their scrum team. Daily standups are taking longer and longer, and the team doesn't really fit into their room very well at all. Sprint planning is getting more and more... fun, too. And by fun I mean long.

So we're splitting the teams. Going forward, we will have two functionally equivalent teams. First order of business: a team name!

Sounds simple, right? Nope, not so much. Let's back up for a second.

This isn't the first client I've had who's split teams. The last client that split wound up with three teams needing names. And then the fun started. One team suggested the Greek alphabet as a theme. They would, of course, be Team Alpha. The second team would be Team Beta. The third team would be Team Omega. Team Beta was particularly unamused. Another team suggested a theme of 80's TV shows. They would be Miami Vice (cool cars!). The second team would be Growing Pains. The third team would be the Golden Girls (this team, incidentally, had no women). Neither of the other teams was amused. The third team suggested a simple numbering system. They would be team 1. Someone else would be team 2. And the last team would be team 3. The squabbling continued.

It took about three weeks to land on colors as a relatively neutral theme. Granted, this was an extreme example of a group that collectively contained some pretty fragile egos and a highly developed sense of hierarchy.

The underlying point is interesting, though. It's useful to give teams names. After all, we have to refer to them somehow. It's also fun to let teams pick their own names.

But.

Make sure those names are neutral and don't imply positioning or relative importance in any way. Neutral themes like colors, places, or video games are a good idea. And make sure every team gets to pick their own name.

One day, I'll get to be on Team Neutral. Until then, you can find me on Team has-a-neutral-name.

Tuesday, September 25, 2012

Clean Project Checklist

Some projects are harder than others. On a few of my projects, everything just feels harder than it should be. I can't find the classes I need to change; the tests on a clean checkout fail; configuration is manual and fiddly; getting set up to try your code is slow and unwieldy. They're just painful all around. Other projects are smooth and easy. There's a solid README for setting it up - and the README is short. Tests run well on clean checkout. Classes and methods are in the first place you look. The only hard part of these projects is any difficulty in actually solving the technical problem.

The main difference, I've discovered, is in the tooling around the projects. Some of the smooth projects are huge and involve different services and technologies. Some of the difficult projects are actually quite small and simple. When the tooling is clean and works, then the project is much more likely to be smooth. Over time I've come to look for a set of things that tend to indicate a clean project.

This is my clean project checklist:

  • Deployment and environment setup is automated using an actual automation tool. I don't care if it's Capistrano, Chef, Puppet, RightScale scripts or whatever. It just has to be something beyond a human and/or an SSH script. This includes creating a new box, whether it's a cloud box or a local machine in a data center somewhere. If it's scripted then I don't have to fiddle a lot manually, and it also tends to mean there's good isolation of configuration.
  • Uses Git. This makes the next item possible. It's also an indicator that the developers are using a toolchain that I happen to enjoy using and to understand.
  • Developers are branch-happy. Feature branches, "I might mess this up" branches, "just in case" branches - developers who are branch happy tend to think smaller. With that many branches, most don't survive for long! Smaller changes means smaller merging, and that makes me happy. It fits nicely with the way I work. I should note that I don't care how merging happens, either by merge or by pull request.
  • Has a CI system that runs all the automated tests. It might not run them all in one batch, and it might not run them all before it spits out a build, but it runs them. The key here is that automated tests run regularly on a system that is created cleanly regularly. This cuts down on tests that fail because they weren't updated, or issues that relate to data not getting properly put in the database (or other data stores).
  • Cares about the red bar. The build shouldn't be broken for long, and failing tests should be diagnosed quickly. In policing, it's called the broken windows problem: if you get used to small bad things then bigger bad things will happen. Don't let your project windows break.

I'm sure there's more, but what else am I missing?

Tuesday, September 18, 2012

Automate What?

I was in a meeting the other day, talking about an upcoming project. I made some offhand comment about needing to make sure we allowed for a continuous integration and test automation environment when we were sizing our hardware needs. The response was immediate: "but automation never works!" After a bit of discussion, it emerged that this team had never attempted automation at any level under the GUI, and their GUI automation was, well, not good.

Automation at the GUI level is a legitimate thing to do. It's far from the only automation you can do. Consider all the other options:

  • Automated deployment. Whether via manual scripts or with tools, this is an entire discipline in itself. And it doesn't have to deploy to production; automated deployment to test or staging environments is also useful.
  • Unit tests. Yes, this is test automation, too.
  • API tests. Automation of a user interface, just not a graphical user interface.
  • Integration tests. You can test pieces together without testing the whole thing or without using the GUI.

So here's my automation heuristic:
Automate as much as you can as low as possible.

Let's break that down a bit. "Automate as much as you can" means just that. Not everything is automate-able, and not everything should be automated. If you have a GUI, testing that the colors go nicely together is probably not automate-able - so don't do it. Do that manually. If you have a method that takes inputs and produces outputs, you can probably automate the test that confirms it works with different kinds of inputs and outputs. If automating something involves massive architectural changes, or the automation randomly fails one time in three, then you're not going to maintain it; it's either not automate-able or simply broken.

(On a somewhat related note, the ability to automate something frequently correlates with how modular and maintainable its architecture is. Hard-coded configuration parameters, nonexistent interfaces, etc. make code less testable and also a lot harder to maintain!)

"As low as possible" means that you should target your automation as close to the code you're trying to affect (deploy or test, for example) as you can. If you're testing that a method does the right thing, use a unit test. Sure, it might be possible to test that same method through a GUI test but the overhead will make it a lot slower and a lot more brittle. The team I was talking with was right about one thing: GUI tests tend to be brittle. If you can go below the GUI - either through an API or using an injected test framework (think xUnit tests) - then you can avoid the brittleness.

Yay automation! But even more yay, appropriate automation!

Friday, September 14, 2012

Don't Forget the Joy

Work is, well, work. It's also an open secret that we're better workers when we have fun at work. We're more productive, we help the bottom line, and we're more likely to stay in a job longer if we enjoy our jobs. So let's roll up our sleeves and get to work, but don't forget the joy!

Some joy comes from the work itself. There is a thrill in solving a hard problem. Learning a new technique provides a quiet satisfaction. Finally understanding some concept - really understanding it! - puts a spring in my step. Looking at a demo or a customer report and knowing I built some feature they really love sends me home bragging. I know I'm not alone in quite simply loving what I do.

Other joy comes from the culture around the work. This is where your colleagues and your work environment come in. Joking around the coffee machine is just plain fun. Having your desk set up the way you like it - with your Star Wars figurines and footrest - makes the physical office more comfortable. We're people, after all, not automatons.

As an employer, there are things I can do to help make work more fun. Most of them aren't even very expensive! Consider these:

  • A training course - online or at a local university's extension school - for an employee who asks usually costs under $1000.
  • Letting someone go speak at a conference costs about 20 hours for prep and the cost of plane/hotel. (The conference itself is usually free for speakers.)
  • Letting the team choose a theme for sprint names and name their own sprints. Free and often hilarious.
  • Bringing in bagels or cookies or cake once or twice a month - either on a schedule or ad hoc, depending on your team's sense of routine - is surprisingly fun. Keeping snacks on hand accomplishes the same thing.
  • Don't do management by walking around. Be available and show up but don't hover. You don't want to be your employees' friend, but you don't want to be the heavy, either. Free, too.
Joy has a place at work. Encourage it.

Tuesday, September 11, 2012

Status Message Messaging

For those of us who use hosted services, status pages are an essential communication point. They're a way to indicate service health ("it's not us, it's you") and, when there are problems, they provide a forum for disseminating updates quickly and loudly. The "everything is okay" update is pretty easy. The "stuff's broken" update is a much more delicate thing to write. It has to reflect your relationship with your users, but also reflect the gravity of the situation.

Here's part of a status update Beanstalk published this morning:

"Sorry for the continued problems and ruining your morning." 

Oh man. That's an update you don't want to have to publish. To provide some context, we'll just say that Beanstalk has had a bad few days. Beanstalk provides git and subversion hosting; that makes them a high-volume service. Pushing, pulling, committing, checking in/out, etc. happen very frequently and, well, software teams on deadline are not known for being nice when their tools get in the way. The last few days have been hard on Beanstalk: they got hit by the GoDaddy attack, then had a problem with load on their servers traced to an internal problem, and finally are again having difficulties with their file servers that host repos. And you can see it in that status update. "[R]uining your morning" is the phrasing of someone who is seriously exasperated. That update does some things well: it shows they understand the problem is severe, and it reflects the increasing frustration users are likely experiencing. It's escalating, just like users' tempers are probably escalating. However, it goes too far for my taste. It reeks of frustration on the part of whoever wrote the update, and that's clearly not a good head space for actually solving the problem. It also implies a sense of fatalism. That update was at 9:23am - my morning might still be salvaged, if they can get the system back up relatively quickly. Don't give up, guys!

There's an art to writing the status update when things are going poorly. When I'm working with a team fixing a major problem, I'll dedicate someone to doing just that. They sit in the middle of the war room (or chat session or conference call) and do nothing but manage status communications. Let the people fixing the problem focus on fixing the problem. Hand off status to someone who will maintain a level head and write a good status update, considering:

  • Frequency of updates. Post at least every hour for most issues, and whenever something changes. Silence does not inspire confidence.
  • Location of updates. As many channels as possible are good. Use twitter, the status page, email or phone calls to really important customers, and any other tools at your disposal.
  • Tone of updates. This needs to match your general tone of communication (see the Twitter fail whale - still cute and fun, even in error!) but also show that you know your customers are getting frustrated.
  • Inspiring confidence. Providing enough information to look like you are getting a grip on the problem and will get it fixed is important. Providing a proper postmortem also helps inspire confidence.



Friday, August 31, 2012

Consider the Message

A couple days ago I was at the butcher picking up some meat for supper (burgers!). My card got declined. And here's the thinking: "Oh wow, embarrassing! I come here all the time! I'm sooo not that person. I'm fiscally responsible, darn it! Besides, I'm nowhere near the limit. How annoying!" It's about a 5-second panic, but let's be honest, it's not a good feeling. It's embarrassing and annoying for both parties - for the cashier and for me.

So I paid with cash, and as I was going out, the cashier handed me the receipt from when it got declined. Here it is:




Seriously?! Seriously!?!? That 5-second panic and the annoyance for me and for the cashier - and check out that decline reason:

"Could not make Ssl c"

I'm going to assume that means "Could not make an SSL connection to ". Bonus points for the truncated message and for the interesting capitalization.

That's why error messages matter. The error shouldn't have been "Declined". It should have been "Error". That would have saved us all the embarrassment, at least! (Yeah, it still would have been annoying.)

So please, consider your error messages. They matter.

Wednesday, August 29, 2012

Autonomous and Dependent Teams

Almost all of my clients are currently doing some variation on agile methodologies. Most of them are doing SCRUM or some version of it. And yet, they are very different teams. In particular, lately I've noticed that there are two kinds of teams: those that encourage autonomy and those that encourage dependence.

To be clear, I'm speaking entirely within the development team itself. Within a development team there are invariably leaders, usually the ones who have been around for a while and show the most ability to interact on a technical level with the rest of the team. The character of those leaders and how much autonomy non-leader team members have says a lot about the team.

Teams that encourage autonomy take the attitude that team members should help themselves. If a member of the team needs something - a new git branch, a test class, a method on an object in another area of the code - then that person is responsible for getting it.  How the team member accomplishes that is, in descending order of preference: (1) doing it; (2) asking for help to do it with someone (pairing and learning); (3) asking someone else to do it; (4) throwing up a request to the team at large.

Teams that encourage dependence have a very different attitude. These are teams where each person has a specialty and anything outside that specialty should be done by a leader. If a team member needs something - a new git branch, a test class, a method in another layer of the code - then that person should ask a team leader, who will provide it. Sometimes the leader passes the request off to another team member, and sometimes the leader simply does it.

Let's look a little deeper at what happens with these teams.

Autonomous Teams

  • Emphasize a leaderless culture. These are the teams that will say, "we're all equals" or "there's no leader." There are people who know more about a given area or technology, but the team considers them question answerers more than doers in that particular area.
  • Can better withstand the loss of one person. Whether it's vacation, maternity leave, or leaving the company, the absent person is less likely to have specialized knowledge no one else on the team has. It's a loss that's easier to recover from.
  • Tend to have more tooling. Because there's no dedicated "tools person", everyone introduces tooling as it's needed, from a continuous integration system to deployment scripts to test infrastructure to design diagramming. Over time this actually winds up with more tools in active use than a team with a dedicated tools engineer.
  • Produce more well-rounded engineers. "I don't do CSS" is not an excuse on this team. If the thing you're working on needs it, well, you do CSS now!
  • Work together more. Because each team member touches a larger area of the code base, there's more to learn and team members wind up working together frequently, either as a training exercise, or to avoid bumping into each other's features, or just because they enjoy it.
  • Tend toward spaghetti code. With everyone touching many parts of the code, there is some duplication. Coding standards, a strong refactoring policy and static code analysis can help keep this under control.
  • Have less idea of the current status. Because each team member is off doing, they don't always know the overall status of a project. This is what the daily standup and burndown charts are supposed to help with, and they can, if done carefully.

Dependent Teams

  • Have a command and control culture. These are the teams that say, "we'd be dead without so-and-so" or "Blah tells me what to do." They look to the leader (or leaders) and do what that person says, frequently waiting for his opinion.
  • Can quickly replace non-leaders but have a huge dependence on leaders. When a leader is missing - vacation, meeting, or leaves the company - then the team gets very little done, and uses the phrase, "I don't know. So-and-so would normally tell me, but he's not around."
  • Have a good sense of overall status. The leaders tend to know exactly where things stand. Individual team members often do not.
  • Do standup as an "update the manager" period. The leader usually leads standup, and members will speak directly to that person (watch the body language - they usually face the person). Standup often takes place in shorthand, and not all team members could describe each task being worked on.
  • Tend to work alone or with a leader. Because individual team members tend not to work on similar things or to know what everyone is doing, they'll often work alone or with a team leader.
  • Tend to wait. You'll hear phrases like, "well, I need a Git branch; has one been created yet?" Instead of attempting to solve the problem - for example, by looking for a branch and creating one - the team member will note the problem and wait for a leader to fix it or to direct the fix. 


Overall, I vastly prefer working with teams that encourage autonomy. There are moments of chaos, and times when you find yourself doing some rework, but overall the teams get a lot more done and they produce better engineers. I understand the appeal of dependent teams to those who want to be essential (they'd like to be the leaders) or to those who just want to do what they're told and go home, but it's not for me. Viva the autonomous team!

Friday, August 24, 2012

"Who Was That Masked Man?"

When we were kids, we'd occasionally get to watch the Lone Ranger. We watched the fifties version, with the Lone Ranger and Tonto riding into town in black and white and saving the day. Inevitably, it would end with someone staring into the camera and asking, "Who was that masked man?" as the Lone Ranger rode off to his next town.

"Here I come to save the day!" (Wrong TV show, right sentiment)

I watched the show again not too long ago with a friend's son, and got to thinking that there really was something to this idea of someone riding into town, clearing out the bad guys, and riding off into the distance. It's really rather like being a consultant in some ways. You ride in, find the bad parts, fix them, and ride out. Fortunately for me, there is a lot less horseback riding and gunplay in software than there ever was in the Lone Ranger!

But let's look at each of those steps:

  1. You come in
  2. You find the bad part(s)
  3. You fix them
  4. You leave
All of those steps are important. If you leave without finishing every single step, well, you're no Lone Ranger.

Come In
Coming in doesn't have to mean going to the client's offices. It just means that you need to show up. You have to interact with the client - and not just the project sponsor. This is how you'll know the full extent of the problem, and start to build trust to fix it. This means you have to be around and be available for casual conversations. This might be in person, by phone, in chat rooms, or over IM.

Find the Bad Part(s)
You're here because there's a problem the client can't or won't solve internally. Understanding that problem gives you the bounds of your engagement. Sure, there are probably other things you could optimize or improve, but don't lose sight of the thing you're actually here to fix!

Fix Them
You have to actually fix the bad part(s). Don't offer a proposal; don't outline the problem and note how it could be fixed. Do the work and fix it. This is what differentiates the Lone Ranger from the stereotypical management consultant.

Leave
This part is surprisingly hard sometimes. Sometimes the problem will be fixed and you just keep coming around, or start working on a new problem, or maintaining the fixes. That's all well and good while you're making sure the fix is stable, or while there's another real problem to solve. When it's just maintenance, though, it's time to leave. Don't forget to actually do that part.

And that is how 24 minutes with the Lone Ranger turned into a rant on consulting software engineers. Now, back to the fake Wild West for me!

Monday, August 20, 2012

The Dreaded Feature List

We've all seen them: the lists of features. Whether they show up in a backlog, or a test plan, or a competitive matrix, or an RFP, feature lists are everywhere. It's the driving force of software development, in many ways. "How many features have you built?" "When will that feature be ready?" "Does it have X feature?"

There's one big problem with that feature focus, though: it's not what customers actually want.

There are only two times I can think of where a customer actually explicitly cares about features:

  1. When comparing different products (RFP, evaluation, etc)
  2. When they're using the presence or absence of a feature as a proof point in an argument about your product
The rest of the time, they care only that they can solve their problem with your product. Having a "print current inventory" feature is useless. Being able to take a hard copy of the inventory report to the warehouse and scribble all over it while doing a count - that's what the customer actually wants. "Print current inventory" is just a way to get to the actual desire. These stories - tales of things the customer does that involve your software - are the heart and soul of the solution.

So - with the exception of RFPs and bakeoffs - ignore features. Start focusing on the customer and their stories.

Thursday, August 16, 2012

Good Software Makes Decisions

A lot of software is complex. Sometimes it's not even the software that's complex; it's the landscape of the problem that the software is solving. There are data elements and workflows and variations and options. Requirements frequently start with, "Normally, we foo the bar in the bat, with baz. On alternate Tuesdays in the summer, though, we pluto the mars in the saturn. Our biggest customer does something a little different with that bit, so we'll need an option to jupiter the mars in the saturn, too." Add in a product manager who answers every question with, "well, let's make that an option", and you end up with a user interface that looks like this:

No one wants to use that.

It's software that has taken all of the complexity of the problem space and barfed it out onto the user. That's not what they wanted. I promise. Even if it's what they asked for, it's not what they wanted.

Building software is about making decisions. Good software limits options; it doesn't add them. You start with a massive problem space: "Software can do anything!" and limit from there. First, you decide that you're going to solve an accounting problem. Then you decide what kind of inputs and outputs go into this accounting problem, and how the workflows go. You do this by consulting with people who know the problem space really well. Define what you will do and - just as important - what you won't do. Exposing a decision as an option says, "I don't understand this problem; you decide." Making a decision builds customer confidence; it says, "I know what I'm doing." All along, you're creating power by limiting choices.

It's okay to make decisions in software. Take a stand; your software will be better for it.

Tuesday, August 14, 2012

Don't Claim It If You Only Read It

I've been hiring on behalf of a client lately, looking for a head of QA. One of the trends I noticed as I was reviewing resumes was that a lot of candidates had a list of languages on their resumes: C++, Perl, Ruby, Java, etc. "How cool!" I thought. It's great to see the testing profession attracting people who are capable of writing code but for whatever reason find testing a better fit. Maybe the profession is finally overcoming the stigma of "not good enough for dev" and "tester == manual tester".

And then I started the phone screens.

I've screened 12 people so far who listed languages on their resumes, and so far not one of them has actually been capable of writing code. Nope, that language list? That's the languages they can read. As in, "I can follow along on code written in C++ or Java."

Ouch. Now I'm disappointed in all of them. Here I thought they had some skills they don't have.

I understand what the candidates were going for. They're "technical testers" rather than testers who work purely on the business level and don't understand how software works. I get it. I really do. That such a distinction has meaning is sad but true.

But don't claim languages if you mean you can read them! You're an engineer, darnit! Claim a language if you can write it. There are enough engineers - and enough testers - who can write code that to claim a language without qualifying it means people will think you write it.

If you're trying to say, "hey, I'm a technical tester", then please do so in your resume. Just do so without unintentionally telling people that you're more technical than you are. The right way to say, "I can read but not write C++" is to say: "Languages: C++ (reading), Perl (read/scripting)" or something similar. That way hiring managers don't start with higher expectations than you can meet... And that's good for candidates, too.

Tuesday, August 7, 2012

The Why Behind He Said So

In software, we ask "why" a lot. "Why" is a seeking question. If we understand the cause or the reason for something then we can make an intelligent decision about changing it or going around it (and let's be honest - we usually ask why because we want to change it or work around it).

The answers to "why" are many and varied, but possibly the worst answer possible is: "Because so-and-so said so."

That's not a reason, that's an excuse. Presumably, whoever said so had a reason for saying so, and that's the actual why. There are two reasons we get into this situation:

  • Because that person is almost always right (or in a position of authority that effectively makes them correct)
  • Because that person is a convenient target for blame
It doesn't have to be this way. "Because so-and-so said so" is a learning opportunity. It's a chance to understand better, both this specific scenario and how so-and-so got to be such an expert. Think of it as a chance to check your work.

"Why is there a limit of 255 virtual machines on this box?"
"Because John said so."

Well, John's a really smart guy, and he knows the product really really well, so presumably he has a reason for saying so. So why did he say so? "Because we use a /24 network, which allows 256 hosts, and we reserve one address for the physical box." See? Now you actually know something more about the product than you did before. You also know that there's potentially a problem there (networks typically allow 254 hosts, with .0 and .255 generally reserved). 

Find out why someone said so.... it's illuminating.

Monday, July 30, 2012

Take Pleasure in Absence

We spend our professional lives seeking things. We want confirmation of bugs. We want new features. We want affirmation that our bug fixes work. We spend all day every day doing things.

Sometimes, though, there is pleasure in not doing things. There is pleasure in absence, and only when we stop doing can we start seeing.

Give yourself five minutes while you're making coffee tomorrow morning, put down the iPhone (Twitter can wait!), and consider:

  • The worry no longer felt
  • The fear not realized
  • The pain not seen
All that absence! All that not seeking! Feels good, doesn't it?

I'm not going to tell you to stop seeking; after all, all that absence came from seeking improvements. Instead, take one small breath and notice the beauty of the absence.

Tuesday, July 24, 2012

You Do What Again?

This weekend my aunt, uncle, and about-to-go-to-college cousin were in town visiting. We were catching up, and one of the things my cousin said was, "So what do you do all day?"

As background, you should know that both my aunt and uncle are physicians. They don't know what software engineers do all day, either! (Granted, I don't know what physicians do all day, so we're even.)

So what do I do all day? In any given day, I'm probably going to do most of this:

  • Modify some existing code
  • Send a bunch of emails
  • Write some CSS
  • Test a feature someone else wrote
  • Respond to a customer request or problem
  • Attend stand ups and/or scrum meetings
That really doesn't sound very interesting. Those are the tasks I do, yes, and there is joy in many of them. I actually really enjoy writing code - it's a game to make the software do what I want. I also like playing with someone else's feature and seeing how they interpreted it. There's value, too. Because I write an email, a customer knows what to do. Because I wrote some CSS, a page looks as good as a designer could make it. Because I fixed a problem, a client saved time. That's pretty cool.

Those aren't tasks, though. They're accomplishments.

So what do I accomplish in a day?
  • Help others see possibilities
  • Make a system do something useful that it didn't do before
  • Make a web page more beautiful
  • Save time for customers
  • Offer my customers abilities they didn't have before
That's better. That's what I really do.

What do you do?

Friday, July 20, 2012

Synonyms for "Will Work All The Time"

The pendulum swings again. Remember work-life balance? Sooooo four years ago.

Now we hire honey badgers, apparently. We also hire people who want to have a beer at 6pm... as long as they're doing it at work. Hello, brogrammers. Oh yeah, and we'll take a ninja and a rockstar, please! As this guy put it:



In other words, we want people who are going to work very long hours and still be good at what they do. Oh, and we want them to be fun. The drinking is really only an incentive to stay at work.

And that's fine. If you have an all-consuming dream, it's easy to expect others to want to believe in your dream just as much as you do, and to work tirelessly just like you are. That's the kind of thinking that brought us Facebook. Also the Internet. (Okay, so it's not all bad.)

Unfortunately, not every effort is the Internet. Some efforts are the next big toy. Or some niche thing. Or - let's face it - something that won't change the world. Some efforts fail. Some are just a bad idea, even if they are a dream.

And here's the thing. It's trendy to want people that will work all hours and do it well. The number of people willing to do that for anything is limited. The number of people willing to do that for any dream that is not their own is even more limited. The number of people willing to do that for your particular dream is even smaller. You've severely limited your hiring pool, and we haven't even started talking about things like talent or skills!

So be careful before you decide you're only going to hire rockstar ninja brogrammers. Be willing to consider someone talented, even if they only want to work 40-60 hours a week. That's 40-60 hours a week more than you're getting from having nobody working for you.

Tuesday, July 10, 2012

Baby Sysadmin

Herein, a rant.

If you're a software engineer of any form - developer, tester, support engineer, or any variation thereon - you should know your tools. That's pretty basic. Java engineers probably ought to be able to compile something, and to know what a library is, and how to set up a class. It's reasonable to expect that Rails testers can describe the difference between factories and fixtures, and when each is useful. We spend time training support engineers in our tools and procedures specifically so they'll know how to use available tools effectively. Pretty basic, right?

But software does not live alone. It's been decades since the software we wrote was the only thing involved with running our program. Our software runs in an environment, complete with operating system, other programs, drivers, hardware, third-party dependencies, etc.

Shouldn't we know our environment?

Some of the problems we solve in our software are related to the environment. We have to handle the case in which our logging can't write because the disk is full. Or we have to test for what happens when our software and some other piece of software both try to use the webcam simultaneously. Or we discover that the hang in our software is actually the user's browser crashing for some unrelated reason.

Just like we use tools to create software, we use the environment to create software. We use disk space, or the webcam (and its drivers), or the browser (another piece of software). So just like we need to learn our tools, we need to learn our environment. We need to make ourselves into baby sysadmins.

We don't need to know how to do every sysadmin task, but there are some basic things that we should know (a few example commands follow the list):

  • How to view what processes are running and basic information (status, memory and CPU utilization)
  • Where to look for indications of hardware problems
  • How to view network information and utilization
  • How the operating system handles disk usage and I/O
  • Where the operating system or virtual machine (e.g., JVM) logs and how
  • What environment variables are required and how they might be set - both by our software and by external software
  • What limits are implied by libraries (e.g., 32-bit versus 64-bit software)
  • Roughly how the operating system works, including dependency management
  • Roughly how any containers - JVMs or browsers - work, including loading order and frequency, and other dependency management
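
To make that list concrete, here are a few example commands on a typical Linux box (a sketch - the exact tools vary by operating system):

$ ps aux                    # what's running, with memory and CPU utilization
$ df -h                     # disk usage by filesystem
$ iostat                    # disk I/O statistics (from the sysstat package)
$ netstat -i                # network interface statistics
$ env                       # current environment variables
$ tail -f /var/log/syslog   # watch the system log as it's written (Debian-style path)
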
None of these things are particularly difficult, and they're all directly related to how your software behaves in its environment. So go ahead, embrace your inner baby sysadmin.

Friday, June 29, 2012

Engineering Words

I've written before about words that have special meaning in engineering. But wait, there are more! There is a subset of words that I use when I'm talking with engineers and we all use them. Then I talk to my coworkers in marketing... and they look at me like I just got out a dictionary and picked the most obscure words I could find!

It's jargon. Pure and simple. Jargon doesn't have to be words that have no meaning outside the profession. Sometimes the words are just uncommon outside your profession, or have different meanings!

So, what's some engineering jargon?

Canonical
In general usage, canonical means "accepted" or "authorized". In engineering, canonical means authoritative - the definitive source or archetype. Think CNAME ("canonical name"), or the canonical source of some information (the one that's guaranteed to be right).

Idempotent
This one shares meaning with math, but I've heard it a lot more in engineering than in general usage. In math, an element is idempotent if combining it with itself leaves it unchanged. In practice, it means that calling the same function again with the same inputs has no additional effect - the second call leaves things exactly as the first call left them.
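
A quick Ruby illustration - applying the operation a second time changes nothing:

x = -5
x.abs.abs == x.abs  # => true; taking the absolute value twice is the same as taking it once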

Recursive
This is a function that calls itself, applying the same logic to ever-smaller pieces of a problem until it reaches a base case. Solving a recursion problem is a really common interview test for developers. Surprisingly, when I talk with people in marketing or in finance, they use the same tricks - applying something repeatedly to get to an ultimate solution. They just call it things like "lather, rinse, repeat" or "do it again".
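
The classic code example is factorial - the function calls itself on a smaller input until it hits a base case:

def factorial(n)
  return 1 if n <= 1    # base case
  n * factorial(n - 1)  # recursive step: a smaller version of the same problem
end

factorial(5)  # => 120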

Trivial
Here's another word that has slightly different meanings to different people. For many people I talk to, trivial means "quick". For a lot of engineers I know, though, trivial means "I know how to do this and there aren't any minefields." It doesn't necessarily mean fast, although that can certainly happen, too!

What engineering jargon do you use?

Monday, June 25, 2012

A Good Working Day

One of the awkward things about summer is the lure of the outside. When it's a beautiful day then reading in the sun or taking a walk in the Arboretum sure sounds better than working. (And I love my work!)

Days like today, though, are great working days. Here's roughly what it looks like outside right now.



Somehow the idea of taking my lunch to the park and feeding the baby ducklings just isn't enticing.

Happy productive rain day, everyone!

Tuesday, June 19, 2012

Better Prep, Faster Meetings

What's the minimum meeting length? Not for standup or other ritual meetings, but for an actual meeting with someone you don't speak with often? 30 minutes? 45?

Sometimes you don't have that kind of time. Sometimes you just have to put your head down and get what you need out of the meeting. This is particularly true of "starter" meetings - introductions, tutorials, etc.

For example, I have a meeting scheduled tomorrow with a very simple agenda. Here's the whole thing, copied from the meeting invite:
"Understand if product meets our needs"

That's it. It's not even a particularly complex product. The vendor asked for an hour, but due to scheduling issues we're going to have twenty minutes at most. So, can we get what we need in a 20-minute meeting? I say yes, as long as we're all very prepared.

What do I want out of the meeting? I want to know if the product has various features, and I want to check for a few common gotchas to see if there are any extensibility problems.

What does the vendor want out of the meeting? A sale. Or at least a step closer to a sale.

So the real agenda is much less vague. It's going to go like this:

  • (3 minutes) Hellos and who are we
  • (2 minutes) Overview of project
  • (12 minutes) I ask questions and the vendor answers immediately or notes which ones require more research to answer.
  • (3 minutes) Thank you and next steps
That's it. You'll notice that with detail comes responsibility. Before the meeting I have to have a very crisp list of questions to ask. (They'll take the form "Does your product support print to PDF?" "What are the formatting options for that kind of printing?") The vendor knows we're tight on time, and is bringing his senior sales engineer to the meeting so that as many questions as possible can be answered on the spot.

Because I'm prepared, and because the vendor knows to be prepared we'll get through the meeting. Anyone calling into the meeting without preparation is going to be completely lost. And that's okay - we don't have time to "catch people up". This is about accomplishing things. Keep up.

So if you're in a hurry, go ahead and do the meeting. Just be prepared.

Friday, June 15, 2012

Is "Acceptance" a Good Term?

Most of my clients use some variation on agile processes. Most of them use something like stories to define work. They call it different things - tasks, features, work items - but they're all basically the same thing: units of work that need to get done and that provide value to the customers.

In any event, one of the things we do with stories is figure out how we will know when we're done with them. What are the criteria that we'll use to figure that out?

The Acceptance Criteria

The problem is that acceptance implies judgement. The terminology reflects this: "what are the acceptance criteria?" or "does QA accept this?" or "does the customer accept this?"


This is not your QA team

There's a strong implication of judgement and scrutiny going on. Or, if you have kind of a doofus QA team (or product management team), then acceptance is at best a costume parade with a rubber stamp.

This is not your QA team, either.

And neither of those is accurate. When the software development process is working, there's no team or team member sitting in judgement of another. There's no rubber stamp. Instead, there is a collective idea that something is done or not.

Don't accept "acceptance". Let's stop talking about accept and start talking about being done.

Wednesday, June 13, 2012

Talk To Your User

Yesterday I was talking with a friend - also a software engineer - and we got to talking about the weird things users ask for and the even weirder things they do on their own.

For example: I have a user who decided that the best response to any error while creating an order in the system was to delete the entire order and start over from scratch. He would do this even if the error was something like "NZ is not a valid state. Please enter a US state abbreviation."

Another example: my friend noticed from some logs that his system was rebuilding its inventory very often. This was a process that was intended to happen once a day, and it was happening six or seven times a day. After spending most of a day diagnosing the "bug", he discovered that it was being triggered manually by a user, and the user was doing it early and often.

In both cases, the users were doing things they thought were logical. My user had heard "cancel the order and start over" from support frequently enough that he just stopped calling and started canceling. My friend's user thought he understood how the system worked and was making sure it was right, even though the system worked a different way. They were both completely logical actions on the user's part. They were not very nuanced, but they made sense, once we talked to the user and understood what was going on.

But to get there, you have to know your user. You have to know what they're trying to accomplish. You have to know what they think your software does - even if it doesn't actually work that way. You have to learn what parts of your software are kind of scary and mysterious. Knowing all that will help you understand what they're doing, and how you can help them accomplish their purpose better.

So riddle me this: when's the last time you talked to your user?

Wednesday, June 6, 2012

Chatty

A number of my clients use chat systems. Most of them use Campfire, but a few use Skype or IRC. Whichever the tool, it's basically a place where some or all of the engineering team hangs out. As with most hangouts, it can be really helpful or a complete waste of time.

A few simple rules increase your chances of successful chat:

  • Don't expect immediate responses. Sometimes people are working and ignoring chat for an hour or two or four. That has to be okay - chatting isn't about interrupting work, it's about having a resource.
  • Log chats. Chat history is a good resource for remembering conversations, including helpful things like where the "make it better" command lives, or which commit introduced a massive memory leak. Make sure that history is available and searchable.
  • Everyone should log in. Chat's not useful if only half the people are there. Not everyone has to be there all the time (see above about immediate responses), but everyone should log in and scroll through the history periodically - at least once or twice a day, usually.
  • Use alerts. Pick a chat client that allows alerting, and then set up alerts on things like your name. That way, when someone says "catherine, how'd you do that neat trick?", you get a more active alert and don't have to scan through the whole chat history.
  • Keep it mostly work. Just like meetings or work conversations, there can (and should) be a little fun. Most of the conversation should be work-related, though.  A good ratio is roughly 90% work, 10% goofing off... err... camaraderie.
  • Use it for questions and non-urgent notifications. "How do I blah?" or "Where's that change deployed?" or "Hey everybody, I'm going to redeploy our shared database server at noon unless someone tells me otherwise" are good things to put in a chat room. This gets back to the idea of non-immediate responses.
Effectively used, a chat room can be a great resource for using the group's collective knowledge. It provides all the benefits of talking with the entire group without the downside of interrupting team members who are concentrating. It takes interrupting activities - questions, notifications, discussions and debates - and makes them less disruptive. Chat can be your friend or your enemy - make it your friend.

Monday, June 4, 2012

Welcome, Interns


In the annual technology calendar, summer heralds the arrival of a new and curious species: interns. The intern - Latin name collegius eagerus - generally arrives in May or June and stays until August or September. A few particularly ambitious members of the species stay for an extra semester or even a year.

Distinguishing Intern Subspecies
There are a few distinct subspecies of interns:
  • Internus Know-it-all-us: The internus know-it-all-us usually comes from a school known for a good computer science program. He has passed an algorithms class or two and probably knows his way around a compiler. This intern is here to apply his wisdom to the real world. This intern is best handled gently, since the ego is very large.
  • Internus Know-nothingus: This subspecies is often the youngest of the bunch. He knows mostly that he doesn't know much at all. Half the difficulty interacting with this subspecies is getting them to stop wallowing in not knowing and just start doing. Extremely simple tasks are a good starter, and a lot of encouragement is needed.
  • Internus Eagerus: This subspecies is here to learn. He wants to know everything about everything. The trick to handling this intern is helping them accomplish something rather than getting lost simply gathering knowledge.

Managing Interns
On the face of it, an intern is a productivity killer. After all, this is a person who needs training in almost every aspect of the job. However, properly managed, an intern can bring a lot of value to a team, even in the few months they are there.

To properly manage an intern, recognize what they know and what they don't know. Most interns know something about coding and about things like algorithms and patterns. However, most interns have worked mostly alone or at most in teams of three or four people. That's the part they have to learn - how to write and maintain software as a team member.

Focus on working with the intern on software team dynamics:

  • Using source control, including branching, merging and conflict management
  • Refactoring and creating reusable methods and classes
  • Recognizing and following style standards
  • Code structure and layout


Benefiting from Interns
Interns take a lot of time. They make mistakes. They write code that's sometimes utterly unusable.

They're also a huge benefit to a team.

You see, they don't know any of the history, so they don't know what can't be done. Explaining something to an intern helps you think through a problem or through a process - and that helps deepen your understanding of it. Sometimes, they also show you what corners you have been cutting that you shouldn't, or what corners you can cut. Fresh eyes on your product will help you see it as your customers do - totally and completely without your assumptions.

If you get the chance to work with an intern, leap at it. They're frustrating and barely there long enough to be productive. They're also refreshing and exciting and a breath of life to a stable team.

Friday, June 1, 2012

If It's Never Right...

It's been a few years, so I feel like I can write about this, but I'm still not naming names. I had a client who had a workflow that went something like this:

  • client sends in data
  • our client services team processes the data using internal libraries and scripts
  • our client services team creates a report
  • our QA team checks the report
  • report goes to the client
On the surface, it sort of seems like a reasonable process.

In practice, 90% or more of the reports were rejected by the QA team. Almost all of those were for obvious errors: misspellings, data that was missing completely, wildly improbable conclusions (e.g., "factor F went up 700000%!"). Two iterations on a report were most common; three iterations happened on occasion.

Clearly, if it's almost never right the first time, there's a problem.

So what do we do about it?

First, we have to figure out why on earth we're failing so consistently. Is this a people problem, a data problem, or a process problem? Is the client services team suffering from script blindness: an unwarranted faith that what the scripts produce must be right? Is the client team just plain not looking at their reports? Is QA rejecting things incorrectly? Is the inbound data really just terribly dirty? Are client services and QA looking at different specs? The right way to find out is to review each incident and figure out where it went wrong. Hopefully, a pattern will emerge. Until we understand the problem, we can't fix it. It's also sadly possible that there is more than one problem.

Once you know what's wrong, you can fix it. Maybe it's as simple as telling client services to look at their reports before sending them off. Maybe it's fixing the libraries and scripts to avoid errors - or at least yell loudly about them. Maybe it's fixing where specs are kept and how they're understood. Maybe it's a combination.

Lastly, give yourself time to see the results of your changes. The first report we did after making some process and people changes failed. The second one did, too. After about the fifth one, though - and a few more tweaks - we were seeing QA reject rates go way down.

In any case, if it's never right, it's time to fix it.

Friday, May 25, 2012

Offsets

It's the Friday before Memorial Day weekend. It's projected to be 74 degrees today here in Boston. This reads like a recipe for people leaving work early or just not working today at all. And I can't blame 'em. I mean, really - it's perfect weather for the Boston Public Garden, the actual park I go to.



Heck, I'd love to take a picnic and a book to the park for lunch, and totally blow off the afternoon.

Ready for the real fun of it? I'm a consultant. I CAN.

But.

Before I get to hit my patch of grass, I have to meet all my obligations for all my clients. I don't get vacation days. I don't get to blow off tasks on the grounds that my boss isn't around to notice.

So I've been working extra all week. I worked a bit late every night and got through almost everything. I'll do the last bit this morning. I pushed meetings to the morning or earlier days. I stayed up late last night writing my Friday status reports. Then, this afternoon, I'll make all the working stiffs jealous with my book in the park.

In other words, I've offset my work this week. So next time you see that slacker in the park, give 'em a wave. Maybe they're not slacking. Maybe they just already did their 40 hours (or 60 or .... you know, I don't count hours and that's probably for a reason!).

Monday, May 21, 2012

Convention and Enforcement

One of my clients is in the midst of a major architectural decision: do they enforce rules, or govern by convention? The system as it's currently built uses convention; there is pretty much no enforcement or validation of business rules. Instead, it focuses on flexibility, letting the user determine business rules. The problem is that the dev team spends a lot of time chasing problems that turn out to be bad or unexpected data.

Hence, the discussion. It goes something like this:
"We're having problems where data doesn't conform to our assumptions, and we're spending time tracking them down. We should enforce the rules we're assuming."
"The rules might change, and our system needs to be flexible to accommodate it. Any rule we're assuming should be considered a bug."
"Some rules won't change - we know, for example, that a widget will belong to exactly one frobble. We can enforce those rules."

And so on.

There's not really a best practice here; some systems will fall closer to the enforcement end of the spectrum, and some closer to the convention end of the spectrum. The thing to consider is this: "Who knows the rules? The creators of the system or the users of the system?" Choosing to enforce rules lessens the chance that a user is going to get themselves in trouble, but will give them less freedom to make business rule changes. Choosing convention puts the onus of rule creation and enforcement on the user of the system. Choosing convention also means that you can't make internal assumptions and then expect people to conform to them - after all, they don't know the internal implementation details. Choosing enforcement means that you have to change the system every time the business rules change.
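
To make the enforcement end concrete, here's a minimal sketch in Rails terms, using the widget/frobble rule from the conversation above (Widget and Frobble are hypothetical models, standing in for the real ones):

class Widget < ActiveRecord::Base
  belongs_to :frobble

  # Enforcement: the model itself rejects a widget without a frobble,
  # instead of trusting every caller to follow the convention.
  validates :frobble_id, presence: true
end

Under pure convention, the validates line simply wouldn't exist: nothing would stop bad data at the door, and the rule would live only in people's heads and scripts.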

Hours of log diving have made me lean toward enforcing rules where we know them, but systems that rely on convention can work. Just make sure it's an explicit choice.

Thursday, May 17, 2012

Gamifying Programming?

I have a friend who is a teaching assistant at a local college. She teaches introductory programming courses, mostly to eager and not-so-eager 18-year-olds. She's been working on engaging her students more, worried about the dropout rate from the Computer Science program.

So we were spitballing about the problem. And a lightbulb came on:
Let's gamify programming!

"Gamification" is - at least in the circles I move in - the hot trend of 2010-2011. Anything and everything is gamified, it seems. Shopping, fitness, training, surveys: you name it, it's probably been gamed. Gamification at its heart is simply applying the techniques of games to other systems. It usually means adding scoring or points, progress bars, leader boards or other challenges.

So let's gamify programming? Woo hoo! How would that work?

We were throwing out ideas like this:

  • Points for shorter methods (rather than thousand line methods of doom) - there's a toy sketch after this list
  • Leader boards for writing programs that accomplish their tasks faster
  • Teaching test driven development (red-green-refactor is totally a game)
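
To make the first idea concrete: the scoring rule could be as dumb as this toy sketch (the point values are completely made up):

# Shorter methods earn more points; thousand-line methods of doom earn almost none.
def points_for(method_length_in_lines)
  case method_length_in_lines
  when 0..9   then 10
  when 10..29 then 3
  else             1
  end
end

points_for(7)     # => 10
points_for(1000)  # => 1

Hook something like that up to a commit hook and a leader board, and you've got a game.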
It's all nice, but really, for me, programming was already kind of a game. It's just a different kind of game. Anyone remember Black & White?
Black & White was a god game that came out in 2001. It was a hugely ambitious game, in which you controlled villagers and a creature, and you had the power to make the creature good or evil (cue maniacal laughter here). Writing software is so much like playing that game, even without gamification elements. Both things are immersive: I look up and discover that three hours have passed in the blink of an eye, and I have either a creature or some new feature. Both are frustrating: the game was hugely buggy, and writing code can sometimes mean intense effort to come up with.... 2 lines of code (woo hoo!). Both give me a god-like feeling: in the game and in the code, YOU made that happen (more maniacal laughter).

I can't really get excited about gamifying software development because to me it's already fun in many of the same ways games are. Maybe that's just me.

But would you gamify programming? If so, how would you do it?

Thursday, May 10, 2012

Is "first" a Code Smell?

I was looking at some code the other day and noticed that it had a line like this:

@product.first.price

And I cringed. Not because I knew it was wrong (it wasn't). But because using ".first" feels like a code smell. (I should note that this isn't specific to the particular syntax. Seeing product[0].price would be equally cringe-inducing.)

So is the use of .first (or equivalent) a code smell?

Arguments for:

  • "first" is arbitrary and not even necessarily in a consistent order. That means it's not deterministic, which is frequently a problem. If you really want random, then wouldn't it be better to use a randomizer explicitly?
  • first means you're working with a collection, and it's inefficient to get the whole collection if you only want one.
Arguments against:
  • if you have a collection in which every element is equivalent in some way, then getting information off the first one (or any random one) is just fine. You may later need it as a collection for something else.
  • some languages have quirks where asking for a single element still gets you a collection of one.
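
Both lists, for what it's worth, point in the same direction: say what you mean. Here's a minimal sketch of that in Rails terms (Product is a hypothetical model, and RANDOM() is database-specific):

cheapest = Product.order(:price).first      # deterministic: I want the lowest-priced one
lucky    = Product.order("RANDOM()").first  # explicitly random, not accidentally so

Either line documents the intent that a bare .first hides, and both fetch a single row instead of the whole collection.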

I'm still not completely sure, but my current thinking is that .first is pretty much always a bad idea, except in cases where the language forces it on you. What do y'all think?

Monday, May 7, 2012

"No" and "Not Yet"

I work with entrepreneurs. They - like many of us - want things fast, want things great, and want things cheap. That darn iron triangle keeps getting in the way, though.


You can change the shape of the triangle, but you can't grow it. The secret thing that makes a triangle larger or smaller? Money. Microsoft's triangle is a lot bigger than Joe Startup's triangle. They're both still triangles.

So when you're a startup and you need to build a lot without a lot of money, oh and fast, please, then something's going to give. This is when the word "no" starts to come up.

But I rarely actually say, "no." Because "no" really means "never." It's pretty rare that someone comes to me with something that is technically infeasible, so "never" isn't really the right answer. Rather, I wind up saying, "not yet" or "sure, but you'll have to not do this other thing."

And that's okay. That's not me being obstructionist. That's me looking at my fixed triangle points - usually resources and sometimes time - and saying, "here's how big functionality can be". I'd rather everyone know up front that there are constraints than tell a client, "we'll try" and not finish the entire wish list.

It's not "no". It's "not yet".

Monday, April 30, 2012

What Does a Browser Actually Do?

I do a lot of my development using web technologies. Much of my code winds up running in a browser (or driving something that runs in a browser). Most of the time, the browser is just a convenient delivery mechanism. I write a few lines of code, and the web browser interprets them and does all sorts of great layout and rendering stuff.

And anyone who has watched me try to work with Photoshop knows that it's a really good thing that the browser is handling the rendering instead of me!

But to get back to the point, I can write perfectly fine HTML, CSS, and JavaScript and the browser will handle it just fine most of the time. Sometimes, though, mysterious things happen. Sometimes a page will simply not finish rendering. Sometimes parts of it will be really slow. Sometimes layouts won't do what I meant for them to do. And then I have to start debugging. We can start with the simple things: looking for JavaScript errors, checking load order, etc. Once it gets more complicated, though, we have to start understanding what's going on. What is the browser actually doing with our code?

I found this fascinating - and long - article describing how modern web browsers actually work. An Israeli researcher spent a whole lot of time looking at browser source code and reading published works on browser internals, and wrote up a summary. It's a must-read for any web developers out there.

For web developers, the browser is a tool. A hugely powerful and important tool. And the way to get the absolute best out of that tool is to really understand it. Knowing how your tool works will make debugging easier and great new features possible.

Monday, April 23, 2012

Happy Errors

I'm working on a system right now that interfaces with a third party system. That third party system is an aggregator, which means it interfaces with a number of other third party systems. You can think of it as a giant game of telephone.

Unfortunately, like most games of telephone, things get a little.... messy... along the way. What starts at one end as a useful message often comes out garbled. For example, "This account is disabled" comes across as "Unknown error".  (thanks!)

Now, most of the time, we can figure out what the error means in the end. Usually, we end up calling the originating system and saying, "what do you see?" We've built ourselves an informal table of likely translations.
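
That informal table is nothing fancy. In spirit, it's just a hash - these entries are hypothetical, since I'm still not naming names:

# Our best guesses at what the garbled messages originally meant.
LIKELY_MEANINGS = {
  "Unknown error" => "This account is disabled (probably)",
}

def translate(garbled)
  LIKELY_MEANINGS.fetch(garbled, "No guess yet - time to call the originating system")
end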

There's one error, though, that we can't figure out:

Error 75: "Thank you for using "

That's the most cheerful error message I think I've ever seen. I'm not sure it's even an error! We haven't really figured out what to do with it, but boy did we at least get a laugh out of it.


Wednesday, April 18, 2012

User Interfaces Aren't Just For Pretty

I was reminded this morning that user interfaces aren't just about being pretty or looking good. They're about putting the right information in front of the user at the right time. Good user interfaces are also about helping users understand the meaning of that information in context: How has it changed? Is this a normal value or not?

Bad user interfaces can and do lead to disasters. If you're lucky, your user interface is to a web site and the disaster is that you lose users. If you're unlucky, your user interface is to some life-critical piece of equipment and someone dies. (Don't roll your eyes, it's happened.)

One of the more famous user interface disasters is Air Inter Flight 148, which crashed in 1992. Why? Bad user interface played a part. The pilots thought they were entering a descent angle when they were actually entering a vertical speed; 3.3 degrees down became 3,300 feet per minute down. That's an important distinction when a mountain is in your path. The problem was a bad user interface; the display didn't make clear what the pilots had actually entered, and they didn't figure it out until it was too late.

It's easy to talk about the user interface as the "making it pretty" step. And user interfaces can be pretty. They also need to be functional. Aim beyond "has all the data" and shoot for "conveys information". Your users will thank you for it, even if no one dies.

Monday, April 16, 2012

Don't Push Always

I'm a manager. My job basically consists of getting other people to do things. Preferably they'll do the right things with reasonable timing. There are about a zillion ways to accomplish this, and I'm not going to get into too much detail about many of them here. Instead, let's look at one small detail: pushing.

We push people frequently. They'll say, "well, that'll take three days", and we say, "do you think you could do it in two?" They'll say, "I know we can do it in six months", and we say, "what can we take off the table so we can ship something in three months?" We often phrase it a bit less crudely than that, but the underlying message is there.

Now, there's nothing inherently bad about pushing a team. There's nothing wrong with saying, "what are the tradeoffs to get something faster/better/stronger?" There's nothing wrong with saying, "can we do better?" Burning out your team is almost always a terrible idea, but pushing doesn't have to mean burning out. It can mean trading off additional team members or money or features in favor of time.

But.

Sometimes pushing gets to be a habit. Every estimate is met with "how can we do that faster?" Every problem becomes something to throw people at. Every feature is something that gets cut to the bone in favor of shipping. It's pushing the team just for the sake of pushing. Even if you're not burning people out, they're still going to resent it. No one likes to be pushed all the time.

Don't push unless there really is a need to do so.

Time isn't always the most important thing. Sometimes there's plenty of time. Sometimes the feature is more important than time. Or keeping the team together is more important than time. Or the security implications or deployability is more important than time. These are all legitimate decisions.

Save the pushing for when you really need to push. Don't push just because you can.

Friday, April 13, 2012

"Sluggish"

There are some problem reports that are guaranteed to strike fear into the hearts of developers, testers, and support engineers alike. The top one of these:

"It's sluggish."


(Don't panic. Don't run away. Don't toss an innocent bystander in front of the problem.)

This is the kind of problem report that is probably going to end up being a bit angsty. But why is it so scary?

  • It's completely subjective. What the reporter sees as "sluggish" might seem "just fine" to you (or someone else).
  • There's no clear goal. If it's sluggish (read: too slow), then what's "snappy"? What is fast enough? How do we know we've fixed it?
  • Even if you get it fast enough, there's still a pretty high risk that an environmental factor on the user's end will make it continue to appear slow. For example, if their internet connection is slow, then it might still appear sluggish, no matter how fast we make our web application.
So, given that we've decided not to panic or throw innocent teammates at the problem, how can we handle these types of problem reports?
  • Get more information. Understand what "sluggish" means to the user. Asking them is surprisingly effective.
  • Discuss what you did rather than what the user will see. Since we don't get to say it's fixed - the customer does - we should say that it's better or that we've seen improvements.
  • Establish open lines of communication. This isn't always possible, of course, but if it is, then it both helps the user engage with you and makes the whole interaction friendlier.
Certain problem reports are scary. But they're handle-able. So don't run away, handle it.

Thursday, April 12, 2012

Decision Safety Net

Yesterday I was giving a talk for the STP online summit for Test Management. My talk was on getting your testing MBA and it was basically about how testers can and should consider the needs of the business, in order to work more effectively with business groups.

One of the questions I got was about what decisions it is appropriate for a test manager to make. The question actually applies to all engineering managers - test managers, development managers, support managers, etc.

First, a bit of background:

During my talk, I noted that there is some prominent thinking in the test field that testers and test managers should not make decisions. They are information providers and advisors, but they are not equipped with sufficient information to make decisions. The classic example is that the test manager shouldn't be able to block a release, because it's possible that - due to some contractual obligation with a client or potential customer - releasing early with a huge bug is better than releasing later without it. In that case, closing the deal by shipping (on, say, the last day of a quarter) might be more important than fixing the issue that would cause the test manager to say, "don't ship!"

My response to this is that there is a kernel of validity here, but the overall message of "don't make a decision!" has gone too far. It's perfectly okay for test managers or development managers to make some decisions. There truly are times when the test manager or the development manager should be making a decision, and where not making the decision is simply a cop-out.

And now to my actual point:
There is a decision safety net. The business will not allow you to make a decision you shouldn't be making - at least, not more than once.

That's the rub. NO ONE has perfect information in any business. The development manager doesn't know everything. The product manager doesn't know everything. The CEO doesn't know everything. Everyone knows some of the information that influences a decision, and no one knows all of it. That's the point of good management - to provide a safety net that maximizes the available information for someone to make a decision. Product managers do not have a monopoly on maximizing information; technical managers can do it, too!

So don't be scared of making a decision. And don't be scared of asking for opinions or offering opinions. Most business decisions are made by one person after being influenced by a lot of people. That's the job of a manager. Don't fear it. Embrace it.

Monday, April 9, 2012

Tips for Working With Others

Many of us software engineers work somewhat in isolation. Yes, most of us are on teams and yes, most of us are working in the same code base as other engineers. However, code bases contain thousands of lines of code, with dozens or hundreds of modules (or classes or files or whatever). The truth is that most of us kind of work off on our own most of the time.

For example, I have one project where I spend most of my time working in the reporting module. There are about 8 other engineers on this project, but I'm almost always the only one working on reporting. In the sense of the overall code base, I work with the rest of the team - it all gets combined together and deployed together. At the module level, though, I don't work with others very often.

Contrast that with another project, where I'm writing some Selenium tests for a client. There's another engineer here writing Selenium tests as well. This is a big project, so he and I have divided it up, but we're still both working within the same modules. I'm working on test A while he's working on the Page object, and I'm working on the Page object while he's working on test B. At the module level, we're working quite closely together.

This kind of close collaboration is a lot of fun; when it goes well, you get a whole lot done quickly, and you can build on each other's work. It's also really easy for this kind of project to go poorly - to end with shouting and tears and a whole lot of deleting of code. So, if you ever find yourself in a situation where you're doing something like the Selenium project I'm working on, there are some things you can do to make it go a lot better:

  • Update your code often. git pull or svn up frequently - every time you finish a small task. This will make sure that y'all don't get too far apart, and will help prevent merge problems.
  • TALK. Talk frequently about what y'all are going to do. Mention what you're doing. Describe what you're going to do next. You'll spend less time duplicating each other's effort and more time using each other's helpers.
  • Keep your checkins tiny. Don't check in dozens of files and hundreds of lines of code. That's just asking for a merge problem. Check in small changes, and remember that the "wip" convention in commits is a valid choice.

It can be a great joy to work together with someone - even on the same code - as long as you're careful about how you do it.

Thursday, April 5, 2012

In No Way Unusual

I'm sitting in a co-working facility and listening to a couple of engineers behind me. They seem to be working on a Rails application that involves role-based security. I'm not sure exactly what the application does, but they keep saying things like: "Well, admins can see every user's profile. And members can see the profiles of other members of the same group. And guests can see only profiles marked as public." All in all, it seems pretty straightforward. I don't know Rails personally, but my understanding is that it's just fine for stuff like that.

But about 5 minutes ago, the conversation took a bit of a scary turn.

Developer A: "Well, I had to do this weird hacky thing to make this role see this object."
Developer B: "Hmm... but we're using devise" (a common Rails authentication solution)
Developer A: "And then once I did that, I had to do this other weird thing, and it doesn't really want me to do it, but I managed to coerce devise into letting me set this attribute here. It's all kind of brittle, but it seems to work."
Developer B: "Hmm..., but we're using devise"

I'm with developer B on this one.

The minute you say, "had to do this hacky thing" when you're not doing anything unusual, you've got a massive code smell. As long as what you're doing is within the opinions of the tools and framework you're using, then you should be able to do it without resorting to hacks, coercion and bypassing the very framework you're trying to take advantage of. Role-based security, by the way, is well within the bounds of what the devise framework can handle (promise).
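
For contrast, here's roughly what the conventional version of those overheard rules looks like. This is a sketch only - admin?, member?, public?, shares_group_with? and Profile#owner are hypothetical stand-ins for whatever the real app defines - but notice that nothing in it fights the framework. Devise supplies current_user (nil when no one is signed in), and the role logic is plain Ruby on top:

class ProfilesController < ApplicationController
  def show
    @profile = Profile.find(params[:id])
    redirect_to root_path and return unless can_view?(@profile)
  end

  private

  # Plain Ruby layered on top of devise, not wedged into it.
  # The helper methods here are hypothetical stand-ins.
  def can_view?(profile)
    return true  if profile.public?      # guests see public profiles only
    return false if current_user.nil?    # devise: no one signed in
    return true  if current_user.admin?  # admins see every profile
    # members see profiles of members in the same group
    current_user.member? && current_user.shares_group_with?(profile.owner)
  end
end

No weird hacky things, no coercing devise into anything it doesn't want to do.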

And that's the overall lesson.

Most of the things we do are in no way unusual. There should not be a need to hack in most situations. Make sure that your problem is really truly unique before you go beyond the normal way of doing things for your framework.

Monday, April 2, 2012

What Makes a Good Designer

I have the privilege of working with a lot of different designers.  Some I enjoy working with, some I don't. I like some of the designs, some I don't. (Gosh, sounds like most groups of people, huh? Some are better or easier to work with than others!)

Now, here's my little secret - I'm no designer. That's for sure. If I'm asked to design something, you're almost certainly going to get a look and feel that's.... well, we'll be generous and say "safe." Given a mood board or other work of a designer, I can iterate and come up with something cool, but the fundamentals are definitely designer territory, not Catherine territory.

The difficulty comes when I have to hire a designer. Hiring an engineer is easy: I know what to look for; I know what makes someone effective or ineffective; and I can look at a small sample of someone's work and extrapolate. Hiring a designer is a somewhat more difficult proposition. Over the years, though, I've developed a pretty good idea of what makes a good designer.

I think of design as being broken down into two parts, aesthetics and reasoning. Aesthetic is the designer's "style." Reasoning is how the designer intellectually approaches the problem.

This is one design aesthetic (thanks to http://www.jessewillmon.com/):

This is a very different design aesthetic (thanks to http://www.mariusroosendaal.com/):


These are both portfolio pages - accomplishing the same purpose - but with very different aesthetics. The first is casual and friendly, while the second is much more corporate.

In my experience, most designers work with one or two aesthetics comfortably. When hiring a designer, choose one whose aesthetic matches your own; those designs will be much better than ones from a designer forced to work outside their aesthetic. If you're corporate, choose a designer with a clean corporate aesthetic. If you're going for an urban vibe (let's say you're starting the next great urban shoe craze), then choose a designer with an urban aesthetic.

But really, most designers have an aesthetic. They have a style.

What makes a good designer?

Reasoning.

A good designer can analyze problems and analyze designs, and can explain the reasons behind the design. For most poor designers, the reasons behind the designs come down to, "Well, it just looks right. Trust me." A good designer will be able to explain the design decisions clearly, pointing out things like:
  • eye tracking, including how the user will get information and in what order
  • mouse movement and target size
  • use of white space for clarity or emphasis
  • inclusion of corporate branding elements
  • conformance to, or knowing deviation from, similar problem spaces, either within the company or among competitors
  • the overall priority of the design (for example, guidance or creating excitement or accomplishing known workflows quickly)
  • how the design scales up or down across the various target interfaces
So when I'm hiring a designer, I'm looking for only two things: an aesthetic that matches that of my client; and the ability to reason. Simple, right?