Tuesday, January 8, 2013

Fun Things I (Re-) Learned About IE CSS

We're building a customer's web application, which means I've been getting my hands dirty in CSS again. Our front-end stack is pretty standard: Sass, 960 Grid System, and Compass. I frequently use Bootstrap, but here the design just doesn't work with it.

Now, I have two confessions to make that are relevant:

  1. I'm a mac user.
  2. The designer we're working with on this one has an aesthetic that I would call "application-oriented corporate clean".  (This is a compliment, I swear!)
The design language calls for gradients in a lot of places for texture, uses rounded corners and button edges, and favors text on whitespace over controls and cluttered layouts. 

So I went along styling things, and I think it looks pretty good. Here's what it looked like in Safari and Chrome:

Not too bad! There's still some polish needed, but it's looking pretty nice.

Then I launched Internet Explorer (this is IE9), and here's what I saw.


Oooops. Forget polish. That's just ugly. A few hours later, I had it looking better, and I learned (or re-learned) some fun things about IE9 CSS along the way. The rest of this post shares the changes I made to get Internet Explorer up to snuff. Note that in the snippets I've shown just the Compass includes where I was using them.

No Image Borders
The logo had that purple image border. No, I don't know why it was there. Let's get rid of it. Just add this to the img tag's CSS:
border: none;

Create Non-Gradient Backgrounds
I warned you - lots of gradients! IE9 doesn't support CSS gradients, so it needs a fallback. See that header? It looked like this:
@include background-image(linear-gradient(#7184a8, #27486b));

I changed it to look like this:
background: #536d96;
@include background-image(linear-gradient(#7184a8, #27486b));

Notice that the background color is neither the start nor the end stop color from the gradient - it's in between the two. Also notice that I specified the background first, so browsers that do understand gradients will override it.

IE9 Doesn't Like Text in Button Tags
This showed up with no text at all on the button. The text was in the DOM but it wouldn't display.

This worked like a charm:

jQuery Required for Placeholders
We use placeholder text in a few forms (not shown in the screenshots) for questions that might be confusing. Too bad placeholder is an HTML5 attribute that isn't supported in IE9 and earlier. You have two choices: put the text elsewhere on the page, or use JavaScript to set values. I added the jQuery placeholder plugin and a little bit of CSS styling, and it magically worked. (Woo hoo!)

And we're done! Sure, IE doesn't look quite as polished without the gradients, but it's at least not hugely bad. Just four fixes take it from "yikes" to "could use polish". It's amazing how big a difference a few things can really make.

Friday, January 4, 2013

On Resolutions

New Year's Resolutions are hardly a new phenomenon. They're ubiquitous in our daily lives. Everywhere you turn magazines and TV shows and friends and family are making resolutions and encouraging you to make resolutions: lose weight, exercise more, give up smoking, tidy the house, get organized, be a better friend, laugh more, get that raise, find a new and better job, etc.

It gets tempting to make work resolutions, too, either individually or as a team. THIS is the year we finally get serious about technical debt. This year we resolve to not let product management impose arbitrary deadlines. This year we're going to pair program at least 25% of the time. Work resolutions are great. They're basically goals, and goals are good things in general. They give you a direction and something to strive for as a team.

January is a terrible time for work resolutions.

Think about all the things going on during January at work:

  • Those year-end projects that didn't quite get finished are now really urgent so they can still count toward last year. (No kidding - I have more than one client that counts the end of the quarter as January 15th for bonus and goals purposes.)
  • The business is often making annual and quarterly goals
  • It's roadmap update time: the 6, 12, and 18 month roadmaps all need a quick refresh
  • People are still rolling in from vacations at the end of the prior year
  • Finance is doing end of year close outs and bugging everybody for final paperwork and expense reports and whatnot.
  • Many teams have new members or internal reshuffles around this time.
All of that activity means that in January you're spending less time doing your day to day job and more time dealing with the activity. This isn't a bad thing, but it's the state of the work world. Adding work resolutions on top of that, though, is a recipe for failure. Sure, we'd love to pair program more, but we've got 4 hours of meetings today, and can't get two people at a keyboard for longer than 30 minutes.  Yes, we do need to tackle technical debt, but gosh, we just sat through a presentation of the quarterly business goals and truly I don't know what that means for development.

All that conspires to make sure that we're not going to succeed at our new resolutions. There's a lot of change going on. There are more distractions than normal. It's just not a good environment to succeed.

So don't try. Don't make work resolutions in January.

Make work resolutions in April.

By April, we've all achieved a routine.  Things are back to normal, and we have a good idea of how busy we're going to be (so just how ambitious was the business back in January?). April is a good time to make resolutions that you'll be able to keep. You'll know how much you'll be able to work together, and how much technical debt you really can tackle among the features you're being asked for. In other words, you'll be in a position to make resolutions you can keep.

And making resolutions is good, but keeping resolutions is great.

Monday, November 12, 2012

A Few CSS Pet Peeves

I just started a greenfield project, and remembered again that one of the things I love about greenfield projects is that you get a chance to Do It Right(tm). I'm also working with a team, which means we have a few different people helping decide what Do It Right means, exactly. In particular, we've been having discussions through the pull requests about how to handle CSS and markup. In doing so, I found another benefit - it's really making me crystallize some of the things that I kind of knew but hadn't articulated.

I wanted to share the CSS tips and bugaboos I've articulated for myself (and my team) over the past few weeks:

  • Use a dynamic stylesheet language. I don't care which one - sass, less or something else - but use a stylesheet language. CSS is much easier when you can use variables, mixins, and other things we programmers take for granted elsewhere. Plus, who really likes doing search and replace on hex codes?
  • No bare text. Don't use text that's not in a container of some sort. It will be affected by any styling changes that affect the body text, which tends to lead to unintended consequences. In addition, bare text is a lot harder to place appropriately on the page. Put it in a span or a div or a p or something - anything!
  • No br tags. I'm not a fan of them. That's mixing layout and content, which I think is a bad idea, even on a static page. If you really want a line break, it's almost certainly because you're changing to a separate type of content - another paragraph, or another item in a list. That means you should make it a paragraph or a list. Oh, and if you really must use the br tag, at least close it!
  • html_safe is a minor code smell. The project is a Rails project, but other languages have equivalents. Using "FooBar".html_safe is an indication that you're doing something odd. This can open up a security hole if any of the contents are user input. And if you're not using user contents, then you're probably putting html tags inside a single element - again mixing content and layout, which is not a good idea.
  • Lists are your friend. Not all lists have bullets or numbers. They're really useful for any layout that is sequential, horizontally or vertically. That means navigation menus, sequential results, etc.
  • Deep is bad. Particularly when you're using a CSS framework (e.g., Bootstrap) and/or a lot of different backgrounds, it gets really easy to nest a lot of divs to achieve a layout. As the application grows it will start to show in the render time of the page. Use as few divs as possible for simple positioning, and see if there's a way to make them shallower where possible.
A lot of these thoughts tie back to a separation of concerns. Content, layout/markup and styles are three different things and should be handled in three different places. You create a layout in your views, inject content from the controller (using an MVC framework or similar), and style it with your stylesheet language (ultimately CSS). Don't mix them. 

What are your CSS-related bugaboos?

Wednesday, October 31, 2012

How I Do Pull Requests

I've been working with a couple new developers lately and we've been talking about how we actually write, review and check in code. Early on, we all agreed that we would use a shared repository and pull requests. That way we get good peer code reviews, and we can look at each other's work in progress code.

Great!

One of the developers approached me after that conversation and said, "umm, so how exactly do I do pull requests?" On Github, here's how I do it:


# CD into the project
$ cd Documents/project/project_repo/

# Check what branch I'm on (almost always should be master when I'm starting a new feature)
$ git branch
* master

# Pull to make sure I have the latest
$ git pull
Already up-to-date.

# Make a new branch off master. This will be my feature branch and can be named whatever I want. I prefix all my feature branches with cmp_ so people know who did most of the coding on it.
$ git branch cmp_NEWFEATURE

# Switch to my new branch
$ git checkout cmp_NEWFEATURE
Switched to branch 'cmp_NEWFEATURE'

#####
# DEVELOP.
# Commit my changes because they're awesome
# Develop more stuff. Make changes, etc.
# Commit my changes, because they're awesome.
# Pull from master and merge to make sure my stuff still works.
# NOW I'm ready to make a pull request.
#####

# push my branch to origin
$ git push origin cmp_NEWFEATURE

# Create the pull request
# I do this on github. See https://help.github.com/articles/using-pull-requests
# Set the base branch to master and the head branch to cmp_NEWFEATURE

# Twiddle my thumbs waiting for the pull request to be approved.
# When it's approved, huzzah!

# If the reviewer said, "go ahead and merge", then the merge is on me to do.

# Check out the master branch. This is what I'll be merging into
$ git checkout master

# Merge from my pull request branch into master. I use --squash so the whole merge is one commit, no matter how many commits it took on the branch to get there.
$ git merge --squash cmp_NEWFEATURE

# A squash merge stages the changes but doesn't commit them, so fix any conflicts and then commit.
$ git commit

# Push my newly-merged master up to origin. The pull request is done!
$ git push

# Go into github and close the pull request. I do this in a browser.

# Delete the feature branch from origin. Tidiness is awesome.
$ git push origin :cmp_NEWFEATURE

# Delete my local copy of the feature branch.
$ git branch -D cmp_NEWFEATURE


More information can be found here: https://help.github.com/articles/using-pull-requests.

Happy pulling!

Friday, October 26, 2012

Use Your Own API

One of the early milestones for an engineering team is the development of the first API. Someone wants to use our stuff! Maybe it's another team in the company, or another company altogether. Either way, someone will be consuming your API.

That's truly awesome news. It's always fun to see the great things teams can accomplish with your work.

So how do we create an API? Most of this is fairly straightforward:

  • Figure out what people want to do
  • Create an appropriate security model
  • Settle on an API model (RESTful JSON API? XML-RPC? Choose your technologies.)
  • Define the granularity and the API itself.
  • Document it: training materials, PyDoc/rdoc/javadoc, implementation guide, etc.
The trouble is that most people stop there. There's actually one more step to building a good API:

Use your own API.

All the theory in the world doesn't replace building an application with an API. Things that seem like great ideas on paper turn out to be very painful in the real world - like a too-granular API resulting in hundreds of calls, or hidden workflows enforced by convention only. Things that seemed like they were going to be hard are easy. Data elements show up in unexpected places. This is all stuff you figure out not through examination, but through trying it.
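The too-granular problem is easy to see once you count calls. Here's a toy sketch (all names and endpoints are hypothetical, purely for illustration, not any particular API): fetching totals for 100 orders through a one-record-at-a-time endpoint costs 101 calls, while a coarser bulk endpoint costs two.

```python
# Toy model of API granularity. A counter stands in for real network calls.
call_count = 0

ORDERS = {oid: {"total": oid * 10} for oid in range(1, 101)}

def list_order_ids():
    """One call that returns only IDs..."""
    global call_count
    call_count += 1
    return list(ORDERS)

def get_order(order_id):
    """...so every record's details cost another call."""
    global call_count
    call_count += 1
    return ORDERS[order_id]

def get_orders_bulk(order_ids):
    """A coarser endpoint: one call, many records."""
    global call_count
    call_count += 1
    return [ORDERS[oid] for oid in order_ids]

# Granular client: 1 listing call + 100 detail calls = 101 calls.
ids = list_order_ids()
totals = [get_order(i)["total"] for i in ids]
granular_calls = call_count

# Bulk client: 2 calls total (list, then one bulk fetch).
call_count = 0
ids = list_order_ids()
totals = [o["total"] for o in get_orders_bulk(ids)]
bulk_calls = call_count

print(granular_calls, bulk_calls)
```

On paper both designs expose the same data; only by writing the client do you feel the difference.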

So try it. Create an API, then create a simple app that uses your API. It's amazing what you'll learn.

Friday, October 19, 2012

Mind Your Words

When you're surrounded by other engineers all day, it's easy to forget that some words have other meanings. Take "CRUD", for example. In engineering, that's shorthand for Create, Retrieve, Update, Delete - your standard data operations. In real life, it's a derogatory term.

At several of my client sites, we use CRUD as engineers do. "Yeah, I've just implemented the CRUD operations; it's not pretty yet." "That's just a CRUD workflow? Yup!" "CRUD looks good."

When you get in front of non-technical folks, though, that exact conversation takes on a whole new tone. All of a sudden, you're a coarse, derogatory engineer. Oops!

Long story short, mind your words. Context changes meaning!

Tuesday, October 16, 2012

Management by Monopoly Money

I work with a lot of different teams, and almost all of them have some concept of a backlog. Basically, it's a list in some approximation of priority that the team works from. How they pull off the backlog and how stuff moves around the backlog varies a bit, but that's the basic idea.

One of the frustrations with this kind of backlog management is that certain things get short shrift. The person who owns the backlog has - through no fault of their own - certain biases. Often, whoever shouts loudest or most recently gets priority, or things that relate to the owner's background do (e.g., an ex-IT guy will prioritize technical debt and deployment/monitoring items, while an ex-salesperson will prioritize customer and sales requests). This leads to fun conversations and ideas like, "well, 10% of our stories should be paying down technical debt," or "every fifth story should be a customer request." In theory this works well. In practice, things aren't nearly that smooth: sometimes things come up and we skip our customer story in favor of a strategic priority, or we don't hit our technical debt goal.

There's another way.


Yup, Monopoly money!

It sounds silly, but bear with me. When we make statements like, "10% of our time should be on technical debt", we're talking about a budget. So let's use budgeting tools - money.

Here's how it works:

  1. The product owner and management work to set a budget. It's usually as simple as agreeing that sales gets 25%, product management gets 50%, and engineering gets 25%.
  2. Set a total budget amount that reflects the total amount of work you expect the team to accomplish.
  3. The representative from each group gets Monopoly money according to their proportion of the budget. Usually it's a dollar a point (or $10 a point, if you're feeling rich).
  4. As you create the backlog, each group spends their money. When they're out, they're out.
Try to keep your budget over a fairly short period of time - a few sprints, or no more than a quarter. That way there's a reset opportunity before anything gets too skewed.
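The arithmetic behind the steps above is simple enough to sketch (the numbers here are just the example split from step 1, at a dollar a point):

```python
# Budget sketch: $1 per point, using the example 25/50/25 split.
total_points = 100  # expected team capacity for the period
split = {"sales": 0.25, "product": 0.50, "engineering": 0.25}

# Each group's Monopoly-money allotment.
wallets = {group: int(total_points * share) for group, share in split.items()}

def buy(group, story_points):
    """Spend a group's budget on a backlog item. When they're out, they're out."""
    if wallets[group] < story_points:
        raise ValueError(f"{group} is out of budget")
    wallets[group] -= story_points

buy("engineering", 8)  # pay down some technical debt
buy("sales", 5)        # a customer request
print(wallets)
```

The "when they're out, they're out" rule is the whole point: no stretching, no remembering, just a balance.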

This gets fun very quickly. By providing money, you're creating an economy. The IT guy will trade away some of his money in exchange for getting it back from marketing next quarter, for example. Or two groups will band together to outspend everyone and push a feature onto the backlog. It also makes the process very concrete. There's no stretching or remembering - there's just how much money each group has left.

Making abstract work concrete and sticking to a budget.... now that's worth a shot. Give it a try.

Thursday, October 11, 2012

Installing Umbraco Packages

Just a quick tech note today, for anyone using Umbraco.

I was attempting to install a package from the Umbraco package repository and kept getting an error: ERR_NAME_NOT_RESOLVED. It looked like a DNS error, but I couldn't find any DNS problems!

I finally found the right answer: it's a permissions problem with the world's most misleading error message. Give your IIS user read, write, and execute permissions on c:\Windows\Temp, and the problem will go away.


Tuesday, October 9, 2012

The Overloaded Tester

As most of you know, I came up through software testing, before branching out into development and management. I still like to keep in touch with my roots, and spend a lot of time working with my clients on the test function in software.

I've noticed a trend recently toward overloading the tester. It's no longer fashionable to simply have testers. That's "old school," "retro" thinking that produces script monkeys and poor manual testers. The bright shiny way now is to have test automation gurus - basically developers who specialize in developing tests and test frameworks - and a customer or customer proxy who helps define what to test.

All in all, I don't have a problem with the thinking behind the trend. Yes, script monkeys give testing a bad name. Yes, test automation can be a hugely valuable tool. Yes, listening to the customer is great. Like many trends, it went too far, but it's starting to swing back. And that's where things are getting interesting.

You see, we have all these manual testers, and some of them are very good. Some of them know the business and the uses like the backs of their hands, and we're starting to recognize how valuable that knowledge is. Others know the attributes of testable, maintainable, scalable software and can be great at testing in the code review or architecture phase. What we used to call a manual tester can now be a great future-proofer.

We're really starting to overload the tester. Now instead of hiring a senior manual tester, we see job reqs for a senior tester that include phrases like:

  • "owns the quality process"
  • "evangelist for good software development and delivery practices"
  • "works across the organization to instill a culture of quality"
Your most senior testers are becoming - at least in some places - process gurus and influencers more than testers. They may never actually test a piece of software.

This actually isn't inherently a good or a bad thing. Some companies have experimented with similar ideas on the development side for years, often under the title "Office of the CTO". It's a kind of roving expert consultant who works exclusively within one company but works across the organization. Seeing this start to come up in testing.... well, it's a career path for testers who don't want to get into managing people.

I think overall I'm for this trend of testers taking on a broader role in the organization. Just be careful you know what you're getting into!

Wednesday, October 3, 2012

Team Neutral

I have a client who has outgrown their scrum team. Daily standups are taking longer and longer, and the team doesn't really fit into their room very well at all. Sprint planning is getting more and more... fun, too. And by fun I mean long.

So we're splitting the teams. Going forward, we will have two functionally equivalent teams. First order of business: a team name!

Sounds simple, right? Nope, not so much. Let's back up for a second.

This isn't the first client I've had who's split teams. The last client that split wound up with three teams needing names. And then the fun started. One team suggested the Greek alphabet as a theme. They would, of course, be Team Alpha. The second team would be Team Beta. The third team would be Team Omega. Team Beta was particularly unamused. Another team suggested a theme of '80s TV shows. They would be Miami Vice (cool cars!). The second team would be Growing Pains. The third team would be the Golden Girls (this team, incidentally, had no women). Neither team was amused. The third team suggested a simple numbering system. They would be team 1. Someone else would be team 2. And the last team would be team 3. The squabbling continued.

It took about three weeks to land on colors as a relatively neutral theme. Granted, this was an extreme example of a group that collectively contained some pretty fragile egos and a highly developed sense of hierarchy.

The underlying point is interesting, though. It's useful to give teams names. After all, we have to refer to them somehow. It's also fun to let teams pick their own names.

But.

Make sure those names are neutral and don't imply positioning or relative importance in any way. Neutral themes like colors, places, or video games are a good idea. And make sure every team gets to pick its own name.

One day, I'll get to be on Team Neutral. Until then, you can find me on Team has-a-neutral-name.

Tuesday, September 25, 2012

Clean Project Checklist

Some projects are harder than others. On a few of my projects, everything just feels harder than it should be. I can't find the classes I need to change; the tests on a clean checkout fail; configuration is manual and fiddly; getting set up to try your code is slow and unwieldy. They're just painful all around. Other projects are smooth and easy. There's a solid README for setting it up - and the README is short. Tests run well on clean checkout. Classes and methods are in the first place you look. The only hard part of these projects is any difficulty in actually solving the technical problem.

The main difference, I've discovered, is in the tooling around the projects. Some of the smooth projects are huge and involve different services and technologies. Some of the difficult projects are actually quite small and simple. When the tooling is clean and works, then the project is much more likely to be smooth. Over time I've come to look for a set of things that tend to indicate a clean project.

This is my clean project checklist:

  • Deployment and environment setup is automated using an actual automation tool. I don't care if it's Capistrano, Chef, Puppet, RightScale scripts or whatever. It just has to be something beyond a human and/or an SSH script. This includes creating a new box, whether it's a cloud box or a local machine in a data center somewhere. If it's scripted then I don't have to fiddle a lot manually, and it also tends to mean there's good isolation of configuration.
  • Uses Git. This makes the next item possible. It's also an indicator that the developers are using a toolchain that I happen to enjoy using and to understand.
  • Developers are branch-happy. Feature branches, "I might mess this up" branches, "just in case" branches - developers who are branch happy tend to think smaller. With that many branches, most don't survive for long! Smaller changes means smaller merging, and that makes me happy. It fits nicely with the way I work. I should note that I don't care how merging happens, either by merge or by pull request.
  • Has a CI system that runs all the automated tests. It might not run them all in one batch, and it might not run them all before it spits out a build, but it runs them. The key here is that automated tests run regularly on a system that is created cleanly regularly. This cuts down on tests that fail because they weren't updated, or issues that relate to data not getting properly put in the database (or other data stores).
  • Cares about the red bar. The build shouldn't be broken for long, and failing tests should be diagnosed quickly. In policing, this is called the broken windows theory: when small bad things are tolerated, bigger bad things follow. Don't let your project's windows break.

I'm sure there's more. What am I missing?

Tuesday, September 18, 2012

Automate What?

I was in a meeting the other day, talking about an upcoming project. I made some offhand comment about needing to make sure we allowed for a continuous integration and test automation environment when we were sizing our hardware needs. The response was immediate: "but automation never works!" After a bit of discussion, it emerged that this team had never attempted automation at any level under the GUI, and their GUI automation was, well, not good.

Automation at the GUI level is a legitimate thing to do. It's far from the only automation you can do. Consider all the other options:

  • Automated deployment. Manual scripts or with tools - this is an entire discipline in itself. And it doesn't have to deploy to production; automated deployment to test or staging environments is also useful.
  • Unit tests. Yes, this is test automation, too.
  • API tests. Automation of a user interface, just not a graphical user interface.
  • Integration tests. You can test pieces together without testing the whole thing or without using the GUI.

So here's my automation heuristic:
Automate as much as you can as low as possible.

Let's break that down a bit. "Automate as much as you can" means just that. Not everything is automate-able, and not everything should be automated. If you have a GUI, testing that the colors go nicely together is probably not automate-able - so don't do it. Do that manually. If you have a method that takes inputs and produces outputs, you can probably automate the test that confirms it works with different kinds of inputs and outputs. If automating something involves massive architectural changes or fails one in three times randomly, then you're not going to maintain it and it's either not automate-able or simply broken.

(On a somewhat related note, the ability to automate something frequently correlates with how modular and maintainable the architecture is. Hard-coded configuration parameters, nonexistent interfaces, etc. make code less testable and also a lot harder to maintain!)

"As low as possible" means that you should target your automation as close to the code you're trying to affect (deploy or test, for example) as you can. If you're testing that a method does the right thing, use a unit test. Sure, it might be possible to test that same method through a GUI test but the overhead will make it a lot slower and a lot more brittle. The team I was talking with was right about one thing: GUI tests tend to be brittle. If you can go below the GUI - either through an API or using an injected test framework (think xUnit tests) - then you can avoid the brittleness.
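To make "as low as possible" concrete, here's a generic sketch (not from any project above; the function and its rules are made up for illustration): exercise the method directly with xUnit-style tests instead of driving it through a GUI.

```python
import unittest

def normalize_phone(raw):
    """The method under test: normalize a messy US phone number."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]  # drop the country code
    if len(digits) != 10:
        raise ValueError(f"not a 10-digit number: {raw!r}")
    return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"

class NormalizePhoneTest(unittest.TestCase):
    # Unit tests hit the method directly - no browser, no GUI brittleness.
    def test_accepts_messy_input(self):
        self.assertEqual(normalize_phone("1-617-555-0123"), "(617) 555-0123")

    def test_rejects_short_input(self):
        with self.assertRaises(ValueError):
            normalize_phone("555-0123")
```

Run it with `python -m unittest`. The same behavior could be checked by typing into a form and reading the rendered page, but that test would be slower, flakier, and no more informative.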

Yay automation! But even more yay, appropriate automation!

Friday, September 14, 2012

Don't Forget the Joy

Work is, well, work. It's also an open secret that we're better workers when we have fun at work. We're more productive, we help the bottom line, and we're more likely to stay in a job longer if we enjoy our jobs. So let's roll up our sleeves and get to work, but don't forget the joy!

Some joy comes from the work itself. There is a thrill in solving a hard problem. Learning a new technique provides a quiet satisfaction. Finally understanding some concept - really understanding it! - puts a spring in my step. Looking at a demo or a customer report and knowing I built some feature they really love sends me home bragging. I know I'm not alone in quite simply loving what I do.

Other joy comes from the culture around the work. This is where your colleagues and your work environment come in. Joking around the coffee machine is just plain fun. Having your desk set up the way you like it - with your Star Wars figurines and footrest - makes the physical office more comfortable. We're people, after all, not automatons.

As an employer, there are things I can do to help make work more fun. Most of them aren't even very expensive! Consider these:

  • A training course - online or at a local university's extension school - for an employee who asks usually costs under $1000.
  • Letting someone go speak at a conference costs about 20 hours for prep and the cost of plane/hotel. (The conference itself is usually free for speakers.)
  • Letting the team choose a theme for sprint names and name their own sprints. Free and often hilarious.
  • Bringing in bagels or cookies or cake once or twice a month - either on a schedule or ad hoc, depending on your team's sense of routine - is surprisingly fun. Keeping snacks on hand accomplishes the same thing.
  • Don't do management by walking around. Be available and show up but don't hover. You don't want to be your employees' friend, but you don't want to be the heavy, either. Free, too.
Joy has a place at work. Encourage it.

Tuesday, September 11, 2012

Status Message Messaging

For those of us who use hosted services, status pages are an essential communication point. They're a way to indicate service health ("it's not us, it's you") and, when there are problems, they provide a forum for disseminating updates quickly and loudly. The "everything is okay" update is pretty easy. The "stuff's broken" update is a much more delicate thing to write. It has to reflect your relationship with your users, but also reflect the gravity of the situation.

Here's part of a status update Beanstalk published this morning:

"Sorry for the continued problems and ruining your morning." 

Oh man. That's an update you don't want to have to publish. To provide some context, we'll just say that Beanstalk has had a bad few days. Beanstalk provides git and subversion hosting; that makes them a high-volume service. Pushing, pulling, committing, checking in/out, etc. happen very frequently and, well, software teams on deadline are not known for being nice when their tools get in the way. The last few days have been hard on Beanstalk: they got hit by the GoDaddy attack, then had server load trouble traced to an internal problem, and finally are again having difficulties with their file servers that host repos. And you can see it in that status update. "[R]uining your morning" is the phrasing of someone who is seriously exasperated. That update does some things well: it shows they understand the problem is severe, and it reflects the increasing frustration users are likely experiencing. It's escalating, just like users' tempers are probably escalating. However, it goes too far for my taste. It reeks of frustration on the part of whoever wrote the update, and that's clearly not a good head space for actually solving the problem. It also implies a sense of fatalism. That update was at 9:23am - my morning might still be salvaged, if they can get the system back up relatively quickly. Don't give up, guys!

There's an art to writing the status update when things are going poorly. When I'm working with a team fixing a major problem, I'll dedicate someone to doing just that. They sit in the middle of the war room (or chat session or conference call) and do nothing but manage status communications. Let the people fixing the problem focus on fixing the problem. Hand off status to someone who will maintain a level head and write a good status update, considering:

  • Frequency of updates. Post at least every hour for most issues, and whenever something changes. Silence does not inspire confidence.
  • Location of updates. As many channels as possible are good. Use twitter, the status page, email or phone calls to really important customers, and any other tools at your disposal.
  • Tone of updates. This needs to match your general tone of communication (see the Twitter fail whale - still cute and fun, even in error!) but also show that you know your customers are getting frustrated.
  • Inspiring confidence. Providing enough information to look like you are getting a grip on the problem and will get it fixed is important. Providing a proper postmortem also helps inspire confidence.



Friday, August 31, 2012

Consider the Message

A couple of days ago I was at the butcher picking up some meat for supper (burgers!). My card got declined. Here's my thinking: "Oh wow, embarrassing! I come here all the time! I'm sooo not that person. I'm fiscally responsible, darn it! Besides, I'm nowhere near the limit. How annoying!" It's about a 5 second panic, but let's be honest, it's not a good feeling. It's embarrassing and annoying for both of us - the cashier and me.

So I paid with cash, and as I was going out, the cashier handed me the receipt from when it got declined. Here it is:




Seriously?! Seriously!?!? That 5 second panic and the annoyance for me and for the cashier - and check out that decline reason:

"Could not make Ssl c"

I'm going to assume that means "Could not make an SSL connection to ". Bonus points for the truncated message and for the interesting capitalization.

That's why error messages matter. The error shouldn't have been "Declined". It should have been "Error". That would have saved us all the embarrassment, at least! (Yeah, it still would have been annoying.)
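To make the point concrete, here's a minimal sketch of the distinction (in Python, with entirely hypothetical exception and function names - a real payment terminal's code surely looks nothing like this). The idea is to separate "the charge was rejected" from "we never managed to ask":

```python
# Hypothetical sketch: map internal failures to messages that reflect
# whose problem it actually is.
class CardDeclined(Exception):
    """The issuer rejected the charge - the customer's problem."""

class ConnectionFailed(Exception):
    """We couldn't reach the processor - our problem, not the customer's."""

def charge_message(attempt_charge):
    """Run a charge attempt and return the message to show the cashier."""
    try:
        attempt_charge()
        return "Approved"
    except CardDeclined:
        # Only say "Declined" when the issuer actually said no.
        return "Declined"
    except ConnectionFailed:
        # A transport failure is an error on our side, not a declined card.
        return "Error: could not reach payment processor. Please retry."

def ssl_failure():
    # Simulates the receipt's actual failure mode.
    raise ConnectionFailed("Could not make SSL connection")

print(charge_message(ssl_failure))
```

With a mapping like this, the receipt from my butcher would have read "Error", and nobody would have wondered whether my card was any good.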

So please, consider your error messages. They matter.

Wednesday, August 29, 2012

Autonomous and Dependent Teams

Almost all of my clients are currently doing some variation on agile methodologies. Most of them are doing Scrum or some version of it. And yet, they are very different teams. In particular, lately I've noticed that there are two kinds of teams: those that encourage autonomy and those that encourage dependence.

To be clear, I'm speaking entirely within the development team itself. Within a development team there are invariably leaders, usually the ones who have been around for a while and show the most ability to interact on a technical level with the rest of the team. The character of those leaders and how much autonomy non-leader team members have says a lot about the team.

Teams that encourage autonomy take the attitude that team members should help themselves. If a member of the team needs something - a new git branch, a test class, a method on an object in another area of the code - then that person is responsible for getting it.  How the team member accomplishes that is, in descending order of preference: (1) doing it; (2) asking for help to do it with someone (pairing and learning); (3) asking someone else to do it; (4) throwing up a request to the team at large.

Teams that encourage dependence have a very different attitude. These are teams where each person has a specialty and anything outside that specialty should be done by a leader. If a team member needs something - a new git branch, a test class, a method in another layer of the code - then that person should ask a team leader, who will provide it. Sometimes the leader passes the request off to another team member, and sometimes the leader simply does it.

Let's look a little deeper at what happens with these teams.

Autonomous Teams

  • Emphasize a leaderless culture. These are the teams that will say, "we're all equals" or "there's no leader." There are people who know more about a given area or technology, but the team considers them question answerers more than doers in that particular area.
  • Can better withstand the loss of one person. Whether it's vacation, maternity leave, or leaving the company, the absent person is less likely to have specialized knowledge no one else on the team has. It's a loss that's easier to recover from.
  • Tend to have more tooling. Because there's no dedicated "tools person", everyone introduces tooling as it's needed, from a continuous integration system to deployment scripts to test infrastructure to design diagramming. Over time this actually winds up with more tools in active use than a team with a dedicated tools engineer.
  • Produce more well-rounded engineers. "I don't do CSS" is not an excuse on this team. If the thing you're working on needs it, well, you do CSS now!
  • Work together more. Because each team member touches a larger area of the code base, there's more to learn and team members wind up working together frequently, either as a training exercise, or to avoid bumping into each other's features, or just because they enjoy it.
  • Tend toward spaghetti code. With everyone touching many parts of the code, there is some duplication. Coding standards, a strong refactoring policy and static code analysis can help keep this under control.
  • Have less idea of the current status. Because each team member is off doing their own work, they don't always know the overall status of a project. This is what the daily standup and burndown charts are supposed to help with, and they can if they're done carefully.

Dependent Teams

  • Have a command and control culture. These are the teams that say, "we'd be dead without so-and-so" or "Blah tells me what to do." They look to the leader (or leaders) and do what that person says, frequently waiting for his opinion.
  • Can quickly replace non-leaders but have a huge dependence on leaders. When a leader is missing - vacation, meeting, or leaves the company - then the team gets very little done, and uses the phrase, "I don't know. So-and-so would normally tell me, but he's not around."
  • Have a good sense of overall status. The leaders tend to know exactly where things stand. Individual team members often do not.
  • Do standup as an "update the manager" period. The leader usually leads standup, and members will speak directly to that person (watch the body language - they usually face the person). Standup often takes place in shorthand, and not all team members could describe each task being worked on.
  • Tend to work alone or with a leader. Because individual team members tend not to work on similar things or to know what everyone is doing, they'll often work alone or with a team leader.
  • Tend to wait. You'll hear phrases like, "well, I need a Git branch; has one been created yet?" Instead of attempting to solve the problem - for example, by looking for a branch and creating one - the team member will note the problem and wait for a leader to fix it or to direct the fix. 


Overall, I vastly prefer working with teams that encourage autonomy. There are moments of chaos, and times when you find yourself doing some rework, but overall the teams get a lot more done and they produce better engineers. I understand the appeal of dependent teams to those who want to be essential (they'd like to be the leaders) or to those who just want to do what they're told and go home, but it's not for me. Viva the autonomous team!

Friday, August 24, 2012

"Who Was That Masked Man?"

When we were kids, we'd occasionally get to watch the Lone Ranger. We watched the fifties version, with the Lone Ranger and Tonto riding into town in black and white and saving the day. Inevitably, it would end with someone staring into the camera and asking, "Who was that masked man?" as the Lone Ranger rode off to his next town.
"Here I come to save the day!" (Wrong TV show, right sentiment)

I watched the show again not too long ago with a friend's son, and got to thinking that there really was something to this idea of someone riding into town, clearing out the bad guys, and riding off into the distance. It's really rather like being a consultant in some ways. You ride in, find the bad parts, fix them, and ride out. Fortunately for me, there is a lot less horseback riding and gunplay in software than there ever was in the Lone Ranger!

But let's look at each of those steps:

  1. You come in
  2. You find the bad part(s)
  3. You fix them
  4. You leave

All of those steps are important. If you leave without finishing every single step, well, you're no Lone Ranger.

Come In
Coming in doesn't have to mean going to the client's offices. It just means that you need to show up. You have to interact with the client - and not just the project sponsor. This is how you'll know the full extent of the problem, and start to build trust to fix it. This means you have to be around and be available for casual conversations. This might be in person, by phone, in chat rooms, or over IM.

Find the Bad Part(s)
You're here because there's a problem the client can't or won't solve internally. Understanding that problem gives you the bounds of your engagement. Sure, there are probably other things you could optimize or improve, but don't lose sight of the thing you're actually here to fix!

Fix Them
You have to actually fix the bad part(s). Don't offer a proposal; don't outline the problem and note how it could be fixed. Do the work and fix it. This is what differentiates the Lone Ranger from the stereotypical management consultant.

Leave
This part is surprisingly hard sometimes. The problem gets fixed and you just keep coming around, or start working on a new problem, or keep maintaining the fixes. That's all well and good while you're making sure the fix is stable, or while there's another problem to be solved. When it's just maintenance, though, it's time to leave. Don't forget to actually do that part.

And that is how 24 minutes with the Lone Ranger turned into a rant on consulting software engineers. Now, back to the fake Wild West for me!

Monday, August 20, 2012

The Dreaded Feature List

We've all seen them: the lists of features. Whether they show up in a backlog, or a test plan, or a competitive matrix, or an RFP, feature lists are everywhere. It's the driving force of software development, in many ways. "How many features have you built?" "When will that feature be ready?" "Does it have X feature?"

There's one big problem with that feature focus, though: it's not what customers actually want.

There are only two times I can think of where a customer actually explicitly cares about features:

  1. When comparing different products (RFP, evaluation, etc)
  2. When they're using the presence or absence of a feature as a proof point in an argument about your product

The rest of the time, they care only that they can solve their problem with your product. Having a "print current inventory" feature is useless. Being able to take a hard copy of the inventory report to the warehouse and scribble all over it while doing a count - that's what the customer actually wants. "Print current inventory" is just a way to get to the actual desire. These stories - tales of things the customer does that involve your software - are the heart and soul of the solution.

So - with the exception of RFPs and bakeoffs - ignore features. Start focusing on the customer and their stories.

Thursday, August 16, 2012

Good Software Makes Decisions

A lot of software is complex. Sometimes it's not even the software that's complex; it's the landscape of the problem that the software is solving. There are data elements and workflows and variations and options. Requirements frequently start with, "Normally, we foo the bar in the bat, with baz. On alternate Tuesdays in the summer, though, we pluto the mars in the saturn. Our biggest customer does something a little different with that bit, so we'll need an option to jupiter the mars in the saturn, too." Add in a product manager who answers every question with, "well, let's make that an option", and you end up with a user interface that looks like this:

No one wants to use that.

It's software that has taken all of the complexity of the problem space and barfed it out onto the user. That's not what they wanted. I promise. Even if it's what they asked for, it's not what they wanted.

Building software is about making decisions. Good software limits options; it doesn't add them. You start with a massive problem space: "Software can do anything!" and limit from there. First, you decide that you're going to solve an accounting problem. Then you decide what kinds of inputs and outputs go into this accounting problem, and how the workflows go. You do this by consulting with people who know the problem space really well. Define what you will do and - just as important - what you won't do. Exposing a decision says, "I don't understand this problem; you decide." Making a decision builds customer confidence; it says, "I know what I'm doing." All along, you're creating power by limiting choices.

It's okay to make decisions in software. Take a stand; your software will be better for it.

Tuesday, August 14, 2012

Don't Claim It If You Only Read It

I've been hiring on behalf of a client lately, looking for a head of QA. One of the trends I noticed as I was reviewing resumes was that a lot of candidates had a list of languages on their resumes: C++, Perl, Ruby, Java, etc. "How cool!", I thought. It's great to see the testing profession attracting people who are capable of writing code but for whatever reason find testing a better fit. Maybe the profession is finally overcoming the stigma of "not good enough for dev" and "tester == manual tester".

And then I started the phone screens.

I've screened 12 people who listed languages on their resumes, and so far not one of them has actually been capable of writing code. Nope, that language list? That's the languages they can read. As in, "I can follow along in code written in C++ or Java."

Ouch. Now I'm disappointed in all of them. Here I thought they had some skills they don't have.

I understand what the candidates were going for. They're "technical testers" rather than testers who work purely on the business level and don't understand how software works. I get it. I really do. That such a distinction has meaning is sad but true.

But don't claim languages if you mean you can read them! You're an engineer, darnit! Claim a language if you can write it. There are enough engineers - and enough testers - who can write code that to claim a language without qualifying it means people will think you write it.

If you're trying to say, "hey, I'm a technical tester", then please do so in your resume. Just do so without unintentionally telling people that you're more technical than you are. The right way to say, "I can read but not write C++" is to say: "Languages: C++ (reading), Perl (read/scripting)" or something similar. That way hiring managers don't start with higher expectations than you can meet... And that's good for candidates, too.