Friday, February 26, 2010

Still a Test

Tests are easy to recognize. They run in a test framework. They're named things like testFrobble or FrobbleTest. They do assertions and spit out results, generally some variant of "pass" and "fail".

There are other things that are tests as well, though. Even if they aren't named according to convention, or if they run outside of your test framework, they might still be tests. A test has a pretty broad definition:

A test is anything that exercises the system under test and which produces results that are analyzed.

Let's parse it out:

Exercises the system under test: To test a system, the test has to use the system. It could be humans (click through that GUI); it could be a sample program that uses the system's API; it could be a unit test that runs a single method deep in the system with various inputs. The only real requirement here is that it actually touch the system in some way.

Produces results: The test has to accomplish something. In the case of automated tests, this is often an assertion that something has an expected value or is in an expected state. In other cases it may be a more indirect result. For example, a script might use the system to get some information, then reinstall a machine based on that information. The successfully reinstalled machine is an indirect result; the system probably provided the right information if you ended up in the right place.

Are analyzed: This is the test version of "if a tree falls in the forest...". In order for it to be an actual test, you have to look at the results and do some analysis. Sometimes this is as simple as checking for the word "pass" and agreeing with it. Other times this involves digging through logs or output to make sure the system is getting exercised in the intended way, and to interpret the resulting behavior.
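
To make that concrete, here is a rough sketch of the reinstall script example from above. Every name in it is a hypothetical stand-in rather than a real API, but notice that it never mentions a test framework: it exercises the system, produces a result, and a person reads the log line.

# A script that doubles as a test: it exercises the provisioning system,
# produces a result (a reinstalled machine), and a person reads the log.
# Every function here is a hypothetical stand-in, not a real API.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("reinstall")

def ask_system_for_target_os(machine):
    """Stand-in for a query against the system under test."""
    return "ubuntu-9.10"  # pretend the system answered

def reinstall(machine, target_os):
    """Stand-in for the actual reinstall mechanism."""
    return target_os      # pretend the machine came back up on this OS

def main():
    machine = "lab-042"
    target_os = ask_system_for_target_os(machine)  # exercises the system
    actual_os = reinstall(machine, target_os)      # produces a result
    # The analysis step: a human (or a cron job that emails) reads this line.
    if actual_os == target_os:
        log.info("%s reinstalled with %s as expected", machine, actual_os)
    else:
        log.error("%s came up with %s, expected %s", machine, actual_os, target_os)

if __name__ == "__main__":
    main()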

As long as it meets these three criteria, it's a test. It may not be the most effective test in the world, and it may not be the most efficient test in the world, but it's a test. So when you're looking at the tests you do, make sure you look at all of them, not only the ones that appear in your test plan, or that are named testFrobble.

Thursday, February 25, 2010

Indirect Tests

It's good to test things.

Usually, that means we write tests for things. Unit tests, system tests, tests that test our test code, we produce a lot of tests. Sometimes, though, an explicit test might not be necessary. When you go to write a test, consider whether you already have an indirect test.

For example, we have a system at work by which we can install one of several different operating systems onto a test machine. You simply set the OS you want, run a script, and the machine shows up in about 5-10 minutes, fully configured with the new OS. It's code that does this, so we should have a test, right? Well, sure. We kind of already have a test, though. We have a utility called a migrator. It monitors the lab, and when there are fewer than 10 available machines of any given OS, it automatically reserves a machine and moves it to that OS. That migrator uses the OS installation code, and it tests that code. If the OS installation code breaks, the migrator starts failing (and emails and Jabbers its distress to the entire dev team).

The migrator is an indirect test for the OS installation code.
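
To make that concrete, here is a minimal sketch of what a migrator-style indirect test might look like. All of the names and the threshold below are hypothetical stand-ins for the real lab tooling, not its actual API.

# A rough sketch of a migrator-style indirect test. Everything here is a
# hypothetical stand-in for the real lab tooling, not its actual API.

MINIMUM_AVAILABLE = 10

def available_machines(os_name):
    """Stand-in: ask the lab inventory how many free machines run os_name."""
    return 7

def reserve_machine():
    """Stand-in: grab a free machine from the pool."""
    return "lab-017"

def install_os(machine, os_name):
    """Stand-in for the OS installation code being (indirectly) tested."""
    return True

def alert_team(message):
    """Stand-in for the email/Jabber blast to the dev team."""
    print("ALERT:", message)

def migrate(os_name):
    if available_machines(os_name) >= MINIMUM_AVAILABLE:
        return
    machine = reserve_machine()
    # This call is the indirect test: if the installation code breaks,
    # the migrator fails quickly and loudly.
    if not install_os(machine, os_name):
        alert_team("migration of %s to %s failed" % (machine, os_name))

if __name__ == "__main__":
    migrate("ubuntu-9.10")

The details don't matter; what matters is that the OS installation code gets exercised every time the pool runs low, and a failure gets noticed quickly and loudly.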

So, do we still need a test? Maybe, maybe not.

An indirect test is good enough when:
  • The indirect test uses the piece of code under test explicitly and directly
  • The failure of the indirect test is noticeable and traceable (that is, the "test" fails quickly and loudly)
  • The consequences of failure are not catastrophic
An indirect test is insufficient when:
  • A direct test would be much quicker to run and the code changes fairly frequently (in other words, the test should be such that dev would run it prior to checkin, and if the indirect one doesn't accomplish that, it's not sufficient)
  • The indirect test is difficult to run or runs intermittently
  • The indirect test does not provide the desired level of coverage
More tests aren't always better. In a situation where indirect tests exist, evaluate whether they give you the feedback and coverage you want before you put the effort and time into a direct test.

Wednesday, February 24, 2010

Drop Everything Tasks

A lot of things can wait, in the end. Usually, running that test case can wait another day. Fixing that bug that makes the nightly run fail can wait. Adding that other check won't really kill you if you go without it for another day.

Then there are the tasks that really truly can't wait. They are few and far between, but there are some tasks that are drop everything tasks. A Drop Everything Task is something that threatens the running of your group (or, worse, the business) if it doesn't happen Right Now.

Examples of drop everything tasks are:
  • a major customer who is down
  • large numbers of customers who are down
  • the complete failure of your automated tests
  • the complete failure of your lab
As a rule of thumb, if you can think of more than five Drop Everything Tasks, you have too many. If you have more than one at a time, you have too many. Narrow your criteria. Drop Everything Tasks are the really really bad ones, the times when nobody goes home until this is fixed.

When you do have a Drop Everything Task, it really does mean to stop doing everything else and work on this. Meetings? Skip them, or reschedule. General daily duties - triage and the like? Skip them, too. You can swing back around and pick them up later.

Learn to recognize when you're faced with a Drop Everything Task, then drop everything and work on it. Your group, and maybe your business, won't get back on track no matter how many little things you do; you need to do the one Drop Everything Task first.

Tuesday, February 23, 2010

Who's Going to Read Your Test Plan?

There are a lot of different ways to create a test plan. A test plan can be a simple list, or an Excel spreadsheet, or some code, or a Word document conforming to your favorite test plan template, or the output of a test management system. All of them are legitimate, but some are more appropriate than others. You have to write a test plan, so what do you write? What's the proper format? The correct contents?

Guess what...? It depends.

There is a relatively simple heuristic for this:

The more remote your audience, the more information you need to provide in a test plan.

Let's say you're writing for your internal test team:
If you're writing a test plan for your test team, you don't have to put much information in, because a lot of it is assumed by the team already. You can skip the information about tools in use; the team is already using them. You can skip information about why you're using session-based testing, since you've probably already talked about that when you introduced it.

Let's say you're writing for a client who is consuming the software you produce:
The client is going to be looking for comfort that you know what you're doing and to make sure they cover any holes you didn't test. So you need to give them pretty deep background so they can see what you're covering and what you're not. You'll need to include some background on your test methodologies, what you choose to test, what you choose not to test, and why. The analysis you do to determine test breakdowns should be in the test plan as well. Don't be overly verbose, but pretend you have to explain all of this without talking to a person.

So, given those examples, things to consider putting in a test plan are:
  • Overview of your test methodology. Do you have a test philosophy? What is it? How do you fit in with development? What are your goals in testing? Use this when you're presenting to an audience that is not steeped in your methods (aka who doesn't work for you).
  • Analysis leading to test cases. Are you doing boundary value analysis? System modeling? Stochastic event modeling? How do you approach the analysis? What parts of the system do you use it on? Use this when you're presenting the test plan outside the test team, and when you're using a new form of analysis. Note that the analysis itself and the resulting tests may be documented separately.
  • Test ordering. Which tests will you do first? Which ones last? Why? This is important if you're at all worried about a crunch and possibly about not finishing your plan before you ship (in other words, do this one almost all the time!).
  • Timelines. Describe when you'll do tests and roughly how long each one will take. In particular, address overlap with other team work, like tests you'll do while dev is finishing another feature, or fixing bugs. This can be a reference to a schedule or other document.
  • Test categories. Describe the kinds of testing you'll do, including functional testing, performance testing, etc. These will be different depending on which heuristic you're using to break down your tests, but as long as you briefly describe each of the test types it's generally possible to translate between heuristics. Be sure to include tests you're explicitly not running and why. This one will almost always go into your test plan, since most audiences will need this (even your future self trying to remember the release!).
  • System model. It helps to provide a brief overview of the system from a tester's perspective. Perhaps it's an overview of the various interfaces, perhaps it's a set of use cases, perhaps it's a data flow diagram. This provides insight and a reference point for the test categories and analysis you do. Often the system model is associated with the analysis as it's performed, so it may be incorporated by reference.
There are some things that are not in the test plan, namely: test cases and test results. These are large enough items that I find them usually best contained in another (referenced) document and/or format. Put them in if it suits you, though.

The point of a test plan is to communicate information. It describes for some audience what you intend to do with the software prior to putting it into the field. That intention is driven by how you test software in general and by the particularities of the given release and/or system. Learning how much your audience knows will help you provide relevant information and share the reasoning and assumptions in your testing. In the end, that's what a test plan is about.

Friday, February 19, 2010

Test Plan Motivation

I sometimes have occasion to look back at our older test plans. Usually I'm digging through them to see whether we tested something consistently with a newer test plan, to figure out how we might have missed a defect that leaked, to refresh my memory on performance for that release, etc.

Generally this look brings up other questions about what was going on in that release. Why did we skip performance testing that time? Why did we do extra tests around Active Directory integration? It's fairly easy to remember that for the last release or two, but it gets a lot harder the further back you go.

Our test plan should say why.

This would be a lot easier if we simply put the reasoning down in the test plan. There is an intent in a test plan, a set of circumstances that motivates you to look at a product in some ways and not in others. You'll forget that if you don't write it down.

Test plan motivations (or intents) can be very simple things:
  • "Point release with tightly controlled changes, so only doing a smoke test in other areas."
  • "Performance tests will not be performed because the new hardware to do it won't be in before release."
  • "Many recent customer complaints about Active Directory ease of integration, so additional focus there."
Just write down why you're doing the test plan the way you are. In a year, when you don't remember the motives behind the plan, it will be very useful to be able to look it up.

Wednesday, February 17, 2010

The If It Goes Wrong Principle

Let's say you're doing a date-based release. You have a hard and fast date and whatever happens*, on that date you will ship. (* Yes, we all know there are exceptions for asteroid collision and nuclear war, but other than that, you'll ship on that date.) So you have a date, and you have a rough feature set consisting of whatever the team has committed to. You also have an implied minimum level of quality and stability. After all, you don't want to be embarrassed or deal with crashes and the like.

Given all these constraints, what do you test first?

Use the If It Goes Wrong Principle

The If It Goes Wrong principle states that you should consider what would happen if a test goes wrong, and put the areas with potentially long fix turnarounds early in the test cycle. The longer an error will likely take to fix, the earlier you should test for it.

For example, if you have a feature that you outsourced to another company, problems in that feature will probably take longer to fix, just because you have to go back to that other company, get them to fix it, have them run it through their process, and only then can you retest it (and hopefully declare it fixed).

For example, if you have a component that has changed a lot, you should put that component near the top of the test list because mistakes happen when you change code, and so code that's heavily modified is more likely to have problems and be a little less understood, and therefore take longer to fix. Depending on your developers, sometimes things take several tries.

For example, if you have a component that is a brand new design, problems in that code are more likely to be design flaws that show up only when you push the system. A major design flaw can take a long time to fix, so push the edges of the design early.

Combining the If It Goes Wrong principle with an understanding of the risk in each feature or component (how likely that feature is to go wrong) gives an implied order of tests:
  • Tests in areas that changed in third-party code
  • Tests in areas with new design or design changes
  • Tests in areas that changed a lot, in particular for teams or developers that tend to have follow-on issues (bug A blocks bug B or the fix for bug A introduces bug B)
  • Tests in areas that didn't change, but that if broken will likely take a while to fix
  • Tests in areas that didn't change and that are generally easy to fix
There are numerous other factors influencing the order in which you run your tests, but as a general rule of thumb, move up the tests that trigger the If It Goes Wrong principle.
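
As a minimal sketch of that ordering, you can treat the principle as a sort key. The areas and the turnaround estimates below are invented for illustration.

# A minimal sketch of the If It Goes Wrong ordering: sort test areas by how
# long a failure would likely take to fix. The areas and estimates are invented.

test_areas = [
    {"area": "reporting (unchanged, easy fixes)",        "fix_turnaround_days": 1},
    {"area": "scheduler (heavily changed this release)", "fix_turnaround_days": 4},
    {"area": "new caching design",                       "fix_turnaround_days": 7},
    {"area": "outsourced billing feature",               "fix_turnaround_days": 10},
]

# Longest likely fix turnaround first, so the long round trips start early.
for entry in sorted(test_areas, key=lambda e: e["fix_turnaround_days"], reverse=True):
    print("%2d days: %s" % (entry["fix_turnaround_days"], entry["area"]))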

Tuesday, February 16, 2010

The Everything Else

When you work with a piece of software, you can break it down into two basic camps: (1) the software; and (2) everything else around the software. The everything else around the software includes documentation, build systems, test or helper scripts, training guides, sample code, etc. Guess what? We get to test all of it!

Guess what else?

It's not uncommon to forget the everything else.

If you have a functional requirements process (or story process, or backlog, or whatever you want to call it), features that come in will generally have requirements for part or all of the everything else. A feature to add a new widget to the GUI will include a reference to updating the documentation, etc. There is other work, though, where this is more likely to be forgotten. When dev is working on a support tool, or on a diagnostic utility needed to track down a field issue, that work often bypasses or is rushed through the requirements process (or whatever you're calling it). In those cases, the everything else is often forgotten. In particular, these things are often intended for a different audience - support, or dev, or sales engineers - and not for your general user population. It's assumed that this group knows what it's doing, so the documentation, training, and other needs are lower.

Even when your audience is the "people in the know", that's not an excuse for skipping the everything else.

Yes, support probably knows your product really well. Yes, dev is smart and can just read the code to figure it out. Don't make them do that. Just because they're users who have the same company name as you doesn't mean you get to treat them any less like users. You still have to provide the documentation, the utilities, the samples - everything else that goes with the software you deliver. They'll make fewer mistakes if you help them along with a quick howto. You'll make fewer mistakes if you write that howto (and have to figure out what this thing will actually do!). So stop cutting this corner, and be good to your internal users. They'll thank you, and you'll be glad you did.

Monday, February 15, 2010

Save Successes

A test can have three results:
  • ran and passed
  • ran and failed
  • didn't run
Often, we focus on the tests that run and fail. Those tests are describing problems or potential problems with your product. We analyze those results, we log them into our defect tracking system or backlog, etc. This is all useful.

We sometimes fail to differentiate the other two categories, though. If I look at a test run from two months ago and I don't see a test failure, did the test pass? Maybe. Or maybe it didn't run for one reason or another. Either I got information that the software conforms to our expectations in this area, or I merely think I did because the test never ran and I actually got no information at all.

When we track test results, we need to track the successes as well as the failures. These should be recorded in a test run database, summary log file, or wiki page (whatever you're using for reporting). This will help you more easily answer questions like:
  • How long has this test been failing (which is another way of asking when this test last passed)?
  • What kind of coverage are we getting on the sprocket module? (If you have a test and it's not running, you're not really covered.)
  • Looking at a failure, this related test SprocketBend_t1 should also be failing if the problem is with the frobbles. Is it really passing?
Keep recording your failures. Just don't forget to record your successes as well. The green bar has about as much to say as the red bar.
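
One minimal way to do this, assuming nothing fancier than a CSV file (the file name and fields are placeholders, not a real reporting system), is to record every outcome, including the passes and the tests that didn't run:

# A minimal sketch of recording all three outcomes, not just the failures.
# The file name and fields are placeholders, not a real reporting system.
import csv
import datetime

def record_result(test_name, outcome, results_file="test_results.csv"):
    """outcome is one of 'passed', 'failed', or 'not_run'."""
    with open(results_file, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.datetime.now().isoformat(), test_name, outcome])

# With passes and not-run tests recorded too, "no failure logged" and
# "this test passed" stop looking identical two months later.
record_result("SprocketBend_t1", "passed")
record_result("FrobbleLoad_t3", "not_run")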

Friday, February 12, 2010

Why Do You Ask?

I get asked a lot of questions:
  • "Did you test feature X?"
  • "What if I set up this configuration, what kind of performance could I expect?"
  • "Can we do this test Y?"
  • "So we might need to qualify a widget in 3 weeks, can you do it?"
  • "How fast can we get X into the field?"
Questions like this are fine; they're part of my job. Of course, I do have to answer them! Here's where it can get a bit tricky.

In order to answer a question completely, you need to understand the questioner's intent.

You've heard what the person is asking, but you may not understand their entire agenda or their entire need. To give an answer that will hold up, first figure out why they are asking and what they really need to know.

For example:

If the question is "when can you qualify feature X?", the intent might be to see if a potential deal that requires that feature is viable. In this case, the real question to answer is "We have a deal worth $Y but only if we have feature X within Z weeks. Can we make that happen?" This informs your answer. If you had just answered the original question, you probably would have mentally sized it, slotted it in after what you're currently promising to people, and come out with a date. Now that you know the intent, you can understand that a deal worth $Y is large enough that this request should come before your other obligations. So you mentally size it, slot it in earlier because of the urgency, and give a different date. Knowing the reason behind the question changes your answer.

When you get asked a question, pause for just a second and ask yourself if you understand the intent behind the question. It can make the difference between a useful answer and an answer that doesn't help in the end.

Thursday, February 11, 2010

"Easy to Use"

Many projects I work on have the "easy to use" requirement:
  • "The signup process should be user friendly"
  • "The API must be easy to use"
  • "Untrained users must be able to generate reports easily"
It's a laudable goal. Sure, some things really do take a rocket scientist (see: transporting someone to the moon). For a lot of things, though, it doesn't have to be hard. Making your software easy to use makes a lot of sense most of the time.

How on earth do you test "easy to use"?

It's a form of usability testing. It has nothing to do with what the software does, but with how that action or activity is presented to the person or thing consuming the software. Let's assume for the moment that we don't have a usability lab, a lot of time, or extensive training. We're going to be doing poor man's usability testing.

Our task now is to go from each person's opinion to something we can agree constitutes "easy to use". We have to take our benefit - ease of use - and make it into something we can actually measure. To do this, we break "ease of use" down into three separate things:
  • Task accomplishment
  • Ease of discovery
  • Ease of use
Task accomplishment describes how efficiently a user can accomplish a given task or purpose with the software. For example, a task might be "sign up for this great new service", and we might decide that we want users to be able to do this in under three clicks and one form. That count is somewhat arbitrary, but hopefully comes from comparison with similar or competitive products. Now that's something we can measure, and we can raise a flag if we start to violate that policy. We've defined, at least for this one task, our "ease of use" requirement.
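
As a rough sketch of how that check might look as an automated test, assuming a hypothetical helper that drives the UI and counts interactions (nothing below is a real application's API):

# A sketch of a task accomplishment check. The signup flow, the helper that
# walks it, and the click budget are all invented for illustration.

CLICK_BUDGET = 3
FORM_BUDGET = 1

def walk_signup_flow():
    """Stand-in for driving the UI (say, with a browser driver) and
    counting the interactions the task actually required."""
    return {"clicks": 3, "forms": 1}

def test_signup_is_easy_enough():
    steps = walk_signup_flow()
    assert steps["clicks"] <= CLICK_BUDGET, "signup took %d clicks" % steps["clicks"]
    assert steps["forms"] <= FORM_BUDGET, "signup took %d forms" % steps["forms"]

test_signup_is_easy_enough()
print("signup stayed within its ease-of-use budget")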

Ease of discovery refers to a brand new (untrained) user, and how quickly that user can figure out how to successfully interact with the system. Usually this is also task oriented, and the goal is for the user to discover how to accomplish a task. For this test, you'll want to find someone not familiar with the software. Sit them down in front of the system and ask them to accomplish a list of tasks - no helping! See how long it takes. Depending on your budget, time, and rigor, this one can go as far as screen capture and recording mechanisms, etc.

Ease of use refers to how simply someone who already knows the system can accomplish their tasks. Basically, it's like ease of discovery, except you get to train the user up front. There's also some overlap with the task accomplishment test, but rather than counting steps, you're looking at the user's actions.

If you can do full usability testing, that's great. Please do. If you can't, though, and if you have a requirement to test something "easy to use", you can at least start testing that, too.

Wednesday, February 10, 2010

A Use for Counting Bugs

For the most part, proposing a metric comparing number of bugs found to number of bugs fixed will get you laughed at. After all, bugs are not alike, and a dozen teeny bugs will look far worse under a "bugs found/bugs fixed" metric than one bug that crashes the system and causes data loss on its way down. In practice, often the one crash is more important than the dozen teeny bugs. Plus, a "bugs found/bugs fixed" metric is incredibly easy to game. Find an error in a page header? That's a bug per page right there! Woo hoo! Numbers goin' up (and this isn't helping anybody).

But...

Just because a metric shouldn't be used to measure tester or developer performance doesn't make it totally useless. Check out this chart, which tracks the number of bugs found (red) and the number of bugs fixed (green) over thirty days:

There are a few things this chart actually is useful for:
  • Charting overall volume. If you find 25 bugs in one day, is that a lot or not too many? Well, looking at our chart, we see that we've never found more than 8 bugs in a given day. Unless something has changed on the project (more people, different focus, new test technique), you've found a lot of bugs. Now, stop congratulating yourself and figure out what's going on. Did we just do a big checkin? New feature? Refactoring gone wrong?
  • Project stability. A chart with a lot of spikes like this one is often an indicator of instability in a project. A lot of things are getting fixed, and the find rate is keeping up. This doesn't tell you why the project isn't stable, but it's an early sign you should go look at what's going on. Maybe it's simply mid-development and a lot of new features are coming in. Maybe there's something wrong with your process or code base and you have a lot of people breaking things as they fix things. Perhaps the fixes are exposing more problem areas, in which case you may simply be working through a nasty code base.
It's rare that looking at any number - including number of bugs found or number of bugs fixed - will give you a lot of answers. Looking at these kinds of trends, though, can give you clues about where to look in a project to see what's going on. Don't throw the baby out with the metrics bathwater; take a look and see what clues you can find.
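
If you do want to build that kind of found-versus-fixed trend yourself, a minimal sketch is just a pair of daily tallies; the bug records below are invented for illustration.

# A minimal sketch of the daily found/fixed tallies behind such a chart.
# The bug records are invented for illustration.
from collections import Counter

bugs = [
    {"found": "2010-02-01", "fixed": "2010-02-03"},
    {"found": "2010-02-01", "fixed": None},          # still open
    {"found": "2010-02-02", "fixed": "2010-02-02"},
]

found_per_day = Counter(bug["found"] for bug in bugs)
fixed_per_day = Counter(bug["fixed"] for bug in bugs if bug["fixed"])

for day in sorted(set(found_per_day) | set(fixed_per_day)):
    print(day, "found:", found_per_day[day], "fixed:", fixed_per_day[day])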

Tuesday, February 9, 2010

Code Inspection

Many of our tests exercise code. They might be unit tests, they might be system tests, they might be GUI tests, etc. Sometimes, though, it behooves us to stop exercising the code and start looking at it.

Code inspection can help identify potential problems, and it can help you understand problems you've already found. It really is about as simple as it sounds: you open the code and start looking.

Just randomly reading code, however, is not particularly useful. To work effectively, you need direction in your code inspection, just like you need direction and purpose in your tests.

There are several things you can do to make code inspection more useful:
  • Short times only. This kind of concentration is difficult, so don't try to do it for hours and hours. Spend an hour or two and then stop. Take a break and do something else for a while.
  • Pick a single purpose. Look for conformance to coding standards, or look for logic flow, or look for variable naming, or look for memory allocation/deallocation. It doesn't matter what purpose you pick, as long as you make it fairly small and fairly defined.
  • Explain it. As you're walking through the code, explain it to someone. That will help you not gloss over code and just figure it looks right. (It's hard to tell when your eyes are glazed over!)
Code inspection can be a very useful tool, but only if you take the time to use it effectively.

Monday, February 8, 2010

Oops

Some days you make a mistake and you say:

"oops"

Some days you make a bigger mistake, and you have to really acknowledge it:



Image courtesy of a coworker, who I don't think has yet rendered this in ASCII!

Thursday, February 4, 2010

Can You Work Remotely?

For many of us, there is a routine: we wake up, we go to work, we leave work, repeat. We're effective at work. We have meetings, we work with team members, we focus on work (not laundry!).

What if you couldn't do that? What if you had to work remotely?

There are many reasons that you might have to work remotely. Maybe it's snowing and getting out of your driveway isn't going to happen. Maybe you have a sick child who needs to stay home. Maybe it's simply a weekend and you just don't feel like coming in.

Can you work effectively when you're remote?

There are a number of simple things you can do while you're in the office that will make working at home easier:
  • Create an online team forum. Even in the office, use chatrooms (Campfire, IRC, or the like) to keep in touch. It feels silly to IM with a colleague who is 20 feet away, but when one of you is home, you'll be glad you can IM.
  • Make lists electronic. The whiteboard has its place, but you can't see it from home. For basic status and metrics, it's fine to leave them there. Tasks, designs, etc. should be made electronic so they can be accessed by the team remotely (or if the whiteboard gets erased by an overzealous officemate!).
  • Get the VPN working. You probably have a VPN. Use it. Make sure you can get to your work machines remotely, with VNC, Remote Desktop, SSH, or whatever. Try this before you need it. Many IT departments have external network connections you can use to test this while you're in the office.
  • Find a fast connection. In your house, or in a local coffee shop, or wherever you're remote, make sure the network connection is reasonable. Laggy work is frustrating work.
  • Set up your home workstation. Install the VPN. Install either your work tools or methods to get at those work tools. For example, I use Transmit to sync files back and forth transparently. That way I can work on them locally and every time I save, it gets saved to my test servers. A simple SSH session later, and I'm running tests in my work environment, without having to remember to transfer files manually.
  • Remember you're working. Remote work is still work, so don't pretend you can do laundry and work, or make dinner and work. One or the other gets done, but generally not both. (I can't tell you how many dinners I've burned this way!)

Even if you have an office job, working remotely is likely to come up. Make sure you're prepared, and you can be effective, wherever you are.

Wednesday, February 3, 2010

Tainted

I've been working on a project for a little while now that is brand new. The next version we'll ship is 1.0 - and it'll be the first version actually to leave the four walls of our office. Things are starting to near the end, so we're working on things like packaging and documentation, and final tweaks of the software. All of that needs to be tested.

Specifically, we need someone to test the initial user experience. Someone has to walk through that first delivery and get it to a running system. This experience is incredibly important, because it will guide how the consumer thinks of the product throughout its life cycle. This test will cover the feel of the package, ease of initial deployment, and some of the getting started and installation guide.

So great! I fire it up and.... stop.

I'm tainted.

I've been working on this project. I know how it's supposed to work, and I've been deploying and running test versions for a while now. I know too much. I know the assumptions the team has made, and I'm probably working off knowledge I don't remember I know.

This time period is a gift. I have a fresh product that needs a "first user" experience, and I have a couple of testers who have never deployed this thing (they've been working on other projects). Use it. I'll get them to do this test.

Today's lesson: Using a product and testing a product gives you knowledge and assumptions your users don't have. If you want to create the new user experience, get a user who doesn't have those assumptions and that knowledge. Find someone untainted.

Tuesday, February 2, 2010

Specs Change

A couple times, I've worked with contractors on specific projects. I've also been on the other side of the table a few times, as a contractor brought in for a specific project. In these situations, there's often a spec that describes the project. This is intended to be the vehicle describing what the project is supposed to do when it's delivered, and it may be incorporated into a project order or contract and thus become a binding part of the agreement.

But...

Specs change. The spec lifecycle winds up looking something like this:
  • Client provides a spec (email trail, screen mockup, Word document, etc)
  • Developers ask questions, point out holes, etc.
  • Client updates the spec. Contract is signed incorporating the spec.
  • Work begins. Developers work. Client works. Interim drops occur to show progress.
  • Client finds something that just _has_ to change.
  • Developers agree to the change.
  • Lather, rinse, repeat.
Ideally, everyone goes away happy. The client got what they actually wanted. The consulting developers got to build something useful (and hopefully the fees were adjusted appropriately along the way).

There are lots of potential problems with this workflow, but given a good relationship with the client it can work. There are a few things to look out for here:
  • When the first change comes up, that is the time to negotiate how changes will be billed. If it's time and materials, this is simple. If it's fixed fee, consider time and materials for mutually agreed scope changes, or a separate fee. Don't wait until you've already delivered the changes.
  • Don't be rude about unstarted work. If work hasn't started in an area, the change probably won't require substantial rework. It may be a scope change, but it might just be a different thing with the same scope. If it's the latter, there's not really any reason to make a big stink about charging extra; the work hasn't been started, so it's not like anything's getting thrown away.
  • Don't be rude about started work. If work has started in an area, and the client wants a change, then the client should still have to pay (in time and in money) for whatever part of the old way has been done. If the client didn't want it, it shouldn't have been in the spec, so they should pay for the bits they changed their mind about.
  • The spec parts that didn't change still have to be delivered as described. This is probably the biggest one that developers working on the project forget. A spec will have several parts. If a client changes one part but doesn't change another part, you still have to deliver the other part. This is true even if the client doesn't explicitly call out the lack of change.
  • Flexible doesn't mean pushover. Changes should go through the same development process that the rest of the spec did. That means no quickie releases without testing, no cutting corners just because the client really wants to see the change. Changes should, as much as possible, be slipstreamed into the existing project, and should not cause unnecessary disruption.
Whether you're the one who's hired the development contractors, or you're the development contractors, going into a project prepared for change will help make sure the end result is something everyone can be proud of.

Monday, February 1, 2010

Git branch naming

I have a git project that I've been working on for a while. One of the devs branched it and I need to work on both branches. So now I need to have the two branches - master and 2-0-stable - on my local repository. Simple:

Make sure I'm up to date
git pull

Get a local copy of the new branch
git checkout --track -b 2-0-stable origin/2-0-stable

Confirm I can push and pull to both branches
git remote show origin

Which shows this:
penitentes:csh_home cpowell (2-0-stable)$ git remote show origin
* remote origin
  Fetch URL: git@github.com:abakas/csh_home.git
  Push URL: git@github.com:abakas/csh_home.git
  HEAD branch: master
  Remote branches:
    2-0-stable tracked
    master     tracked
  Local branches configured for 'git pull':
    2-0-stable merges with remote 2-0-stable
    master     merges with remote master
  Local refs configured for 'git push':
    2-0-stable pushes to 2-0-stable (up to date)
    master     pushes to master     (up to date)

And then I can switch between branches with:
git checkout [my branch name]

There are two gotchas that I ran into:
  1. If you don't give your local branch the same name as the remote branch, you won't be able to push.
  2. When you pull the new branch, your changes will go on the new branch as well.