Thursday, October 28, 2010

Ant on OS X

I ran into this last night and scratched my head for a little while, so this is for anyone else who runs into this:

I have a Java project that I build with Ant on my OS X machine. It compiles and produces a war file called "project_client.war". After the most recent OS X update late last week, that stopped working. Instead, it started producing a war file called "project_${}.war".


Here's the relevant ant task:

<target name="buildWarFile">
  <delete file="${current.war}" failonerror="false" />
  <war destfile="${current.war}" webxml="${current.resources}/facilitate_core/resources/facilitate.xml" duplicate="preserve" update="false">
    <fileset dir="${current.resources}/${current.secondary_resource_dir}/html" />
    <fileset dir="${current.resources}/facilitate_core/html" />
    <webinf file="${}/templates.jar" prefix="WEB-INF/lib/" />
    <webinf dir="${current.resources}/${current.secondary_resource_dir}/resources/templates" prefix="WEB-INF/resources/templates/" />
    <webinf dir="${current.resources}/facilitate_core/resources/templates" prefix="WEB-INF/resources/templates/" />
    <webinf file="${current.jars}/velocity-1.4-p1.jar" prefix="WEB-INF/lib/" />
    <classes dir="${}">
      <include name="com/spun/**" />
    </classes>
  </war>
</target>

<target name="pushWarFile">
  <isset property="" />
  <ssh host="${}" username="${current.ssh.userName}" password="${current.ssh.password}" version="2">
    <sftp action="put" remotedir="${current.ssh.remoteDir}" verbose="true">
      <fileset file="${current.war}" />
    </sftp>
  </ssh>
</target>

That references two properties files (one read in first, the other read in second).


When I ran the ant task with -v, it showed an error:
" is not defined"

Well, that's funny. This worked just before the update, with no code change. Also, I can see the property right there - it's defined!

The trick turned out to be removing "current" from the war property, making it look like this:

And that did it. My best guess is that it has something to do with the new namespace features for properties in Ant. But if anyone else runs into something like this, maybe this will help.

Wednesday, October 27, 2010

Beware the Matrix

Matrices are very common in testing. Here's one we use at one of my clients: a grid of ten cache sizes against ten chunk sizes - 100 test configurations in all.

It looks simple, and it really is. We have our tests written. All we have to do is run the test in a loop for each of the squares in our matrix. Hooray!

There's a catch. (There's always a catch.)

In this case, the catch is that each test takes about 5 hours to run, and we don't really have the machines to spare to run 100 different configurations at 5 hours each.

That's the problem with matrices; they seem like such a good idea, and they're really easy to design as test cases. Unfortunately, they're often something that simply can't be completed within your constraints, and even presenting a matrix like you see above generates an expectation of completeness on the part of anyone who looks at it. If I show this to my boss, for example, his response will be, "Great! Let me know when we can see the results." Unless he's willing to spend a whole lot of money on machines (and we do have a budget), the answer is "No time soon, and frankly there are better ways to spend our time."

Depending on what's in your matrix, it might be a good candidate for combinatorial testing. In particular, if your equivalence boundaries are accessible, consider using this technique. If your needs are different, then consider flexing along one variable at a time or simply skipping steps. In my example above, we're testing performance, so we don't necessarily care about every single step along the way. We can run with the extremes and a few values out of the middle. Only if something "looks funny" (that's a technical term!) do we need to go back and fill in the blank. In the end, we wound up running every other cache size and only at two chunk sizes. From that we learned that cache size didn't seem to make a major difference, but chunk sizes did. Test complete, information gleaned; the total cost of the effort was 10 tests - 10% of the cost of the entire matrix.
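The reduction above is just a little set arithmetic once you write it down. Here's a sketch in Ruby; the actual cache and chunk sizes are invented for illustration, but the selection - every other cache size, crossed with only two chunk sizes - is the same:

```ruby
# Hypothetical cache and chunk sizes for a 10x10 test matrix.
cache_sizes = [64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768]
chunk_sizes = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]

full_matrix = cache_sizes.product(chunk_sizes)   # all 100 configurations

# Every other cache size, at only the smallest and largest chunk sizes.
sampled_caches = cache_sizes.each_slice(2).map(&:first)
reduced = sampled_caches.product([chunk_sizes.first, chunk_sizes.last])

puts "#{reduced.size} tests instead of #{full_matrix.size}"   # 10 instead of 100
```

At five hours per test, that's the difference between roughly two days of machine time and three weeks of it.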

Sometimes you really do need to fill in the whole matrix, and when that happens, get cranking and start filling in boxes. Just make sure before you present a matrix that you really do need the entire matrix. If you only need part of a matrix, only show your test consumers (dev, management, whoever) the part of the matrix that you actually need and that you actually intend to test.

As with all tests, when you're looking at a test matrix, ask yourself what you're intending to learn from running these tests. Then run only the tests that give you the learning you need. Skip the rest; there are plenty of other things to do!

Tuesday, October 26, 2010

Rspec and Generators

I learned a new trick today!

When working in Rails, I use the generators as easy ways to create models with migrations, and whatnot. I got used to running my generators with "--rspec" since that's the test framework I'm using currently.

I can save myself some time by adding this to my application.rb:

config.generators do |g|
  g.test_framework :rspec
end

Simple, right? But it sure saves me from forgetting and having to hand-make my own spec files. (It's the little things...)

Monday, October 25, 2010

Anybody Need a Pair?

Here's a dirty little secret: testers are probably the biggest generalists on any team. Your average tester can write some code, parse a customer's request into a requirement, draw boxes on a board to explain the system design, install a new package in the test lab, and go through logs with a support engineer to see what on earth the customer might have done. Oh yeah, and test. Generally, your tester isn't your best developer, is not your greatest business analyst, probably doesn't understand the system as well as the architect, but he's pretty good at a lot of things (and often very good at testing).

So if you're a tester and you want that kind of breadth of knowledge, how do you start?

Pair with anyone who will let you.

Okay, maybe you're not in an agile environment and y'all don't do pairing. In that case, the question is, "Can I sit with you for a while and...?" It's still pairing on some level; we just won't call it that! You get to learn about what they do and about the system from a different perspective, which can only help you test better. They get the benefit of fresh eyes, and of having to explain things to a newbie.

I don't think this was set up intentionally, but in every functional engineering team I've worked with, the test team is the glue that crosses silos. You want to be in that position. You want to be the team that sits with development and says, "You know, I heard the support team complaining about that taking a long time to do serially. I think they said they had a customer that added 200 widgets a day, and the customer was none too happy about having to do it one widget at a time." It may not get the workflow changed, but it starts the conversation, and you have a decent chance at fixing real problems before you ship.

So when in doubt, find someone and go pair. It's amazing what you might find, and you didn't even know you were testing!

Thursday, October 21, 2010

Metric Types

I've been at STPCon this week, giving a few presentations and going to several presentations. One of the more controversial sessions there was on metrics, and how to use metrics to get to "success" (whatever measure of success you're using). The most interesting part to me was the discussion afterward.

You see, metrics sound like they ought to be awesome: we can measure what we're doing, and then we'll know whether we're getting better or not! In practice, they tend to be pretty much unrelated to the things that actually help us and the business; at best they're neutral, and at worst they do more harm than good. I'm not quite ready to throw this baby out with the bathwater yet, though.


There are three main types of metrics that you can use:
  • internal metrics
  • process-oriented metrics
  • business-oriented metrics
Internal metrics are the ones you don't tell anyone outside of engineering about. They're the things you track to measure your own performance. Internal metrics include checks on how accurate your estimates are: "On this last project, we were on average 20% over our estimates," or "In the last five releases, we found a lot of bugs in the whizbang component relative to all the other components. Maybe we should go looking at that one a bit more." Internal metrics can be great information-providing tools. Some of them are really only useful to point out potential problem areas in a one-time look, while others you can use for years. The important part of internal metrics is that they provide information to engineering and don't have a lot of relevance to other departments. These are the things you go find when you say, "I wonder what...", and they don't need to go up on a big project dashboard.
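That first check - how far over our estimates were we? - is a few lines of arithmetic once you have the numbers. Here's a sketch; the estimate/actual pairs (in hours) are invented for illustration:

```ruby
# Hypothetical internal metric: average overrun against our estimates.
# Each entry pairs an estimated effort with the actual time spent, in hours.
tasks = [
  { estimate: 10.0, actual: 12.0 },
  { estimate: 8.0,  actual: 9.6 },
  { estimate: 20.0, actual: 24.0 },
]

# Per-task overrun as a fraction of the estimate, then the average.
overruns = tasks.map { |t| (t[:actual] - t[:estimate]) / t[:estimate] }
average_overrun = overruns.sum / overruns.size

puts format('On this last project, we were on average %.0f%% over our estimates',
            average_overrun * 100)   # prints "...20% over our estimates"
```

The point isn't the arithmetic; it's that the number stays inside engineering and feeds the next round of estimates.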

Process metrics are the ones that can be the most dangerous, and they're really common. These are metrics like "number of bugs found per story" or "number of bugs per thousand lines of non-comment code". The problem with process metrics is that they measure how you do things, not what you accomplish. Given that most people are trying to game metrics, gaming these changes how you do things, but not necessarily what you do.

Business metrics are the ones that measure effect on customers. Revenue is an example of a business metric, as is the percentage of returning versus new customers, or brand value (Coke is worth over $66B). These are the metrics that are directly tied to the success of your business. Bringing these metrics into test is often somewhat difficult, since better testing rarely can be tied directly to new customers (indirectly, of course, the overall quality of your product affects how many customers you bring in and how many you keep). However, if there is an issue that will cause you to lose customers, or to be publicly embarrassed and hurt your brand value, the effects of that issue can be tied to these business metrics.

Don't throw metrics out entirely. Just be careful that each metric is limited in scope to ways it will actually be useful, and that it measures something you can actually affect for the company's overall purpose. Throw the rest away - you'll have more time to spend actually testing.

Monday, October 18, 2010

They Don't Have To Justify

Here's the situation:
You found a bug. You think it's a really annoying bug. You reported the bug, including your opinion on how likely customers are to hit it and how obnoxious the workaround is.

Then it got deferred. The official reason: "Not important enough to hold the release. Fix it for the next service pack."

Wait, what?! It's really annoying! Why on earth would you defer it?! It's time to go to the person making the decision and get an answer. You need to understand why they would make such a counterintuitive decision.


Deep breath.

Lesson #1 in corporate politics: you are a tester. The person making the ultimate call is probably a director, VP, or someone else who's been around for a while and seen some things. They do not have to justify themselves to you. It doesn't matter if they've been promoted to the level of their incompetence; they still outrank you. Demanding justification will get you nowhere.

Your job is to make sure they understand the bug and its implications. They have to "get it". Once they understand, they can make a decision. If the person then wants to explain that decision to you, that's wonderful (and not uncommon), but it's not actually a required part of the process.

Seek confirmation of understanding, not justification.

Remember, you are an ally of and an advisor to the person making the release decision. You need to be someone they can trust, someone they want to see. If you're constantly asking for justification and making them defend their decisions, then you can't be an ally. So do what you really need to do - be an advisor and ensure understanding. Don't demand justification of the ultimate decision; the reasoning will generally come out anyway, and you won't have had to be obnoxious about it.

Friday, October 15, 2010

On Network Debugging

I've been working on a project that involves essentially screen scraping. Basically, I perform an HTTP GET, parse the response, form a request, and then perform an HTTP POST (lather, rinse, repeat). It's not pretty, but this particular site doesn't offer an API and this was their suggestion.
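One turn of that loop looks roughly like this in Ruby. This is only a sketch: the token field, form parameters, and overall shape are made up for illustration, and a real page deserves a real parser rather than a regex.

```ruby
require 'net/http'
require 'uri'

# Pull a hypothetical hidden form token out of an HTML response body.
# (A regex is enough for a sketch; use a proper HTML parser in real code.)
def extract_token(html)
  html[/name="auth_token"\s+value="([^"]+)"/, 1]
end

# One turn of the loop: GET the page, parse out what the next
# request needs, then POST the form back.
def scrape_step(uri)
  page  = Net::HTTP.get_response(uri)               # the GET
  token = extract_token(page.body)                  # parse the response
  Net::HTTP.post_form(uri, 'auth_token' => token,   # the POST
                           'widget'     => 'add')
end
```

It's exactly this kind of parse-and-resend logic that sends you to the wire when a parameter isn't set the way the server expects.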

As I work through this, I spend a lot of time looking at the requests and responses going over the wire, trying to figure out what parameter I didn't set properly, etc. To do this debugging, I had my choice of network monitoring tools. The ones I use most are:
  • Wireshark
  • tcpmon

Wireshark is a network packet tracing tool. It runs as a wrapper around your network driver and picks up all the traffic. This is great when you are trying to figure out basic connectivity, any sort of network congestion, or the like.

tcpmon is a tool specifically for monitoring TCP traffic. It lets you see the requests and responses, and even lets you modify a request and resend it.

Choosing a network monitoring tool depends on where you think your problem is. The OSI Model for networks has seven layers, and you should aim your tool at the layer(s) where you think you have a problem. Think about the kinds of problems that you might see (bottom to top):
  • Physical Layer
  • Data Link Layer
  • Network Layer - Wireshark shows this layer and up
  • Transport Layer - tcpmon shows this layer and up
  • Session Layer
  • Presentation Layer
  • Application Layer

For this problem, I only cared about the transport layer and up - what was in my tcp request and what the app did with it. So I was able to use tcpmon rather than Wireshark. Neither is better than the other, but tcpmon showed me what I was looking for without the extraneous information Wireshark offered.

I tend to choose tools that are as high level as possible while still showing me the error I'm seeking. It's not a perfect rule, but as a general rule of thumb, it works pretty well.

Twist Podcast

Matt Heusser and I had a chat not too long ago, and he's put up this TWiST podcast (free registration required). We talked about... how we got into test, some about test teams in an agile environment, how testers and devs can pair to accomplish all sorts of tasks, and a few other things.

Go have a listen!

Wednesday, October 13, 2010

Test Variations

Once you have a base of test automation, it becomes easy to do variations on a test. What if I ran test X with changed configuration Y? What if I ran it with different hardware Z?

This is one of the benefits of a good test automation stack: it becomes fairly easy to run lots of variations on the same thing, and that can result in a lot of learning about how system behavior changes as various things flex. For example, I recently took a performance test we run and tweaked the setup to make it run with different memory allocations - and before long I had a much better understanding of how system throughput changes as a result of adding or constraining that particular resource.
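A variation like that is often just a loop around the existing test. A minimal sketch, where `run_throughput_test` is a made-up stand-in for whatever your real harness entry point is (here it fakes a number so the sketch is self-contained):

```ruby
# Hypothetical stand-in for the real test harness entry point;
# it just fakes a throughput figure so this sketch runs on its own.
def run_throughput_test(memory:)
  memory.to_i * 10   # pretend requests/second scales with the allocation
end

# The same test, flexed along one variable: memory allocation.
MEMORY_ALLOCATIONS = %w[512m 1g 2g 4g].freeze

results = MEMORY_ALLOCATIONS.each_with_object({}) do |mem, acc|
  acc[mem] = run_throughput_test(memory: mem)
end

results.each { |mem, rps| puts "#{mem}: #{rps} req/s" }
```

The interesting part isn't the loop; it's that once the base test exists, each new variation is a one-line change to the configuration being flexed.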

This isn't the kind of test I need to run frequently. I run it every once in a while to make sure the system behavior in this area hasn't changed, but it's overkill to run it every single night with the rest of our automation.

So here's the dilemma: Do I check in the test with my modified configuration?

Pros to checking in:
  • Next time I run it, I'll be sure I have the same setup, so my results are comparable
  • Someone else could run the same test and gather their own information from it (or a variation on it)
Cons to checking in:
  • Checked-in code that doesn't run is likely to get stale and stop working as other code changes around it. This goes for test code, too!
  • Someone might try to run it more frequently, and that will take up a lot of machines for very little benefit
There's no particular right answer here. Which one you choose will depend on how hard it was to set up, how much consistency matters, whether it's really likely to stop working, and how frequently you do want to run it. Pick the one that works for you; just make sure it's a choice you make, not something you fall into.

Tuesday, October 12, 2010

On My Own

Well, it finally happened. I quit my job.

Readers who have been around a while will know that I was an employee during the day and a writer and project tester at night (and some weekends, when it wasn't too pretty out). Through no fault of my employer, I have decided that it's time to pursue my own path.

I've joined Abakas full time as a test consultant.

So what's an Abakas?

Abakas is at its core a group of engineers who really like diving into new code bases and working with a lot of interesting people. We're developers, testers, and a designer or two. We've been doing - actually doing, not just talking about - various forms of agile development (Scrum, XP, and lots of "tweaked" variations thereon) for years, and we like sharing what we've learned. We've developed everything from a cloud deployment module to a personal health website complete with calorie calculations and exercise logs to a huge restructuring of a multi-language, multi-hour build system. Working mostly in Java and in Ruby on Rails, we've done many different things, and we've gotten pretty darn good at it.

Not to sound casual, but engineering can be fun and engineering can be collaborative, and part of what we do is help you get that excitement, too. There's no reason you shouldn't see your work in progress, and there's no reason we shouldn't work toward your end goal iteratively.

So if you have a project and don't have the guys to do it (maybe they're all busy on other things, or maybe you don't have an engineering team yet), or if someone said "hey, we're scrum" and your testers don't know what that means for them, drop us a line. Let's plunge in and get started - we'll see where it goes, together.

Oh, and tomorrow I promise a testing-related post!

Thursday, October 7, 2010

Between Us and Our Creation

Lo, back in the mists of time, there was the engineer. And he had punch cards. And that was it. And since then, we've grown many tools. We have compilers and debuggers and libraries and test frameworks and code generation and IDEs and runners and automation tools. And all these things are good. Unfortunately, all these things stand between us and our creation.

I wonder if we really still understand the software we create?

I've been hiring for development and QA positions, and discovering that people can't tell me about the systems they've worked on. They can only describe what their tools showed. They don't really understand what's going on. I find that very disappointing: when I ask a tester what portions of their systems are prone to race conditions, I get a blank stare. When I ask what a failed disk would do to their system, another blank stare. They don't understand what's happening to a level where they could identify a potential problem area and track it down. They only understand the lumps of things their tools expose.

So here's my challenge to you: Throw away your biggest crutches for a day. Get rid of your IDE and your log parsing tool. Really look at your system and what it's doing rather than just what your tools are telling you. Let's see how good our understanding really is.


Tuesday, October 5, 2010

What Will You Do With the Results?

Tests are good. I like tests. Tests show us things. Sometimes we call those things bugs. Sometimes we call those things information. Sometimes we call those things validation. Sometimes we call those things "someone should tell support because customers aren't going to understand this one".

The point of running a test is that we get something for our efforts. We have, after all, spent time, computing resources, brain cells, and possibly some money on this test. So whenever anyone wants to do a test, I ask one question:

What will you do with the results?

This question serves two purposes:
  1. It helps us understand whether doing the test is worth the cost.
  2. It helps us structure the test so we get the information we need from it.
In short, considering the output - including the reporting - before we start the test helps make sure it's a good test.

For example, if I'm doing a performance test, I might ask "what will I do with the results?" If the answer is "give them to marketing for the website", then I'm going to design a performance test to give me the biggest number I can get out of a configuration that could exist in the real world (hey, it's marketing - we're trying to show off a little bit!). If the answer is "figure out what our biggest customer is likely to see" then I'm going to design the test to match the customer site and workload to the best of my ability.

Thinking about what you want - and it doesn't have to take long - will help make your test results much more usable. So before you start, ask "what will I do with the results?"