Friday, February 29, 2008

Semi-Blind

I had been writing a post and was completely unhappy with it, so I went to delete it. I couldn't find the delete button.

Here's the screen (I use Blogger, and you're gonna want to click to see the whole thing):

[Screenshot: Blogger's post-management page, as described below - blue action links, black informational text.]

I've blurred the text a bit, but see the blue stuff? Those are actions. The black stuff is information - title, date, author, that kind of thing. Most of the work is on the left side of the page - creating new posts, editing existing posts, the titles of all your posts, etc. This is pretty normal for sites (at least, in the left-to-right world).

And then there's delete. It took me at least 10 minutes to find delete... waaaaaaay over there on the right side. I know delete isn't something you should take lightly, but you'd think it would be in with the rest of the controls, either at the top of the page or on the left side. There's discouraging use of a feature, and there's hiding a feature, and this one crossed the line for me.

I really don't think that one is a usability win for Blogger.

Thursday, February 28, 2008

Is Unexpected Behavior a Bug?

There's the way you coded it, and the way it should behave. Most of the time, these are the same. Occasionally, though, the difference is the kind of thing that causes nightmares. Usually the problems come from the latter half of that statement - disagreements about the way something should behave.

For example, there is a ... well, we'll say item... introduced in the Linux 2.6.19 kernel. When doing NFS mounts, if you mount a file system first as read-only, all later mounts of the same file system are also mounted read-only. The issue was closed as: "According to the explanation on the upstream bug report, this is a logic change in the way the kernel handles multiple mounts of the same remote nfs file system. Not a bug."

So, is this a bug?
  • Yes. It's a semi-documented (at best) change in behavior that breaks backward compatibility.
  • No. It makes the behavior of mounted remote file systems consistent with the behavior of local file systems.
  • Yes. This has the effect of causing the -rw flag to be useless in some scenarios.
  • No. The flags are requests, not requirements on the file system. Just because you request it, the file system doesn't have to grant you that level of access (for example, you might not have permissions to write to that file system, and no -rw flag will help you there).
I honestly have no position here, but be mindful that when you say something is or is not a bug, you're also saying what you expect. State that expectation explicitly - that way the issue can be resolved, whether it's with code or with expectations.

Wednesday, February 27, 2008

Speculation Vs Hypothesizing

The line between speculating about a problem and creating a hypothesis about a problem is very thin but incredibly important.

A speculation:
  • is not backed by data, although it may not be contradicted by data either
  • may or may not be provable
  • generally requires a leap in imagination (often phrased as "if XXX is also happening")

A hypothesis:
  • is explicitly based on available data
  • has a specific way to be tested and proven or disproven
  • accounts for all known data points
A speculation may lead to a hypothesis, but working based on speculation alone is likely to result in churn. Working from a hypothesis (or set of hypotheses) allows you to winnow away possibilities. In the end, the important difference is that given speculation there is very little you can do to move forward. Given a hypothesis, you can test it and move on, having found your explanation (or not).

When you're doing root cause analysis (in our case of a bug, but that's just a hazard of the profession!), ask yourself whether you're using speculation or a hypothesis. Speculation will cause you to jump from possibility to possibility, but there is no order or method to it. Speculating leads to churn. Hypotheses lead to answers.

If you can answer the question "Why do I believe this may be so?" you have a hypothesis. If you can't, keep digging.

Tuesday, February 26, 2008

"It's Hard" Sometimes Means More Than "It's Hard"

I sat down this morning to write another diatribe against the "it's hard" school of bug closing.* Then I took a deeper look at some of the bugs that trigger that reaction.

Sometimes "it's hard" is shorthand for "we don't get a lot for all this effort".

There are legitimate times to say that a bug is hard to close, or a lot of work to close, and that you don't get much for all that work. For example:
  • You show space utilization to the hundredths of a MB. For a lot more work, you could show it to the thousandths of a MB. Your customers typically write 100GB plus of data to the system. It's not really worth that extra significant figure.
  • You can cause a 1% increase in the performance of your system... if you take the team into isolation for a month to get it done. The only customer complaining about performance is that guy who won't be happy no matter what, and is only worth 0.02% of your annual revenue.
  • Your app allows users to name their modules. There's a bug that the name can't contain an umlaut. None of the rest of your app is globalized. Fixing it in that one spot won't really make your app work in countries where umlauts are in common use.
The more appropriate way to put "it's hard" is "the ROI of making this change is low, so let's not do it". Typically, bugs that fall into this category are those that:
  • Make small performance improvements when performance isn't a problem.
  • Are only a small portion of the total problem and won't help fix the overarching weakness.
  • Don't matter to enough of your customers (or potential customers) to be worth the cost of fixing.
So next time I hear "it's hard," I'm going to look hard enough to see if it's whining or if it's really that fixing the bug ain't worth much.


* No, no reason in particular, all you Permabit readers - y'all are wonderful as usual.

Monday, February 25, 2008

What's Really the Problem?

When I was at lunch today I sat next to a table with three people talking about Windows Vista. Their positions on Vista went something like this:
  • Guy 1: HATES it. Calls it very very unstable.
  • Guy 2: Wants to try it since he just got a beefy new system. Worried about all the anti-Vista sentiment in the air.
  • Guy 3: Mostly interested in his food.
Let me back up for a second and note that I ran Vista Ultimate on a Dell laptop for most of 2007 (starting in January before the official release) and had no stability problems. Sure, there were things I liked about the OS and things I didn't like about the OS, but I didn't experience any of the freezing, hangs, crashes, runaway processes, etc that this guy apparently had. So why were our experiences so different?

What's really the problem here?

It's easy to blame Vista - heck, everyone else is! - but before we go casting aspersions, let's look at what else can cause OS instability.

  • Antivirus. Most of the antivirus software I have ever used is hard on a system. It tends to run lots of privileged, high-priority processes, and there is a ton of "always on" usage. These are applications prone to memory leaks, runaway processes, etc.
  • Hardware Instability. A bad memory card, dying hard drive, etc. can cause your computer to appear unstable.
  • Too Cheap/Old Hardware. The minimum requirements are just that - a minimum. A system that barely meets them is going to run just barely acceptably. If you want a good experience, make sure your system can handle it.
  • Unsupported Drivers. That wacky "speed up the internet" thing that you downloaded from your dialup provider? That's a program that's running at a very low system level - even down to your network drivers - and any issue there will have wide-ranging effects.
  • I'm sure there are more, but these are the common non-OS causes of system-wide issues that I've seen.
So just because your Vista system is unstable, don't assume Vista itself is the problem. Perhaps it's just a victim. Remember that operating systems run in an environment, just like any other program.

Don't look at the proximate problem. Keep looking until you find the real problem.




* Disclaimer: I have not run Vista SP1 at all, and can't speak to its quality or stability.

Friday, February 22, 2008

TDD and Order

Short one today on a snowy Friday...

There's an interesting post on the Scruffy Looking Cat Herder blog considering the outcome of a TDD effectiveness study. I'm reserving judgement on this one for a while.

Thursday, February 21, 2008

Beware the Testing Calling

Every QA candidate I interview gets asked this question:

"So, how did you get into QA in the first place?"

There are three classes of answers that I hear regularly: those who have a calling to test, those who fell into it, and those who aren't really into testing but can't get their preferred job.

Test Is a Fallback
These are the candidates who consider themselves "really developers, but the market is tight right now". Or else they're just out of college and looking around, with development as a default. Sometimes, they really love the idea of working for your company and will try to get into QA with the intention of moving to development "as soon as something opens up".

If you take this candidate with a clear understanding on all sides, this can actually be a good hire. First of all, points for honesty and openness about viewpoint. Secondly, if the candidate really does end up in development, they will have some exposure to how you test and can be an advocate for QA after they've moved. Be careful to look for some ability here, though, as the desire to learn how to test well can be very limited. Lastly, if it's someone coming right out of college, give the kid a break - there are very high odds that their exposure to test is minimal at best. It may simply not have occurred to the candidate that a career in test opens him to some of the more exciting and even innovative work he can do.*

Fell Into Test
I have a soft spot in my heart for this candidate, mostly because this is how I got into QA in the first place. Let's face facts, though - testing as a profession within software development isn't one of those things you're going to hear touted in university Computer Science programs. When you get out of your average business school, there is lots of talk about being a marketer, or an accountant, or a financial analyst. When you get out of your average Computer Science program it's simply assumed that you'll be a developer, and the only real question is whether you'll be a web developer, a database developer, or a server developer. Test simply isn't on most lists. A person like this will often tell a story about being in tech support, being pretty good at figuring out what's going on, and moving into QA from there. Or they'll talk about an internship they happened to get and discovering that they really liked this test thing.

The best hires I've had are from this pool. There are a few flags to watch for - specifically, the talent has to be there and the technical skills need to be present. Also, if they haven't been in QA for long, you may be looking at a person who is just good at disguising that this isn't really the job they want. This type of varied background often makes for a tester who is sensitive to the technical and business issues around the tests. The end result is often a very effective tester.


Calling to Test
These are the ones who feel like testing is a calling. Typically, they'll say things like, "I've always wanted to be a tester." or "When I was growing up, even then I was testing things." A candidate who mentions growing up a tester invariably tells one of two anecdotes. The first anecdote is about taking something apart to see how it worked, almost always a vacuum cleaner.** The second anecdote is about being introduced to something and finding a bug in it without any seeming effort. Either way, this candidate feels like they were "born to test" and have a lot of innate talent for testing (which usually means an innate talent for finding flaws).

Beware this candidate. It's possible that this will be a great hire, but all my bad hires have come from this pool. The flaw you have to watch for here is that belief in the person's innate talent - that talent may or may not exist! This type of candidate, particularly when inexperienced, also tends to be resistant to learning or instruction; why should they learn techniques and strategies when their own natural abilities will get them through?  Lastly, there's the risk that this candidate simply has a strong idea of what they think you want to hear, and that's not a good sign. I want employees who are forthright, not playing the game of "please the boss".



* If you need to sell a candidate who is thinking development simply because he's had no exposure to testing, then by all means do it. I'm of the firm opinion that test today is much more exciting than development (sorry, any of the devs who read this - don't kill me!). After all, who wants to get out of school and go be a junior developer implementing yet another AJAX control with a login form you'll build 50 more times in your career? Test is much more of an exciting frontier. It's where a lot of the innovation is, and wouldn't you want freer rein to solve a problem that hasn't been solved well before?

** Side note: Those poor vacuum cleaners! Why do they always get picked on? And will the next generation be telling stories about dismantling Roombas?

Wednesday, February 20, 2008

Tools Don't Fix People

"Hey, our process isn't working. This great new tool will help!"

No, it won't.

No tool can fix a human problem.

There are a lot of different software development processes - RUP, XP, SCRUM, etc etc etc. Most of them have tools associated with them, ranging from the RUP templates and all the various products in the Rational Suite, to agile process management tools like Rally. These tools have a purpose, and can be an amazing productivity gain, but they don't do any good if your problem is your process or the people in your process.

What a tool can do:
  • Help document your process
  • Help document conformance to your process
  • Act as a memory aid to help people remember the ins and outs of your process
  • Provide reporting to help you measure your process improvement
What a tool can't do:
  • Make people use it
  • Force people to follow your process
  • Give you the benefits the process improvements can
Don't mistake a people problem for a tool problem. There's enough software gathering dust; fix the people, then fix the tools.


Tuesday, February 19, 2008

Mary Had a Little Lamb

One of the techniques I use for developing test cases is "Mary had a little lamb." Sure, the name of it is kind of long, but bear with me.

Essentially, what you do in this technique is take each word of your spec and exercise it. Ask yourself: What else could this be, and how would that affect things? Not all of these will be relevant, but it's a good way to stress each portion of the spec. Then simply chuck what you don't need.

For example: Mary had a little lamb.
Mary
  • What if it's Bob? Does gender make a difference?
  • What if there are 4 people named Mary? Who has the lamb?
  • What if Mary is a nickname and her full name is something else? Does how you address it matter?
had
  • What if she had it but doesn't any more?
  • What if she doesn't have a lamb, but it should be here tomorrow?
  • What if the lamb belongs to her but isn't in her possession? (i.e., what if "had" is not literal)
a
  • What if there are no lambs?
  • What if there is more than one lamb?
little
  • How big is little? Is this age or size?
  • What happens if the lamb is bigger than expected?
lamb
  • What if it's a sheep?
  • What if it's a chicken? (or some other non-related animal)
  • What if it's a shovel? (or some other non-animal thing)
  • What if the lamb is somehow unusual - color, or baa-ing?
This technique doesn't yield test cases directly, but it does help you ask intelligent questions about the system.


Pros:
  • This works particularly well when you're helping with requirements definition. The specs tend to be defined in a positive manner (what will the system do) rather than in a negative manner (what if the system doesn't)
  • Exercising each word can help you break out of a thought rut that is causing you to miss some test parameters.
  • It's a fast exercise to train on and can be used with non-testers fairly easily.

Cons:
  • This really only works if you have a fairly short spec. I can't imagine doing this with a 20 page document.
  • Non-comprehensive. I don't know of any technique that is fully comprehensive!
  • It can bring up overlapping test areas. Post-processing is necessary to distill this to testable items.
  • It can be tedious if you have a long example. Schedule this one in chunks so people don't burn out.

Good luck! Oh, and don't worry about sounding funny walking around saying, "MARY had a little lamb. Mary HAD a little lamb. Mary had A little lamb." It really does help.

Monday, February 18, 2008

Code Reviews Are Insufficient

One of the things about having a lot of automated tests is that it can slow down your story acceptance process.

Accepting a feature that involves a manual test looks like this:
  • Sync the code (or take a build)
  • Set up your environment with that code/build
  • Perform the manual tests specified in the feature
  • Play with the feature for a while - using exploratory testing or your other favorite non-test-sequence-based technique
  • Give a thumbs up or thumbs down
Accepting a feature that involves an automated test looks like this:
  • Sync the code (or take a build)
  • Run the automated test yourself
  • Play with the feature for a while (see above)
  • Wait. Allow the automated tests to run in their normal (nightly, for example) environment
  • Give a thumbs up or thumbs down
The waiting is particularly painful, because it means you can't just sit down and accept that story. So the temptation is to simply run the automated test, perform a code review, and say you're done. Resist the temptation.

In theory, theory and practice are the same. In practice, they're often different.*

Code reviews tell you the theory of the code. Don't accept the feature until you've seen the theory AND the practice.



* This was put to me this way by someone on my team. I don't think he made it up, but thanks for the pointer, Chip! Further update 20 Feb 2008: Chip tells me he got this quote from Rachel Silber.

Friday, February 15, 2008

The Politician's Fallacy

I ran across this - the Politician's Fallacy - today and it made my entire team crack up:

Something must be done.
This is something.
We must do it.

Kudos to The Old New Thing for leading me to it.

And then it got me to thinking (as these things tend to do)....

One of my responsibilities is to handle escalations from customers. Basically, if a customer has an issue and it's not something support can handle, it comes into engineering. This is fine, and often we're able to find the bug, diagnose the issue, etc.

The real fun part, and the part where the Politician's Fallacy starts to apply, is when the issue is a non-showstopper bug and there is no workaround. For example*:

An issue comes in that a client on an unsupported configuration is seeing duplicate log messages for certain events. We track it down to an issue with the configuration. This particular configuration is already scheduled to be supported in the next release.

Enter the Politician's Fallacy: We must do something. Hey, look, a script to remove duplicate messages is something. Let's do it!

The short answer is no, we don't really have to do something. Don't fall into the trap of believing that resolving an issue always means changing something. That path often leads to one-offs and other scary, unsupportable hacks. Sometimes acknowledging the issue is sufficient - inaction can be as beneficial as action.



* As usual, names and some circumstances have been changed to protect the innocent.

Thursday, February 14, 2008

Stories Should Say WHAT, Not HOW

We describe features as stories. This is incredibly standard in XP and other methodologies. A story usually goes something like this: "System allows user to log in with Active Directory credentials". Then it goes on to describe what that means, the tasks needed to implement it, and finally an estimate.

All good.

However, one thing that you have to be very very careful of is what you put in the story.

Stories should say WHAT the system does. Stories should not say HOW the system does it.

If a story says how, it forces the implementor into a design, and that design may or may not be the right way to do it. For example, here is a story two ways.

Story Describing What
----------------------------
Summary: Automated test logs bugs for failure
Motivation: Prevent QA engineers from having to manually parse error logs and log the bugs
Details: The automated tests currently spit out a results log with the test that failed. QA engineers go through, find the actual failure, and log (or update) a bug based on it. The automated tests should, for each failure, either log a bug or update an existing bug (if one exists already).

These bugs should be assigned to a QA engineer for review. QA engineers will manually review the bugs and send them to the correct developers. This step may be removed by a later story.
---------------------------------

Same Story Describing How
-----------------------------------
Summary: Automated test logs bugs for failure
Motivation: Prevent QA engineers from having to manually parse error logs and log the bugs
Details: The automated tests currently create a summary log with the test(s) that failed. QA engineers go through and find the actual failure, and log (or update) a bug based on it.

The automated tests should, for each failure, look up the test name in a database of prior test results, compare the failure with all related open tickets and determine if it's a new failure. If it's a new failure, the automated tests should log a new bug and create an entry in the database of prior test results. If it's a previous failure, the automated tests should update the existing bug and update the entry in the database of prior test results.
---------------------------------

In the second story, I'm describing not just what I want to have happen, but how it should be implemented. While the method described above is one way to handle automatically logging bugs, it may not be the best way. Nonetheless, my story is committing us to that particular way.  
The purpose of a story is to describe what about the system will be better for the user. Don't expand the scope of the story and lose sight of the user. This is a requirements document, not a design document.

Wednesday, February 13, 2008

5 Whys and Escalation

One of the things my QA department does is handle escalations from support. Basically, if support can't figure out a problem or if support believes that a problem is caused by a bug, it comes to QA. We figure out how to reproduce the issue and track it down to its source, then assign it out to the appropriate team for fixing.

Generally this works pretty well, but one of the things we would like to improve is how much support knows before things are kicked over to us. So we're starting to formalize around the 5 Whys.

The 5 Whys method is a cause analysis heuristic that can be used for all sorts of problems. The idea is to look at a problem and - much like your average 3 year old - start asking why. Asking why takes you from step to step and helps you avoid assumptions or leaps of faith.

For example:
1. I couldn't write to the network drive
Why?
2. The drive wasn't available
Why?
3. I can't get to the system that the drive is on
Why?
4. The system is down
Why?
5. The disk failed

This is an easy example, but it gets harder quickly. 

The "whys" are now part of the formal defect writeup.  I suspect this will reduce the number of bugs that end up in QA with a "huh?" attached.

We'll see what happens in a month or two...

Tuesday, February 12, 2008

Endless Discussions...

There are a lot of process discussions that go on at work. We talk about how to move stories forward, how to create a story, how to create a bug, how to triage a bug, etc etc etc. That's a lot of talk about the beginnings of things.

We so rarely talk about the endings.

One of the trickiest issues we have is how to handle customer issues. Here is where the beginning is well-defined, and the end is not. Here's the process, roughly:
  • Client logs an issue (or support notices an issue at the client site).
  • Support looks at the issue to figure out (1) what's wrong; and (2) how to get the client running again, if necessary.
  • Support logs a bug for dev to figure out what the problem is. This only happens if the issue is something new or is unclear.
  • Dev looks at the issue and figures out what's going on. Where necessary, dev comes up with a workaround.
After that, it kind of falls apart... Usually by this point the client is up and running again, so the urgency is somewhat less. One or more of the following may occur:
  • The bug that support logged gets moved into a dev queue.
  • The bug that the client logged gets moved into a dev queue.
  • The bug that the client logged gets closed.
  • The bug might be closed and a story created to add new functionality.
  • Someone sends an email saying, "So what's next on this item?"
So when you're talking about how to handle incoming data, be sure to talk about how to handle closing it out. Starting things is only good if you can end them!

Monday, February 11, 2008

Skating By

I hate the notion that we're doing the least we can to get by. And it's popping up everywhere:
  • In Software by Numbers, we see the notion of "Minimum Marketable Features" - the smallest chunk of work that a customer or potential customer would find viable.
  • Extreme Programming espouses the notion of Doing the Simplest Thing That Could Possibly Work. When you solve a problem, solve it in the simplest way possible; that is, make the smallest change possible to get the desired functionality.
  • Many SCRUM practitioners require that stories be small - anywhere from "fits in an iteration" to "no more than X days" (where X is smaller than a breadbox!). Yet you still have to have a shippable product at the end of each iteration. And if it's a shippable product, then you've met customer expectations - using only small changes.
I can understand the desire behind these concepts. Overbuilding something, or spending a long time building a "framework" or a "platform" can take you away from what your customers really want. When you're overplanning or overbuilding you can work for a long time on something only to find that the market has moved on you, or your customers don't like it, or whatever.

But...

I really think the notion that we're all going to do as little as we can get away with does a disservice to the customer. Plus, that's not what really good development teams are doing. Good development teams are distilling ideas to customer desires - expressed and unexpressed - and building only that.

I don't think the problem is in what we're doing; I think the problem is in what we're calling it.

So down with minimums. Let's give it a better name. We're not doing "minimum features", we're not doing "small changes". We're not doing "simple things" or "easy things".
 
We're doing focused development*.

I think that sounds a lot more like what we really do.


* I'm open to better names... ideas?


Friday, February 8, 2008

Kanban-Style Software Development

Blog articles - at least in the blogs I follow - come in waves. Part of it is a self-perpetuating cycle: one blogger writes something, and other bloggers react, chime in, etc.

Anyway...

One thing that's been running around fairly recently is the use of Kanban-style processes for software development. The basic idea is that a few simple, well-defined rules enable a "pull-based" process. Instead of a product manager determining what needs to be done and pushing it into development, we reverse the flow: development works on something, and when it's done, goes to product management and asks for the next thing. This blog has a good article summarizing the idea behind it.

Interestingly, this is something that was quite popular in MBA programs when I was in school in the late 1990s - more waves!

I'll be curious to see if or how this takes off.

Thursday, February 7, 2008

Introduce Process Slowly

Because my current QA team has gone from 1 to 4 in about 3 months, we're lacking a lot of process. Formal team-oriented processes aren't really something that comes up when you're working alone. But now there are 4 of us, and it's time to put a few processes in place so we don't trip over each other.

So here are some of the process elements we need:
  • a task queue
  • a rotation schedule for automated test result analysis
  • measurement of our velocity
  • tracking velocity over time, with goals to increase that velocity
  • a story definition process (similar to dev, but with some twists to handle how our stories are different)
  • a process for handling escalations from support
  • a process for accepting stories
But to put all that in at once would be overwhelming. So let's take baby steps.

So far we've:
  • established a rotation schedule for automated test result analysis (this one is not really intimidating)
  • picked a person to handle escalations
  • picked a person to handle story acceptance
  • created a task queue. We are not yet requiring that these be fully fledged stories.
The idea here is to not create so much process that we get overwhelmed. This way we can all learn the product, learn the test style, learn how we work together. Then, and only then, will we add process elements.

So are we doing some things "wrong" or unsustainably? Absolutely.

Is that the right thing to do? Yes. As long as we're improving, then it's more sustainable to be flying by the seat of our pants than to try to change too much at once.

Wednesday, February 6, 2008

When the Doing is Faster

Today, I broke process. (EEP!)


Our software development process, which applies to QA just like everybody else, says that you get a new feature (or new test, in this case) by doing the following:

  • Create a story stub candidate describing why you want it, what it is, and a very preliminary idea of size
  • Take this to the XP Customer Team, who will either defer it, send it to someone for more details, or approve it and put it in your queue
  • Get a good estimate on the story and re-prioritize in your queue based on the actual estimate (rather than the SWAG you had earlier)
  • Patiently work on other things until the story hits the top of your queue
  • Implement the change
  • Get the change reviewed
  • Check it in
  • Have someone accept the story
It looks like a lot of steps, but for something really urgent this can all be done in a matter of hours.

Today I skipped a few steps. I did all the right things getting the story into the queue. But when it hit the top of my queue I took a look and realized that to estimate it would take me approximately 15 minutes. To actually do the work would take me approximately 30 minutes. So I skipped straight ahead to "implement the change".

Moral of the story? If estimating would cost a meaningful chunk of what it costs to just do the work, don't waste the time just because your process says you should. Just do it.


Tuesday, February 5, 2008

Ambiguity in Specifications

On the T on the way to work this morning there was an ad recruiting for ITA Software. The ad ran something like this (I'm paraphrasing):

================
Problem:
If you take all the numbers from 1 to 999,999 and write them out as words, then concatenate them, what is the 51 millionth letter?

If you can solve this, come work for us!
===============

Coding for this problem is not difficult. However, I could come up with several different answers. I assert that the question is ambiguous.

Ambiguity 1: "from X to Y"
  • Is this inclusive? That is, do I include the numbers 1 and 999,999 or not? 
  • I assume this is whole numbers (integers) only; the problem isn't really solvable unless some interval is defined.
Ambiguity 2: "concatenate"
  • In most languages, this means that you simply write out the numbers one after the other with no delimiter (no space, no line ending, etc). Confirm this assumption.
  • In a few languages, calling concatenate results in removing all white space. Confirm that this is not the underlying assumption.
Ambiguity 3: "write out the number"
  • 1171
  • one-thousand, one hundred, seventy-one
  • one thousand one hundred seventy one
  • one thousand one hundred and seventy one (I believe this one actually expresses 1100.71)
  • one one seven one
None of these ambiguities is a difficult thing to determine, but they will change your answer. The moral of the story is to really read your specification, eliminate ambiguities, and then start coding.
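
Just to make the point concrete, here's a rough Ruby sketch of one reading of the problem. It's mine, not ITA's, and every assumption in it - inclusive endpoints, American-style wording with no "and" or hyphens, and whether spaces count as characters - is exactly the kind of thing you'd want to confirm before writing a line of code.

# A quick sketch, not ITA's intended solution. Each assumption is marked;
# change any of them and the answer may change (or stop existing at all).
ONES = %w[zero one two three four five six seven eight nine ten eleven twelve
          thirteen fourteen fifteen sixteen seventeen eighteen nineteen]
TENS = %w[_ _ twenty thirty forty fifty sixty seventy eighty ninety]

def words(n)
  if n < 20      then [ONES[n]]
  elsif n < 100  then [TENS[n / 10]] + (n % 10 > 0 ? [ONES[n % 10]] : [])
  elsif n < 1000 then [ONES[n / 100], "hundred"] + (n % 100 > 0 ? words(n % 100) : [])
  else                words(n / 1000) + ["thousand"] + (n % 1000 > 0 ? words(n % 1000) : [])
  end
end

separator     = ""               # assumption: letters only; use " " if spaces count as characters
target        = 51_000_000
length_so_far = 0
(1..999_999).each do |n|         # assumption: both endpoints are included
  chunk = words(n).join(separator)
  if length_so_far + chunk.length >= target
    puts chunk[target - length_so_far - 1]
    exit
  end
  length_so_far += chunk.length
end
puts "Under these assumptions the whole text is only #{length_so_far} characters long."

Flip any of those assumptions and you may well get a different letter - which is exactly why you'd ask before you code.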

Somehow, I think this is more interesting as a test question than as a coding question!

Monday, February 4, 2008

Quick Releases Aren't Always Good

One of the things I love about Ruby and Rails is how incredibly convenient they make the steps after implementation. After a feature or bugfix is done, it's simple to run the automated tests (let's assume you have written them). And deploying can be as simple as three or four words - "cap deploy production" or the like.

Sometimes, convenience means sloppiness.

Easy deployment can sometimes mean you don't actually look at a feature, or you don't run the tests because "that couldn't possibly have broken anything". The net result is that you deploy something thoroughly broken.

Case in point: one of my projects uses Lighthouse for defect tracking. It's generally a good little system - ASP-hosted, written in Ruby on Rails, quite simple... And about once a month, someone deploys something totally broken. The entire site is down for anywhere from 15 minutes to 3 hours.

So use the tools that make deployment easy, but keep asking yourself if it's too easy - and don't let easy become sloppy.
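
If you want to keep the convenience without the sloppiness, one cheap guardrail is to gate the deploy command behind the test suite. Here's a minimal Rakefile sketch - the task names and the "cap deploy production" invocation are assumptions about your particular project, so adjust to taste:

# Rakefile sketch: the deploy command you actually type refuses to run
# unless the test suite has just run (and passed) as a prerequisite.
desc "Run the tests, then deploy; aborts if anything fails"
task :careful_deploy => :test do
  sh "cap deploy production"   # the same easy deploy, just gated by green tests
end

Make "rake careful_deploy" the habit, and "that couldn't possibly have broken anything" gets checked every single time.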

Friday, February 1, 2008

Ambiguity in Testing: Link

I try to not play "follow the link" normally, but this one is worth it. Check out this blog post on ambiguity in specifications.

More on this later.