Wednesday, October 31, 2007

Which Came First?

My current job uses an XP (aka Extreme Programming) development process. Now, lots of companies say they do, but these guys really do, right down to stories, a force-ranked queue, pairing, and writing tests first.

There are a lot of tenets of XP that affect test, but the two that I would like to consider today are:
  • Test first.
  • Automated tests are good (yes, this is a simplification, but unpacking it further is a job for a later post).
This is all well and good, but what should you really be testing first? There are three ways to approach testing a feature:
  1. write the automated tests, then test manually for usability and context
  2. test manually, then write automated tests for regressions
  3. split automation from manual testing and try to accomplish them roughly in parallel
Extreme Programming lends itself very well to approach #1. You write the test automation as (or even before) you write the code. Then, when it's basically done, you take a look at it as a user would see it - through the GUI (however that's expressed).

More traditional software development methodologies put test automation on QA's shoulders, and approaches 2 and 3 are very common. You look at a feature first as the user would see it, then you automate.

So, now that I find myself in an Extreme Programming shop, how do I change the more traditional approach? Or how do I change the XP approach to provide some of the benefits of the old way?
  1. Do not eliminate manual testing. Yes, I know this violates XP methodologies, but if you don't see what your user sees at some point, then you will make a product that is very hard to use. If you can figure out how to automate the user experience once you're happy with it, great, but do have the courtesy to step into your user's shoes at some point.
  2. Do allow automation to reduce your manual testing time. Automation can't cover everything, but it can eliminate many classes of tests. No more manual boundary value testing of that form - just automate it and move on (see the sketch after this list). Save the manual tests for the things machines can't do.
  3. 100% defect automation is good. If you can describe it well enough to be a defect, you can automate the test for it. This saves a lot of time in regression testing in particular. I wrote a whole blog post on this earlier.
  4. Automate the boring stuff. All those tests that have to be done for every release or on every build and that are the same over and over are good candidates for automation. Humans make mistakes and take shortcuts because they believe they know what it will do, so make the computer do it instead. This also frees up your humans for more advanced, more interesting, more complete testing - and keeps your team engaged.
  5. Test as the user. Figure out which tests your users really care about and run them manually, at least sometimes.
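As an example of what #2 looks like in practice, here's a minimal sketch of an automated boundary value test. It's Python with the standard unittest module; the field limits and the validate_username function are hypothetical stand-ins for whatever form your product actually has.

```python
import unittest

# Hypothetical system under test: a form field that accepts 1-64 characters.
MIN_LEN, MAX_LEN = 1, 64

def validate_username(name):
    return MIN_LEN <= len(name) <= MAX_LEN

class BoundaryValueTests(unittest.TestCase):
    def test_boundaries(self):
        cases = [
            ("", False),                   # below the minimum
            ("a" * MIN_LEN, True),         # exactly at the minimum
            ("a" * MAX_LEN, True),         # exactly at the maximum
            ("a" * (MAX_LEN + 1), False),  # just over the maximum
        ]
        for value, expected in cases:
            self.assertEqual(validate_username(value), expected)

if __name__ == "__main__":
    unittest.main()
```

Once something like this runs on every build, no human ever has to type a 65-character username again.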
Put these all together and you've accomplished two pretty important things. First, you still put your user at the center of your testing. Second, you build a base of test automation so you can test more and test better.

Tuesday, October 30, 2007

It WILL Get Done

Commitment is very important in QA. Generally by the time QA is in the spotlight, it's close to a release and tensions are starting to get just a little bit high. Part of QA's job at this point is to maintain and encourage calm.

So what's the best way to do this?

Be very very careful of your commitments.

If you say you're going to do something, get it done. If you have to stay late to finish, stay late. In our profession, more than many others, slippage is a problem. It's easy to say that it's getting late, and you have to get home, and tomorrow's good enough. But there are consequences. If you said you would finish something today and it doesn't happen, a house of cards often falls around you.

Just today I was at work late installing a demo system. Why? I had said I would get it done today. So what was the difference between 10pm one night and 2pm the next day?
  • 6 hours less of demo practice time for sales
  • The demo data would finish loading 6 hours later - too late to ship out in time for the demo.
  • QA would seem less dependable.
The last item is by far the most important. As QA, your job is to eliminate surprise. You should be the most dependable part of the entire software development process. The goal is simple:

If you say you're going to do something, everyone around you needs to know that it will get done.

Monday, October 29, 2007

Test Commonalities: File Attributes

Welcome to part three of my Test Commonalities series! In this series we talk about common items that come up over and over across projects.

A file is generally treated by a program - and by most users - as a unitary item. That is, a file is a thing, one single thing. In general, though, that's simply not true. A file is actually a thing (the file itself) and a bunch of meta-information. Depending on the context, these are given different names, but in general they are attributes of the file.

So, when we test file system attributes what do we need to consider?

Well, first we need to define the scope of the concern. In some cases, the system will be isolated on a file system. For example, a simple desktop application may never access a remote file system. In other cases the system will involve a remote file system. Examples of this include sync applications and applications allowing or utilizing a network drive.

If we are isolated on a single system, we must consider the following:
  • Performing legal operations. For example, read a hidden file, or write to a file with the archive attribute set.
  • Attempting illegal operations. For example, write to a read-only file.
  • Displaying things to the user that match what the operating system displays. For example, show a read-only file to the user.
  • Displaying things to the user that don't match what the operating system displays. For example, show a hidden file to the user.
  • Changing an attribute of the file
  • Preserving file attributes across modifications of the file. For example, writing to a hidden file should leave it hidden.
  • Avoiding unintentional modifications to attributes. For example, reading a file should not update the modified time (see the sketch after this list).
  • Inherited attributes. For example, if a directory is read-only, a file inside that directory can be expected to be read-only.
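Here's the sketch promised above: a rough unittest illustration of two of these preservation checks. The path and the dot-prefix hidden-file convention assume a UNIX-like system; on Windows you'd check the hidden attribute instead.

```python
import os
import unittest

class FileAttributeTests(unittest.TestCase):
    PATH = "/tmp/.hidden_test_file"  # dot prefix = hidden on UNIX-like systems

    def setUp(self):
        with open(self.PATH, "w") as f:
            f.write("original contents\n")
        self.mtime_before = os.stat(self.PATH).st_mtime

    def test_read_does_not_update_mtime(self):
        with open(self.PATH) as f:
            f.read()  # a read should touch atime, not mtime
        self.assertEqual(os.stat(self.PATH).st_mtime, self.mtime_before)

    def test_write_leaves_file_hidden(self):
        # In a real test the write would go through the app under test;
        # afterward the file should still exist under its hidden name.
        with open(self.PATH, "a") as f:
            f.write("more\n")
        self.assertTrue(os.path.exists(self.PATH))

    def tearDown(self):
        os.remove(self.PATH)

if __name__ == "__main__":
    unittest.main()
```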
When there are multiple systems to consider, we must test everything above and some additional items:
  • Unavailable attributes. Some systems may not support certain attributes that other systems do. For example, older versions of Windows do not have an archive attribute.
  • Additional attributes. This is the inverse of unavailable attributes. For example, Windows Vista offers extended attributes not available to older versions of the operating system.
  • Different attribute names. In particular, when crossing operating systems (e.g., Mac to Windows), some of the same attributes may have different names.
  • Entirely different attributes, particularly when crossing operating systems. This is a special case of unavailable attributes combined with different attribute names.
  • Transferring attributes. When transferring a file, the attributes for that file must also be transferred. This is often a separate action from writing the file itself.
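One small illustration of that last item, using Python's shutil: copying a file's contents and copying its attributes really are separate operations, and it's easy to do the first and forget the second. (The file names are, of course, made up.)

```python
import shutil

shutil.copy("source.txt", "dest.txt")      # copies contents + permission bits
shutil.copystat("source.txt", "dest.txt")  # also carries over atime, mtime, flags
# shutil.copy2() combines both steps; forgetting the second one is exactly
# how "transferred" files lose their timestamps.
```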
In short, there's a lot more to a file than just the file itself, and it all needs testing. Maybe your program deals with attributes already, maybe it doesn't, but no matter how isolated your program is, you have to assume that files and file attributes can change underneath it, so test away!

Friday, October 26, 2007

Self-Governing Groups

Mishkin Berteig published a blog entry yesterday about truthfulness in Agile (blog entry here: http://www.agileadvice.com/archives/2007/10/truthfulness_in.html). His point, put briefly, is that "truth" is an essential part of agile. He defines "truth" as visibility and honesty.

The post got me to thinking about visibility and honesty in software groups. The actual issue isn't so much visibility. The issue is that agile assumes a self-governing group. The only way a self-governing group works is if you have self-governing people. However, that's not the only thing needed to make a real team work effectively.

So, how do these self-governing groups (like agile teams) work effectively?

First, the group must share risk. Either the group succeeds or the group fails, and the entire group has to believe that. If this doesn't happen, you foster competition among the group, and that will destroy the group's effectiveness. After all, if you're competing with me, why would you stay an extra hour to help me finish a task? But if the group will fail with the task undone, then you'll stay the extra hour to help finish. 

Second, the group must have mutual respect. If you think you're better than me, you'll behave accordingly. And I'll react accordingly. All of a sudden, I'm doing the easy tasks and doing them less well than you would - all because you seem better than I am. But if we respect each other, then we'll each push the other to our limits, and that's good for the whole group.

Lastly, the group must have the ability to discipline or expel its own members. This provides the group with sustainability. A group that cannot solve its own problems, even problems with its own composition, is a group that is governed by some external force (thus defeating the self-governing portion of our group definition).

So, if you want a self-governing group, get good people. Get people who are motivated and able to govern themselves. Then put them together and stand back.

Magic will happen.

Thursday, October 25, 2007

Test Commonalities: Paths and Filenames

This is part two (and the first substantial post) in my Test Commonalities series. In this series, we discuss the test areas that come up over and over again across many projects. The goal is to create a good cheat sheet for each so we don't have to reinvent the wheel every single time. Today: paths and filenames.

Handling paths and filenames is a huge part of testing any time you have interaction with the file system. Every time you have an application that writes a file, or creates a directory, or reads a file, you have a system that should be tested for paths and file names.

So, what is this testing? Basically, when you read or write a file, you subject your system to the rules and conventions of the file system on which you are running. You are interacting with a third party, just as much as when you interface with any other external system.

So for each type of operating system, you need to understand the rules of the file system (a sketch of turning these rules into test data follows the lists).

On Windows:
  • Paths, including file name and extension, must be shorter than 260 characters (MAX_PATH)*
  • File extensions are optional
  • File system is case insensitive (but case preserving)
  • Certain paths change per user (e.g., %USERPROFILE% is usually C:\Users\catherine)
  • "Reserved paths" are usually addressed by environment variable (e.g., a user's home directory, the default program storage location, the temp directory location)
  • "Reserved paths" are not in the same actual location across various OS versions
  • Certain directories require special privileges to write to them
  • Hidden and system files are denoted by file system attributes
  • Paths begin with a drive letter
  • Certain characters (e.g., : or / ) are illegal in path and file names
  • File extensions may be of any length from 0 to 250 characters
On UNIX/Linux/Mac**:
  • File system is case sensitive (except the Mac's HFS+, which is case insensitive by default)
  • Files beginning with a . are generally hidden from the user
  • Hidden and system files are denoted by the file name and location
  • Certain directories require special privileges to write to them
  • Certain paths change per user (e.g., ~/ takes you to a user's home directory)
  • Paths begin with a /
  • Directory delimiters in a path must be /
  • Certain characters (e.g., /) are illegal in path and file names.
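And the sketch promised above: one quick way to turn the illegal-character rules into test data. The character sets are the commonly documented ones; verify them against the actual file systems you support, since this is exactly the kind of thing that varies.

```python
import platform

WINDOWS_ILLEGAL = '<>:"/\\|?*'   # commonly documented illegal characters
POSIX_ILLEGAL = "/\x00"          # about the only hard rules on UNIX-like systems

def illegal_name_cases(system=None):
    system = system or platform.system()
    chars = WINDOWS_ILLEGAL if system == "Windows" else POSIX_ILLEGAL
    # Embed one illegal character mid-name per case; the app under test
    # should reject or escape these rather than crash.
    return ["file" + c + "name.txt" for c in chars]

for case in illegal_name_cases("Windows"):
    print(repr(case))
```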
You wouldn't skip testing your system's interface with a billing system, or with an HL7 feed. Don't skip testing your system's interface with the file system, either.

*Disclaimer: Yes, there are ways around this, but they're generally ill-supported and will get you in trouble.
** Disclaimer part 2: Yes, I know there are a lot of different file systems available for UNIX and UNIX-like operating systems and they don't all match. This post is intended to cover mostly client systems where this kind of interaction is likely. Do your own research for your own wacky file system if you want to use one.

Wednesday, October 24, 2007

Test Commonalities: A Series

A few weeks ago, I wrote a post about timestamps and how they keep coming up over and over. I'd like to expand on that, so now introducing a new series: Test Commonalities.

As you move from company to company, from team to team, and from product to product, some things come up over and over. So if we're going to see these again and again, let's make sure we test them correctly.

There's a lot to talk about here. We'll cover:
- timestamps, again, more
- paths and filenames (posted 25 October 2007) 
- file attributes, including hidden and system files
- OS interaction
- email (posted 5 November 2007)
- localization (posted 19 November 2007)
- logging
- and probably a lot more...

So come on over, this will be fun!

Tuesday, October 23, 2007

Institutional Memory is Transient

Institutional memory is a powerful thing. It's a shared base of information off which new concepts are parsed, decisions are made, and shortcuts are created. A team with institutional memory is closely bonded and can make decisions rapidly and well.

BUT....

It all falls apart when the institution changes. 

As it turns out, any change to the institution will disrupt the team and cause the well-oiled machine to stutter. A new team member, a team member leaving, a change to the underlying assumptions of the institution - any of these can be horrible for a team's ability to work together intimately. The "hive mind" is broken.

So, how do we preserve institutional memory?
  • Change the team slowly. Even if you could hire three people today, hire one and bring them into the team before you hire another. In general, no more than 15% of your team should be new at any given time.
  • Write it down. Make part of the institutional memory the practice of writing down information. It may not be easily findable, but if it's written it's not totally lost. Wikis and other group-editing environments tend to be good for this.
  • Mentor. When someone new comes in, pair them with an existing team member. This type of encouraged camaraderie will help make the new person part of the institution. It also provides a clear path to impart information in a very easy (and casual) manner. It's less formal teaching and more a welcome into the shared information repository that lives in the team's collective head.
Institutional memory is going to happen, and in a stable team it can be a large part of what makes the group successful. However, no group is stable forever. This group knowledge is only powerful as long as it's backed by techniques to pass it along -- so that the team is strong through stability and through change.

Friday, October 19, 2007

Personal: Handel & Haydn Society

This has nothing to do with software at all.

Tonight we went to the opening concert of the year of the Handel & Haydn Society here in Boston. It was an all-Beethoven night, and I am still amazed. The piano concerto (number 3 in C minor) is one of the things that I will remember for a long time. Fortepianist Kristian Bezuidenhout is incredibly animated and makes the very keys sing. I was also glad to see Daniel Stepner (concertmaster and first chair violin) back for another year.

If you're in the Boston area this weekend, go.

Thursday, October 18, 2007

Candy Wrapper Theory of Development

I have come to believe in what I call the Candy Wrapper theory of development. Basically, it describes the true role of each part of an organization in creating a sustainable process that doesn't drive people crazy.

The width of the candy wrapper represents churn. A certain amount of churn is inevitable, but it will drive people crazy. So, you manage the churn at a couple of key points. This way you can sustain your development cycle and create certain points where you can see through the churn to check your work.

Let's look at the candy wrapper from left to right:

A lot of stuff comes in from a lot of sources on the left side of the candy wrapper:
- requirements from customers
- requirements from marketing and sales
- requirements (aka ideas) from engineers

Product management's job is to catch all of that stuff, condense it, organize it, and prioritize it. Then it gets fed in a nice small stream through to development.

Development works in the center of the candy. There is some churn (hence, the candy gets wider) as features are implemented, infrastructure is created, bugs are fixed, etc.

QA stands at the right side of the candy wrapper. It grabs everything that dev puts out and makes sure it's packed up neatly. QA confirms that what's coming out is a coded version of what product management put in.

Then the candy wrapper gets very wide again as support, customers, sales people, etc. take the product in the myriad directions it could go.

Hence, development, Candy Wrapper style.

Wednesday, October 17, 2007

Timestamps from Heck: Minimizing Test Scenarios

I'm putting the disclaimer up front this time because it's a big assumption:

Disclaimer: This assumes your testing philosophy does not require you to test every variable in isolation, at least at first.

We can talk more about that assumption later, but let's get on with the timestamps.

To review, we have 8 separate timestamp-related scenarios and nowhere near enough testing time (hey, it's an almost universal truth). So what do we do? Combine and conquer! There are some locales around the world that cover multiple timestamp scenarios, so we use them. When there's a problem, we isolate each element of that particular locale and figure out which one is the cause. Let's assume our server is in the US Eastern time zone.

And the locales are:
1. Estonia
2. Thailand
3. Newfoundland

Estonia:
- classic European time format: DD.MM.YYYY HH:MM (24 hour)
- uses dots as date separators instead of forward slashes
- fun to say!

Thailand:
- HH:MM A/P MM/DD/YYYY
- the Thai year is 543 years ahead of the Gregorian year

Newfoundland:
- GMT-3:30
- MM-DD-YYYY HH:MM (12 hour)

These cover a lot of our timestamp scenarios. The remainder are covered separately.
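
To make those formats concrete, here's a rough Python sketch of parsing each locale into something comparable. The sample strings and helper names are mine, not from any real product, and real code would lean on a proper localization library rather than hand-rolled format strings.

```python
from datetime import datetime, timedelta, timezone

NEWFOUNDLAND = timezone(timedelta(hours=-3, minutes=-30))  # GMT-3:30
THAI_YEAR_OFFSET = 543  # Thai (Buddhist era) year runs 543 ahead of Gregorian

def parse_estonia(s):       # e.g. "31.10.2007 14:05" - dots, 24-hour clock
    return datetime.strptime(s, "%d.%m.%Y %H:%M")

def parse_thailand(s):      # e.g. "02:05 PM 10/31/2550" - Buddhist-era year
    dt = datetime.strptime(s, "%I:%M %p %m/%d/%Y")
    return dt.replace(year=dt.year - THAI_YEAR_OFFSET)

def parse_newfoundland(s):  # e.g. "10-31-2007 02:05 PM" - half-hour UTC offset
    dt = datetime.strptime(s, "%m-%d-%Y %I:%M %p")
    return dt.replace(tzinfo=NEWFOUNDLAND)

print(parse_thailand("02:05 PM 10/31/2550"))  # -> 2007-10-31 14:05:00
```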

So go ahead, move to Newfoundland (figuratively, anyway) and test!

Tuesday, October 16, 2007

Timestamps from Heck: Exposing the Problem

One of the doozies of software engineering - and particularly of testing - is timestamps. Between time zones, time formats, locales, and different calendars, this is a particularly rife area for test. This matters even if you do not localize, because times are localized for you.

Of course, if all your clients are in a single office in Boston, this post is not for you! Thank your lucky stars and move on.

Our product is used literally worldwide - 1/3 of our users are in Europe, we have a strong South American population, and recently Russia and New Zealand had quite a spike in interest. On top of that, we're a sync product, so timestamps matter.

Let's look at one simple use case: "A file was modified after the user's last sync." So, how do times enter the picture?

1. "User's last sync" is a time, measured to the second. This comes from the server.
2. "file was modified" means the file has a modified time set on the client machine to whatever specificity the client allows (we're usually on Windows XP SP2, so we'll say to the second).

All is well and good, except that now we have to compare these two times and we have to be very, very sure we're comparing apples to apples. So what are our scenarios?

1. client and server are in different time zones
2. client and server use different time servers that don't quite match (say, 12 minutes off)
3. client and server use the same time server but are off within the allowed margin of error (usually this is between 2 and 10 minutes)
4. client and server use different date formats (say, MM/DD/YYYY versus DD/MM/YYYY)
5. client and server use different calendars (say, Gregorian versus Julian)
6. client and server use different time formats (say, 12 versus 24 hour clock)
7. client and server cross the date line (this is a fancy version of different time zones)
8. client and server measure with different specificity (say, one to milliseconds and the other to seconds)

So now we have a basic set of time issues that we have to look for in any time-sensitive operation in our data. That's a lot of testing, so we're going to talk about how we've combined these into as few distinct test cases as possible.
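
I'm not going to show a real fix either (see the P.S.), but as a generic sketch of why scenarios 2 and 3 are nasty: normalize both sides to UTC, then refuse to call anything "modified" unless it beats the last-sync time by more than the allowed clock drift. The two-minute tolerance below is purely illustrative.

```python
from datetime import datetime, timedelta, timezone

SKEW = timedelta(minutes=2)  # illustrative allowed drift between clocks

def modified_after_last_sync(file_mtime_utc, last_sync_utc, skew=SKEW):
    # Only report "modified" when the file time beats the sync time by
    # more than the permitted skew; otherwise the clocks may just disagree.
    return file_mtime_utc - last_sync_utc > skew

mtime = datetime(2007, 10, 16, 12, 5, tzinfo=timezone.utc)
sync = datetime(2007, 10, 16, 12, 4, tzinfo=timezone.utc)
print(modified_after_last_sync(mtime, sync))  # False: inside the skew window
```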

P.S. I'm not going to tell you how we solved this (that's part of the secret sauce, but suffice it to say that details have been changed to protect the innocent). The point is more that we have to test it.

Monday, October 15, 2007

Manual Testers Don't Write Code?

I interviewed someone last week and got to the portion of the interview where it was his turn to ask questions. The first thing he asked was:

"This is a position for a manual tester. I spent a loooong time learning to write code. Do you really expect me to throw that away?"

Can you see the assumption in that question? It's a doozy.

This candidate assumed that "manual tester" meant no code. He couldn't have been more wrong. Manual testers may certainly write code. Even if the test itself is manual, code can:
1. load data
2. gather state information
3. provide background "noise" in the system
4. be a tool to see something invisible to the human tester

A manual test is simply a test that a human performs to ensure that the system conforms to expectations.
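
As an example of item #3, here's the kind of throwaway script a "manual" tester might write to bulk-load noise records, so a hands-on test starts against a realistically full database. The schema is hypothetical; the test itself is still performed by a human.

```python
import random
import sqlite3
import string

def load_noise(db_path, rows=10000):
    # Fill a scratch database with junk contacts as background "noise".
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS contacts (name TEXT, email TEXT)")
    for _ in range(rows):
        name = "".join(random.choice(string.ascii_lowercase) for _ in range(8))
        conn.execute("INSERT INTO contacts VALUES (?, ?)",
                     (name, name + "@example.com"))
    conn.commit()
    conn.close()

load_noise("test_noise.db")
```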

P.S. Yes, the attitude struck me as obnoxious - "throw that away" is a combative choice of words - so I included a 10-second lecture on choosing one's words.

Friday, October 12, 2007

Patience is a Virtue

There's a Chinese proverb that goes something like this:

If you wait by the river long enough, you'll see the body of your enemy floating by.

I haven't quite figured out why yet, but this really speaks to me. I think there's a lesson in there for testers. Your enemies are the bugs in the system. If you really want to find them, you may have to live with your system for a while. Not all bugs are immediately apparent, no matter how thorough your testing is. Give those intermittent, or time-based, or volume-based, or subtle bugs a chance to go floating by. Live with a system or a feature for a while; your customers will and so should you.

Sometimes finding the bugs just takes a little patience.

Thursday, October 11, 2007

Hi, My Name Is _______

What do we call ourselves, we testers, we process analysts, we engineers? Boy do they have a lot of names for us:

QA (aka Quality Assurance)
QE (aka Quality Engineers)
QC (aka Quality Control)
Product Assurance
Testers

I'm not much of one for names or titles, so I'll generally respond to any of these. That being said, I tend to self-identify with "QA". I find that it best encompasses what I do.
QA - that's me.
QE - okay, but a little "big company".
QC - ick. Implies industrial, mechanical processes. I always think of someone measuring and comparing to an acceptable range but not really innovating.
Product Assurance - that's great if you're working on a product, but what if you work on a service?
Tester - this only covers part of what I do.

I say that I'm QA because my job is to:
1. test software, designs, and requirements to prove they conform to expectations
2. help support and troubleshoot issues
3. define and enforce a software development process that discourages opacity
4. find ways to measure how much our software does what our customers need
5. fill in all the holes where there is no release engineer, support engineer, product manager, etc.

I'm QA. I'm the glue that keeps this all together.

Wednesday, October 10, 2007

Test in Process: Agile

Before we start our standard test in process post, let's talk about "Agile". "Agile" isn't a development process any more than "Waterfall" is a development process. Instead, "Agile" is a family of processes including Extreme Programming, Crystal, etc. That being said, I walk into a lot of companies that claim to be using "Agile process".

Beware. Here be dragons.

What this usually means is that the company doesn't really have a development process. Instead, they have some ad hoc practices and maybe some home-grown methodologies that work for them.

How I got here:

Well, I work for one of these places.* Their software development process is centered around a team that's been together for several years and knows the software very, very well. Everything else is luck, a good QA team, and a lot of fast bug fixes.

The role of test:

In this environment, test is mostly there to control the chaos. There aren't a lot of controls to prevent defect creation, so test is extremely busy finding the many defects these ad hoc changes cause.

Upsides:

There is almost no time spent in process improvement meetings (kidding!). In addition, having little to no actual process generally means that the environment is a good target for incremental improvement. You don't have to prove why the old process is bad before you can prove why the new process is good; you can skip straight to why the new process is good.

Downsides:

Despite the old aphorism, order rarely comes out of chaos! This kind of process is a good way to overtax your test team, overtax your development team, and ship software that you can't say is good or bad. You will almost certainly wind up doing a whole lot of firefighting.

In the End:

This isn't sustainable. If you can't help create a process, get out. Having a light process is okay; having no process is not.

Other Posts in this series:

Test in Process: A Series (http://blog.abakas.com/2007/08/test-in-process-series.html)
Test in Process: RUP (http://blog.abakas.com/2007/08/test-in-process-rup.html)
Test in Process: XP (http://blog.abakas.com/2007/08/test-in-process-xp.html)
Test in Process: SCRUM (http://blog.abakas.com/2007/08/test-in-process-scrum.html)

* I work here until October 12th. Then I'm going to work at my new job (http://blog.abakas.com/2007/10/eek-post.html).

Tuesday, October 9, 2007

Comparison of Defect Tracking Systems

There are approximately eleventy-zillion* defect tracking systems out there. Every once in a while I like to go through and discuss the ones I've used.

1. Bugzilla. This one is an oldie but still good. It's what I think of as a developer-oriented tracking system. I'd use this in a heartbeat, but I do find it rather complex for small projects or groups. http://www.bugzilla.org/

PROS: extremely customizable; very stable when run on Linux; lots of reporting; highly queryable; the price (Free) is right; pre-canned reports in Bugzilla 3 are quite nice; good tie-ins with automated test systems; web-based

CONS: lots of fields make it intimidating to non-engineering users; not so stable when run on Windows; ain't exactly pretty

2. Lighthouse. This is a relatively new defect tracking system. Its draw is simplicity. Overall I wouldn't recommend this one. http://www.lighthouseapp.com/

PROS: very easy and intuitive for end-users; email to a bug is very well done; web-based; pretty; ASP-hosted for those who don't like to administer this stuff; tagging idea rather than categorization is nicely flexible

CONS: not yet stable (it breaks a lot, especially for IE users); very little reporting; searching is non-intuitive; ASP-hosted for those who are uncomfortable with that notion

3. Jira. This defect tracking system is starting to pop up all over the place. In general it's a compromise system that combines simplicity with some decent reporting. http://www.atlassian.com/software/jira/

PROS: lots of reporting for those in a metrics-heavy environment; quite stable; fairly customizable for end-users

CONS: not free (I think about $5000 for an enterprise license); searching is non-trivial

4. Homegrown. This has happened a couple of times: companies using defect tracking systems that they made themselves. Unless your product is a defect tracking or ticketing system, this is something I would avoid. Maintaining it becomes a huge pain very quickly.

* Of course there aren't eleventy-zillion. But there are a lot.

Monday, October 8, 2007

A Puzzle

Here's an interesting puzzle:

Using only the number 4 and any mathematical symbols you like, make every number from 1 to 100. For example, to make "1", I would do 4/4.

Some more rules:
1. combining 4s is legal. E.g., "44" is valid.
2. Although you could just add 4/4 (aka 1) as many times as you needed to get to a particular number, that's kind of a cop-out. (If you'd rather make the computer do the searching, see the sketch below.)
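
Here's a brute-force sketch for the computer-assisted route. It only knows +, -, *, / and concatenation, so it won't reach everything from 1 to 100 (adding square roots and factorials extends it); treat it as a starting point, not a spoiler.

```python
from fractions import Fraction
from itertools import product

def four_fours(n_fours=4):
    # values[k] holds every value expressible with exactly k fours
    values = {k: {Fraction(int("4" * k))} for k in range(1, n_fours + 1)}
    for k in range(2, n_fours + 1):
        for i in range(1, k):
            for a, b in product(values[i], values[k - i]):
                values[k] |= {a + b, a - b, a * b}
                if b != 0:
                    values[k].add(a / b)
    whole = {v for v in values[n_fours] if v.denominator == 1}
    return sorted(int(v) for v in whole if 1 <= v <= 100)

print(four_fours())  # the subset of 1..100 reachable with these operators
```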

Good luck!

Friday, October 5, 2007

Changing Jobs

Sorry for the long delay in posting. It's been hugely busy here.

After a bit over two years, I'm leaving my current position as QA Manager at Adesso Systems, Inc. I'll be going to a QA Lead (aka manager) position at Permabit, Inc. in Cambridge, MA.

Wish me luck!