Friday, August 31, 2007

Testing for Typos

We've been having a debate lately about how much QA should test marketing output.

The History
-------------------
Although we're a software company, there are things that (gasp!) engineering doesn't touch. One of the biggest of these is our corporate website. This is the domain of marketing. They've got a content management system for entering content, their own graphic designer to do logos, screenshots, etc., and the ability to publish to production.

Updating the corporate site is something that happens two or three times a week. It's mostly small stuff (a new news article, for example), but it changes a lot.

The Trigger
------------------
Sure, we're not SUPPOSED to test the corporate site, but we do use it to get to our downloadable software (log in, click a download link). Being testers, and being slightly picky about the English language, we noticed some typos on the site.

Nothing major, so we dropped a nice note to marketing whenever we noticed things. Then one day it happened. Our "public offering" was missing a critical letter "l", right on the front page of the site. No one caught it until a tester happened to be on the site the next day.

The Debate
-----------------
Yes, marketing should have caught this. But they didn't, so how much should we test the corporate site? Sure, a tester found things, but that doesn't necessarily mean that QA should have to test the entire site. After all, QA is busy testing the actual software and shouldn't be blocking corporate website updates. However, quality control on the marketing side was clearly lacking.

The Resolution
-----------------
We finally wound up approaching this problem two ways:
1. Marketing now runs the site through a link validation tool (WebLink Validator) that does spelling and grammar checks and, as a bonus, flags dead links (a rough sketch of this kind of check follows this list).
2. A marketing intern now comes over to QA two hours a day and is learning how to test a text-oriented website. Just the intern, a dictionary, and lessons in close reading.
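
For the curious, this is roughly what that kind of automated check boils down to. The sketch below is Python, not the actual WebLink Validator, and the URL and wordlist path are placeholders: pull a page, flag words that aren't in a dictionary, and flag links that don't answer.

# Hypothetical sketch only -- not the real tool. Checks one page for words
# that aren't in a wordlist and for links that don't respond.
import re
import urllib.request
from html.parser import HTMLParser

class PageScanner(HTMLParser):
    """Collects href targets and visible text from a page."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        self.text.append(data)

def check_page(url, dictionary):
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    scanner = PageScanner()
    scanner.feed(html)

    # Crude spell check: anything not in the wordlist gets flagged for a human.
    words = re.findall(r"[A-Za-z']+", " ".join(scanner.text))
    suspects = sorted({w for w in words if w.lower() not in dictionary})

    # Crude dead-link check: absolute links that error out get flagged.
    dead = []
    for link in scanner.links:
        if link.startswith("http"):
            try:
                urllib.request.urlopen(link, timeout=10)
            except Exception:
                dead.append(link)
    return suspects, dead

if __name__ == "__main__":
    with open("/usr/share/dict/words") as f:   # placeholder wordlist
        dictionary = {line.strip().lower() for line in f}
    suspects, dead = check_page("http://www.example.com/", dictionary)
    print("Possible typos:", suspects[:20])
    print("Dead links:", dead)

Note that a dictionary check like this wouldn't have caught our missing "l" (the misspelled word is, unfortunately, a real word), which is exactly why the intern still matters.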

Thursday, August 30, 2007

Configuring for your Environment

One of the things that I set up wherever I go is a separate QA environment. This is distinct from development and distinct from production. It's a place where QA can install, test, uninstall, reinstall, upgrade, destroy, and recreate at will.



One of the consequences of maintaining a QA environment is the need to make configuration changes as you get into that environment. While this is sometimes time-consuming and obnoxious, it's a preview of what will happen when you release, so it's something that has to be done. The number one thing that makes this successful is isolating your config changes into something you can manage. If everything's in one place, you're far less likely to make a mistake, and if you do, it's a lot easier to find.



Where To Put Configuration Settings:

Depending on your application, this one place could be any of several things: a database table, a dedicated config file, or the web.config or machine.config that is already on your web server.
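
As a minimal sketch, assuming you go the config-file route, you might keep one file per environment and load it in a single spot. The file names and keys here are made up:

# Hypothetical example: one INI-style file per environment holds every
# environment-specific setting, so moving environments touches one file.
import configparser
import os

def load_settings(environment=None):
    """Load all settings for one environment from a single file."""
    env = environment or os.environ.get("APP_ENV", "qa")
    parser = configparser.ConfigParser()
    parser.read(f"config/{env}.ini")   # e.g. config/qa.ini, config/prod.ini
    return parser["app"]

settings = load_settings("qa")
print(settings["database_url"])        # everything environment-specific lives here
print(settings["license_server"])

With a setup like this, promoting a build from dev to QA to production means swapping in a single file; the build itself never changes.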



Where NOT To Put Configuration Settings:

Do not make this one place your code! Particularly if you're using compiled code, baking settings into the build means building (and sometimes decompiling) a different version for each environment, and that will slow you down. One config change and you're deploying a whole new build.



So, the lesson of this post:

It's not quite a cardinal rule, but for ease of deployment (and your own sanity!), put all your configuration in one place.

Wednesday, August 29, 2007

Date-Based Feature-Based Releases

The company I work at now is pretty typical when it comes to release planning. We engage in what I like to call "date-based feature-based releases".



Typically, a company will tell you that they do one of two types of releases: date-based or feature-based. The date-based release company picks a date and says, "whatever we have then, ship it!" This is pretty common in companies using SCRUM methods, or those with a big trade show (demo, customer meeting, etc.) looming. The feature-based release company points to a feature and says, "we'll ship when that feature is ready." This is pretty common for a company that has a product with a perceived large hole in it.



Our company purports to be feature-based: "This will be the web release!" "This will be the HTML interactive release!". However, once the initial estimate is done, it turns into a date-based release. This is insidious but not at all uncommon. The root cause of this is that you have to coordinate a company around these releases. Marketing plans, sales goals, even company funding (if you're a VC-funded firm especially) all become tied to doing the release.



So what does this mean for testers? It's pretty simple. No matter how much this feels like a feature-based release, don't commit to a date unless you're prepared to meet it. Ever.

Tuesday, August 28, 2007

Release Management and QA

I spend a lot of time working in small companies. "Release manager" is one of those roles that typically isn't filled by an actual person. Instead, it's a combination of a developer, someone in IT, and usually QA. So when you walk in and they say, "well, QA helps out with releases", what should you look out for?
  1. Let's talk configuration. Manual changes are bad because they're error-prone. The fewer manual configuration changes you have to make, the better. I'll do a full post on this one later - configuration is difficult.
  2. Release night. Yup, you may need to be there. By the time it gets to release night, you should just be there to do a smoke test (a minimal sketch of one follows this list). QA really shouldn't do the release itself; whoever runs your production environment should.
  3. Documentation. A release is not the time to rely on people's memory. Whatever your release process, whether it's "cap deploy", "run installer X", or "drop file set Y in location Z", it should be meticulously documented.
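
To make point 2 concrete, here's roughly what a release-night smoke test might look like. This is just a sketch and the URLs are placeholders; the point is that it's short, scripted, and either passes or fails.

# Hypothetical release-night smoke test: hit a handful of key pages and fail
# loudly if any of them don't come back healthy. URLs are placeholders.
import sys
import urllib.request

SMOKE_CHECKS = [
    "http://www.example.com/",            # home page loads
    "http://www.example.com/login",       # login page is reachable
    "http://www.example.com/downloads",   # download links are served
]

def run_smoke_test(urls):
    failures = []
    for url in urls:
        try:
            status = urllib.request.urlopen(url, timeout=15).status
            if status != 200:
                failures.append((url, status))
        except Exception as exc:
            failures.append((url, exc))
    return failures

if __name__ == "__main__":
    failures = run_smoke_test(SMOKE_CHECKS)
    for url, problem in failures:
        print(f"SMOKE FAIL: {url}: {problem}")
    sys.exit(1 if failures else 0)

Run it right after the production team finishes the deploy; anything deeper belongs back in the QA environment.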

The biggest thing to watch for is the Go/No Go decision. This is not your decision. It's tempting to get sucked into this, and it's very common to hear "well, is it ready?". But this is not your call. Your purpose is to provide information, but the decision needs to be made by the product manager (or appropriate substitute).

And that's it! Your particular situation may have other considerations, but really these are the basics.

Monday, August 27, 2007

Cardinal Rules of QA

I've been talking with a lot of non-QA types lately, and I keep getting the same question.

"What's the least you need to do your job?"

This is usually coming from a VP of Engineering (aka CTO aka Director of Engineering aka CEO aka someone who thinks they want a QA department and currently has nothing). I don't think the question is intended to be mean-spirited or unusually parsimonious. Rather, I think this is an attempt to ask the real question:

"What are the universal, inviolate, cardinal rules of QA?"

This I think is an interesting question. For me there are very few things that are absolutely required to make QA feasible:

  1. A separate QA environment. You must have a safe place for QA to test. This should be under the control of QA or IT, and it's where all certification takes place. It's a place that won't have any of the vagaries of a developer's laptop or integration machine: no old code, no incorrect libraries, etc. It's also a good clean place to test install/upgrade/deployment.
  2. A defect tracking system. What defect tracking system you use is less important than that you actually use it. When in doubt, grab a free one - Bugzilla or the like - and set it up. This doesn't have to be high overhead, but you have to have some way of knowing what you found and what you fixed.
  3. A build system. This is along the same lines as a separate QA environment. A build system is a known good place that grabs all the code from the source control system, builds it, and drops the package somewhere everyone can get at it (a rough sketch follows this list). No manual process means no mistakes (at least, once you've gotten your build scripts right!).
  4. Source control. Just like the build system and the QA environment, the goal here is to eliminate the unexpected things that crop up on a developer's laptop. This also gives you good things like branching and merging, so you can take the changes you want and leave out the ones you don't.
  5. A test document. This could be in Word, Excel, or a database-backed test case system; it doesn't matter. Just write down what you're going to test and write down what you've tested. Otherwise, you don't know what you've covered and you wind up repeating yourself.
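
As a rough sketch of item 3: a build system doesn't have to be fancy to be valuable. The script below is entirely hypothetical (repo URL, build command, and drop directory are all made up); the point is that the same steps run the same way every time.

# Hypothetical build driver: clone from source control, build, drop the package.
# Repo URL, branch depth, build command, and share path are placeholders.
import shutil
import subprocess
import tempfile
from datetime import datetime
from pathlib import Path

REPO = "git@example.com:widgetco/product.git"   # hypothetical repository
DROP_DIR = Path("/builds/product")              # somewhere everyone can reach

def build():
    workspace = Path(tempfile.mkdtemp(prefix="build-"))
    # 1. Grab all the code from source control -- never from someone's laptop.
    subprocess.run(["git", "clone", "--depth", "1", REPO, str(workspace)], check=True)
    # 2. Build it with the project's own build script.
    subprocess.run(["make", "package"], cwd=workspace, check=True)
    # 3. Drop the package somewhere everyone can get at it.
    stamp = datetime.now().strftime("%Y%m%d-%H%M")
    target = DROP_DIR / f"product-{stamp}"
    shutil.copytree(workspace / "dist", target)
    print(f"Build dropped at {target}")

if __name__ == "__main__":
    build()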
In the end, that's it. Everything else is negotiable.

Saturday, August 25, 2007

Details, details....

I'm currently in the middle of testing some UI work that's been done on a project I'm working on. This is the type of testing that many of us love to hate. The goal: find all the styling errors that remain after this "UI cleanup". I have five browsers to support (IE 6 and 7, Firefox 2, and Safari 2 and 3).

So how to best go about this?

This is one of those things where, to me, you just have to dig in and do it manually, at least the first time. This type of layout and front-end styling is extremely tweaky across browsers (pity the guy who had to code it!). You also have to experiment with how it handles data across your constraints, from no value to the longest allowed value. That being said, this can be BORING testing.

I try to see this kind of test as an opportunity to get acquainted with the app again as a user sees it. A lot of the time we're deep in the guts of the app, checking boundary conditions, exercising deployment constraints, testing and verifying a limited portion of the app, doing configuration tests, testing behind the UI. This is our chance to use the app as the user sees it.

So embrace the manual test, reacquaint yourself with the feel of the app as a whole. You'll probably find more than styling issues!

Thursday, August 23, 2007

Test in Process: SCRUM

This post is one of a series examining the role of test in various software development processes. Today we'll look at SCRUM.

How I got here:
First of all, SCRUM is not a software development process. SCRUM is a project management process; it describes how to control the development of software, not how to actually do it. That being said, SCRUM is becoming quite popular.


The role of test:
SCRUM does not address test directly. Test items can appear on the product backlog, and testers are members of the team who commit to tasks just like developers or any other team member. SCRUM does state that at the end of every sprint there will exist a shippable product, which implies that testing will be complete by then.


Upsides:
SCRUM emphasizes a very flat team structure, which forces QA onto the same level as development. This can help eliminate contention between the teams. Ideally you wouldn't have that contention anyway, but it's useful when the process you're using helps prevent it.


Downsides:
SCRUM states that at the end of every iteration, the product should be "shippable". You may not choose to ship, but the option should be open. Back in the real world, where estimates are optimistic and tasks have complexities not known until they're implemented, dev often runs late. How is test to react in this case? There simply isn't time left in a given iteration to perform all the testing you'd need in order to say the product is shippable.


In the end:
I think SCRUM has a lot of potential to offer the accountability and thoughtfulness of RUP while allowing more flexibility. I also like how adaptable it is - you can have a 2-man team or an 8-man team, and you can have a 2-day sprint or a 2-month sprint. Getting around the problem of test is something that I haven't seen solved effectively. There are several possible resolutions to this that I've considered or attempted:

1. Schedule a test iteration right after the dev iteration. This violates the principle that the end of an iteration leaves something shippable - the end of the dev iteration doesn't give you a shippable product. If you're doing 4 week iterations, you could be quite a ways from shipping.
2. Do not schedule dev work that affects product stability for the last n (% or days or whatever) of an iteration. This way everything is done at iteration end minus n and the last bit of testing can be completed. Meanwhile, dev can do the tasks that don't affect the product itself (setting up servers, environment maintenance, design tasks for future implementation). This one I like in theory. In practice, that time gets squeezed because of overflows in dev. Your test burndown also gets a little scary toward the end.
3. Put dev and test into separate iterations and offset them. This is actually my favorite way to handle this so far, even though it's far from perfect. Basically, dev works to the end of its sprint; test's sprint is offset so that there are still n days left afterward to finish everything off. The big downside to this one is that if test discovers a showstopper defect after dev's sprint is over, you throw off the next sprint.

DISCLAIMER:
All thoughts on processes are my own and are meant as reflections of my experiences with them, not with the theory of the process itself. No flamewars, please.


Other Posts in this series:
Test in Process: A Series (http://blog.abakas.com/2007/08/test-in-process-series.html)
Test in Process: RUP (http://blog.abakas.com/2007/08/test-in-process-rup.html)
Test in Process: XP (http://blog.abakas.com/2007/08/test-in-process-xp.html)
Test in Process: SCRUM (http://blog.abakas.com/2007/08/test-in-process-scrum.html)

Wednesday, August 22, 2007

Early Phase Testing

As much as possible, I like to get QA involved early in the software definition part of the process. I'm not interested so much in early exposure so that we can do test planning, etc. (although that's important!). I'm interested in bug prevention as early as possible. At WidgetCo*, where I currently work, my involvement starts in release planning meetings. Our releases are very date-driven, and we're not currently hiring, so controlling features is our best chance of success. My job in these meetings is primarily to point out what we don't know so we can fix it before assumptions get baked into code.

For example, one of the features on the marketing wish list that we talked about today is adding the ability to sort widgets by date. This sounded easy enough, until I said "which date?". So just for kicks we went around the table and each stated which date we assumed we were sorting by. Here were some of the answers:
- date uploaded to the server
- date added to the client
- modified date of the file as reported by the operating system
None of these choices is hard, but someone would have been surprised at the end of the day!
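
To make the ambiguity concrete, here's a throwaway Python sketch (the field names are invented): the same three widgets come back in three different orders depending on which "date" you pick.

# Hypothetical illustration: the same three widgets sort three different ways
# depending on which "date" field you mean. Data and field names are made up.
from datetime import date

widgets = [
    {"name": "A", "uploaded": date(2007, 8, 20), "added": date(2007, 8, 1),  "file_modified": date(2007, 7, 15)},
    {"name": "B", "uploaded": date(2007, 8, 10), "added": date(2007, 8, 5),  "file_modified": date(2007, 8, 9)},
    {"name": "C", "uploaded": date(2007, 8, 15), "added": date(2007, 7, 30), "file_modified": date(2007, 8, 14)},
]

for field in ("uploaded", "added", "file_modified"):
    order = [w["name"] for w in sorted(widgets, key=lambda w: w[field])]
    print(f"sorted by {field}: {order}")
# sorted by uploaded: ['B', 'C', 'A']
# sorted by added: ['C', 'A', 'B']
# sorted by file_modified: ['A', 'B', 'C']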

The lesson here? Even the "simple stuff" is sometimes ambiguous. Your job as QA is to remove ambiguity as early as possible.

*I don't really work at WidgetCo. That would be kind of cool, though.

Tuesday, August 21, 2007

Test in Process: XP

This post is one of a series examining the role of test in various software development processes. Today we'll look at Extreme Programming (XP).

How I got here:
This is one of those processes I've only used on small projects. But boy did it happen wholeheartedly; there were user stories, task cards, daily standups, pair programming, red bar/green bar development, and automated acceptance tests. The results have been a success, but on a small scale - the project simply hasn't grown for business reasons.

The role of test:
Certain types of testing are central to XP. In particular, red bar/green bar development is very test-centric. There is also a role for "automated acceptance testing". Test is understood to be automated first and foremost. The emphasis on rapid iterations, near-continuous refactoring, and other short-cycle development techniques places huge value on very rapid testing across the application, almost always in automated form. Where manual testing is mentioned at all, it is strongly discouraged.
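
For anyone who hasn't seen it, the red bar/green bar rhythm looks roughly like this minimal sketch using Python's unittest (discount_price is a made-up example, not anything from a real project): write the failing test first, write just enough code to make it pass, then refactor with the test as a safety net.

# Minimal red bar/green bar sketch. The test is written first (red bar);
# discount_price is the minimum production code that turns the bar green.
import unittest

def discount_price(price, percent_off):
    """Production code written to make the tests below pass."""
    return round(price * (1 - percent_off / 100), 2)

class DiscountPriceTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(discount_price(100.00, 10), 90.00)

    def test_zero_discount_leaves_price_alone(self):
        self.assertEqual(discount_price(49.99, 0), 49.99)

if __name__ == "__main__":
    unittest.main()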

Upsides:
Test takes a very central role in XP. It's the foundation that makes other practices - especially continuous refactoring and short release cycles - work. This makes test the problem of every member of the team, and can help developers think like testers. Testers are able to expand coverage quickly and easily, keeping up with the smaller features as they're added.

Downsides:
The rejection of manual testing makes certain types of tests very difficult. In particular, automated GUI testing is a weak point because of the high maintenance costs this type of testing imposes. This process is also difficult to sell to a typical development organization for planning (or lack thereof) reasons. Test creation is also centered around the developer, and training a developer to test thoroughly may or may not be feasible; it's simply a different skillset.

In the end:
For non-test reasons, I think using XP as a process is difficult in a corporate environment - although I certainly encourage it as a great team learning process. I also think the emphasis on automated test is ill-suited to GUI testing. 100% automation isn't a goal I think is realistic.

DISCLAIMER: All thoughts on processes are my own and are meant as reflections of my experiences with them, not with the theory of the process itself. No flamewars, please.

Other Posts in this series:
Test in Process: A Series (http://blog.abakas.com/2007/08/test-in-process-series.html)
Test in Process: RUP (http://blog.abakas.com/2007/08/test-in-process-rup.html)
Test in Process: XP (http://blog.abakas.com/2007/08/test-in-process-xp.html)
Test in Process: SCRUM (http://blog.abakas.com/2007/08/test-in-process-scrum.html)

Monday, August 20, 2007

Will the real customer please stand up?

Let's say your typical software development process works something like this:
define -> build -> test -> ship

In each phase, there are lots of references thrown around to "the customer" or "our users".
"Our customers would never want a widget button!"
"Users hate large downloads, so let's not require the .NET 2 Framework."
"What kind of a user would do that?! That's not a good test!"

In the end, despite the user personas, the focus groups, the feedback, none of us really knows what the user WANTS. At best we know what users want you to think they want. Your "customer" is a vague notion, a mere tool for the person positing the argument.

So let's not talk about "users" and "customers". Let's talk instead about the real stakeholders. In every phase of the software development process, your actual customer is the next person in line. That's the person who really cares about how well you do your job. So in the define step, the person building this software is your REAL customer. In the build step, the tester is your REAL customer. In the test step, well, there your real customers are the actual people who use your product.

Sunday, August 19, 2007

Test in Process: RUP

This post is one of a series examining the role of test in various software development processes. Today we'll look at the Rational Unified Process (RUP).

How I got here:

I worked for a while in a consulting firm, and a lot of our clients used RUP. Think Fortune 1000 companies; they'd spent the time and money on the tools, training, etc. This was The Way They Built Software.

The role of test:

In many ways, RUP is the process I've worked with that most accounts for testing. Testing is an explicit phase in this process and can rely on specifications created earlier in the process. Taken to an extreme, these specifications even include verification and validation criteria. This process was also the most likely to produce non-functional requirements that made any sense. With RUP, load requirements were "support 1 million users simultaneously performing use case X". Most other processes had load requirements like "handle a lot of users" or, my personal favorite, "do load testing".

Upsides:

Having test be an explicit role certainly helped make the process easier for testers. The test phase is explicitly accounted for when creating the project schedule, defining requirements, allocating resources, even designing the code. This also provided a great way to get test involved throughout the process - in requirements and design reviews, not just testing of the code.

Downsides:

By today's standards RUP is very slow. The release cycles were typically 18 months, and selling that to upper management is pretty hard - who knows what your competitors will do in that time that you simply can't react to? It also takes a lot of resources, and the 50-person companies I've seen try to implement it simply crumble under its weight, even if they're skipping artifacts and taking other shortcuts.

In the end:

Please note that if I were building a medical device or writing software that powered cars, I would consider RUP. If I'm building a consumer website or a series of Flash games, I'm not going to go with RUP. Sometimes the overhead is necessary because the tolerance for error is that low. The lesson for me here is this: how do we get the thoughtfulness of RUP with the speed and flexibility of some other processes? How do we consider test explicitly without relatively arduous processes that wind up being seen as overhead?

DISCLAIMER:

All thoughts on processes are my own and are meant as reflections of my experiences with them, not with the theory of the process itself. No flamewars, please. Also, RUP and a lot of associated templates and tools are copyright IBM.

Other Posts in this series:
Test in Process: A Series (http://blog.abakas.com/2007/08/test-in-process-series.html)
Test in Process: RUP (http://blog.abakas.com/2007/08/test-in-process-rup.html)
Test in Process: XP (http://blog.abakas.com/2007/08/test-in-process-xp.html)
Test in Process: SCRUM (http://blog.abakas.com/2007/08/test-in-process-scrum.html)

Test in Process: A Series

"Process" is one of those notions that lends itself to silver bullet status. Each process has its adherents, and different processes get popular one after another. I'm not here to argue which process is best (we'll save that for a later post!). What I would like to do is consider how "test" integrates into each process. I'll do a separate post for each of the major processes I've used, so bear with me.



DISCLAIMER:

All thoughts on processes are my own and are meant as reflections of my experiences with them, not with the theory of the process itself. No flamewars, please.

Other Posts in this series:
Test in Process: A Series (http://blog.abakas.com/2007/08/test-in-process-series.html)
Test in Process: RUP (http://blog.abakas.com/2007/08/test-in-process-rup.html)
Test in Process: XP (http://blog.abakas.com/2007/08/test-in-process-xp.html)
Test in Process: SCRUM (http://blog.abakas.com/2007/08/test-in-process-scrum.html)

Saturday, August 18, 2007

I am a Problem Solver

I finally have something to say.


This being my entry into blogging, I suppose that makes this the obligatory first post. I'm here because I've embarked on a journey, and I'd like you to come along. You might know me as a QA Engineer, but really I've chosen a very specific path. I'm not here to test, I'm not here as a user advocate, I'm not even here to ship software. I'm here to solve problems so that we can make something we're proud of. Success follows naturally.


This blog is about success and failure, and ultimately about how we make software better for everyone involved.