Monday, August 31, 2009

Ruby Web Services

What a saga. I spent a chunk of this weekend working on a test for a SOAP web service.

(NOTE: For those of you suggesting RESTful is the way and the light, that's great, but in this case exposing a SOAP web service was an explicit requirement. So SOAP it is.)

Here's the setup: a Rails app exposes a SOAP web service through ActionWebService, and I want a cucumber test that exercises that service as a SOAP client (via soap4r).
Here's how I eventually got it working:

Prep cucumber
I assume you have at least one other cucumber test, so I'm not going to tell you how to set that up.

Install gems
Install the following gems:
  • datanoise-actionwebservice (2.3.2)
  • httpclient (2.1.5.2)
  • soap4r (1.5.8).
Also install any other gems your app needs.

Create cucumber files
Create a feature file of the form <api_name>.feature. You can find the API name by looking in app/services/. My service is app/services/foo_bar_api.rb, so my feature is foo_bar.feature.

Create a steps file of the form <api_name>_steps.rb. My steps file is named foo_bar_steps.rb.

Create base cucumber test
I freely admit this part isn't quite baked yet. My feature so far looks like this:

Web Services clients will need to talk to us from time to time. They should be able to communicate via SOAP interfaces.

Story: Authenticate
  As an anonymous thing
  I want to activate
  So that I can be active

Scenario: Activate
  Given an "anonymous" thing
  When I activate
  Then I should receive an activation key

As you can see, I only have one scenario. My steps file looks like this:

require 'rubygems'
gem 'soap4r'
require 'soap/wsdlDriver'

Given /^an "([^\"]*)" thing$/ do |type|
puts "okay we got it"
end

When /^I activate$/ do
wsdl = "http://localhost:3000/Foo_bar/wsdl"
user = 'catherine'
password = 'password'
driver = SOAP::WSDLDriverFactory.new(wsdl).create_rpc_driver
driver.options["protocol.http.basic_auth"] << [wsdl, user, password]
#driver.wiredump_dev = STDOUT
result = driver.activation("desc", "12345")
puts result
end

Then /^I should receive an activation key$/ do
puts "say hooray"
end

Also, obviously, not fully implemented. Stay with me, though. The important bits are at the top and in the "When" area. Let's talk about what I'm doing here:
  • I'm explicitly using the gem version of soap4r (the line "gem 'soap4r'"). That way I don't wind up using the version that shipped with Ruby.
  • When I define my wsdl, I'm using the format "http://myserver/myapiname/wsdl". That seems to be the default place that ActionWebService puts it.
  • Note that I'm not actually using ActionWebService (we only installed it because the server needs it); I'm only using soap4r.
  • This web service requires basic authentication. The "driver.options" line takes care of that.
  • I'm using the dynamically generated method definitions (that soap4r gets from reading the wsdl). Thus "activation" is actually the name of the exposed SOAP method I'm calling. If, for example, my SOAP server exposed a method "horse", I would call "driver.horse(params)".
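Eventually the "Then" step has to assert something, which means the "When" step needs to hand it the result. Here's a rough sketch of one way that could look once it's fleshed out - this is a guess at a next step, not the finished test, and @activation_result is just a hypothetical instance variable for passing the response between steps:

When /^I activate$/ do
  wsdl = "http://localhost:3000/Foo_bar/wsdl"
  driver = SOAP::WSDLDriverFactory.new(wsdl).create_rpc_driver
  driver.options["protocol.http.basic_auth"] << [wsdl, 'catherine', 'password']
  # hang on to the response so the Then step can inspect it
  @activation_result = driver.activation("desc", "12345")
end

Then /^I should receive an activation key$/ do
  # hypothetical check: fail the scenario if the service returned nothing
  raise "no activation key returned" if @activation_result.to_s.empty?
end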

Okay, moving on.

Configure the environment
Add these two lines to your environment.rb:
require 'rubygems'
gem 'soap4r'

This is so that the gem version of soap4r will load instead of the version that ships with Ruby.
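As for where they go: one reasonable spot is the very top of config/environment.rb, before boot.rb is required, so the soap4r gem is activated before anything soap-related gets loaded. A sketch of roughly what the top of the file might look like (the RAILS_GEM_VERSION line is just whatever your app already has; 2.3.2 is an assumption here):

# config/environment.rb
require 'rubygems'
gem 'soap4r'   # activate the soap4r gem instead of the copy that ships with Ruby

# Specifies gem version of Rails to use when vendor/rails is not present
RAILS_GEM_VERSION = '2.3.2' unless defined? RAILS_GEM_VERSION

# Bootstrap the Rails environment, frameworks, and default configuration
require File.join(File.dirname(__FILE__), 'boot')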

Prep the Server
Before running the tests, you have to make sure that you have a user that can authenticate. Load this with factories, fixtures, mocks, a database insert statement, whatever, just as long as it's the username and password you specified in your steps file.
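For example, one quick-and-dirty option is to create the user from script/console right before kicking off the tests. This is only a sketch - it assumes a plain-vanilla User model with login and password attributes, which almost certainly isn't exactly what your app has:

# in script/console (or a factory, fixture, or seed script - whatever you prefer)
User.create!(:login => 'catherine', :password => 'password')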

Then start your server. I've just been doing "script/server" in another terminal. There's probably a better way, but I haven't solved that problem yet.

Run Your Test
You're finally ready to run it. Use your preferred method. I used:
rake features FEATURE=features/foo_bar.feature

And it should run (assuming your web service actually works!).

Lastly, here are a few of the mistakes I made along the way:
  1. Case matters, sort of. When you specify the wsdl, it doesn't have to match the case in your controller. However, httpclient will eventually turn this into an all-lowercase URL. If your URI is not all lowercase at that point, you'll get a 500 (internal server error) and it won't pass in your authentication information. (I had an ill-placed bit of hard coding. And yes, I know I shouldn't have had it.)
  2. Ruby comes with a version of soap4r. Install the gem anyway and make sure you use that. The Ruby version of soap4r threw all kinds of errors for me.
  3. When you call the parameters, pass them in as direct arguments rather than as a hash. (i.e., call "driver.activation("desc", "12345")" rather than "driver.activation(:desc => "desc", :code => "12345")"). If you do the latter, it starts throwing errors like this: "wrong number of arguments (1 for 2) (ArgumentError)"
  4. If you see "uninitialized constant XSD::NS::KNOWN_TAG (NameError)", it means you are using the Ruby soap4r instead of the gem. Put "gem 'soap4r'" in your environment.rb.
This is all still very rough, and I'll refine it as I get an actual test going. I did want to share, though. Thanks to the many blogs/API documentation/samples I found in Google for the assistance.

Friday, August 28, 2009

Rethink Your Test Plan

When we're testing for a release, our test plan breaks down about like this:
  • 90-95% regression tests
  • 3-8% new twists on existing stuff
  • 2-7% new stuff
Of course this varies a bit across releases. Every once in a while we get a huge new feature, and every once in a while we'll have a release with nothing truly new ("it's everything you had before... better!"). But in general we have a system that has been around a while, is fairly featureful, and has an awful lot of areas that should still work just like they did before.

(TANGENT) In general I like existing features to stay for the most part. I tend to be suspicious of products that are always doing "clean sweeps" or huge changes; I think it implies you didn't really think the product through the first time and you're probably not really thinking it through this time, either. When you had something good, keep it. (END TANGENT)

The downside for testing is that things start to get stale. You've started a new release and you have to test file-level enforcement of ACLs... again. To a certain extent you can automate away some of the boring parts, but not everything. So you fall into a rut, and then you start missing stuff.

So you change it up. You reformat your test plan. You move sections around. We've been doing this for about a year now and it's helped us find things. Some of the things have been new bugs, others have been areas that we weren't covering well at all.

The next step is to rethink the technique your test plan implies. One of our discoveries was that our test plans made a smoke test very hard. The testing provided by the automated checkin suite was fine, and that got an initial smoke test. The nightly automated tests provided some level of coverage in a guaranteed amount of time. But the human testing.... that got deep quickly and got coverage more slowly. Basically, the test plan was encouraging us to test the heck out of, say, volume creation, even as we managed to ignore, say, NIS integration until later in the process. We eventually hit it all, but we could do it better.

So now we're reformatting again. This time we're explicitly calling out a "smoke test" of each section. We'll do all the smoke tests early, and then go deep.

Test plans are living documents. They're a record of what you're doing, but they also guide your thinking. So when your results get stale, change the test plan. When your thinking gets stale, change the test plan. It will change your tests... see what you can find now!

Thursday, August 27, 2009

Test For No Effect

One of the aspects of our product is that it integrates into Active Directory. Basically, you point the product at an AD domain controller and it will join the domain. You can then set ACLs on files stored on our system, etc. As with any other feature, we set out to make a change recently. So we created a story that said, basically:

"We can join any OU in the domain, as long as the user specifies which OU." (There's more detail, but that's the gist of it.)

We were working on the acceptance criteria for the story, and worked our way through the usual suspects: join OUs with various characters in their names, attempt to join a non-existent OU, attempt to join an OU that the user doesn't have permission to join, etc. There's a whole class of tests we're missing, though.

We also need to test things that should have no effect on the system.

For example, moving the computer object from one OU to another after the join is complete should have no effect. Renaming the OU after join should have no effect. Removing the original joiner's permissions to join to the OU should have no effect, as long as the join is complete. (There are several more in this vein.) Until we actually try this, though, we can't say for sure that there is no effect. I'd rather not find out in the field that we do have a dependency there that we simply didn't think of.

And that's the moral of today's story: Don't forget to test things that should have no effect.

Wednesday, August 26, 2009

Why Run This?

Generally when someone's going to ask us to run a test, we hear the words "we should".

"We should measure performance across 15 nodes."
"We should try multiple Active Directory servers."
"We should put faster processors in this node and see what happens."

Great. Why?

What is it going to tell us?

If you can't describe what you're going to get out of a test, you haven't really defined it. This doesn't have to be complex. It could be as simple as "We should put faster processors in this node and see if our ingest performance improves in single client or multi client scenarios". Other times it may be more specific to a potential customer need, or to a direction marketing wants to take the product, or to some external requirement (a regulation or the like).

In any case, make sure you know what you're going to attempt to show. That will tell you what to measure and what to look at.

Running tests is great, and having other people suggest tests is a good way to broaden what you've thought of to test. Just make sure you all know what you think you're going to get out of it, or you may be wasting everyone's time.

Tuesday, August 25, 2009

Classification

There are many many ways to conduct tests - from scripts (executed by you or by a computer) to exploratory tests to ad hoc tests to... All of these, though, ultimately require some division of the system into manageable test parts. Maybe these parts are features, maybe missions, maybe some aspect of the system. So, how do we divide the system? What is our test strategy?



The living world is a great mass of things from horses to beetles and from paramecia to roses. So we divide and classify things into categories and subcategories. The kingdom of animals is divided into the phylum of chordates, which is divided into the class of mammals, and so on. We can do the same with our system.

First we choose our kingdoms. These are the highest-level ways in which we will approach the system. Personally, I'm partial to the FURPS classification (originally out of HP, I believe). So our "kingdoms" are:
  • Functionality
  • Usability
  • Reliability
  • Performance
  • Sustainability (some people say Supportability instead)

And then from there we subdivide. "Usability", for example, is divided into families:
  • End user ease of use
  • Code testability
  • Operational use
  • Deployability

And from there we can subdivide again. "Code maintainability", for example, is divided into classes:
  • Strength of coupling
  • Code complexity
  • Conformance to coding style
  • Comment accuracy and relevance
  • Build environment and documentation

Please note that none of these items tells you how to test something. You can do exploratory tests. You can write scripts. You can use any technique you like. This merely describes a way to break your system down into testable parts.

For those of you playing along at home, I have put together my current system classification. It's not universal; you will need to make changes. Hopefully this will give you a start, though.


- Functionality
  - Your System Features Go Here
- Usability
  - GUI ease of use
    - Time to accomplish tasks or workflows
    - Ease of identification measures (e.g., "figure out how to log out" takes X seconds)
    - Style guide conformance
  - API ease of use
    - Documentation
    - Clarity of exposed calls and parameters
    - Clarity of messages, configuration, return codes
  - Operational use
    - "Care and feeding" of the system
      - Manual steps (e.g., reboot server every X, or manual log collection)
      - Required downtime (planned and amount of anticipated unplanned downtime)
    - Notification mechanisms
      - Self-identification of error and warning states
      - Integration into existing notification tools (e.g., SNMP trap, splunk)
    - Resource requirements
      - Number of servers (fewer are often easier to administer)
      - Power and cooling requirements
      - Hardware and software lifecycle requirements (e.g., upgrading every 6 weeks is harder than upgrading every 6 months)
  - Code testability
    - Availability of interfaces for mocking
    - Integration with common test tools
    - Presence of unit tests
  - Deployability
    - Package for install and upgrade
    - Dependencies on external items (e.g., libraries on the system, other software, specific hardware)
- Reliability
  - Failure sustainability
    - Ability to handle system failures (e.g., crash)
    - Ability to handle external failures (e.g., power loss)
  - Long-running tests
    - Full system behaviors
    - Long-running behaviors (e.g., log rolling abilities)
  - Memory and resource management
    - Valgrind and the like
    - Resource (CPU, memory, disk) usage to benefit ratio
    - Leaks (memory, thread, etc.)
  - Disaster recovery
    - System recovery tools and utilities
    - "Bringing the system back up" protocol and tools
  - Redundancy
    - Replication and mirroring
    - Failover and failback mechanisms between sites
    - Cross-site or cross-system synchronization (data and control)
- Performance
  - Throughput
  - Latency
  - Your System Features Go Here (with a "how fast/how many" twist)
  - Load
    - Sustained client load
    - Peak load handling
- Sustainability
  - Maintainability
    - Code smells (cleanliness)
    - Component interfaces and interactions (e.g., can you upgrade one component without huge overhead?)
  - Upgradeability
    - Upgrade mechanism
    - Re-install mechanism
    - Downtime requirements
  - Supportability
    - Issue diagnosis features
    - Reporting
    - Log collection
    - Support access mechanisms
  - Underlying components
    - Hardware components (e.g., can we still get hardware for this?)
    - Software components (e.g., is this library still supported?)

Monday, August 24, 2009

Power of Demos

We've been working on a project, and it's been a bit of a whirlwind. It's a prototype, and it's a lot of feeling our way in the dark, and doing things quickly. On top of all this, we have external parties evaluating the output of the project, so we're having to quickly answer questions about what it can do and how fast it could do it. Marketing cares, sales cares, development cares, etc.

It's a lot of moving pieces and a lot of people watching.

So early on we sat down and wrote out what we had to show in about a month. Then we broke it into one-week iterations (basically). The requirements were sketchy at best. Week two's schedule, for example, had two items on it: "faster ingest" and "modification/deletion support".

The risk of working like this is that different people can derive very different expectations from these vague requirements. So how do we overcome this?

Demo.

Every week we set up a demo and invited everyone who was remotely involved with the project. We sent out a demo invitation that basically said, "this is your chance to see what you're going to get, and this is the time we will take feedback" (we didn't actually say, "you miss it, you lose", but we sure implied it).

And then every Wednesday we showed it. We sat in a room with the software on a projector and said, "this is what we mean by faster ingest" and showed it. We said, "here's how deletion works" and showed it. People made comments, people applauded the wins, people said, "oh! we could do X".

The benefits of this were enormous:
  • It made the code showable. We got practice not only at building features but also at making sure we could show those features off to an audience of various technical levels.
  • It kept us honest. We couldn't say, "oh yeah, something's done" if we couldn't actually show it.
  • It put the audience on the same page. There was no ambiguity about what was being produced. It was there for everyone to see, and if there was a variance in expectations it could be resolved early.
  • It forced us to figure out how to demo. Some things are hard to demo. Features without a UI, or features that only show up when a system is full or has been running for a while are hard to show off. But we were going to have to figure out how to show it eventually to potential customers, etc. This demo forced us to figure out how to do it.
Next time you're moving fast, try demoing something. It's amazing how illuminating a simple demo can be. Does it take time? Absolutely. Will you get that time back later? In spades. Give it a shot.



P.S. I know this isn't a new idea, but it's a good one, so I'm all for repeating it!

Wednesday, August 19, 2009

Reports

I've been writing a lot of report-type things lately. They've been a lot of different types of things: emails summarizing meetings; PowerPoint overviews; how-to documents; test results; and more. In all cases, I'm having to consider a fairly broad presentation to an audience I may or may not know. Since these things may be forwarded around, each report has to be self-contained and self-descriptive.

And I structure them all pretty much the same way. Here's what I put in and why:
  • Overview. This section describes the purpose of the document and its intended audience. This way I don't get comments about skipping implementation details in a document intended as a first overview for an external party.
  • Results Summary. What do we see? What changed or what benefits are there? If the audience never makes it past this section, at least they'll get the big picture and the end conclusion. (Depending on the audience, I don't necessarily expect the whole thing to be read in great detail.)
  • How We Did It. This is the section that describes the setup of whatever we did (whether we implemented a product, installed something, or tested something). Maybe it's an architecture diagram. Maybe it's a test setup and configuration. Maybe it's a list of prerequisites. Either way, this is the basis on which our work was done. I write this section in a huge level of detail because I know that if I (or someone else) am going to do this again, getting that same underlying basis is going to be important for consistency of results.
  • Details. This is the guts of the thing, usually. These are the tables of test results, the API, the product features and benefits, etc.
  • Discussion. Based on the details presented, there are almost always things to discuss. Maybe it's future extensions to the product. Maybe it's further tests to run. Maybe it's an odd pattern or a reason behind some of the details. These are the questions that I expect people will ask. Often when I'm writing this one, I'll start out with the actual list of questions I anticipate (or that reviewers have asked already). Then I just fill in the blanks.
The length varies, and the contents vary, but pretty much every report-type thing I write has these in it. I find that kind of consistent outline lets me do these reports more reliably and more consistently.

What tricks do you use when you're writing a report?

Tuesday, August 18, 2009

Elevator Pitch

I work in a place with a really cool test infrastructure. There are lots of pieces to it:
  • unit tests (in several languages)
  • cruise control for auto building and tagging, etc.
  • a checkin suite of tests (runs after every build)
  • a nightly test set that includes about 25 suites (and about 3000 tests)
  • a weekly test set that includes about 8 suites with multi-day tests
  • a set of performance tests that runs every night
  • some scaling and other tests that run occasionally (real machine hogs!)
  • a truly evil test that sets up a system and inserts faults until the system fails or we want to start testing newer code so we take it down and start over (this takes weeks, generally)
  • a pretty robust framework to manage and run all this
  • I'm probably forgetting something...
I can go on about this stuff for hours (and then there's the other testing we do...). So could you, probably. There's another need though: you need your elevator pitch. This is your chance to sell your testing, and to make people want to dive into details with you.

In 30 seconds, what is your test philosophy? What is it your company does to test? Why?

For us it's basically this:

"We're a storage company: reliability and scalability matter most. People want their data, they want it to always be there, and they want it to be consistently available. Our testing model is designed to prevent regressions, stress our reliability mechanisms, sustain performance, and above all ensure consistency of experience while adding new features and platforms. This is a multi-process, highly multi-threaded grid system, so we have an extensive automated test infrastructure that helps us flush out the deadlocks, races, concurrency problems, etc. that a system like this produces. We supplement this with nightly performance testing and automation for specification conformance. Lastly, we layer in manual testing for the human experience and to examine modes we may not have taught the automated infrastructure to see. Quality is everyone's job here."

I'm not explaining everything here. This is a teaser only, but it's important. You have to start with a summary so that people want to learn more. Starting with details loses people's interest; it's just too much information to really synthesize before they decide whether it's interesting. Hook 'em, then wow 'em with the details.

Monday, August 17, 2009

Convinced

We sometimes show our product to analysts. There's a bit of a funny relationship there: we want them to say our product is great and wonderful; they want our business... and they want to appear unbiased. The end result of this is usually a report or a press release or some words they're willing to say to potential and future clients.

In order for the report to look as rosy as possible, we assemble data:
  • demos
  • performance reports
  • analysis of how much more reliable/scalable/faster we are
  • theoretical backgrounds and abstracts
Yes, we're kind of geeky, so we tend to throw data at problems. There's one missing ingredient, though:

You also have to be convincing.

It's easy to fall into a trap where you start with a position ("our product is great") and work from there to the data that shows it's great. Anyone else looking at your data, though, does not share the same starting position. They don't yet see your underlying point ("our product is great").

Present the data; this is an important part of building your stature with your audience. But don't forget to present your position as well. You need to be explicit in the conclusions you think your data shows. Specifically say, "Here is the data, and here is the conclusion (our product is cool) that results from this data."

Your position is the belief. Your data is a reason to believe. Together, they're convincing. Be convincing.

Fresh Eyes

In QA we can learn and teach and think and improve our testing of a product. But there's one thing we can never get back.... fresh eyes.

Every time I hire a tester, we see a surge in bugs logged and/or changes to the test plan shortly after that tester starts. Why?

Fresh eyes.

That tester may not know our product particularly well yet, but he's looking at it with the compendium of his entire prior experience, and none of us can match that. He'll see things we won't. He'll see things we've simply accepted and say, "why that?"

It's part of the reason I like interns... a rotating selection of fresh eyes.

Unfortunately, getting someone new on the team (either temporarily or permanently) is not always an option. So how do you freshen your own and your existing team's eyes?

The trick is to change your approach, to try something new. There are a number of ways to do this, depending on what you're doing now:
  • Blink testing
  • Rotation through another team (e.g., spend a week with dev, or with support)
  • Walk away for a bit. Work on something else; take a day off; whatever - just go away
  • Have a Go Crazy Day
  • Walk through a section of your test plan.... backwards
  • Find a really old bug that was fixed years ago and riff on it. Try variations of what triggered the bug.
Ultimately, you're trying to see your product in a way you've never seen it before, or bring a new perspective to it. Try to give yourself fresh eyes.

Friday, August 14, 2009

Bug Title Of The Day

A bug came across my desk today, with the following title:

"Upgrader ailed when running on an HA pair"

"ailed" is the best typo I've seen so far today...

Thursday, August 13, 2009

Building An Idea

Sometimes you get lucky and you get an idea. You start to work the idea, and it seems to have some merit. It's a big idea, though, and a lot of work, so you start to bounce it off other people. Simple conversations, really: "Would you buy this? What would it look like if you did?" "Do you see a market need for this?" "Is there a hole here that my idea could possibly fit?"

If they go well, these conversations bring up more questions than answers: "How fast is it?" "When could I have it?" "Could it make my toast in the morning?" This is good but a bit overwhelming. Likely the answers to all of these questions are simply "I don't know yet".

And it's okay. There's probably a very large realm of things you don't know. But we can deal with it. As you talk to people and continue to think through those ideas, commonalities will emerge. So start doing, and start doing in a certain order:
  • First, plan what everyone has asked about
  • Second, do what your probable "first customer" has asked about
  • Third, do what everyone has asked about
  • Fourth, plan what a subset of people has asked about
  • Fifth, well, by now you have something resembling a product and a lot of other things have interfered.
Note the difference here between plan and do. When you're in early stages, as long as you're being honest, it's okay to say you are planning but haven't yet done things. But you have to do something fairly quickly so people understand that it's an actionable idea rather than an idle dream.

Plan for the large group that is "everyone". Do for the one you need to impress. Then do for everyone else.

Wednesday, August 12, 2009

Things My Phone Has Said

I got a new phone fairly recently, and it's my first smart phone. By that I mean, it's the first thing that does email and whatnot. My previous phones have all been the basic "make calls and a little light texting" style.

I'm still learning. Turns out, the phone has autocorrect for when your fingers slip on the teeny tiny keys. Usually this is helpful, but sometimes my phone says things I don't mean. To wit:

What I said: A note to a colleague saying I had put some test data locations on the wiki.
What my phone said: "I put the test data on the bike"
Subs: s/bike/wiki/g. Thanks, phone.

What I said: A note to the IT guy saying that the Symantec Antivirus install had gone fine.
What my phone said: "The Semantic install worked. Thanks."
Subs: s/Semantic/Symantec/g. Thanks, phone.

What I said: "Thanks. Catherine"
What my phone said: "Thanks, Cathedra"
Subs: My own name! Thanks, phone.

I still have some typing skills to master, I think.

Tuesday, August 11, 2009

Help Yourself

Trying to do something new for the first time is generally a bit frustrating. I just don't know how to do it, and well, let's just say that I'm not the model of grace and elegance that I am when I'm doing something I know very well!

It's really tempting sometimes to throw up my hands and ask someone to walk me through it. Or I could show a little initiative and at least try once before I go running for assistance. People are happy to help, usually, but they're happier to help when you've at least tried to help yourself first.

It's a fine line between asking for help too often and sitting around being stuck because you won't ask for help.

In general, ask for help if:
  • Doing it wrong will break something expensive
  • You've been told to ask for help
  • You've tried a few times (really tried) and failed. Be prepared to state what you've tried
In general, don't ask for help if:
  • You haven't even tried it at all
  • You haven't read any available documentation
Help yourself. Then get others to help you. It'll go much better.

Monday, August 10, 2009

Ambition In a Different Form

I like Mondays. Sometimes, though, going into Monday you know the whole week is going to be rough. Maybe there's a lot going on, maybe everything you touched last week fell to pieces, maybe you are behind on a project and have a deadline coming up.... whatever it is, this week is gonna be a doozy!

When that happens (and it's this week for me), I like to make Monday one long day. I'll skip the gym and head in early, and I'll leave late. My goal?

Make sure the week starts off well.

It's that simple. If it's a rough week, I need to know that I've started it off right, and that's what I use Monday for. So I don't go home until I've made measurable progress. Mondays are for being ambitious, but ambition can mean different things.

Sometimes ambition consists of simply making sure you don't get behind.

I can sympathize, and it's hard not to dread that kind of busy week... but think of all you'll have done at the end of it!

Friday, August 7, 2009

"ish"

There are warning signs that we all can see...


And then there are the quieter warning signs. One of the most common of those is "ish".

As in:
  • Sure, that's trivial... ish.
  • We'll be done in August... ish.
  • That's a smallish change.
"ish" here is code for "I don't know, exactly"; it's the verbal equivalent of waving your hands in the air vaguely. It's not bad, necessarily. It just means that something may change in the future.

So watch your "ish" and give yourself a bit more wiggle room when your task is a bit fuzzy.

Thursday, August 6, 2009

"Do Next"

Work ultimately comes down to, "Do this next". Sometimes I have a meeting and the answer is obvious. Other times, I can choose from a number of different things, and the answer of "what do I do now?" is less obvious.

One of the problems that I'm trying to solve here is the notion that I need to keep all the balls in the air across several projects. Let's say I have five projects going on:
  • main product release
  • prototype release
  • client issue
  • next release plan
  • lab expansion plan
I can choose to work them serially (today I'm doing main product release, tomorrow next release plan, etc). However, it means I'm touching each of them only once a week, and in the week that elapses, well, that's a lot of other people also working on these projects who are waiting. No good.

So I have to choose to work on each of them frequently enough to keep current. It doesn't have to be much, and the frequency varies based on the priority of the project overall. In general, I sit down and say, "what is most blocked by me?" and work on that.

How do you handle balance, at the fine-grained tactical level?

Wednesday, August 5, 2009

Medium Consistency

We message to a lot of different people through a lot of different mediums. Over the course of a project or even longer, we build habits. Our coworkers, managers, and executives learn how to handle interactions with us: they go to the wiki every day, or they run a query in the defect tracking system, or they look for an email from us. And this is good - this is a habit we look to build.

Once it's built, though, we've created something of a monster. Habits are hard to change - just ask anyone who's taken up an exercise plan, or tried to quit smoking! So this means that now that you've trained your audience, you have to ensure consistency. The message will change, of course. Over time it will morph from some variation on "in progress, have questions" to "finding some issues and working through them" to "done! hurrah!". But...

The medium must always be consistent.

If you've set an email precedent, stick with it. If you've set a wiki precedent, stick with it. Whatever habit you've made, you need to keep it.

So... we have achieved stasis. Forever and ever and ever... right?

Not exactly. Over time we can't simply keep doing the same thing. New people, new projects, new technologies - life moves on. And part of that is how you communicate, the medium through which you provide your message (10 years ago, a twitter project update would have been unheard of!).

If you need to change medium, choose a psychological break point.

Habits change at break points. Someone starts exercising after a heart attack. Someone else quits smoking after the cigarette tax doubles. These are break points, inflections during which people are more open to change. Find these, and use them to make your change. The start of a new project is perhaps the easiest break point at which to change medium. A staffing change or a "process improvement initiative" is another.

As you communicate, remember, your audience values consistency. It's that consistency that lets them create a heartbeat for communication, and that lets them look beyond the medium to the message. Give them what they want, and you'll greatly improve your chances of getting what you need.

Tuesday, August 4, 2009

Party Trick

Over time in QA we get really good at remembering bug numbers. These are the defects we see for several nights in a row, or the bugs that tear through a whole swath of tests, or the defects that recur again and again.

It's a little eerie how things come out of memory, even after the bug has been closed:

30969: single scenario failure in concurrent transaction test
PERMA-11108: warning in the Java build
32273: performance issue with extremely large directories
etc....

I'm pretty sure this is a useless skill. But it's kind of fun. Plus I look really smart in meetings when I can just bring these out, computerless!

Monday, August 3, 2009

To Capacity

Most systems have ideal and maximum operating parameters. Think of things like temperature, size of disk available, number of concurrent users, etc. Whether or not you know them, your system has ranges within which it works quite well and ranges in which it's working but getting strained. Over time, you rewrite things and add features to become more efficient; the system's capacity grows.

People are the same way.

There is a range of things that the average employee can do well. And there is a range of things that they can do but the strain will show. After that, well, you're outside capacity. The trick is finding these ranges, using them, and growing them in a productive way.

For example, I once hired someone right out of college. He had an immense talent for finding weak points in a system, and he was quite good at whipping up scripts and utilities for data generation and the like. He had no capacity for writing up a test plan; he'd think of it and do it, but writing it all down was very hard for him. This was his base capacity.

Asking this person to do things outside his capacity simply doomed him to failure, just as a system asked to handle twice its maximum concurrent users will fail every time. So I didn't. I asked him to do things within his capacity, and sprinkled that with exercises to stretch his capacity. I asked him to:
  • Perform exploratory testing on a feature. This had him working within his capacity, a sort of spontaneous interaction with the system, and also helped him learn how to write things down. Sure, it was as he went along, but getting him thinking about expressing his tests in writing is the first step to anticipatory writing.
  • Create a script that everyone could use to generate data. The scripting he could do in his sleep, but for anyone else to use it, he had to learn to document it. This was teaching him to think of others and of their knowledge levels and how much communication they needed. This one took several tries!
In each case, the important part was to take advantage of what he knew, to make the meat of a task within his capacity. Some aspect, however, was just barely outside what he could do. Because it wasn't the whole task, the frustration levels stayed down. He was able to be a productive member of the team - the exploratory tests and data generation scripts had value - and also to learn so that he could over time become an even more valuable member of the team.

And over time, capacity grows.