
Sunday, 1 July 2007

The Importance of Documentation

Documents and other artefacts

The QA team can often be heard asking for documentation, much to the annoyance of those being asked - usually Developers or Project Managers, of course. The documents asked for go by various titles: Story Cards, Requirements, Functional Specifications, Technical Designs, Preliminary Specifications, a collection of these, or something slightly different.

Either way, from the QA perspective these documents provide an overview of the requirements the development work is trying to deliver on. They describe what the intended design looks like and what the developers are coding against, along with explicitly stated requirements from which the implementation specifics can be gleaned. By explicit specifics I mean, for example, not just a high-level schema but details of tables, rows, columns, fields, mappings, allowable data, the format of that data and so on.
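The kind of field-level detail meant here can be captured as a simple data dictionary that tests are then written against. A minimal sketch in Python - the table, field names, formats and rules are all invented for illustration:

```python
import re

# Hypothetical specification for one column of an "orders" table.
# Everything here (names, format, rules) is made up for illustration.
ORDER_ID_SPEC = {
    "column": "order_id",
    "type": str,
    "format": r"^ORD-\d{6}$",   # e.g. ORD-000123
    "required": True,
}

def conforms(value, spec):
    """Check a single value against its field specification."""
    if value is None:
        return not spec["required"]
    if not isinstance(value, spec["type"]):
        return False
    return re.match(spec["format"], value) is not None

print(conforms("ORD-000123", ORDER_ID_SPEC))  # True
print(conforms("123", ORDER_ID_SPEC))         # False
```

With a spec this explicit, a Test Case can state the exact allowable values up front rather than leaving the tester to guess what "correct" means.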

Why? Because if requirements are explicit then the Test Cases we write can be specific, in which case they won't be general. See the not so subtle difference there? If requirements are not explicit, or documents are incomplete or ambiguous, then the tests we can write will be general in nature. A test of this type would be "The correct data should be output from the order capture system": you'll see data, it'll look correct, nothing bad will appear to have happened - the test will pass. Tests like this only prove the system doesn't obviously do something we wouldn't want it to do. That is, we only test that the system does something that, when observed, could generally be assumed to be correct.

Not really a very useful test though, is it? Writing good Test Cases is a bit like writing SMART goals: you don't have to guess whether the goal was achieved, because you already know what the planned, predictable, intended outcome will look like. Without that, we'll be observing, not testing.

However, if no one actually states what the requirement is - clearly, explicitly, unambiguously - we'll have no idea how to test that the development work really did produce the product that was wanted, in the way it was wanted. We can never write SMART, useful tests. We'll just write a whole raft of superficial, not very useful tests that'll probably pass, because the outcomes don't look wrong, and that will be very unlikely to find important bugs.

Mark Crowther - QA Manager and Documentation Clerk