Monday, 4 August 2014

Negative and Positive Tests - you're getting it wrong

When we write test case sets, there's a commonly used idea of writing positive and negative test cases. The thinking is that positive test cases cover all the things you DO want the software to do, while the negative test cases cover what you DON'T want it to do. However, when applied in this way the negative test cases usually get confused in their focus.

All features and functionality that we code in intentionally are POSITIVE scenarios. Yes, any error conditions, retries, warning dialogue boxes, etc. that are specified as requirements or standard expectations are POSITIVE test conditions. When you test the failure path, with expected outcomes, it's a Positive test case. Negative test cases need to cover the unintended and unexpected. If you thought writing Negative test cases was easy, you've definitely been doing it wrong.
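To make the distinction concrete, here's a minimal sketch in Python/pytest; the client and flaky_auth_backend fixtures, the login API and the error text are all hypothetical, it's the shape of the two tests that matters. The first exercises a specified failure path (a Positive test), the second probes an outcome no requirement mentions (a Negative test).

```python
import pytest

# POSITIVE: the failure path is specified behaviour - bad credentials must
# be rejected with a clear error. Expected outcome, so it's a Positive test
# even though it exercises an "error" scenario.
def test_login_rejects_bad_password(client):
    response = client.login("alice", "wrong-password")
    assert response.status_code == 401
    assert "Invalid username or password" in response.body

# NEGATIVE: nothing in the spec says what happens if the auth backend times
# out mid-login. Experience says stale half-created sessions are a real
# risk, so we probe for the unintended outcome.
def test_login_timeout_leaves_no_partial_session(client, flaky_auth_backend):
    flaky_auth_backend.simulate_timeout()
    with pytest.raises(Exception):
        client.login("alice", "correct-horse-battery-staple")
    assert client.active_sessions("alice") == []
```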

So what do Positive and Negative really cover? In my view it's the following:

POSITIVE
Intended, expected and explicitly stated features and functionality
- that are required - the stuff you know you do want
- that are not required or expected - the stuff you know you don't want

NEGATIVE
Potential issues and problems that can occur even if the probability is low
- that experience tells you are possible given the technology you're using (inductive)
- that you believe are likely given the nature of issues you've seen before (deductive)

Writing Positive test cases^ is relatively easy; writing Negative test cases is hard, period. To do it well, you have to draw on a combination of technical knowledge, systems know-how and raw experience. One way to do this is to apply Inductive and Deductive analysis. I've written about that in a separate post, but here's a recap:


- Inductive: Define a failure mode / bug and think of how you would know it was present in the system, what would failure look like?
- Deductive: Define some form of erroneous behaviour and work out what would be broken in the system for that behaviour to occur.

With this approach you come at the same conclusion from one end or the other, defining a failure mode and its effect. Yep, if that rings a bell it's because it's not some super new way of thinking, it's been around since the 50s in the shape of Failure Modes & Effects Analysis (FMEA). It's a key ingredient in the sauciness of being a test architect.
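If it helps to see that written down, here's a rough, purely illustrative sketch of capturing the analysis as a worksheet; the failure modes and field names are examples I've made up, not a formal FMEA template.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    """One row of a lightweight, FMEA-flavoured worksheet (illustrative only)."""
    failure_mode: str       # what breaks (deductive: work back from the behaviour)
    observable_effect: str  # how you'd know (inductive: work forward from the fault)
    negative_test: str      # the Negative test case that would catch it

worksheet = [
    FailureMode(
        failure_mode="Config file deleted during deployment",
        observable_effect="Service silently starts with built-in defaults",
        negative_test="Remove the config, restart, assert start-up is refused and logged",
    ),
    FailureMode(
        failure_mode="Session check skipped on one endpoint",
        observable_effect="Unauthenticated user reaches a protected page",
        negative_test="Request every protected URL with no auth token, expect 401",
    ),
]

# The worksheet doubles as a checklist when reporting Negative coverage.
for row in worksheet:
    print(f"{row.failure_mode} -> {row.observable_effect}")
```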

As a rule of thumb, expect to write a meaningful Negative test case for every 50 Positive test cases; a ratio of 1:100 wouldn't be unreasonable. These are the tests that stop planes falling out of the sky, prevent satellites from failing and stop nuclear power plants exploding, so don't stress it if your numbers are low.

Pass or Fail?
When writing Positive and Negative test cases (or unit tests for that matter), a common question is whether they should be shown as Passes or Fails. Positive test cases should certainly pass; we're testing for something that we know should or shouldn't happen, after all. When that expected behaviour does or doesn't happen, it's a pass. But for Negative? I see the argument for flagging these as Fails. They should Fail; a Fail of a Negative is a good thing, it means planes are in the sky and the world isn't Fukushima'd.

However, in practice this approach proves confusing. The discussions with management about ongoing status and end-of-test reporting, where Negatives are actually fine but show as bright red Fails, are just too much work. Pass them all. When a Positive passes it's green; when a Negative fails it's green. When a Negative passes, however, you have a major unexpected issue and need to jump on it. Separate out your Positive and Negative test reporting for extra impact and kudos.
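A rough sketch of that reporting twist, assuming a simple test-result record (the field and status names are mine, not from any particular tool): the raw outcome of a Negative is inverted before it reaches the status report.

```python
def report_status(test_type: str, behaviour_observed: bool) -> str:
    """
    Map a raw outcome to what the status report should show.

    behaviour_observed means:
      positive test -> did the specified behaviour happen?
      negative test -> did the failure mode actually materialise?
    """
    if test_type == "positive":
        return "PASS (green)" if behaviour_observed else "FAIL (red)"
    # Negative: the failure mode staying absent is the good outcome.
    # If it materialises, that's the major unexpected issue - shout about it.
    return "FAIL (red) - investigate now" if behaviour_observed else "PASS (green)"

print(report_status("negative", behaviour_observed=False))  # PASS (green)
```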

^ Test Cases, Mind Maps or whatever your test currency is for your project.

Common Examples
It's hard to come up with a cookbook of Negative test cases (I may have mentioned that...), but here are some I get reuse out of from time to time.

- Configuration files missing
- Entry to the system without authentication
- Web server not operational
- User able to access other user records
- Data or file encryption not working
- Backwards compatibility failure when upgrading to a new version

These are common across many systems, but they're also the classic examples of 'that could never happen'. We hear this too often because tech stacks usually come with whatever it is already included or enabled; it's core functionality, beyond the stuff being worked on day to day. Therefore it 'would just never fail', 'no one would make that change without checking' or the system 'couldn't work without it'. Hubris before the fall. The risk is there, and putting tests in place to make sure the failure mode is caught as soon as it occurs is critical.
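To show what one of these looks like in practice, here's a rough sketch of the 'entry without authentication' example as a Python test; the service URL, paths and expected status codes are assumptions about a hypothetical web app, so treat it as a shape to copy rather than a recipe.

```python
import requests

# Hypothetical service under test - swap in your own base URL and paths.
SERVICE_URL = "http://localhost:8080"
PROTECTED_PATHS = ["/admin", "/users/42/records", "/export"]

def test_protected_pages_require_authentication():
    """Negative test: nothing should be reachable without a session token."""
    for path in PROTECTED_PATHS:
        response = requests.get(SERVICE_URL + path, allow_redirects=False)
        # Anything other than an auth challenge means the thing that
        # 'could never happen' just happened.
        assert response.status_code in (401, 403), (
            f"{path} served content without authentication"
        )
```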

With a slightly shifted perspective on Positive and Negative tests, the robustness of your test regime can be increased greatly. It takes practice to write them well, but your application needs them!

Mark
