Thursday, 31 March 2016

Testing activities BEFORE and AFTER release

Testing activities before the software is released

As test professionals, and certainly as test managers, we focus much of our attention on the start-to-end process of testing whatever we're building, making sure it works as expected before it leaves our hands. Once the software or system goes live, we pat ourselves on the back, congratulating ourselves on a job well done.

But, in an agilistic sense, are we done-done? No, we are not. Why would that be? Don't kick yourself too hard if nothing immediately comes to mind. It's a failing most of us are either forced into by project constraints or trained to fall into by the way we think about projects and testing.

What we need to do, first of all, is be mindful of a form of product lifecycle, where the product goes from conception to retirement after its usefulness ends. Now, I'm not talking about the formal PDLC; I'm not convinced that applies as fully as some would think in the software development world these days. The model I want to promote is one where testing does NOT end when that software leaves our hands. It also doesn't start when it arrives in our hands.

I'd say (and hope) it's accepted practice today that testing activities start well before the software lands with us for test. I hope the act of throwing it over the fence no longer happens in any meaningful way.

We understand that involving the test function in analysis and planning early in the development process only brings benefits. In an ideal environment, the test function is part of the process of defining requirements, in whatever shape they take. We're reviewing them for congruency, acceptability, testability, whether they can be automated, and so on.

Even in the classic V-model, we try to pull forward writing acceptance tests and test plans before code lands. In an iterative/adaptive environment those activities are practically embedded in the process.

Testing activities after the software is released

So, what about when testing is done? This is actually where we step up our professional game and deliver a real value-add.

What is the objective of testing? To assure products work as expected, to find defects, to confirm the software or system is performant, and so on. One key aspect of the testing regime is to ensure ongoing improvement and assurance of the above - to improve the efficacy of testing release by release, by understanding the testing that was performed and what its outcome was.

This is critical - how else are you measuring the effectiveness of testing? By the number of defects found? By the number of test cases executed to show test coverage? These are useful measures to a degree, though they are sometimes counterproductive. We need to take it one step further, by doing things after we're done testing.

Analysing defects found during testing - understanding the types and causes of issues discovered, and performing process- and people-oriented root cause analysis - is critical to addressing potentially systemic issues that impact code and product quality. The other side is defects found post-release by customers and users: those reported to the support teams.

We need to watch the defects that arrive and either be part of the analysis and root cause discovery on a defect-by-defect basis or, at the very least, participate in, say, weekly updates on these defects. Part of what we need to learn from this is the number of defects by system area and their severity. How did we not find them? How do we modify our test approach in future? What analysis techniques do we need to strengthen or start applying, or possibly even stop using altogether?
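The tallying described above can be sketched in a few lines. This is a minimal illustration, not a real tool: the defect records, field names, and categories here are all assumptions standing in for whatever your support or defect-tracking system actually exports.

```python
from collections import Counter

# Hypothetical post-release defect records as a support team might log them;
# the "area" and "severity" field names are illustrative assumptions.
defects = [
    {"area": "billing", "severity": "high"},
    {"area": "billing", "severity": "low"},
    {"area": "login", "severity": "high"},
    {"area": "billing", "severity": "high"},
]

# Tally defects by system area, and by (area, severity) pair, so a weekly
# review can spot the clusters worth a root-cause investigation.
by_area = Counter(d["area"] for d in defects)
by_area_severity = Counter((d["area"], d["severity"]) for d in defects)

print(by_area.most_common())  # areas with the most escaped defects first
print(by_area_severity[("billing", "high")])
```

Even a crude count like this makes the "how did we not find them?" conversation concrete: the areas at the top of the list are where the test approach needs attention first.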

In this way we carefully and measurably improve the way we test. You can try all the agile, context-driven, session-based, scripted, and automated testing approaches you like - but they mean nothing if they aren't measurably improving code and product quality. You're probably getting a sense of why I don't give fanatical devotion to any school of testing or development: it's a rabbit hole with spiky failure at the bottom.

Once you identify patterns - defect clusters and types, volume, severity, rate of discovery post-live, and other key aspects - you can then fix the root causes. Rinse and repeat, and demonstrate that the issues you saw last time are gone, that quality as measured by the impact of defects found in live has improved, and that the test regime is demonstrably more effective.
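One way to make "demonstrably more effective" measurable is to track, release over release, what share of all known defects escaped to live. A sketch of that idea, with invented release numbers and a hypothetical "escape rate" metric (neither comes from the post):

```python
# Illustrative release data: defects found before release vs. reported after.
# Both the numbers and the metric name are assumptions for this sketch.
releases = {
    "1.0": {"in_test": 40, "in_live": 10},
    "1.1": {"in_test": 45, "in_live": 6},
    "1.2": {"in_test": 50, "in_live": 3},
}

def escape_rate(found_in_test, found_in_live):
    """Share of all known defects for a release that escaped to live."""
    total = found_in_test + found_in_live
    return found_in_live / total if total else 0.0

rates = {name: escape_rate(d["in_test"], d["in_live"])
         for name, d in releases.items()}

# A falling escape rate, release over release, is one concrete signal that
# the changes made to the test regime are actually working.
for name, rate in rates.items():
    print(f"release {name}: escape rate {rate:.1%}")
```

The specific metric matters less than the loop: measure post-live discovery, change the test approach, then check the trend rather than assuming the change helped.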

These are some of the reasons we are still testing after we've apparently finished testing.

Not until defect discovery is near zero, or what's being discovered is trivial, is our job done-done.

