In preparing to automate an existing set of manual scripts, we'll go through a series of phases, sequentially or iteratively, depending on how the project is being managed. One of these is the Analysis phase, a key activity of which is to analyse the existing set of Test Cases and assess various aspects of them before automation begins.
Aspects we'll need to understand include (a sketch of how these might be recorded follows the list):
- Amount: the total number of Test Cases to be automated
- State: whether they're current or out of date
- Quality: how well written and easy to follow they are
- Complexity: the number of steps, the systems interacted with, the set-up required
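As a minimal sketch of how that inventory might be captured, here's a hypothetical per-Test-Case record in Python. The field names and the 1-5 quality scale are assumptions, not a prescribed format:

```python
from dataclasses import dataclass
from enum import Enum

class State(Enum):
    CURRENT = "current"
    OUT_OF_DATE = "out of date"

@dataclass
class TestCaseAssessment:
    """One row of the Analysis-phase inventory for a single Test Case."""
    case_id: str          # hypothetical identifier scheme
    state: State          # current or out of date
    quality: int          # assumed scale: 1 (poorly written) to 5 (easy to follow)
    step_count: int       # steps to the validation point
    systems_touched: int  # number of systems interacted with
    setup_required: bool  # non-trivial set-up needed before the case can run

# 'Amount' is a property of the whole set rather than any one case:
assessments = [
    TestCaseAssessment("TC-101", State.CURRENT, 4, 12, 1, False),
    TestCaseAssessment("TC-102", State.OUT_OF_DATE, 2, 40, 3, True),
]
total_to_automate = len(assessments)
```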
The risk is that we start scripting, crack open the next set of Test Cases and suddenly realise they're far more complex than expected. The delivery rate slows, team members start doing crazy hours and stakeholders begin to shout about dates. We need to add analysis of the Complexity to Automate into the Analysis phase. This will make our estimates more accurate and keep our delivery on track against the schedule.
How can we do this? The main issue is how we recognise complexity and assign some kind of value to it. In other words:
- What does complexity look like and how long does it take to address it when writing scripts?
Once that's worked out, we still don't have our running order for delivery; we'd then need to map the complexity against the usual question of:
- What is the business criticality of the Test Cases to be automated?
From there we can cut a delivery schedule we'll have a better chance of achieving, while delivering scripts in an order relevant to the business.
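As a rough sketch of how the two dimensions might be combined: put the most business-critical cases first and, within a criticality band, deliver the cheaper scripts first so value lands early. The field names and figures below are illustrative, not a fixed scheme:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    case_id: str
    criticality: int         # assumed scale: 1 = most business-critical
    complexity_score: float  # higher = more effort to automate
    est_days: float          # scripting estimate derived from the score

def delivery_order(candidates: list[Candidate]) -> list[Candidate]:
    # Most critical first; within a criticality band, cheapest wins first.
    return sorted(candidates, key=lambda c: (c.criticality, c.complexity_score))

backlog = [
    Candidate("TC-204", criticality=1, complexity_score=8.0, est_days=3.0),
    Candidate("TC-101", criticality=1, complexity_score=2.5, est_days=1.0),
    Candidate("TC-330", criticality=3, complexity_score=1.0, est_days=0.5),
]
for c in delivery_order(backlog):
    print(c.case_id, c.est_days)  # TC-101, TC-204, TC-330
```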
The main question, then, is how to describe complexity. What makes a Test Case complex? Here are some suggestions (a scoring sketch follows the list):
- Total number of steps to carry out before reaching the test validation point
- Pre-conditions that need to be met to set the system up in a state that allows the Test Case steps to be run, which might include:
  - Data of a certain type, in a given state, needed to run the case
  - Access, permissions and accounts required
  - System and infrastructure dependencies
  - Specific tools or scripts
  - A particular kind of operating system or environment
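One way to turn those factors into a value is a weighted sum, as in the sketch below. The weights are purely illustrative assumptions; in practice you'd calibrate them against Test Cases your team has already automated:

```python
def complexity_score(case: dict, weights: dict | None = None) -> float:
    """Weighted sum over the complexity factors listed above."""
    w = weights or {
        "steps": 0.5,          # per step to the validation point
        "preconditions": 2.0,  # per pre-condition to satisfy
        "data_sets": 3.0,      # per data set to create in a given state
        "accounts": 1.5,       # per account/permission to arrange
        "dependencies": 2.5,   # per system or infrastructure dependency
        "special_env": 4.0,    # flat cost if a specific OS/tool/environment is needed
    }
    return (
        w["steps"] * case["steps"]
        + w["preconditions"] * case["preconditions"]
        + w["data_sets"] * case["data_sets"]
        + w["accounts"] * case["accounts"]
        + w["dependencies"] * case["dependencies"]
        + (w["special_env"] if case["special_env"] else 0.0)
    )

print(complexity_score({
    "steps": 25, "preconditions": 2, "data_sets": 1,
    "accounts": 2, "dependencies": 1, "special_env": True,
}))  # 12.5 + 4.0 + 3.0 + 3.0 + 2.5 + 4.0 = 29.0
```

The score itself is meaningless in isolation; its job is to rank cases against each other and to feed the scheduling question above.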
As with all estimation, the margin for error is high, as there's unlikely to be complete consistency across all Test Cases. The trick is to do a little equivalence partitioning on the set, then apply your modelling consistently at the partition level. That way we address the idea that not all steps, data and so on are created equal, while within a common set of cases they'll be reasonably consistent.
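A sketch of that idea: group the set into partitions of roughly equivalent cases, derive a per-script figure from the cases already automated in each partition, then project it over the remainder. The partition keys and day counts here are hypothetical:

```python
from collections import defaultdict
from statistics import mean

# (case_id, partition_key, actual_days) -- actual_days is known only for
# cases already automated; None means still to be scripted.
cases = [
    ("TC-101", "simple-ui", 1.0),
    ("TC-102", "simple-ui", 1.5),
    ("TC-103", "simple-ui", None),
    ("TC-201", "multi-system", 4.0),
    ("TC-202", "multi-system", None),
    ("TC-203", "multi-system", None),
]

# Group cases into equivalence partitions.
partitions = defaultdict(list)
for case_id, key, days in cases:
    partitions[key].append(days)

# Estimate the unwritten scripts in each partition from the average
# of the ones already delivered in that partition.
for key, days_list in partitions.items():
    done = [d for d in days_list if d is not None]
    remaining = sum(1 for d in days_list if d is None)
    if done:
        print(f"{key}: ~{mean(done):.1f} days/script, "
              f"{remaining} left, approx {mean(done) * remaining:.1f} days")
```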