
Friday, 23 November 2012

Near Cause and Root Cause Analysis

Let’s look back to the Deductive Analysis section and consider the issue of “…settlement amount on a transaction, given its base price and a fee that gets charged on top, is not adding up to what’s expected.”

Thinking this through as a worked example, how might we have found the issue? It appears we had some kind of trade or purchase to make; there's a spot or base rate for the item, but we have to pay a fee of some kind on top too. So if the item was £100 plus a 10% fee, we'd expect the total to be £110. A simple one, for ease of illustration.

Now the issue appears to be that our expected outcome of £110 wasn't achieved; let's say it came out at £120 when we ran an end of day report. What would you do? Well, you wouldn't just report "settlement amount on a transaction is not adding up to what's expected." That isn't Near or Root Cause analysis, it's just issue reporting. It means the developer has to do all the investigation, when as a Test Analyst you should be doing some of it.

So, we'd do some checking: check the figures that were input and make sure the base was £100. OK, entry of the base figure is not the issue. Run different figures and see if they come out at amount + 10%; that's fine too, the summary page shows a 10% fee and £110. At this point we could go 'under' the UI elements, nearer to the scripts that submit transactions, closer to the transaction or reporting database, etc.

This is a very simple example of Near Cause analysis and is the MINIMUM we should do. We're nearer to the Root Cause, and it will help whoever has to fix the issue get to a fix more quickly. Let me say again, Near Cause analysis is the MINIMUM you should do when reporting any issue.

If we're skilled enough or have appropriate access, we might then look at the underlying scripts and maybe take a look at the databases or other systems. We might now inject data straight into the database with a bit of SQL and run the transactions report. Let's say when we do, the report shows our £100 transaction as £120. Aha, so we're getting warmer. We could realise that what's presented on the UI is merely the output of browser-side JavaScript, but the actual final, chargeable amount is calculated by some clever server-side code, as you'd expect given the security/hacking considerations if that calculation sat on the user-facing side.

Now we have our Root Cause: script ABC is calculating the 10% fee twice. A developer can quickly crack open the script, correct the erroneous if-then or variable or whatever the cause was, and deliver a fix.
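To make the Root Cause concrete, here's a minimal sketch in Python of the kind of defect described above. The post doesn't show 'script ABC', so the function names and structure here are purely hypothetical, for illustration only.

FEE_RATE = 0.10

def displayed_total(base):
    # Browser-side calculation shown on the summary page: correct
    return base * (1 + FEE_RATE)

def settlement_amount_buggy(base):
    # Server-side calculation with the defect: the fee gets applied twice
    amount = base * (1 + FEE_RATE)    # 110.0
    amount += base * FEE_RATE         # 120.0 - fee added a second time
    return amount

def settlement_amount_fixed(base):
    # The fix: apply the fee exactly once
    return base * (1 + FEE_RATE)

print(displayed_total(100))            # 110.0 - what the UI shows
print(settlement_amount_buggy(100))    # 120.0 - what the end of day report showed
print(settlement_amount_fixed(100))    # 110.0 - the expected settlement amount

Spotting that the UI figure and the report figure disagree is the Near Cause; pinning it on the double fee calculation is the Root Cause.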

In summary: Near Cause as a minimum, and never, ever just report an issue and leave it there. Go to Root Cause if you have the time, understanding and access. It doesn't mean developers blindly correct what you suggest as the Near or Root Cause; we always expect things to be peer reviewed. But in this way we cut down the find-to-fix time and get working software delivered. That's what we're here for, right?

Mark.



Inductive and Deductive Analysis

More Advanced Analysis techniques

Once we have the basic analysis techniques of Reference, Inference and Conference understood we can look at some more advanced techniques. Many moons ago when I worked in manufacturing QA these two other techniques were my Katana and Wakizashi, slicing through the planning and root cause analysis problems. As often happens in a career with changes in focus, I forgot them for a while and then rediscovered them a few years back. These techniques are Inductive Analysis and Deductive Analysis.

These two are my favourites and to keep repeating the point, I’m guessing you already use them. If you’ve ever written any Test Cases or thought through why a bug might be occurring – you’ve already applied Inductive and Deductive Analysis, at least at a basic level. I consider these to be advanced techniques as they are relied on, albeit supported by some analysis tools, by industries where quality is paramount and any failure must be absolutely understood. Industries that use these techniques include gas and oil exploration, civil and military aerospace, the nuclear power industry, medical equipment manufacture and pharmaceuticals.

The test group will naturally apply Inductive and Deductive analysis as they carry out their testing activities. For example:

- When a bug is found, thought will be given to what other functionality may be affected and these areas of functionality will be checked
- When errors are observed that seem similar in nature, connected paths of the user journey may be tested to see if they lead back to a single cause

In other words we could use:

- Inductive Analysis: to identify meaningful Test Cases, before any test execution takes place
- Deductive Analysis: to find Root Cause of issues and add additional Test Cases once bugs are found

For those who have followed the Testing Vs Checking debate, it should be noted that we're not using these two techniques to check that requirements have been implemented. I don't believe the skills of a clever, experienced Test Analyst are best employed in marking the work of developers. In some ways I don't feel this testing (really, checking) should even be done by professional testers, but that's a topic for a post on UAT. Just assume, for the sake of this post, that your developers can code just fine and you need to find the tricky issues.

Inductive Analysis
When we apply Inductive Analysis we work from the perspective that a bug is present in the application under test, then try to evaluate how it would manifest itself in the form of erroneous behaviours and characteristics of the application's features or functionality. One way to remember the purpose of Inductive Analysis is that we move from the specific to the general.

At this point we could be applying Test Case Design Techniques and asking: if invalid data was input by the user, or if a component state was not as expected and the user tried to carry out some action, how would we know? What would the failure look like? How would the error show itself to the user?

For those who are versed in manufacturing QA we’re also moving close to Failure Modes and Effects Analysis (FMEA), yet another interesting topic for another post…

With Inductive Analysis we start from the idea of a specific bug (a specific failure mode) that we agree could be present in the system, and then ascertain its effect on potentially numerous areas of the application.

As I'm a fan of the Gherkin way of writing test conditions, here's a slightly contrived example in that style:

GIVEN connectivity to the trade database has failed (our bug/failure mode[that may not be obvious])
WHEN the user submits their trade data for fulfilment (expected functionality of the system)
THEN the order will not be placed (failure effect [we can’t assume the user will know])
AND end of day reconciliation reports will show an outstanding order (definite ‘how’ we will know)
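As a rough illustration of how that condition could become an automated check, here's a minimal sketch in Python. The trade submission call, the reconciliation report and their return values are hypothetical stand-ins invented for this example, not anything from a real system.

def submit_trade(trade, db_available=True):
    # Hypothetical trade submission endpoint; if the trade database is
    # unreachable the order simply doesn't get placed
    if not db_available:
        return {"trade": trade, "placed": False}
    return {"trade": trade, "placed": True}

def reconciliation_report(orders):
    # Hypothetical end of day report: anything not placed is outstanding
    return [o for o in orders if not o["placed"]]

def test_trade_db_connectivity_failure():
    # GIVEN connectivity to the trade database has failed
    result = submit_trade({"item": "XYZ", "amount": 100}, db_available=False)
    # THEN the order will not be placed
    assert result["placed"] is False
    # AND the end of day reconciliation report shows it as outstanding
    assert len(reconciliation_report([result])) == 1

test_trade_db_connectivity_failure()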

Remember, with Inductive Analysis you're thinking about testing risks and designing a test case to cover each risk. Isn't that more like testing than just checking that requirements were implemented?

Deductive Analysis
With Deductive Analysis we assume that erroneous behaviours and characteristics of features or functionality have already been observed and reported. That is, we have a bug that’s been reported and we now need to work backwards to the Near Cause or Root Cause of the bug.

An example might be where style and layout on multiple pages of a website are not as expected, the specific cause perhaps being a bug in a CSS file. Or perhaps a settlement amount on a transaction, given its base price and a fee that gets charged on top, is not adding up to what's expected. In this way we attempt to move from the general issue that's being observed to the specific Root Cause. It may be that a range of issues have a single root cause, and the Test Analyst assessing this will help development deliver fixes more efficiently.
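As a small sketch of that 'many symptoms, one cause' idea, here's how the CSS example might be reasoned through in Python. The page names and stylesheet mapping are entirely made up for illustration.

# Symptoms reported against several pages, all showing misaligned buttons
observed = ["checkout", "basket", "account"]

# Which stylesheets does each affected page load? (hypothetical mapping)
stylesheets = {
    "checkout": {"base.css", "checkout.css"},
    "basket":   {"base.css", "basket.css"},
    "account":  {"base.css", "account.css"},
}

# A stylesheet common to every affected page is the prime suspect
shared = set.intersection(*(stylesheets[page] for page in observed))
print(shared)   # {'base.css'} - one likely Root Cause behind all three symptoms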

Deductive Analysis is most often used when testing is under way and bugs have been raised. Similarly, when the application is live and a customer reports a bug, applying Deductive Analysis is a natural approach for test and development staff to get to the Near Cause and Root Cause of the bug. We’ll cover that in the next post.

Mark



Elementary Analysis techniques

In thinking about the process of test analysis I recalled what Kaner et al. (2002) wrote in their book 'Lessons Learned in Software Testing'. This was that we could keep in mind the idea of three, what I'll call 'elementary', analysis techniques: namely Reference, Inference and Conference. The reason I consider these to be elementary techniques is that they are, in my experience, the most basic, rudimentary and uncomplicated techniques that a test analyst will apply, with or without specific training.

As I'll often say, given they are so rudimentary they are most likely already being used. The issue is that in order to recognise them, we need to realise what activities we're doing and then give those activities a name.

Lessons Learned in Software Testing: A Context-Driven Approach

Reference
The most accessible things a Test Analyst can look over are the existing application and/or the project documents that have been created.

The first analytical activity therefore would be to refer to the documents or artefacts provided at the start of a project. These are usually documents such as the Requirements and Technical Design documents, but there may be others. Remember these may be in a different form, such as task cards or items in a backlog. Either way there will be a definition of what the system to be tested should do, and it's a first point of reference.

In addition there may be other sources of information available such as UI Mock-Ups, perhaps early prototypes or even an existing application that's being enhanced. All of these can be referred to directly and in this way the Test Analyst will start to identify the explicitly stated aspects that will need testing.

When this analysis is carried out, it may be done as part of Verification testing of documents and other items mentioned above.

Inference
When the Test Analyst is in the process of identifying the most obvious and clearly stated testing needs, they will of course be thinking in broader terms.

As each new testing need is identified, the Test Analyst will refine their understanding of how the functionality of the application should behave, what characteristics it should exhibit and what business tasks it should support or enable. Testing needs that are identified will imply others: behaviours that should be present in the system and, of course, behaviours that should not.

Inference is looking at whatever requirement statements have been identified and asking 'what' each time, for example:
- what requirements does a statement that 'all data is validated' imply?
- what does 'data needs to be of a specific type and follow a defined syntax' actually mean in concrete terms?
- what requirement is assumed to be obvious if 'only complete orders should be processed'? (see the sketch at the end of this section)
- how exactly are 'users alerted' when their order is incomplete?

In order to correctly infer what these implicit requirements might be, the Test Analyst will need to:
- apply their business and technical domain knowledge, drawn from previous experience with similar applications, or
- review earlier versions of the application under test, if they exist
- apply their understanding of customer needs in the context of the problems the software is meant to resolve
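To make that concrete, here's a minimal sketch of turning 'only complete orders should be processed' into explicit inferred checks. The order fields and the completeness rule below are hypothetical, purely to illustrate the inference step.

REQUIRED_FIELDS = {"customer_id", "item", "quantity", "price"}

def is_processable(order):
    # Inferred rule: an order is only complete, and therefore processable,
    # when every required field is present and populated
    return all(order.get(field) not in (None, "") for field in REQUIRED_FIELDS)

complete = {"customer_id": 42, "item": "ABC", "quantity": 1, "price": 9.99}
assert is_processable(complete)

# One inferred test condition per missing field: an incomplete order must not
# be processed (and, further inferred, the user should be alerted - not shown)
for field in REQUIRED_FIELDS:
    assert not is_processable(dict(complete, **{field: None}))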

Conference
After referring to all the sources of information at hand and considering what additional requirements the identified testing needs imply, the Test Analyst now does what is often forgotten or left far too long.

They can go and speak to the rest of the implementation team about the issues around testing. This includes Product Managers, Project Managers, Developers, Business Owners, System Users, Support Teams, etc., asking them directly about the testing needs of the system. They can share with these stakeholders the testing needs that have been identified explicitly and inferred implicitly.

This form of analysis can also be considered a Verification activity where the Test Analyst essentially conducts a Peer Review or Walkthrough with the stakeholder.


Once we're aware of the elementary forms of analysis that are being applied by default, we can start to apply them with intention. By doing that we make our use of them, and the application of our 'personal methodology', more consistent and effective.

Bear in mind, these techniques apply even MORE in an Agile context, compared to a traditional testing context.

From here we can start to look at some more advanced techniques. Yet again, I'll say many Test Analysts will apply these without realising it. Others will realise, yet perhaps not have a name for them. So let's break the spell on two more techniques and get them in mind and in use.

Mark.



Monday, 19 November 2012

What is a Test Architect?

(Re-written Feb 2017)
This blog post was originally published about 5 years ago and is consistently one of the most popular posts on this site. The interesting thing was that it mainly linked out to an article by John Morrison over at Oracle, as I felt he summarised the role of Test Architect very well. Time moves on and my thoughts have diverged sufficiently that this article needs refreshing and more recent experiences need sharing.
So, 5 years on, is Test Architect still a thing?
In short, maybe. A quick search for Test Architect pulls up the original blog posts by myself and John, along with a bunch of jobs advertised on Indeed, CWJobs, Total Jobs and LinkedIn. 1000+ jobs - or so it seems.

Closer inspection reveals that the search is mainly pulling back tester roles and not Test Architect roles. Looking in Feb 2017 I see only 2 references to Test Architect in the job search. It would appear that, as in 2012, this remains a niche role/title. In fact I'd go as far as to say the Test Architect isn't a standard role at all.

Alan Page blogged back in 2008 that at Microsoft they avoided reference to it, instead seeing it as a senior test role, and that there was no single definition of what a Test Architect actually was. That makes perfect sense to me and is in fact a GOOD thing, because that was the whole point of branding myself as a Test Architect.

Background
In a previous life I worked with architects in electronics manufacturing. They had the challenge of understanding electronics design, manufacturing processes, testing and consumer/product expectations. The result was a role filled by a senior electronics professional with broad technical, process and business knowledge, alongside commercial awareness.

That to me is the essence of a Test Architect. It absolutely IS a senior role and it requires a depth of experience and breadth of knowledge applied in context of the business.

So, given there's no standard definition of what a Test Architect is, let's define some core characteristics and responsibilities.

What is a Test Architect?
A Test Architect is a senior testing professional whose primary function is to design solutions to the testing problems the business faces.

These problems are solved through the application of contextually relevant process and practice, the use of tools and technology, and the application of soft skills such as effective communication and mentoring of both team and client.


What does a Test Architect actually do?

One thing I'd lift out is that ideally they do NOT do Test Management. That is, the Test Architect is essentially a deep technical specialist focusing on the design, implementation, use and enablement of the testing function, and so of the overall development function as well.

The Test Manager remains responsible for the strategic direction and tactical use of the testing function, for the line management of the team, the mentoring and training, the hiring and firing, etc.

It may be that the roles are combined, but it must be understood that a Test Manager isn't automatically going to cut it as a Test Architect. Likewise, a Test Architect with no experience of team leadership will struggle to design and improve the process and practice applied to the team. That said, my view is that a Test Architect is likely senior to a Test Manager professionally, even if operationally they report to the Test Manager. (I propose, however, that they report to the Scrum Master or whoever heads up the overall development function.)

Responsibilities of a Test Architect may include:
  • Supporting the Test Manager in achieving their strategic goals for the Test Team, by providing technical support to the Manager and the team

  • Possessing broad awareness of testing approaches, practices and techniques, in order to help design and deliver the overall testing methodology used by the team

  • Monitoring the effectiveness of the testing function and bringing about improvements through insights gained via analysis at all stages of the SDLC/STLC

  • Identifying which tools and technologies can be implemented, aligning with those already used across the broader development function and in line with the skill-set of the team

  • Designing and developing the test automation framework, harnesses and code libraries so that the team can both use and enhance them across successive projects

  • Taking responsibility for test infrastructure, including environments and software, liaising with teams such as DevOps and Support in areas such as CI/CR and IT budgets

  • Providing technical know-how, documentation and training to test and other business functions

  • Staying up to speed on process, practice and technology developments to ensure they are brought in-house and enhance the solutions applied to the testing problems

In essence the Test Architect works to ensure that approaches, tools and techniques are built into a relevant methodology. They monitor, optimise, mentor, collaborate and continually improve the test team on behalf of both the Test Manager and the rest of the development function. To that end the role must be held by someone of good experience and seniority.

Mark.



Thursday, 15 November 2012

Making Connections and Critical Thinking


A key skill for any tester is the ability to 'make connections' between aspects of relevance when thinking about the testing problem they have to address. This idea of making connections is closely related to the skill of 'critical thinking'.

Making Connections
Making connections is about recognising how things such as a certain aspect of the system under test, a particular testing need that's been identified, or a risk that's been highlighted relate to other things of a similar type, or indeed of a different type. Like many skills employed by a tester, making connections often 'just happens'. But we need to recognise and understand the skill if we want to improve it and employ it meaningfully.

Examples of making connections between things of a similar type include:
  • relating several risks to each other and considering how one may affect another
  • associating testing needs and perhaps reducing the number of test cases while maintaining coverage
  • considering an aspect of the system and identifying a dependency on another aspect of it, maybe a UI needs the database in place or vice versa
In addition to things of a similar type we need to connect things of a different type; examples might include:
  • relating a risk to a testing need: do all risks highlight a testing need and, if not, can we identify one that will mitigate the risk?
  • assessing whether a certain aspect of the application under test introduces a risk which requires coverage
  • identifying where there's a gap between planned aspects of the application, such as specific functionality, and the stated testing needs that tests are being planned for
Making connections relies in part on the knowledge and experience of the tester, in order to know how one thing relates to another. We could reflect and ask '…how does this thing I'm considering affect [x]?' when trying to make connections. Developing the argument for, and possibly against, an aspect is the key to effective test analysis.


Critical Thinking
The skill of making connections is closely related to critical thinking, because critical thinking is about thinking past the initial details and information that are presented about the aspects, needs, risks, etc. and critically evaluating them. When thinking critically we don't just accept what's presented at face value and assume there's no further meaning. We are evaluating, analysing, synthesising and keeping our minds open to the possibility of new perspectives on the information presented to us.

We might choose to bear in mind the phrases '…what does that mean?' and '…why is it this way and can it be another way?', in the context of the testing problem we are trying to address. For any tester it's essential to develop reflective thinking skills and to improve their critical analysis.


Make sure that next time you're presented with a piece of information about a system, a test or another item, you stop and think.

You could always attend a free course too... https://www.coursera.org/course/thinkagain

Mark.


Monday, 12 November 2012

Learning and Teaching - by asking questions

One thing that's often said is there's no such thing as a stupid question. It's something that's close to my heart as throughout my career I've had to ask a lot of questions, some of which have made me feel a bit less enlightened than others around me. The thing is we really have no choice but to take a deep breath and just ask these questions, stupid or not.

Think about it: how else are we going to learn? If we try to avoid asking what may appear to be daft questions, then where will our information come from? The options are things like meetings or conversations with others, perhaps documentation that's been provided, or the application we may be testing. Now, what's the likelihood of these sources answering all our questions and providing us with complete knowledge? Highly unlikely, and in the main we know it; we expect to, and do, come to a point where we have questions to ask.

I'd suggest, however, that questions can be put into two rough groups. The first is simple gaps in knowledge, often about technical or business aspects that are beyond our experience. For example you might ask, "Could you tell me exactly what the difference is in testing needs when something moves to a Solaris container?" You could ask this, or you may already think it's a daft question.

The second group of questions are those which relate to things you're sure everyone else knows and understands. How many times have you heard the phrase "…but everybody knows that", while you're thinking, "…well I don't know it!"? Ever been in a meeting feeling confused, yet everyone else seems perfectly clear on something about the slicing of a cube and how it gives a view on data, or some such? You understand the words but the meaning is lost.

In other words, and to follow the popular pattern: stuff you know you don't know, and stuff you think you're supposed to know but know you don't. Both need you to ask questions, but do you always do it?


One thing I've often found is that if you're not clear, others probably aren't that clear either. But guess what, they're afraid to ask daft questions! Remember, there is so much technology, a lot of it customised, that you can never know everything. What's more, you can only be where you are right now in terms of your knowledge and experience, so don't beat yourself up over it.

When you're not clear, go right ahead and ask for something to be clarified. State that you're not sure how it affects testing. Just ask, "Just so I'm clear, can you run through some ways that affects testing?". If you're in a meeting and you think everyone else is rolling on with the conversation, try, "Just to say, I'm not completely up to speed on this topic. If someone can give me the 2 minute rundown now then great, or, so as not to slow the meeting, who can give me 5 minutes afterwards?". It's easy enough and there's no embarrassment.

While I'd encourage you to ask questions openly, I realise there are some caveats. There are situations when others, perhaps clients, will expect you 'to just know', and not doing so may cause your company or the wider project team embarrassment, or at least raise some eyebrows. In this case you still need to ask, just more discreetly. Make notes of some kind for any points, phrases, ideas or concepts that you're not totally clear on, and note who seems to be talking confidently about them. If they're on your side, wander over to them afterwards and say, "Hey, in the meeting before you mentioned (insert topic here). I'm not sure I'm as clear as I need to be about how this affects testing, could you give me a back-to-basics run down of it, just so I can clarify my understanding?". Most people will be flattered that you asked.


There will come a point on any project, or in any employment, when you really should be up to speed. Don't leave your questions so long that they become a problem when you hit that point. There's always a grace period at the start when it's OK not to know, but it doesn't last forever. You cannot hide and hope knowledge will just come your way. Go and seek it, use it to enhance what you do, then help others in the same way!

Mark.
