Thursday, 7 April 2016

How Every Tester can do Performance Testing

Performance testing is often passed on to a 3rd party provider of testing services in its entirety. That is usually because the test team don’t feel they have the ability, experience or perhaps the tools to carry out the testing.

Yet, just like Security testing, we can break Performance testing down into a set of discrete test types under the overall label of Performance. In doing this we give the test team the opportunity to do a level of Performance testing that draws on their understanding of the system or application under test.
 
Let’s take the example of Performance testing a website, as it’s easy to get access to those and practice the techniques described.
 
Most Performance testing is either benchmark testing, because the site is new, or comparative testing, because some changes have been made and we want to ensure the site is as performant as before. However, that only covers performance from the user-facing perspective. To get a complete picture we need to do Performance testing of the infrastructure too. This testing would include both the underlying infrastructure and connected network devices, plus the site exposed to users and the actions they perform.
 
In summary then, we could break Performance testing down into the following types:
 
Comparative Performance
• Response Time
• Throughput
• Resource Utilisation

Full System Performance
• Load
• Stress
• Soak
 
For the purposes of this post, I’m going to ignore Full System Performance and suggest that in this scenario we need to get a 3rd party in to help us out. The comparative Performance testing of the website, however, is perfectly doable by the test team. Let’s see what and how.

---

Response Time Comparison
Response Time comparison testing is based on the user’s perception of the time it takes the service to respond to a request they make, such as loading a web page or responding to a search.
Measuring Response Time

Response time should be measured, for a single task, from the start of an action the user performs to the point where the results of that action are perceived to have completed. The measurement must be taken from the moment a state change is triggered by the start of an action, such as clicking a link to navigate from one page to another, submitting a search string or confirming a filter just configured on data already returned.
 
For services with a web front end, use the F12 developer tools in IE (for example) to monitor timings from request to completion.
 
1. Open IE, hit F12 and select ‘Network’, then click on the green > to record
 
 
2. Enter the target URL and capture the network information
3. Click on ‘Details’ and record the total time taken
 
 
 
Test Evidence
A timing in seconds should be taken and recorded as the result in the test case. Multiple time recordings are advisable to ensure there were no lulls or spikes in performance that skew the average result.
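
If you’d rather capture those repeat timings automatically than read each one off the F12 tools, a short script works well. Here’s a minimal sketch in Python using the third-party requests library (an assumption on my part; any HTTP client would do, and the URL and run count are placeholders). Note it measures the HTTP round trip only, not the time the browser takes to render the page, so treat it as a complement to the F12 method rather than a replacement:

```python
# Minimal sketch: repeat a request several times and average the timings.
# Assumes the third-party 'requests' library is installed (pip install requests).
import statistics
import requests

URL = "https://example.com/"  # hypothetical target, replace with the site under test
RUNS = 5                      # multiple recordings, as suggested above

timings = []
for _ in range(RUNS):
    response = requests.get(URL)
    response.raise_for_status()
    # 'elapsed' covers the time from sending the request to receiving the headers
    timings.append(response.elapsed.total_seconds())

print(f"runs: {timings}")
print(f"average response time: {statistics.mean(timings):.3f}s")
```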
 
---

Throughput Comparison
This measure is the time it takes to perform a number of concurrent transactions. This could be performing a database search across multiple tables or generating a series of reports.
Measuring Throughput

Measuring Throughput from the user’s perspective is very similar to measuring response time, but in this case Throughput is concerned with measuring the time taken to perform several tasks at once. As with Response time, the measurement should be taken from the start of an action to its perceived end. A suitable action for Throughput might include the generation of weekly/monthly/yearly reports where data is drawn from multiple tables or calculations are performed on the data before a set of reports are produced.
 
Monitor system responses in the same way as for Response Time comparison above, but also include checks of dates and timings on artefacts or data produced as part of the test. In this way the user-facing timings plus the system-level timings can be analysed and a full end-to-end timing derived.
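
As a rough illustration of timing several tasks at once, here’s a minimal sketch in Python that fires a batch of concurrent requests and times the batch as a whole. The URL and task count are hypothetical; in a real test the tasks would be whatever transactions the site supports, such as the report generation described above:

```python
# Minimal sketch: time how long a batch of concurrent requests takes end to end.
# Assumes the third-party 'requests' library is installed (pip install requests).
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "https://example.com/reports"  # hypothetical endpoint under test
CONCURRENT_TASKS = 10

def perform_task(_):
    response = requests.get(URL)
    response.raise_for_status()

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENT_TASKS) as pool:
    # Force all tasks to complete before taking the end time
    list(pool.map(perform_task, range(CONCURRENT_TASKS)))
duration = time.perf_counter() - start

print(f"{CONCURRENT_TASKS} concurrent tasks completed in {duration:.3f}s")
```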

Test Evidence
Careful recording of the time taken to complete the task is needed, as with throughput tests it may not always be obvious when a task has completed. For example, if outputting a series of files, check the created date and time of the first and last files to ensure the total duration is known. Record the results in the relevant test cases, ideally over several runs as suggested for Response time.
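
For that file-based example, the total duration can be derived from the timestamps of the first and last files produced. A minimal sketch, assuming the outputs land in a single folder (the path here is hypothetical):

```python
# Minimal sketch: derive a task's duration from the timestamps of its output files.
from pathlib import Path

OUTPUT_DIR = Path("C:/reports/output")  # hypothetical folder the test writes to

# st_mtime is the last-modified time; Windows also exposes creation time via st_ctime
times = sorted(f.stat().st_mtime for f in OUTPUT_DIR.iterdir() if f.is_file())
duration = times[-1] - times[0]
print(f"first to last file: {duration:.1f}s")
```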
 
---

Resource Utilisation
When the service is under a given workload, system resources will be used, e.g. processor, memory, disk and network I/O. It’s essential to assess the expected level of usage to ensure there’s no unacceptable degradation in performance.

Measuring Resource Utilisation
Unlike Response and Throughput comparisons, Resource Utilisation can only be measured with tools on the test system that capture the usage of system resources as tests take place. As testing will not generally need to prove the service’s ability to use resources directly, it’s expected this testing will be combined with other test types, such as Response and Throughput, to assess the use of resources when running agreed tests. Given this, it would ideally be done at the same time as the Response and Throughput testing.
 
One way to monitor resource usage is with the Performance Monitoring tools in the Windows OS. To be able to go back to the configuration of monitors we set up, it’s best to run them through the Microsoft Management Console (MMC). Here’s how:

1. Open the Start/Windows search field and enter MMC to open the Microsoft Management Console
2. In MMC, add the Performance Monitor snap-in via File > Add/Remove Snap-in...
 


3. Load the template .msc file that includes the suggested monitors by going to File > Open and selecting the file

To do this, first save a copy of the file from GitHub

4. The monitoring of system resources will start straight away.
5. To change the time scale that's being recorded, right-click on 'Performance Monitor', select 'Properties' and change the duration to slightly beyond the length of the test you're running.
 

Test Evidence
Where possible, extracted logs and screenshots should be kept and added to the test case as test evidence. Some analysis of the results will need to be done and, as with the other comparative test types, several runs are suggested.
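
If you also want the figures in a form that’s easy to attach to a test case, the same kind of counters can be sampled from a script while the Response or Throughput tests run. Here’s a minimal sketch using the third-party psutil library (an assumption on my part; it’s separate from the MMC setup above, and the durations are placeholders). Redirecting the printed lines to a file gives a simple log to keep as evidence:

```python
# Minimal sketch: sample CPU and memory usage at intervals while a test runs.
# Assumes the third-party 'psutil' library is installed (pip install psutil).
import time
import psutil

DURATION_SECONDS = 60   # set slightly beyond the length of the test, as above
INTERVAL_SECONDS = 5

end = time.time() + DURATION_SECONDS
while time.time() < end:
    cpu = psutil.cpu_percent(interval=INTERVAL_SECONDS)  # averaged over the interval
    mem = psutil.virtual_memory().percent
    print(f"cpu: {cpu:5.1f}%  memory: {mem:5.1f}%")
```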

---

So there we go, it’s easy to do simple performance checks that can then inform the Full System Performance testing, or stand on their own if that’s all you need.
 
Mark.
