I just posted a new paper on my website which reports the findings of a six-week study of time spent by the QA team.
At the start of the study we agreed a set of Key Tasks, such as 'Reading Test Documents' or 'Waiting on Others', and got everyone involved in the study to agree both the tasks and their definitions.
Then, for the next six weeks, six staff recorded their activity every 15 minutes. At the end of each week the stats were collated and charted to show where the department, team and individuals were spending their time.
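The weekly collation step can be sketched in a few lines. This is a hypothetical illustration, not the actual tooling used in the study: the entries, names and task labels are made up, and each record stands for one 15-minute interval from a Daily Tracking Sheet.

```python
from collections import Counter

# Hypothetical data: each entry is (staff_member, key_task) for one
# 15-minute interval recorded on a Daily Tracking Sheet.
entries = [
    ("Alice", "Test Execution"),
    ("Alice", "Reading Test Documents"),
    ("Alice", "Waiting on Others"),
    ("Bob", "Test Execution"),
    ("Bob", "Test Execution"),
]

def hours_by_task(entries):
    """Collate 15-minute interval records into hours spent per Key Task."""
    counts = Counter(task for _, task in entries)
    return {task: n * 0.25 for task, n in counts.items()}

print(hours_by_task(entries))
# {'Test Execution': 0.75, 'Reading Test Documents': 0.25, 'Waiting on Others': 0.25}
```

Grouping on the staff member instead of the task gives the individual view, and summing across everyone gives the department view.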
The results were certainly interesting and gave some valuable insights, such as:
- Team members who are in work for 7.5 hours a day don't spend that full amount of time testing.
- Test Managers are available for testing for under half of their working day.
No, really. I'd bet these two insights are obvious to anyone who reads this blog, but when the data was presented to the client there was slight disbelief. I had to show them the Daily Tracking Sheets to prove I wasn't making the figures up.
The study participants also kept an Issue Log to track non-bug issues, which started to give a broader insight into what else was impacting their testing.
Have a look at the Papers section of the website for the paper, and hit the Templates button for the 'Daily Tracking Sheet' and 'Issue Log' templates mentioned above.
Let me know your thoughts. There's plenty of scope to make this study more scientific: what would you do, or is this good enough for what we were looking to prove? Does it apply outside the study group, and would it make sense in your environment?
Mark Crowther, Head of QA Time Tracking