New Add-On For Jira: Tester Performance Evaluation

Posted in: Product news by: Simon

Tester Performance Evaluation Add-on

The team at Crowdsourced Testing is very proud to announce the release of Tester Performance Evaluation, a new add-on for Jira.

The idea for this add-on stems from a conversation I had with a partner who manages a very large team of testers. He asked:

“How do you evaluate the performance of individuals in a software testing team?”

And, like a lot of QA Managers, I cringed. There is no easy answer to this question.

Testers are unique, well-rounded creatures, which makes it difficult to pinpoint what makes a good tester. Curiosity, technical know-how, writing skills, and communication skills are just some of the many skills a tester needs. The testing teams I’ve seen were made up of very different individuals with complementary skills; no two of them were alike.

To my knowledge, there has never been an attempt to measure testers’ performance via some type of metric or software. So, we took a stab at it. It’s not perfect, but it’s a start.

If you try it, I would appreciate your feedback in our forum. As with our bug tracker, Damn Bugs, we will listen to and prioritize user feedback for future releases.

Let me tell you a little bit about Tester Performance Evaluation and how it works for Jira.

When you access the plugin from your Jira dashboard, you will see a list of the testers in your organization. Select a tester to view their individual statistics.

The first section, “Issues”, contains information about the number of issues reported per project and their severity. The idea is to get an overview, at a glance, of the number of issues a tester has reported across all projects. There is no evaluation or measurement done at this point; we simply list how many bugs were reported in each project and what the severity of those bugs was.

Image: Screenshot of Tester Performance Evaluation for Jira
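To make the data behind this section a bit more concrete, here is a minimal sketch, not the add-on’s actual code, of how per-project issue counts could be pulled for a single reporter through Jira’s standard REST search API. The base URL, the credentials, and the use of the built-in “priority” field are assumptions for illustration; stock Jira tracks priority, and severity is often a custom field, so adjust the field names to your own instance.

```python
# Illustrative sketch only -- not the Tester Performance Evaluation code.
# Counts one reporter's issues by (project, priority) using Jira's REST search API.
from collections import Counter
import requests

JIRA_URL = "https://your-company.atlassian.net"  # assumption: your Jira base URL
AUTH = ("you@example.com", "your-api-token")     # assumption: basic auth with an API token

def issues_by_project_and_priority(reporter):
    """Count a reporter's issues, grouped by (project key, priority name)."""
    counts = Counter()
    start = 0
    while True:
        resp = requests.get(
            f"{JIRA_URL}/rest/api/2/search",
            params={
                "jql": f'reporter = "{reporter}"',
                "fields": "project,priority",
                "startAt": start,
                "maxResults": 100,
            },
            auth=AUTH,
        )
        resp.raise_for_status()
        data = resp.json()
        for issue in data["issues"]:
            project = issue["fields"]["project"]["key"]
            priority = (issue["fields"].get("priority") or {}).get("name", "None")
            counts[(project, priority)] += 1
        start += len(data["issues"])
        if not data["issues"] or start >= data["total"]:
            break
    return counts
```

A call like issues_by_project_and_priority("jdoe") would return counts you could render as the per-project, per-severity overview described above.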

The second section, “Averages”, looks at the average number of bugs reported by a tester per day and per hour. Only active days are counted, meaning that weekends and other non-work days aren’t taken into account. Again, there is no competitive aspect to this calculation; the idea is just to give a tester, his team, and his manager a sense of what a typical day is like.

This section also shows the average time it takes the tester to respond to comments in bug reports. We feel this is an important aspect of teamwork, particularly between distributed teams. A high value should not necessarily be seen as bad, especially if your testing team collaborates with a development team in another location.
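As a rough illustration of how numbers like these could be computed, here is a small Python sketch. It assumes one plausible reading of “active day” (a weekday on which the tester reported at least one issue) and assumes the comment question/reply timestamps have already been extracted from Jira; the add-on’s actual definitions may well differ.

```python
# Illustrative sketch only: plausible "Averages" calculations, not the add-on's formulas.
from collections import defaultdict

def bugs_per_active_day(created_timestamps):
    """Average bugs per active day; created_timestamps is a list of datetime objects."""
    per_day = defaultdict(int)
    for ts in created_timestamps:
        if ts.weekday() < 5:          # Monday=0 .. Friday=4; weekends are ignored
            per_day[ts.date()] += 1
    return sum(per_day.values()) / len(per_day) if per_day else 0.0

def avg_response_time_hours(comment_pairs):
    """comment_pairs: list of (question_time, tester_reply_time) datetime pairs."""
    deltas = [reply - asked for asked, reply in comment_pairs]
    if not deltas:
        return 0.0
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 3600
```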

Finally, the third section, “Reports”, found under “Performance Analysis”, is where things get interesting.

Currently, there are 4 criteria that we look at:

Report quality and bug advocacy: the objective is to measure the ease of comprehension and ease of replication of the bugs reported by a tester. Ideally, you want your testers to report bugs that are very detailed, yet concise and clear. That way the developers will understand immediately and feel that it’s something they should resolve.

Bug relevance, ability to consider the context: the objective is to assess whether the bugs reported by the tester were relevant and on-target with the project’s context and the team’s expectations. This essentially evaluates judgment: are testers reporting the right issues?

Bug difficulty: the objective is to measure the tester’s ability to find bugs while accounting for the difficulty of finding bugs based on the state of the project. This particular criterion recognizes testers who consistently report valuable bugs throughout the project’s lifecycle, from beginning to end.

Bug severity: the objective is to measure the tester’s ability to find bugs while accounting for the difficulty of finding more complex bugs. This criterion is perhaps the most volatile and uncontrollable. Some projects have more critical bugs than others; that’s normal. But what you should look for in a balanced tester is someone who will catch both small visual issues and complex functional issues.
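For readers who prefer code to prose, here is a purely hypothetical sketch of how four such criterion scores might be rolled up into a single figure. The criterion names, the 0–100 scale, the weights, and the simple weighted average are all illustrative assumptions, not the add-on’s actual formulas.

```python
# Purely hypothetical sketch -- the add-on's real formulas are not public and will evolve.
# Combines four 0-100 criterion scores into one overall figure using configurable weights.
DEFAULT_WEIGHTS = {              # assumption: equal weighting by default
    "report_quality": 0.25,
    "bug_relevance": 0.25,
    "bug_difficulty": 0.25,
    "bug_severity": 0.25,
}

def overall_score(scores, weights=DEFAULT_WEIGHTS):
    """scores: dict mapping criterion name to a 0-100 value."""
    total_weight = sum(weights.values())
    return sum(scores[name] * w for name, w in weights.items()) / total_weight

# Example:
# overall_score({"report_quality": 80, "bug_relevance": 70,
#                "bug_difficulty": 55, "bug_severity": 60})   # -> 66.25
```

A configurable set of weights like DEFAULT_WEIGHTS is also the natural place where per-team tuning of the criteria could live.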

These criteria and the formulas applied to calculate them will evolve as we go. We also plan to allow customization of each criterion in a future update, as well as adding new ones.

Questioning validity

Some testers will question the validity of the formulas we use, and that’s fine. That’s why it’s important to understand what we’re doing here: we use Jira data to make our best educated guess as to how you perform as a tester. It’s not scientific and it’s not flawless. Over time, and over the course of several projects, the data found in our plugin will become more accurate and insightful.

Testing is a complex art that requires many skills that cannot be measured mathematically. Your superiors should know that. If they don’t, you might want to consider changing managers rather than blaming the plugin.

But more importantly, there’s a chance that this is the beginning of something great. It’s a useful tool that, when fully understood and used for what it is, will help QA Managers improve the productivity of their teams – and maybe lead to better software and happier testers.

If you try our new Jira add-on, thank you. After all, we wouldn’t build products if we didn’t want them to be used by as many people as possible.

Whether you like it or you don’t, please don’t hesitate to drop me a line on Twitter: @simonpapineau. I’m always keen to hear your thoughts.

You can also join the bloodbath in the software testing subreddit where a group of Redditors are actively questioning my intentions and challenging every aspect of this plugin.

ABOUT THE AUTHOR:

Simon

Simon is the founder of Crowdsourced Testing. After 10 years in interactive software development, he set his sights on building a world-class crowdsourcing platform to facilitate the software testing process for developers.