Enough is Enough: How to Determine When to Stop Testing

Posted in: Quality assurance testing, by Simon


Can you ever say with 100% confidence that a piece of software is functional, reliable, and defect-free? Unless you happen to be testing ‘Hello world’ or something of a similar complexity, you can’t really make any guarantees. It’s widely accepted that complete testing is not something that is feasible in practice, but all projects reach a point where you need to step away and stop testing.

Sometimes a looming deadline or a dwindling budget forces you to pull the plug on testing. Other times, the project team simply has to make the call: are we done testing or not? Sadly, there is no magic formula that will tell you when to stop testing. However, you can make an educated, informed decision by weighing the following factors.


Factors that help determine when to stop testing

Test Coverage

Having 100% test coverage is ideal, but it isn't always realistic given real-life time and cost constraints. For websites and mobile apps in particular, accounting for 100% of devices, browsers, and other environmental factors is nearly impossible.

Let’s say, for example, that you’re developing a web application. Testing it on popular browsers such as Firefox, Internet Explorer, Chrome, and Safari for compatibility is a no-brainer. But what about lesser-known browsers like Opera?

Opera currently owns about 2% of the browser market share. Is that enough to warrant full-scale testing? In most cases, probably not, but it depends on your intended audience or customer base, their expectations, and your resources.
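One way to make that call repeatable is to write the decision down as a rule. The sketch below is a minimal illustration of a market-share cutoff with an audience override; the share figures and the 3% threshold are hypothetical, not recommendations.

```python
# Hypothetical market-share figures (percent); real numbers shift over time.
MARKET_SHARE = {
    "Chrome": 65.0,
    "Safari": 18.0,
    "Firefox": 6.0,
    "Internet Explorer": 4.0,
    "Opera": 2.0,
}

def browsers_to_test(shares, threshold=3.0, must_test=()):
    """Return browsers worth full-scale testing: anything at or above the
    market-share threshold, plus any browser your audience requires."""
    selected = {name for name, share in shares.items() if share >= threshold}
    selected.update(must_test)
    return sorted(selected)

# Opera falls below a 3% cutoff unless your customer base demands it.
print(browsers_to_test(MARKET_SHARE))
print(browsers_to_test(MARKET_SHARE, must_test=("Opera",)))
```

The point is not the specific numbers but that the threshold, once written down, can be debated and revised by the whole team instead of living in one tester's head.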

Defect Patterns

Website and software projects are like snowflakes: no two are alike. Defects tend to run in patterns or cluster together. If a problem area is identified in the code, it warrants more thorough testing than less troublesome areas. Furthermore, if testers are still running into issues at a steady or increasing rate, it's probably too soon to shut down testing.

Bug Rates

When software reaches a certain size or level of complexity, bugs are inevitable. Presented with freshly written code, testers and QA analysts flag bugs quickly and frequently; as testing progresses, new bugs tend to crop up less often. The trouble with bugs is that you can't explicitly test for them: they are often triggered by unusual or obscure environmental conditions or test cases. Guaranteeing that something is bug-free is not feasible, but new bugs should become relatively rare before you stop testing.
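That "relatively rare" point can be tracked with a simple heuristic: watch how many new bugs each test cycle turns up and wait for the rate to settle. The function below is an illustrative sketch, assuming you log bug counts per cycle; the thresholds are placeholders to tune for your project.

```python
def discovery_rate_settled(bugs_per_cycle, threshold=2, quiet_cycles=3):
    """True when the number of new bugs found per test cycle has stayed
    at or below `threshold` for the last `quiet_cycles` cycles."""
    if len(bugs_per_cycle) < quiet_cycles:
        return False  # not enough history to judge
    return all(n <= threshold for n in bugs_per_cycle[-quiet_cycles:])

# Early cycles find many bugs; later cycles find few.
history = [14, 9, 7, 4, 2, 1, 1]
print(discovery_rate_settled(history))      # rate has settled -> True
print(discovery_rate_settled([14, 9, 7]))   # still finding bugs quickly -> False
```

A settled discovery rate is evidence, not proof: a quiet stretch can also mean the tests have stopped probing new ground, which is why it should be read alongside coverage.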

Showstoppers

There are times when the software dictates the end of the testing phase, at least temporarily. Major structural defects and critical errors may be enough to send the whole project back to the drawing board, in which case further testing would be a waste of resources.

Putting it all together

Make a concerted effort to get as close to 100% test coverage as you can with your resources. When defects and bugs are found during the test process, make note of where they’re coming from and what is causing them. Then, adjust the subsequent tests accordingly. Finally, if you prioritize the testing effort to cover the most likely use cases, scenarios, and environmental conditions first, you stand a better chance of going live with a website or application that is largely defect and bug-free to most users.
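The factors above can be folded into a single go/no-go check. This is a minimal sketch of such exit criteria, assuming you track coverage as a fraction, bug counts per cycle, and a count of open showstoppers; every threshold here is illustrative, not an industry standard.

```python
def ready_to_stop(coverage, bugs_per_cycle, open_showstoppers,
                  min_coverage=0.9, max_recent_bugs=2):
    """Combine coverage, bug rate, and showstoppers into one call."""
    if open_showstoppers:
        return False                 # critical defects block release outright
    if coverage < min_coverage:
        return False                 # too much untested ground remains
    recent = bugs_per_cycle[-3:]     # look at the last three test cycles
    return all(n <= max_recent_bugs for n in recent)

print(ready_to_stop(0.93, [14, 9, 4, 2, 1, 1], open_showstoppers=0))  # True
print(ready_to_stop(0.93, [14, 9, 4, 2, 1, 1], open_showstoppers=1))  # False
```

Even a crude check like this makes the stopping decision explicit and auditable, rather than a gut feeling made under deadline pressure.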

ABOUT THE AUTHOR:

Simon

Simon is the founder of Crowdsourced Testing. After 10 years in interactive software development, he set his sights on building a world-class crowdsourcing platform to facilitate the software testing process for developers.