Why Coverage Isn’t Everything: Rethinking Your Testing Approach
--
Do you take pride in high coverage scores? Do you compare coverage percentages with colleagues to see who is doing better?
This is probably wrong.
Unequivocally, undeniably, ecumenically wrong.
And probably time to rethink your approach.
Coverage reports are useful, but they aren’t everything. The goal of testing is not just to have high coverage scores, but to ensure that the code works as expected in real-world scenarios.
What does this mean exactly?
Those who become overly focused on coverage reports are probably sacrificing test quality to hit those numbers.
And sacrificing test quality for the sake of coverage is a recipe for disaster.
Coverage reports can give us insight into what is being exercised and what is not, but they don’t tell the whole story. Just because a line of code is being exercised doesn’t mean it is doing what it’s supposed to do.
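To make that concrete, here’s a minimal sketch (the `apply_discount` function and both tests are hypothetical, written for pytest): both tests execute every line of the function, so a coverage report shows 100%, but only the second one actually verifies the behavior.

```python
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price."""
    return price - price * (percent / 100)


def test_apply_discount_runs():
    # Executes the line, so coverage counts it -- but asserts nothing
    # about the result. A broken formula would still pass this test.
    apply_discount(100.0, 20.0)


def test_apply_discount_is_correct():
    # Exercises the same line AND checks the behavior we care about.
    assert apply_discount(100.0, 20.0) == 80.0
```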
How can we solve this?
We need to not only generate coverage reports but also analyze what they mean.
We need to analyze what is being covered and how well it is being tested. For instance, if a module has 70% coverage, that number alone doesn’t tell us whether the remaining 30% is unimportant or whether we should write more tests that specifically target the uncovered parts.
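A missing-lines report is a reasonable starting point for that analysis. Here’s a rough sketch using coverage.py’s Python API (you could get the same output from `pytest --cov=mypackage --cov-report=term-missing`); `mypackage` and its `main()` entry point are placeholders for your own code:

```python
import coverage

# Measure only our own package; the source name is a placeholder.
cov = coverage.Coverage(source=["mypackage"])
cov.start()

# Run whatever exercises the code -- a test suite, a script, etc.
import mypackage
mypackage.main()

cov.stop()
cov.save()

# show_missing=True lists the uncovered line numbers per file,
# which is the raw material for asking "what scenarios do these
# lines represent?" rather than "how do I hit these lines?"
cov.report(show_missing=True)
```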
Think of it like this…there are two schools of thought on how to approach this:
- The first approach is to write new tests that target the uncovered parts without overlapping the original 70%. That’s because if we end up testing code that has already been covered in another test, we’re wasting time and resources!
- The second approach is to write new tests that target scenarios the code is expected to handle but that haven’t been tackled yet. The uncovered code should give us clues about which scenarios are still untested.
The second approach is the better one. And yes, I wrote that in a rather leading fashion. But you get my point, yeah?
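To illustrate the difference, suppose the uncovered 30% of a (hypothetical) payment module is its error-handling path. The first approach just hits those lines; the second asks which real-world scenario they represent, say, a declined card, and tests that scenario end to end. Everything below (`charge`, `PaymentDeclined`, the card numbers) is made up for the example:

```python
import pytest


class PaymentDeclined(Exception):
    """Raised when the payment provider declines a charge."""


def charge(card_number: str, amount: float) -> str:
    # Hypothetical payment function; the declined-card branch is the
    # kind of code that often shows up as "uncovered" in reports.
    if card_number.startswith("0000"):
        raise PaymentDeclined("card declined by provider")
    return "confirmation-123"


def test_declined_card_raises_a_clear_error():
    # Scenario-driven: we test the real-world case (a declined card),
    # not just the uncovered line for its own sake.
    with pytest.raises(PaymentDeclined):
        charge("0000 1111 2222 3333", 25.00)


def test_successful_charge_returns_confirmation():
    assert charge("4242 4242 4242 4242", 25.00) == "confirmation-123"
```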
It’s important to write tests that are based on real-world scenarios rather than simply…