Last year I went through my programming habits looking for things to improve. All my projects had coverage reports, and my coverage was 95-100%. Looking deeper, I found that developing that coverage had actually hurt my projects. I decided to turn off coverage reports, and a year later I have no reason to turn them back on. Here’s why:
#1 Coverage is distracting.
I littered my code with markers telling the coverage engine to skip hard-to-test lines like Python’s protection against import side effects:
```python
if __name__ == "__main__":  # pragma: no cover
    main()
```
In some projects I had a test_coverage.py test module just for the tests that tricked the argument handling code into running. Most of those tests barely did more than assert that core libraries worked.
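To make the problem concrete, here's a hypothetical example of the kind of low-value test I mean (the parser and flag are made up for illustration): it runs the argument-handling lines, but all it really asserts is that argparse works.

```python
import argparse

def build_parser():
    # Boilerplate argument handling -- the lines the test exists to "cover".
    parser = argparse.ArgumentParser()
    parser.add_argument("--verbose", action="store_true")
    return parser

def test_parser_coverage_filler():
    args = build_parser().parse_args(["--verbose"])
    # This assertion mostly checks that argparse itself works.
    assert args.verbose is True

test_parser_coverage_filler()
```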
I also went down rabbit trails trying to find ways to mock enough boilerplate, like module loaders, to get a few more lines to run. Those were often fiddly corners of the language, and the rabbit trails were surprisingly long.
#2 Coverage earns undeserved confidence.
While cleaning up old code written by somebody else, I wrote a test suite to protect me from regressions. It had 98% coverage. It didn’t protect me from anything. The code was full of stuff like this:
In main.py:

```python
import dbs

def helper_function():
    tables = dbs.get_tables()
    # ... other db-related stuff.
```

And in dbs.py:

```python
DBS = ['db1.local', 'db2.local']
TABLES = list()
for db in DBS:
    # Bunch of code that generates table names.
```
This is terrible code, but I was stuck with it. One of its problems is that importing dbs has side effects: ‘import dbs’ causes all the module-level code in dbs.py to execute. To write a simple test of helper_function I had to import from main.py, which imported the dbs module, which ran every line in it. A test of a five-line function took me from 0% coverage to over 50%.
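Here's a self-contained sketch of that import side effect. The dbs module is written to a temp directory at runtime so the example runs on its own; the module contents and names are hypothetical stand-ins for the real code.

```python
import os
import sys
import tempfile
import textwrap

# Create a throwaway dbs.py with module-level code, mimicking the legacy module.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "dbs.py"), "w") as f:
    f.write(textwrap.dedent("""\
        DBS = ['db1.local', 'db2.local']
        TABLES = []
        for db in DBS:           # every line here runs at import time,
            TABLES.append(db)    # so any test that imports dbs "covers" it
    """))
sys.path.insert(0, tmp)

import dbs  # executes the entire module body as a side effect

def helper_function():
    # A tiny function -- but testing it "covers" all of dbs.py too.
    return dbs.TABLES

assert helper_function() == ['db1.local', 'db2.local']
```

The assertion tests five lines of logic, yet the coverage report credits every line of dbs.py as exercised.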
When I hit 98% coverage I stopped writing tests, but I was still plagued by regressions during my refactors. The McCabe complexity of the code was over 12, and asserting the behavior buried in those lines would have needed two or three times the number of tests I’d written. Most tests ran the same lines over and over because of the import side effects, but each test worked the code in a different way.
I considered revising my test coverage habits:

- Excluding broader sections of code from coverage reports so I didn’t have to mock out things like argument handling.
- Reducing my threshold of coverage from 95% to 75%.
- Treating legacy code as a special case and just turning off coverage there.

But if I did all those things, the tests that were left were the tests I’d have written whether or not I was thinking about coverage.
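For reference, those kinds of adjustments live in coverage.py's configuration. A sketch of what the loosened setup might have looked like (the exclusion pattern and threshold here just mirror the changes described above):

```ini
# .coveragerc -- hypothetical loosened configuration
[report]
# Skip entry-point boilerplate instead of marking it with pragmas.
exclude_lines =
    pragma: no cover
    if __name__ == .__main__.:
# Lowered threshold before giving up on coverage entirely.
fail_under = 75
```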
Today, I don’t think about covering the code, I think about exercising it. I ask myself questions like these:
- Is there anything I check by hand after running the tests? Write a test for that.
- Will it work if it gets junk input? Use factories or mocks to create that input.
- If I pass it the flag to simulate, does it touch the database? Write a test with a mocked DB connector to make sure it doesn’t.
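As an illustration of that last question, here's a minimal sketch of a simulate-flag test using unittest.mock. The `run` function, the `connect` method, and the flag name are all assumptions for the example, not any particular project's API.

```python
from unittest import mock

def run(argv, connector):
    """Toy stand-in for a CLI entry point with a dry-run flag."""
    if "--simulate" in argv:
        return "simulated"
    connector.connect()  # only touch the database on a real run
    return "wrote to db"

def test_simulate_flag_skips_database():
    db = mock.Mock()
    result = run(["--simulate"], connector=db)
    # The assertion is about behavior, not about which lines ran.
    db.connect.assert_not_called()
    assert result == "simulated"

test_simulate_flag_skips_database()
```

Note that the test asserts what matters (the database was never touched), not merely that the lines executed.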
Your tests shouldn’t be there to run lines of code. They should be there to assert that the code does what it’s supposed to do.