Unit test coverage tools and coverage statistics are all the rage on agile teams. I know they are at my place of employment. Don’t get me wrong: I believe it is important to measure your unit test coverage. (Here comes the but.) But are we getting the desired effect from measuring unit test coverage? By setting and enforcing team goals, such as 90%–100% unit test coverage for classes, are we getting the desired results?
If we achieve 100% coverage on a class, we have tests for everything that can possibly break in that class. But at what cost did we achieve that 100% coverage? How many of you have classes of any significant complexity at 100% coverage? If you do, think of the extremes you had to go to in order to get there.
Short test-driven cycles will easily get you to 75%–85% coverage on your class, with the majority of the things that can break covered by tests. At that point I would encourage you to make sure everything you think could break is covered by a test, without worrying about how much doing so improves your coverage percentage.
From this point on, pushing the coverage percentage above the 90% level is difficult at best. Often you find yourself writing silly little mocks for the sole purpose of exercising a catch block or some other rarely executed code path, all in the name of an extra percentage point or two of coverage. You then slave away for an hour or so, slowly raising your test coverage a percentage point here and a percentage point there.
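To make the pattern concrete, here is a minimal sketch of the kind of mock that exists only to force a rarely executed catch block and claim the final coverage point. The `ReportSaver` class and its filesystem collaborator are hypothetical, invented for illustration:

```python
from unittest.mock import Mock


class ReportSaver:
    """Hypothetical class: the except block is its last uncovered branch."""

    def save(self, report, filesystem):
        try:
            filesystem.write(report)
            return True
        except IOError:
            return False  # rarely executed -- the line the coverage tool nags about


def test_save_returns_false_when_write_fails():
    # A mock configured solely to force the exception path. It buys one
    # more percentage point of coverage but documents nothing a caller
    # of ReportSaver actually relies on.
    filesystem = Mock()
    filesystem.write.side_effect = IOError("disk full")
    assert ReportSaver().save("q3-report", filesystem) is False
```

Whether a test like this earns its keep depends on whether the catch block holds real behavior; often it does not, which is exactly the trap.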
You really must ask yourself, “Does this mock or this additional unit test really add value to what I am doing?” When you are at or around the 85%–90% coverage level, I think more often than not you will find yourself answering, “No!” If your new tests are not adding value, then you have been caught in the “100% Test Coverage Trap”. You’ve been sucked in by the allure of the 100% coverage statistic, and you are spending your valuable and precious time adding tests just to bump your percentage. Do you really have this sort of free time? I know that I do not. Do you think your project manager or technical lead would agree that this is a good use of your time? Better yet, do you think they intended this when they began measuring your unit test coverage? My guess is that the answer to these questions is no.
In stark contrast to writing tests just to raise the coverage percentage, I think you would be better served spending that time writing tests that more clearly convey the usage and intent of the code being tested. Oh, you forgot that one of the goals of unit tests is to serve as documentation for the methods of a class? See, you have lost focus on this aspect of tests because you are focusing on the coverage percentage. Spend the time you have writing more tests that clearly demonstrate the proper usage of your class, learn to be satisfied with 90% coverage, and avoid the 100% test coverage trap.
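Here is a sketch of the documentation style of test the paragraph describes. The `PriceCalculator` class is invented purely for illustration; the point is that the test name and body read like a short usage manual:

```python
class PriceCalculator:
    """Hypothetical class used only to illustrate intent-revealing tests."""

    def __init__(self, discount_rate):
        self.discount_rate = discount_rate

    def total(self, unit_price, quantity):
        subtotal = unit_price * quantity
        return subtotal * (1 - self.discount_rate)


# The name states the behavior; the body demonstrates the intended usage.
def test_discount_applies_to_the_full_subtotal():
    calculator = PriceCalculator(discount_rate=0.25)
    # 5 items at $20.00 is a $100.00 subtotal; a 25% discount leaves $75.00
    assert calculator.total(unit_price=20.00, quantity=5) == 75.00
```

A reader who has never seen the class learns how to construct and call it from the test alone; that documentation value is something the coverage percentage never measures.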


3 comments:
John,
Very well written. I agree 100% with you that 100% coverage comes with unnecessary costs like "silly mocking", etc. In a previous project when we were TFD maniacs, we wrote several hundred tests and ended up with a 1:3.5 ratio of production to test code (for every 1,000 lines of production code, we had 3,500 lines of test code). A small enhancement broke several tests for silly reasons and we had to task out three days to "fix broken UTs", most of which were like the ones you mentioned, written just to increase coverage. Apparently, that didn't sell well with the customer. ;-)
IMHO, test to the point where tests fail when the functionality is broken, not for coverage's sake. Somebody said, "Do not love the code but love what it does for you." Hence, I am more pro-AT than pro-UT. However, every class should have at least 3-4 UTs for every public contract, with 60-70% emphasis on unhappy paths.
My 2Cents.
You have a great point. This is the same basic problem of placing too much emphasis on any coding metric. A coverage tool can be a great guide for developing a full suite of tests, but it does nothing to indicate the quality of tests.
So what defines a good quality test suite? If you go with the notion that tests should serve as a sort of "documentation" for what your code does, then a coverage metric is really a bolder statement about the amount of unreachable code in your app than about that documentation. Full coverage guarantees that you have no unreachable code, but does it prove that your tests cover every scenario your application might encounter in its lifecycle?
I had a somewhat cynical friend describe TFD to me as a way of designing by "defining all the negative space around what you want to build." I'm not sure I agree 100% with that, but I'm somehow at a loss to tell him exactly which way it's wrong. So if he turns out to be right, can it really even be possible to have a full unit test suite in a finite space? Or do we have to at some point settle for "good enough"? Perhaps that is the real dilemma you are facing.
(Or maybe I've blown the issue way out of proportion...) :-)
[quote]
I've seen various proposals for rules to ensure you have tested every combination of everything. It's worth taking a look at these, but don't let them get to you. There is a point of diminishing returns with testing, and there is the danger that by trying to write too many tests, you become discouraged and end up not writing any. You should concentrate on where the risk is.
[/quote] - Martin Fowler, Refactoring (p 101).