Why You Can Have 100% Statement Coverage but Find No Bugs

Posted on November 10, 2014

I had a discussion with James Bach a few weeks ago about testing coverage. I'd like to share here what we discussed.

This is my initial view about testing coverage:

I find that people pay more attention to the quantification of testing coverage (numbers like 80% or 90%) than necessary. For example, when I talk about ET, some people react immediately by asking "Can ET increase testing coverage? How do I know the exact coverage percentage?" When people evaluate test design work, they ask "What is the test coverage with these test cases?" People pursue higher coverage numbers in their test automation, when they review test reports, when they evaluate testing effectiveness, and so on.

On the other hand, it seems those same people don't take test coverage very seriously. Just look at how people calculate their coverage: they may only care about some kind of code coverage, or they roughly calculate requirements coverage. According to Cem Kaner's article "SOFTWARE NEGLIGENCE AND TESTING COVERAGE", there are at least 101 kinds of coverage. Obviously most people don't consider all of them when they talk about test coverage. But does a higher coverage number necessarily mean better testing effectiveness? Certainly not. It is hard to explain your test coverage to people with absolute clarity, and it is absolutely true that 100% coverage can't be reached. So why pay so much attention to the coverage NUMBER? With limited time and limited testing resources, always test the most important parts based on risk analysis and adjust the testing strategy in a timely manner; isn't that enough? We can still get a coverage number. It may not be so satisfying, but it shouldn't be too bad either, because we have done our best within the given context. Do not test just for the sake of increasing the coverage rate! Test coverage quantification is not very meaningful, just as with the number of test cases.

And James Bach answered me with a similar idea: "Coverage assessment is important. Coverage quantification may not be meaningful." He then asked me to explain why a "higher test coverage rate doesn't necessarily mean better testing effectiveness". Here is my answer:

"100% statement coverage can't guarantee that you will find any bugs."

For example, you get 100% statement coverage, but you find no bugs. To test effectively, or to have good enough (but not perfect) testing, you need to test not only broadly enough but also deeply enough, so that important bugs can be discovered. The reasons why you test a lot but find few bugs are manifold.
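To make this concrete, here is a minimal sketch of my own (the function and its bug are hypothetical, not from our discussion): a single test executes every statement of a function, so a coverage tool reports 100%, yet an obvious bug survives because the input that triggers it was never tried.

```python
def apply_discount(price, rate):
    """Return price reduced by rate (e.g. 0.1 means 10% off)."""
    discounted = price * (1 - rate)
    return round(discounted, 2)

# This single check executes every statement of apply_discount,
# so a statement-coverage tool happily reports 100%:
assert apply_discount(100.0, 0.1) == 90.0

# Yet the function has a bug: it never validates the rate, so a
# rate above 1.0 silently yields a negative price.
assert apply_discount(100.0, 1.5) == -50.0  # passes, but the result is nonsense
```

Statement coverage here says nothing about which inputs were tried, only which lines ran.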

All testers know that "complete testing is impossible", but many testers still try to test as completely as they can. I just want to tell people not to focus on coverage too much, but to focus on how to test effectively and efficiently.

And James still wanted to know "WHY would you not necessarily find bugs if you had 100% statement coverage". This time I gave a rather long answer.

1) Human Factor (or test execution factor). Bugs don't just jump out by themselves; testers need to observe and point them out. To do this, testers need many testing skills.

2) Test Design Factor. Your test cases or scenarios are too simple, or not effective enough to help you find bugs. You have 100% statement coverage, but no test scenarios covering other models such as states, usage scenarios, or data.
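As an illustration of this factor (my own hypothetical sketch, not part of the original exchange): the tests below touch every statement of a small class, but only a particular sequence of state transitions exposes the bug, so a state model would find what statement coverage misses.

```python
class Counter:
    def __init__(self):
        self.value = 0
        self.history = []

    def increment(self):
        self.value += 1
        self.history.append(self.value)

    def reset(self):
        self.value = 0
        # Bug: history is not cleared, so stale entries survive a reset.

# These two tests together execute every statement of the class:
c1 = Counter()
c1.increment()
assert c1.value == 1

c2 = Counter()
c2.reset()
assert c2.value == 0

# But only the state sequence increment -> reset -> increment
# reveals the stale-history bug:
c3 = Counter()
c3.increment()
c3.reset()
c3.increment()
assert c3.history == [1, 1]  # a correct implementation would report [1]
```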

3) Testing Purpose Factor. Maybe you're doing acceptance testing, where the goal is not to find bugs but to demonstrate that the system works well.

4) Testing Tools Factor. Your coverage tools don't tell the truth about your real testing coverage, due to poor algorithms or other reasons. Relying on the tool's report, you think you're testing a lot, but actually you're not; and since you're satisfied with the reported coverage, you don't test more and you can't find bugs. Or you're using tools for automated testing, in which case whether you find bugs depends very much on your test scripts. If the part of each script that compares "what is" with "what ought to be" covers very few, or even none, of the points you should compare, you can't find bugs no matter how many test scripts you write.
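A hypothetical sketch of such a weak script (the names are mine, for illustration only): both scripts below drive the code under test, so a coverage tool is satisfied either way, but only the one that really compares "what is" with "what ought to be" can catch the bug.

```python
def sort_descending(items):
    return sorted(items)  # Bug: sorts ascending, not descending.

def weak_script():
    result = sort_descending([3, 1, 2])
    # The only comparison made: did we get the right number of items?
    assert len(result) == 3  # passes; the bug stays invisible

def stronger_script():
    result = sort_descending([3, 1, 2])
    # Comparing the actual result with the expected one:
    assert result == [3, 2, 1]  # fails, revealing the bug

weak_script()  # a green run, full coverage of sort_descending, zero bugs found
```

The difference between the two scripts is not how much code they execute; it is how much they check.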


James pointed out many "bugs" in my statements. (A thousand words are omitted here.) If you like, you can try to find them yourself. And here is James's expected answer:

The purpose of testing is primarily to find important problems before it's too late. Code is a huge part of what we are testing. We are speaking of software testing and software consists largely of encoded instructions. I understand that code is not all of it, but I would have liked to hear a specific list, such as:

    •  hardware components that comprise the product under development (such as in an embedded system);
    •  platform components (hardware or software) that lie outside our codebase;
    •  other products that our product may interoperate with.

Completely covering the code in our product does not cover the code in that other stuff. Furthermore, we don't even need to cover code in our product that can never be reached, such as unused portions of libraries that are embedded in our product.

A simple way to have answered my question is to cite four factors:

    • Technology outside the scope of my product: "100% statement coverage" does not imply coverage of code in other products that support or relate to my product.
    • Test data: "100% statement coverage" does not imply that the statements have been covered in every possible WAY (that is to say, with all possible data and states as input). And let's lump timing in with data, too. 
    • Decision/Loop coverage: "100% statement coverage" does not include decision predicates and loops. I suppose 100% data coverage will also cover this, but you won't achieve 100% data coverage, either, so it helps to think about this kind of coverage specifically.
    • Oracles: Coverage simply means that you have made an observation. Oracles mean that you have the ability to see a problem. Coverage without oracles is not testing. There are no perfect oracles. Therefore, "100% coverage" does not mean you have found even one single bug, even if you saw them all happen. There can be no such thing as a "100% oracle," although in some situations specific results might be proven correct. (In limited cases the word "prove" can be used, but only when you actually do have the ability to show that there can be no possible doubt about the correctness of a result.)
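James's second and third points can be shown with one tiny example of my own (hypothetical, not taken from his answer): a single input executes every statement, yet the false branch of the decision is never taken, and other input data would expose a questionable behaviour.

```python
def safe_ratio(a, b):
    result = 0.0
    if b != 0:
        result = a / b
    return result

# One test executes every statement (both assignments and the return),
# so statement coverage is 100%:
assert safe_ratio(6, 3) == 2.0

# But the decision's false branch (b == 0) was never taken, and with
# other data the function silently returns 0.0, masking a failure mode:
assert safe_ratio(6, 0) == 0.0  # "covered", yet is 0.0 the right answer?
```

A branch-coverage or data-coverage view would flag exactly the gap that the statement-coverage number hides.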
