Rick (along with many MS executives) doesn't seem to articulate clearly what Code Coverage really measures, or just how fundamentally flawed the metric is.

The notion that Code Coverage measures test effectiveness (the degree to which an activity is successful in achieving a specified goal) is not accurate; it simply measures which portions of your binary were and were not executed while measurements were being taken.

Code coverage doesn't tell you whether your code behaved correctly. Most importantly, just because I executed a portion of the code does not mean I have adequately tested it, which is why I get irritated when people equate code coverage with 'effective' or 'good' testing.

A simple example: I can choose any number of values to throw at an API that does a simple calculation, and verify that I have 100% code coverage for that API, but that does not mean there aren't bugs in the code. Just as easily, I could come up with another value for that same API that triggers a divide-by-zero or overflow error along exactly the same code path.
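To make the example concrete, here's a minimal sketch in Python (the function and values are hypothetical):

```python
def percent_of_total(part, total):
    # A single statement, a single code path: any call at all
    # executes 100% of this function.
    return (part / total) * 100

# This one test executes every line, so coverage reports 100%...
assert percent_of_total(25, 100) == 25.0

# ...yet the very same fully-covered path fails for an input
# the test never tried:
try:
    percent_of_total(25, 0)
except ZeroDivisionError:
    print("100% coverage, and still a divide-by-zero on the same path")
```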

Don't get me wrong, Code Coverage is a useful tool; its most useful aspects are analyzing results over time (is my coverage increasing or decreasing as new code and new tests are written?) and showing me where I have holes in my testing. But measuring the effectiveness of my testing? No.
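As a rough illustration of the "holes" use (this assumes the coverage.py package, installed with pip install coverage; the function under test is made up):

```python
import coverage

cov = coverage.Coverage()
cov.start()

def classify(n):
    if n < 0:
        return "negative"   # never reached by the call below: a hole
    return "non-negative"

classify(5)  # exercises only the non-negative branch
cov.stop()

# Prints a line-by-line report; show_missing=True lists the
# statements that were never executed while measuring.
cov.report(show_missing=True)
```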