
You should be very skeptical of anyone who claims they have 100% test coverage.

Only under very rare circumstances is 100% test coverage even possible, let alone achieved. Typically when people say coverage they mean "code line coverage", as opposed to the more useful "code path coverage". Since it's combinatorially expensive to enumerate all possible code paths, you rarely see 100% code path coverage in a production system. You might see it when testing very narrow ADTs, for example booleans or floats. But you'll almost never see it for black boxes that take more than one simply defined input doing cheap work.
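
To make the combinatorics concrete, here's a minimal sketch (made-up function, Jest-style assertions): two independent branches already give four paths, and every additional branch doubles the count.

  function classify(a: boolean, b: boolean): string {
    let result = "";
    if (a) result += "A"; else result += "x";
    if (b) result += "B"; else result += "y";
    return result;
  }

  // These two tests achieve 100% line and branch coverage...
  test("both true", () => expect(classify(true, true)).toEqual("AB"));
  test("both false", () => expect(classify(false, false)).toEqual("xy"));
  // ...yet they exercise only 2 of the 4 paths: "Ay" and "xB" never run.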



I think the point isn't about 100% coverage - that's obviously a lie, because if even one line in a million-line project isn't covered, you don't have 100% coverage. I think claiming >50% code coverage is already suspicious. Unless you're writing life-critical code or have some amazing test-automation technology I've never heard of, I don't buy it.


Agree with both. Recently, the YouTube channel ThePrimeagen was talking about this and coincidentally put up a very silly but clarifying example. Luckily he also posted it to Twitter, so here it is, just for fun [1]:

  function foo(num: number): number {
    const a = [1];  // only one element, but the loop reads a[0..num-1]
    let sum = 0;
    for (let i = 0; i < num; ++i) {
      sum += a[i];  // undefined for i >= 1, so sum becomes NaN
    }
    return sum;
  }

  test("foo", () => {
    expect(foo(1)).toEqual(1);
  });

  100% test coverage
  100% still bugged af
[1]: https://nitter.net/ThePrimeagen/status/1639250735505235975
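
For what it's worth, a single extra case (hypothetical, not part of the tweet) would expose the bug while leaving line coverage at exactly 100%:

  test("foo beyond the array length", () => {
    // Fails: foo(2) is NaN, not 2, even though coverage was already 100%.
    expect(foo(2)).toEqual(2);
  });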


> 100% still bugged af

That isn't necessarily a problem. The primary purpose of your tests is to document the API. As long as someone is able to determine what the function is for, and how it is meant to be used, along with other intents and information the author needs to convey to users of the API, the goal is met.

However, I don't see how the given test actually documents anything, so it is not a good example in the slightest. 100% coverage doesn't indicate that you did a good job, but if there are code paths left untouched, you know you screwed up the documentation big time.
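
If the tests are meant to document, something like this (hypothetical cases, same Jest style as the tweet) would do the job better, since each test names an intended behavior:

  // A reader can learn foo's contract without opening its implementation.
  test("foo(0) sums nothing and returns 0", () => {
    expect(foo(0)).toEqual(0);
  });

  test("foo(n) sums the first n elements of the series", () => {
    expect(foo(1)).toEqual(1);
  });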


That's a contrived example that misses the point, and I think poisons the discussion.

Code coverage, and especially _line_ coverage, doesn't tell you anything about the quality of the tests. It's just a basic metric that tells you how much of the code is being, well, covered, by at least one test. Most projects don't even achieve 100% line coverage, and, like most of you here, I agree that reaching that is often not very productive.

But if, in addition to line coverage, you also keep track of branch coverage and use other types of tests (fuzz, contract, behavior, performance, etc.), then your chances of catching a bug during development are much higher. After all, catching bugs early, or ideally not committing them at all, is the entire purpose of testing.

All of this takes a lot of effort and discipline, of course, which is why most projects don't bother. But you can't argue that the ROI isn't there: SQLite is one of the most stable programs in existence, in large part due to its extremely rigorous testing practices. None of this means the practice is wrong, or not worth pursuing.
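
To illustrate the fuzz/property side, here's a sketch against the foo example above, using the fast-check library with Jest (the library choice and the property are my own assumptions, not the parent's). A generated input such as num = 2 falsifies the property immediately, with no change to the coverage number:

  import fc from "fast-check";

  test("foo never returns NaN for non-negative input", () => {
    fc.assert(
      fc.property(fc.integer({ min: 0, max: 1000 }), (num) => {
        // fast-check shrinks any failure to a minimal counterexample.
        expect(Number.isNaN(foo(num))).toBe(false);
      })
    );
  });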

Arguing that coverage is not a meaningful metric is a reflection of laziness IMO.


I've been asking a few people what range is good; a lot say 90% is great, and 70% is ideal for balancing maintenance cost.

The answers seem to vary a lot.



