
Huh, I found this more interesting than I thought I would. I hadn't heard before that the "unit" in "unit test" just meant "can run independently". I once failed an interview partly because I'd only written "feature tests", not "unit tests", in the project I showed. But those tests didn't actually depend on each other, so... it looks like they really were unit tests!

Anyway, I'm still not totally sure about TDD itself - the "don't write any code without a red test" part. I get the idea, but it doesn't feel very productive when I try it. Maybe I'm just bad at it, but I also haven't seen any compelling arguments for it beyond "it makes the tests stronger" (against what? someone undoing my commits?). Even Uncle Bob's underlying argument seems to be that TDD is more "professional", which leads me to believe it's just a song-and-dance secret handshake that helps you get into a certain kind of company. OR, it's a technique to combat lazy devs, an attempt to make it impossible for them to write bad tests. And maybe it is actually good, but only for some kinds of projects... I wish we had a way to actually research this stuff rather than endlessly sharing opinions and anecdotes.



If I have the tooling all set up (e.g. Playwright, database fixtures, mitmproxy) and the integration test closely resembles the requirement, then I'm about as productive doing TDD as not doing TDD, except I get tests as a side effect.

If I do snapshot test driven development (e.g. actual REST API responses are written into the "expected" portion of the test by the test itself), then I'm sometimes a little more productive.

There's a definite benefit to fixing the requirement rather than letting it evaporate into the ether.
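To make the snapshot flow concrete, here's a minimal sketch in pytest terms. The `api_client` fixture and the UPDATE_SNAPSHOTS switch are made up for illustration, not any real library's API:

    # Minimal snapshot test sketch, assuming pytest. The `api_client`
    # fixture and the UPDATE_SNAPSHOTS env var are illustrative.
    import json
    import os
    from pathlib import Path

    SNAPSHOT_DIR = Path(__file__).parent / "snapshots"
    UPDATE = os.environ.get("UPDATE_SNAPSHOTS") == "1"

    def assert_matches_snapshot(name, actual):
        snapshot = SNAPSHOT_DIR / (name + ".json")
        if UPDATE or not snapshot.exists():
            # First run (or explicit update): the test itself writes
            # the "expected" portion from the real response.
            SNAPSHOT_DIR.mkdir(exist_ok=True)
            snapshot.write_text(json.dumps(actual, indent=2, sort_keys=True))
        expected = json.loads(snapshot.read_text())
        assert actual == expected

    def test_get_user(api_client):  # api_client: hypothetical HTTP fixture
        response = api_client.get("/users/42")
        assert_matches_snapshot("get_user_42", response.json())

The first run is green by construction; the value comes from every later run, which pins the API's behaviour exactly - the requirement can't quietly evaporate.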

Uncle Bob-style unit test driven development, on the other hand, is something more akin to a ritual from a cult. Unit test driven development on integration code (e.g. code that talks to APIs, databases, UIs) is singularly useless. It only really works well on algorithmic or logical code - parsers, pricing engines, etc. - where the requirement can be well represented.
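A sketch of the kind of case where it does work - a requirement that maps almost one-to-one onto assertions. `price_order` is a made-up example, not from the thread:

    # Requirement: "10% off orders of 3 or more items" - specific
    # enough that the test can be written first.
    def price_order(unit_price, quantity):
        total = unit_price * quantity
        if quantity >= 3:
            total *= 0.9  # bulk discount per the requirement
        return round(total, 2)

    def test_bulk_discount_starts_at_three_items():
        assert price_order(10.0, 2) == 20.0  # below threshold: no discount
        assert price_order(10.0, 3) == 27.0  # at threshold: 10% off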


BitD (Back in the Day), “unit tests” were simply independent tests that we wrote to test the system. The term applied to pretty much any test, including what we now call “test harnesses.” There weren’t really any “rules” defining what a “unit test” was.

The introduction of TDD (or even before it, as testing frameworks probably had a lot of influence) formalized what a “unit test” is.

In general, I prefer using test harnesses over suites of unit tests[0], but I still use both.

[0] https://littlegreenviper.com/miscellany/testing-harness-vs-u...


That's a new term for me, thanks for pointing it out.


> arguments for it other than it makes the tests stronger

It's supposed to lead to a better design. It's easy to write code that maybe works but that you can't actually test (lots of interdependencies, weird state, etc.), or that you only think you're testing correctly. Writing the test first forces you to write something that 1. you can test (by definition), 2. is decoupled enough that you can check mainly for the behaviour you're interested in, and 3. you won't bypass accidentally. It's not even about someone undoing your commits - some value in the call chain can change in a way that accidentally makes the feature not run at all.
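A tiny sketch of that design pressure, with made-up names: because the test has to construct every collaborator, the dependency ends up injected rather than reached for as a global inside the function.

    # FakeMailer and send_welcome_email are illustrative only.
    class FakeMailer:
        def __init__(self):
            self.sent = []

        def send(self, to, body):
            self.sent.append((to, body))

    def send_welcome_email(user_email, mailer):
        # No hidden SMTP global: callers (including the test) must
        # pass the dependency in, which keeps the function decoupled.
        mailer.send(user_email, "Welcome!")

    def test_send_welcome_email():
        mailer = FakeMailer()
        send_welcome_email("a@example.com", mailer)
        assert mailer.sent == [("a@example.com", "Welcome!")]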

I've seen it many times in practice, and I'll bet that any large project where the tests were written after the code has some tests that don't actually do anything: they were already passing before the thing they're supposedly testing was implemented.
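A contrived sketch of the failure mode, using Python's unittest.mock: this "test" passes whether or not the production code exists, because it only ever exercises the mock (`Order` and `apply_discount` are made-up names):

    from unittest.mock import MagicMock

    def test_discount_applied():
        order = MagicMock()                 # stands in for the real Order
        order.apply_discount(0.1)           # a MagicMock accepts any call
        assert order.apply_discount.called  # true regardless of what the
                                            # production code does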


> not totally sure about TDD itself - the "don't write any code without a red test" part

I'm not into TDD, but I'm absolutely into "never skip the red phase".

After fixing a bug or writing a test, I always revert the fix and run the test again. Same when testing manually (except for the absolutely trivial). You wouldn't believe how often the test still passes. It's the hygienic thing to do - it's so easy to fool yourself.

About half the time I realize my test (or my dev setup) was wrong. The other times I learn something important: either I didn't fully understand the original problem, or I didn't fully understand my bugfix.
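In pytest terms, the discipline looks something like this sketch (`normalize` is a made-up example function): stash the fix, run the test, and confirm it actually goes red before trusting it.

    # Run this test with the bugfix reverted (e.g. `git stash`) and
    # confirm it fails before trusting it.
    def normalize(path):
        # Fixed version; the pre-fix code returned "a//b" unchanged,
        # so the test below must fail against that revision.
        return "/".join(part for part in path.split("/") if part)

    def test_collapses_duplicate_slashes():
        assert normalize("a//b") == "a/b"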


"Never trust a test you didn't see fail"


> OR, it's a technique to combat lazy devs,

I think many of these practices are a result of programmers starting to code before they've understood the problem in detail and before thinking through what they want to accomplish. Programmers often feel an itch in their fingers to just start coding. TDD is an improvement for some because it forces them to think about edge cases, and about how to verify their results, before starting on the implementation. And as a bonus, they can do that thinking while still writing code - the tests.



