> Now, you change a little thing in your code base, and the only thing the testing suite tells you is that you will be busy the rest of the day rewriting false positive test cases.
If there is anything that makes me cry, it’s hearing “it’s done, now I need to fix the tests”
It's something we changed when we switched our configuration management. The old config management had very, very meticulous tests of everything. This resulted in great "code" coverage, but whenever you changed a default value, at least 6 tests would fail. Now we'd much rather test far more coarsely: if the config management can take 3 VMs and set up a RabbitMQ cluster that actually clusters and accepts messages, how wrong can it be?
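To make the contrast concrete, here's a minimal sketch of the two testing styles. Everything is invented for illustration (the `Config` class and `InMemoryBroker` are stand-ins, not real config-management or RabbitMQ APIs):

```python
class Config:
    DEFAULTS = {"port": 5672, "heartbeat": 60}

    def __init__(self, **overrides):
        self.values = {**self.DEFAULTS, **overrides}


class InMemoryBroker:
    """Stand-in for '3 VMs running a RabbitMQ cluster'."""
    def __init__(self, cfg):
        self.cfg = cfg
        self.queue = []

    def publish(self, msg):
        self.queue.append(msg)

    def consume(self):
        return self.queue.pop(0)


# Fine-grained style: one assertion per default value. Change any
# default and these fail, even though the system still works.
def test_defaults_fine_grained():
    cfg = Config()
    assert cfg.values["port"] == 5672
    assert cfg.values["heartbeat"] == 60


# Coarse style: stand the whole thing up and only assert the outcome
# users actually care about -- messages go in and come back out.
def test_cluster_accepts_messages():
    broker = InMemoryBroker(Config())
    broker.publish("hello")
    assert broker.consume() == "hello"
```

The coarse test survives a changed default; the fine-grained one doesn't.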
And this has also bled into my development and strengthened my support for bug-driven testing: for a lot of fairly simple business logic, write a few high-level e2e tests for the important behaviors, and then, when something breaks, add more tests for those parts.
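A tiny hypothetical example of that workflow (the parser and both tests are invented): start with one high-level test, and when a bug surfaces, pin the fix with a regression test so it can't come back.

```python
# Hypothetical parser that once crashed on empty input.
def parse_ids(text):
    # fix: skip blank tokens instead of crashing on int("")
    return [int(t) for t in text.split(",") if t.strip()]

def test_parse_ids_basic():          # the original high-level test
    assert parse_ids("1,2,3") == [1, 2, 3]

def test_parse_ids_empty_input():    # added after the bug surfaced
    assert parse_ids("") == []
```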
But note, this may be different for very fiddly parts of the code base: complex algorithms, math-heavy code and the like. That's where you'd rather reach for table-based testing and similar techniques. At a past gamedev job we had several issues with some complex cost-balancing math, so I eventually set up a test that let the game balancing team supply CSV files with expected results. That cleared up those issues within 2 days or so.
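A minimal sketch of what such a CSV-driven table test can look like. The cost formula and column names are assumptions standing in for the real balancing math:

```python
import csv
import io

# Hypothetical cost formula standing in for the game's balancing math.
def unit_cost(base, level, multiplier):
    return base * multiplier ** (level - 1)

# A CSV the balancing team might hand over: inputs plus expected output.
CSV_DATA = """base,level,multiplier,expected
100,1,1.5,100
100,2,1.5,150
100,3,1.5,225
"""

def run_table_tests(csv_text):
    """Return the rows whose computed cost disagrees with 'expected'."""
    failures = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        got = unit_cost(float(row["base"]), int(row["level"]),
                        float(row["multiplier"]))
        if abs(got - float(row["expected"])) > 1e-9:
            failures.append((row["base"], row["level"], got))
    return failures
```

The nice property is that the non-programmers own the table: adding a new balancing case is editing a CSV, not writing a test function.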
Me, right before some really annoying bug shows up whose surface area is basically half the codebase, across multiple levels of abstraction, in various combinations.
> whenever you changed a default value, at least 6 tests would fail now
Testing default values makes a lot of sense. Both non-set configuration values and non-supplied function parameters become part of your API. Your consumers will rely on those default values, and if you alter them, your consumers will see different behaviour.
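A tiny sketch of why a default is a contract. The `fetch` function and its `timeout` parameter are invented for illustration:

```python
# Hypothetical library function: the default timeout is part of the
# public contract even though no caller writes it down.
def fetch(url, timeout=30):
    return {"url": url, "timeout": timeout}

# A consumer that never passes timeout is still relying on 30.
result = fetch("https://example.com")
assert result["timeout"] == 30
# If the library silently changes the default to 10, this consumer's
# behavior changes without any edit to their code -- which is exactly
# why a test pinning the default can be worth having.
```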
Sometimes you have to make a complex feature or fix. You can first build a prototype or proof of concept that barely works; then you can see the gap that remains to make the change production-ready, and the implications of your change. Part of that is fixing regressions in the test suite caused by your changes.
> If there is anything that makes me cry, it’s hearing “it’s done, now I need to fix the tests”