Over the years I've gone from writing no tests at all, to being a die-hard TDD purist, and then out the other side to writing some things with tests first, others with tests after writing the code, and some without any tests at all.
In some situations I have a clear view of what I need to build and how it should work. TDD is great in that case - write a test for the expected behaviour, make it pass, refactor, rinse and repeat. The element I think a lot of people miss when doing this is the higher-level integration tests that ensure everything works together, because that's the hard bit, but it's also essential.
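To make that loop concrete, here's a minimal sketch in Python (the function and the discount example are made up for illustration, not taken from the article): test first, smallest implementation that passes, then refactor with the suite kept green.

    # test_discount.py - a hypothetical red/green/refactor example, not code
    # from the article. The tests are written first ("red"), then the smallest
    # implementation that makes them pass ("green"), then you refactor while
    # keeping the suite green.
    import unittest


    def apply_discount(price, rate):
        # Written second: the simplest thing that satisfies the tests below.
        return round(price * (1 - rate), 2)


    class ApplyDiscountTest(unittest.TestCase):
        # Written first, before apply_discount existed.
        def test_ten_percent_discount(self):
            self.assertEqual(apply_discount(100.0, 0.10), 90.0)

        def test_zero_discount_returns_original_price(self):
            self.assertEqual(apply_discount(100.0, 0.0), 100.0)


    if __name__ == "__main__":
        unittest.main()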
In other situations you're still feeling out the problem space and don't necessarily know exactly what the solution is. There's an argument that in those cases you should find a solution with some exploratory development, then throw it all out and do it with TDD. If I've got the time that's probably true - it'll result in better code simply through designing it a second time with the insight gained from the first pass - but often that just isn't viable. Deadlines loom, there's a bug that needs fixing elsewhere, and I've got two hours until the end of the week and a meal date with my wife.
Finally, there are the times when tests just aren't needed, or they don't offer a decent return on the effort required to make them work. I'm thinking of GUIs, integration testing of HTTP requests to other services, and intentionally non-deterministic code. Those cases certainly can be tested, but doing so often results in a much more abstract design than would otherwise be called for, and in brittle tests. Brittle tests mean that eventually you stop paying attention to the test suite failing, because it's probably just a GUI test again, and that eventually leads to nasty bugs making it into production.
One thing I'll say directly about the article is that I found his opinion that it's hard to write tests for handling bad data surprising. That's almost the easiest thing to test, especially if you're finding bugs due to bad data in the real world - you take the bad data that caused the bug, reduce it down to a test case, then make the test pass. That process has been a huge boon in developing a data ingestion system for an e-commerce platform that imports data from partners' websites, as it's simply a case of dumping the HTTP response to a file and then writing a test against it, rather than having to hit the partner's website constantly.
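For anyone curious what that looks like in practice, here's a rough sketch (parse_partner_feed, the fixture file and the missing-price scenario are all hypothetical stand-ins for the real ingestion code):

    # test_partner_ingest.py - sketch of the "dump the response to a file,
    # then test against it" approach. parse_partner_feed and the fixture
    # name are hypothetical, just standing in for the real ingestion code.
    import pathlib
    import unittest

    from ingest import parse_partner_feed

    FIXTURE_DIR = pathlib.Path(__file__).parent / "fixtures"


    class BadPartnerDataRegressionTest(unittest.TestCase):
        def test_listing_with_missing_price_is_skipped(self):
            # The fixture is the verbatim HTTP response body captured when
            # the bug first showed up, so the test never needs to hit the
            # partner's site again.
            html = (FIXTURE_DIR / "partner_missing_price.html").read_text()
            products = parse_partner_feed(html)
            # The original failure reduced to an assertion: bad rows get
            # dropped instead of crashing the import.
            self.assertTrue(all(p.price is not None for p in products))


    if __name__ == "__main__":
        unittest.main()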
> Over the years I've gone from writing no tests at all, to being a die-hard TDD purist, and then out the other side to writing some things with tests first, others with tests after writing the code, and some without any tests at all.
Hear, hear! Exactly the same here, for the same reasons as you and the OP mention.
And observed from a distance, it's always the same universal principle: purist behaviour (in the sense of an almost religious belief that something is a strict rule, sentences starting with "Always", and so on - you get the point) in programming or, more broadly, in life, is nearly always wrong, period. No matter what the rule is, you can pretty much always find yourself in a situation where the rule is not the best choice.
I deeply agree when it comes to programming, but there's plenty of room for absolutism in other areas of life. Why subordinate strong moral preferences to hazy cost-benefit analysis?
This is becoming very off-topic, but if absolutism doesn't apply in the highly ordered world of programming, to me it's even less likely to apply in messy real life.
> you take the bad data that caused the bug, reduce it down to a test case
That's not really TDD anymore though - post-hoc testing of bad data is always going to be orders of magnitude easier because you have the bad data and you know what broke.