TDD is nuts for code without a client or specification. The whole point of tests is to ensure that the code does what it's supposed to do. When you have neither client nor spec, how are you supposed to know what the code is supposed to do? There is, IME, a >90% chance that any such code will be ripped out and replaced as you develop a better understanding of the problem domain.
I've found it's pretty useful to go back and add tests as you accumulate users, though (or convince an exec that your project is Too Big To Fail in the corporate world). You're capturing your accumulated knowledge of the problem domain in executable form, so that when you try an alternate solution, you won't suddenly find out - the hard way - that it doesn't consider some case you fixed a couple years ago.
I've found the opposite, that TDD helps me most when the spec or my understanding of the spec is fuzzy, by forcing me to make decisions about behaviour up-front and clarify my thinking. Otherwise I can get bogged down trying to implement and specify a feature simultaneously, or spend a lot of time implementing a feature before realising I'm approaching it the wrong way.
Depending on how you work, that can be worthwhile, but I usually find I get more useful information quickly by making the decisions that make implementation easiest, getting something up on the screen that I can react to as a user (or put in front of a real user), and then seeing where that initial proof of concept falls short. In general, I've found that pushing off decisions until I have as much information as I can tends to result in better decisions.
Some (quite a lot, actually) pieces of code are essentially experimental - e.g. trying out an API/library to see what it's capable of, trying an approach to a particular kind of problem, or even seeing whether a particular problem is solvable with code at all.
For this kind of coding, TDD makes no sense whatsoever. The 'specs' are as fluid as the code and having confidence in the code isn't that important.
This is entirely different from building production-hardened systems with very clear specs. If you don't do TDD on those, you're an idiot.
I don't agree with "no sense whatsoever." Actually, TDD can be a very pleasant way to do this kind of exploratory programming, precisely because it's oriented around verifying expectations.
Yep, any time I have an assumption about how code should work, that's a good starting point for a test. Even if it's vague, like "should not throw when given inputs X, Y, Z that I expect will be encountered".
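E.g. something like this in googletest (parse_config and the inputs are made-up stand-ins for whatever I'm actually poking at):

    #include <gtest/gtest.h>
    #include <stdexcept>
    #include <string>

    // Hypothetical function under test -- stands in for the real code.
    std::string parse_config(const std::string& raw) {
      if (raw.empty()) throw std::invalid_argument("empty config");
      return raw;
    }

    // Encode the assumption directly: inputs I expect in practice should
    // never throw, even if I can't say much else about the output yet.
    TEST(ParseConfigTest, DoesNotThrowOnExpectedInputs) {
      for (const char* input : {"x=1", "y=2", "z=3"}) {
        EXPECT_NO_THROW(parse_config(input));
      }
    }

Build with gtest_main and you've got a repeatable REPL session, essentially.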
That's assuming it works correctly the first time (I haven't had that experience often, even for "trivial" code :( ). Even for "run once" functions, I still use a few tests to develop them and make sure my expectations are correct. With a good framework, setting up a handful of unit tests takes just about as much dev time as running the function in a REPL.
I'm not clear on what the scope of selenium is or how it works, but for a general web scraper, I'd identify some targets I want out of skyscanner. Here's a quick googletest butchery for "I want to make sure my function returns flights for a known good flight search."
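Something along these lines - all the names (Flight, search_flights) are made up, and the search itself is stubbed so the sketch compiles:

    #include <gtest/gtest.h>
    #include <string>
    #include <vector>

    // Hypothetical scraper interface -- every name here is a stand-in.
    struct Flight {
      std::string origin;
      std::string destination;
    };

    // Stubbed so the sketch compiles; the real version would drive the scraper.
    std::vector<Flight> search_flights(const std::string& origin,
                                       const std::string& destination) {
      return {{origin, destination}};
    }

    // "Known good" search: a busy route should return at least one flight,
    // and every result should actually match what was asked for.
    TEST(FlightSearchTest, ReturnsFlightsForKnownGoodSearch) {
      std::vector<Flight> results = search_flights("LHR", "JFK");
      ASSERT_FALSE(results.empty());
      for (const Flight& f : results) {
        EXPECT_EQ(f.origin, "LHR");
        EXPECT_EQ(f.destination, "JFK");
      }
    }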
In XP settings this is called "spiking" and is recognised as an entirely legitimate tactic.
The point of spiking is that you don't know enough to TDD. You are outside the sweet spot where TDD is tractable.
The key to spiking is that once you come up with a plausible approach, or have a conceptual breakthrough, you stop, backtrack, and then switch back to TDD.
Like everything else in mature software engineering, agile or classical, it requires sustained discipline.
Interesting. I've found myself drawn into XP-style delivery repeatedly, probably because I tend to work closely with clients who have rapidly shifting or unknown requirements.
The preference function (or oracle) can return a truth value without you necessarily having insight into its inner workings. People "know it when they see it" without necessarily being able to define it.
So you write the code; expose it to a preference function / oracle, iterate, and hill-climb towards a local optimum. The revealed preferences may give enough information to make a creative leap off the local hill onto a larger hill elsewhere - but they may not.
After a solution has been found, then you can write tests.
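As a sketch of that loop (with the oracle reduced to a hypothetical scoring callback, which admittedly elides the human "I know it when I see it" part):

    #include <functional>
    #include <random>

    // Hill-climb on whatever the oracle prefers, without any insight
    // into why it prefers it. Everything here is illustrative.
    double hill_climb(double start,
                      const std::function<double(double)>& oracle,
                      int iterations) {
      std::mt19937 rng{42};
      std::normal_distribution<double> step{0.0, 0.1};
      double best = start;
      double best_score = oracle(best);
      for (int i = 0; i < iterations; ++i) {
        double candidate = best + step(rng);  // small local move
        double score = oracle(candidate);     // "do you like this better?"
        if (score > best_score) {             // keep only what the oracle prefers
          best = candidate;
          best_score = score;
        }
      }
      return best;  // a local optimum of the revealed preferences
    }

    // e.g. hill_climb(0.0, [](double x) { return -(x - 3) * (x - 3); }, 1000)
    // creeps toward 3 without ever "knowing" that 3 was the goal.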
> The preference function (or oracle) can return a truth value without you necessarily having insight into its inner workings.
Is this preference function implemented on a computer, or is it just a person giving you seemingly random answers in response to whatever you tell them?
> People "know it when they see it" without necessarily being able to define it.
That path leads to hell.
> After a solution has been found, then you can write tests.
What exactly would you test? Tests make sense when you want to check that your implementation conforms to a certain set of logical rules, but where exactly is the logic in the process you outlined?
> > People "know it when they see it" without necessarily being able to define it.
> That path leads to hell.
That's why they pay us the big bucks.
If people could define their problems well enough without iterative design, waterfall development would be a success story and all software would be outsourced to the lowest bidder.
The requirement for an iterative process is what I mean by "oracle"; it's the thing that tells you how far off you are on this iteration, and how to change your heading for the next iteration.
That's a rather weird use of the term “oracle”, which normally means “a black box that solves a decision problem in a single step”. Normally, to decide whether you've achieved your goals, you don't use a black box - you use performance metrics that you've defined yourself (and thus can't claim not to understand).
Imagine you write some code that needs to comply with a law, but then the law changes. Or the client decides they want their notifications in a different format, look, or shape.
And you've spent 2x the effort that you would've had you not written the test in the first place.
My initial comment was meant to be a statement about the relative chance that you're solving the wrong problem vs. solving the problem wrong. Prototypes help you identify that you're solving the wrong problem, unit tests help you identify if you're solving the problem wrong. In the beginning of a project, you are much, much more likely to be solving the wrong problem than solving the problem wrong. Go ensure that the system works end-to-end and solves the user's needs before you make it bulletproof.
> how are you supposed to know what the code is supposed to do?
That's the beauty of TDD. It forces you to sit and work out, "What is this code supposed to do from the client perspective?"
It battles the all-too-prevalent developer inclination to just start running with implementation without even taking the time to understand what problem is being solved.
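A made-up example of what that looks like in practice - the test is written first, against a slugify that doesn't really exist yet:

    #include <gtest/gtest.h>
    #include <string>

    // Deliberately naive stub: the test below fails first (red), and only
    // then do I write the real thing. 'slugify' is a hypothetical example.
    std::string slugify(const std::string& title) {
      return title;
    }

    // Writing this first forces the client-perspective questions: what does
    // the caller pass in, and exactly what do they get back?
    TEST(SlugifyTest, ClientFacingBehaviour) {
      EXPECT_EQ(slugify("Hello, World!"), "hello-world");
      EXPECT_EQ(slugify("  spaced  out  "), "spaced-out");
    }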
I don't agree with this at all as the tests at least say what the code does in its current state. I see that as incredibly useful. It also forces you to think about what it's actually doing as you build it. I get the OP's forest-for-the-trees thing, but I still find incredible value in testing things - even my side projects.
> TDD is nuts for code without a client or specification.
So, write one / pretend to be one? I find that writing high-level overview docs and API interface example code before writing any real code gives me enough guidance on my personal projects to then write some tests and then code (in any order).
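E.g. for a hypothetical note-taking side project, the "API interface example code" might just be a header plus intended usage, written before any implementation exists:

    // notes.h -- interface sketched before a line of implementation.
    // All names here are invented for illustration.
    #pragma once
    #include <string>
    #include <vector>

    class NoteStore {
     public:
      // Persist a note; returns its id.
      int add(const std::string& text);
      // All notes containing 'term', newest first.
      std::vector<std::string> search(const std::string& term) const;
    };

    // Intended usage, written into the overview doc up front:
    //   NoteStore store;
    //   int id = store.add("buy milk");
    //   auto hits = store.search("milk");

That's usually enough of a "spec" to start writing tests against.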