I'm the author of that article, and if anyone has any questions, feel free to ask me. I'm building up a wiki of well-discussed but rarely documented nuances of testing that I think are worth capturing. A lot of these little articles cover things I've discussed a dozen times over six years with people close to the "movement", but that still aren't even google-able.
The reason I think this is important is that the ideas about TDD people had from 2000-2007 still have a profound impact on testing tools and practices more broadly. For example, very few developers I meet know how to apply test doubles rigorously or what their purpose is. This might help with that.
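To make "rigorously" a bit more concrete: a double's purpose is to isolate the subject and specify its contract with a collaborator, not to paper over slow or inconvenient code. A minimal sketch, using RSpec doubles and a hypothetical Signup/Mailer pair of my own invention:

```
# A signup use case that delegates delivery to a collaborator.
class Signup
  def initialize(mailer)
    @mailer = mailer
  end

  def call(email)
    @mailer.deliver(to: email, template: :welcome)
  end
end

RSpec.describe Signup do
  it "tells the mailer to deliver a welcome message" do
    # The double stands in for the real mailer, so the test specifies
    # Signup's contract with its collaborator in isolation.
    mailer = double("Mailer")
    expect(mailer).to receive(:deliver).with(to: "a@example.com", template: :welcome)

    Signup.new(mailer).call("a@example.com")
  end
end
```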
If I take program design to be the Wikipedia definition: "...the process by which an agent creates a specification of a software artifact, intended to accomplish goals, using a set of primitive components and subject to constraints.[1]", then I can't see your article saying much about that. Instead, the article points to good qualities of code created using TDD methodologies. Those code qualities (coupling, etc.) may indeed result from TDD and may be desirable, but that seems distinct from the question of how to design, or of how TDD is design (one assumes the tests are the artifacts, but the point is that the details need filling in).
As far as I've seen, the "designing by writing tests" approach mostly succeeds in the web-apps field, where one has an implicit/assumed CRUD/MVC structure, writes tests for behavior, and fills in functions based on this assumed structure.
I haven't seen anything about how one proceeds with TDD on the more complex task of implementing an algorithm, a language, or something else with highly structured requirements. I'd be interested in how one would do that.
I don't think this wiki page is intended to show specific examples. But here's one I really like that isn't a typical MVC web app: https://www.youtube.com/watch?v=XHnuMjah6ps
The process he's using there falls in line with the higher-level experiences described on the wiki.
Great article. I'm working on a talk about Developer Flow and TDD, and on finding that elusive zen-like state where the TDD you're practicing really helps drive out the design and strongly guides what to code next, and next, and next. Specifically, that TDD can be used to find this nice developer rhythm.
Has anyone had success with this? What can help make the difference in getting that nice flow out of TDD?
I'm inspired by Corey Haines and a bunch of code experiments he's illustrated over the years. One technique that struck me was that a nil return from a method was an expected and desirable step during a TDD cycle, and that guard clauses can be very helpful in driving out TDD designs while maintaining flow.
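For what it's worth, here's roughly how I picture that step (the LineWrapper example and its names are my own, not Corey's):

```
class LineWrapper
  def wrap(text, width)
    # Early in the cycle, returning nil here was enough to pass the
    # first test ("wrapping no text returns nothing"): an expected,
    # desirable intermediate state, not a bug.
    return nil if text.nil?

    # A guard clause dispatches the trivial case, so the next failing
    # test only has to drive out the interesting behavior below.
    return text if text.length <= width

    "#{text[0...width]}\n#{wrap(text[width..], width)}"
  end
end
```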
I'm looking for similar patterns for TDD decision making, especially those that help the dev stay in a good rhythm.
Would love any ideas towards exploring this further.
Do you need help? I'm completely on board for that mission and personally feel that we as an industry do a really bad job at testing past the evangelizing phase. Teaching people how to actually test code is something I think we need to do better.
This resource is clearly written, and it distills a lot of what people gradually learn over years of practicing TDD.
There is a lot of solid reasoning (and there are examples) leading up to it, but this conclusion stood out:
> [London-school TDD] sits at the extreme end of the trade-off between coupling and design feedback: incredibly rich feedback about the design of the system, typically resulting in small, hyper-focused units with carefully-chosen naming and easy-to-use APIs. They come at the cost, however, of significantly decreased freedom to refactor the implementation aggressively, which is why Discovery Testing recommends developers default to deleting-and-redeveloping local sub-trees of dependencies when requirements change significantly. This can result in reliably comprehensible designs, but with an increased base cost to requirement changes.
I really agree with this. I spent a few years working at a consulting company that preaches London-school TDD. We did a lot of work in dynamic languages, and refactoring was particularly difficult because we had to make sure that all of our mocks (and other test code) lined up with the refactored APIs.
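As a concrete illustration of the failure mode (hypothetical PaymentGateway, RSpec for the doubles): rename a method in production code and a plain double happily keeps the old test green. Verifying doubles like RSpec's instance_double are one mitigation:

```
class PaymentGateway
  def charge!(amount) # renamed from "charge" during a refactor
  end
end

RSpec.describe "checkout" do
  it "stays green with a plain double, hiding the rename" do
    gateway = double("PaymentGateway")
    allow(gateway).to receive(:charge) # stubs a method that no longer exists
    gateway.charge(100)                # passes, though production is broken
  end

  it "surfaces the rename with a verifying double" do
    gateway = instance_double(PaymentGateway)
    allow(gateway).to receive(:charge) # raises: class does not implement #charge
  end
end
```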
I hate to take the discussion off on a tangent, but I wonder how static typing changes the experience of London-school TDD. Does it become easier and less frustrating to refactor your test code?
Naturally it does, because a good typechecker is like an automatic test suite that guides you through refactoring. It means you need fewer unit tests, which means less test code to maintain.
I am working on a Rails project and a Haskell project simultaneously; the Haskell project requires far fewer tests because the compiler can catch so many potential bugs. Then when you add ghci (the Haskell REPL), it becomes almost as flexible and exploratory as a dynamic language. Some Haskellers even like to say that TDD stands for Type-Driven Development. Combine that with REPL-driven development, add in property-based generative testing with QuickCheck, and you have a lot of very powerful tools to help you build the thing right.
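QuickCheck itself is a Haskell library, but the core idea travels well. Since this is a mixed Rails/Haskell thread, here's the idea hand-rolled in Ruby, purely as a sketch (real libraries add proper generators and shrinking on top):

```
# Hand-rolled, QuickCheck-style property check: assert an invariant
# over many random inputs instead of a few hand-picked examples.
def check_property(trials: 100)
  trials.times do
    list = Array.new(rand(0..20)) { rand(-1000..1000) }
    raise "property failed for #{list.inspect}" unless yield(list)
  end
  puts "ok: #{trials} random cases passed"
end

# Property: reversing twice is the identity.
check_property { |list| list.reverse.reverse == list }

# Property: sorting is idempotent.
check_property { |list| list.sort.sort == list.sort }
```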
That is an incorrect assumption if it's made universally. The answer is that it depends on what you're testing, why, and how. It also depends on what you mean by "refactor".
> Deciding whether and how to couple two pieces of code is a fine art in software development, and as such, it's a point of never-ending tension as systems are designed and redesigned.
To expand on the theory here:
Specifically, this is a tension between coupling and cohesion. Coupling means dependencies between modules; cohesion means dependencies within a module. High cohesion reflects a module's focus on a single responsibility, one that genuinely needs everything in the module to get done.
> > Woah, increased coupling is always a bad thing, right!?
Increased coupling is always a bad thing – all else being equal. But if increased coupling results in increased cohesion too, it may be worth it. Higher cohesion is better, and lower coupling is better.
> All use is coupling. All reuse is additional coupling. A system with minimal coupling is a system without any abstraction or invocation.
How do you know you don’t want such a system? Because it has very low cohesion. The duplicated parts all over your codebase will only be used by a few parts of their modules, rather than most of their modules.
Note that cohesion is measured relative to the size of the module, not as an absolute count of connections. If you pointlessly extract the full body of a method to another differently-named method, you have increased the number of intra-module connections. But you have decreased cohesion, because the module is now bigger, yet your new method is only called once within the whole module.
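A tiny (hypothetical) example to make the counting concrete:

```
class Report
  def summary
    build_summary # one new intra-module call...
  end

  private

  # ...but the module grew by an entire method that only `summary`
  # calls, so connections relative to size (cohesion) went down.
  def build_summary
    "#{@title}: #{@rows.size} rows"
  end
end
```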