I dealt with this recently. We wrote a system that had a hard-coded data size in it. Later, that data size had to change to a flexible value chosen on the fly, and I was the one who had to go through all the code and make all the changes so we could handle that. It took a long time: partly because we had no unit tests (a whole 'nother discussion), partly because I wasn't familiar with the area (all the areas) of the code that needed to change, and partly because we had said, "YAGNI" and not made our code configurable enough from the get-go.
I started to curse YAGNI in the middle of that chore, but then I paused and thought about the many months of productive use of this code that we had been enjoying before this point. And even though it took me significant time to make that change, the production code was still running along just fine during all that time, still bringing us value. I decided that I was glad we had said, "YAGNI" at the start.
It sounds like most of your pain came from the code not being DRY. That is, this data size constant was duplicated in many places, rather than defined in one central place.
Unless I'm misreading you, that's not an appropriate YAGNI case, as Fowler writes:
"Yagni only applies to capabilities built into the software to support a presumptive feature, it does not apply to effort to make the software easier to modify"
"Yagni only applies to capabilities built into the software to support a presumptive feature, it does not apply to effort to make the software easier to modify"
That's a very convenient distinction. It lets you No True Scotsman anyone who challenges your position, yet provides little if any practical guidance about the best thing to do in the real world.
That is an argument for refactoring only immediately prior to implementing a new feature in order to support development of that feature. In itself that is reasonable enough, but it becomes less effective as a strategy if the cost of just-in-time refactoring prior to implementing each new feature turns out to be significantly higher than the cost of setting up the same design at an earlier stage.
When to refactor/clean up code is an interesting topic. My rule is to only refactor old code when the bad design gets in my way. If we have some bad code that just keeps working, there is not much reason to clean it up.
New code I try hard to factor into tip top shape.
This is entirely separate from YAGNI in my dictionary.
That all sounds perfectly reasonable, but please answer me this: how do you decide what "tip top shape" is for your new code?
If YAGNI is an argument for not making any sort of advance judgement about future requirements until it's clearly necessary, then it is necessarily also an argument that as soon as any code meets its immediate requirements you should stop working on it and move on to the next sure requirement, without wasting any time on refactoring that might never prove useful for future development.
I suspect that many here who would say they agree with YAGNI do in fact keep editing their code beyond the point where it merely works, rather than leaving it alone however much it looks like spaghetti, in which case I would argue that the difference between our positions is merely a matter of degree, not a difference in the underlying principle.
Yeah, some/many people forget about the Ruthless Refactoring part of XP. Or they're just not good at it. Like how some decide to not write documentation and declare themselves "agile".
The successful XP teams I've been on probably spent 1/4 of their time refactoring. Once your code works, you clean it up, and refactor anything it touched. THIS IS THE DESIGN PHASE! Without it, you're just another pasta merchant. What truly blew my mind was that designing/architecting the code after you write it is so much easier and more effective.
> If YAGNI is an argument for not making any sort of advance judgement about future requirements until it's clearly necessary, then it is necessarily also an argument that as soon as any code meets its immediate requirements you should stop working on it
That is not the YAGNI I know. It applies to external requirements only. Keeping your code base well designed, readable and bug free is an entirely separate concern.
> it becomes less effective as a strategy if the cost of just-in-time refactoring prior to implementing each new feature turns out to be significantly higher than the cost of setting up the same design at an earlier stage.
Or rather, the cost of setting up the best design you could at an earlier stage, knowing what you knew at the time, and then of modifying that design to be the design you want now.
But yes, if that turned out to be cheaper than just-in-time refactoring then that would be a better way to proceed. (IME it never is cheaper though).
> But yes, if that turned out to be cheaper than just-in-time refactoring then that would be a better way to proceed. (IME it never is cheaper though).
This is always the danger of proof by anecdote or personal experience in a field as large and diverse as software development. I could just as well tell you that I have seen numerous projects get into trouble precisely because they moved ahead too incrementally and consequently went down too many blind alleys that were extremely expensive or even impossible to correct later.
It's true that I have not often seen this in something like the typical CRUD applications we talk about a lot on HN. However, if for example you're working on some embedded software with hard real time constraints, or a game engine that will run on a console and has to give acceptable frame rates with the known processing horsepower you have available, or a math library that needs to maintain a known degree of accuracy/error bounds, or a safety-critical system that requires a provably correct kernel, I suggest to you that trying to build the whole architecture up incrementally is not likely to have a happy outcome.
You missed the point, which I probably made too subtly, that it went from being a set size to a variable size. Yes, we did have some terrible non-DRY code, and it would have been a little easier to make the change if not for that. But going from the same size all the time to a different size every time was never going to be as easy as changing the value of a constant.
>It sounds like most of your pain came from the code not being DRY.
Exactly this. Yagni does not preclude following generally good design principles, which almost always saves future pain, unlike attempting to predict and design for the future.
> Partly because we had no unit tests (a whole 'nother discussion)
I think that is the primary point. If you have good test coverage at each level, it's actually really easy to make large sweeping changes to a codebase. If you don't have those tests, it's hell. This is a big part of why I think unit tests are OK, but not nearly as important as good E2E and perf tests, as painful as those can sometimes be to write and to wait on. Good high-level tests let you tear the heart out of the codebase, replace it, and still be confident it works.
I agree that having good test coverage across different levels/purposes makes sweeping changes far quicker, easier, and even feasible to consider doing.
Building and shipping the software is perhaps half the work - the other half is growing and maintaining the scaffolding to rapidly, automatically and reliably measure the properties you wish the software to have.
It seems no-one here is arguing that YAGNI should be applied to justify not building automated tests.
Interesting. I've found myself working almost entirely without unit tests (at least when I have an advanced type system) - making small incremental changes means that if an end-to-end test starts failing there's only a very small number of places that could have caused that failure. What kind of problems did you face? Why were they hard to diagnose?
I definitely did not have an advanced type system helping me out. This was a SystemVerilog testbench. SystemVerilog has a type system akin to Java's or C's. Actually it's probably worse than either of those. It's a real Frankenstein's monster of a language.
The general idea with SystemVerilog is you have a low-level model of the hardware made up of language primitives like wires and logic functions. Then you wrap that in a few layers of testbench code. Each layer ups the abstraction level until at your top level you can write tests in terms of high level transactions (in this case it was a flash controller, so you could do simple commands like write, read, erase). The layers all take care of converting those high-level transactions into (eventually at the bottom layer) wiggling the input wires of the hardware at just the right times. It goes the other way too, monitoring the output wires of the hardware and converting those wiggles into high-level response transactions (such as, "here's the read data you requested"). The net result is you should be able to write succinct tests at a high-level of abstraction that exercise a lot of the hardware (wiggles a lot of the wires a lot of different ways).
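To make that concrete, the layering looks very roughly like this in SystemVerilog (hypothetical names, heavily simplified, not our actual bench):

    // Pin-level view of the DUT: what the bottom layer actually wiggles.
    interface flash_if (input logic clk);
      logic        cmd_valid;
      logic [1:0]  cmd_op;
      logic [31:0] cmd_addr;
      logic        wr_valid;
      logic [7:0]  wr_data;
    endinterface

    // Top layer: the high-level transaction a test writes.
    class flash_txn;
      typedef enum logic [1:0] {WRITE, READ, ERASE} op_e;
      rand op_e       op;
      rand bit [31:0] addr;
      rand byte       data [];
    endclass

    // Bottom layer: turns a transaction into wire wiggles at the right times.
    // (A monitor class does the reverse for the DUT's output wires.)
    class flash_driver;
      virtual flash_if vif;

      function new(virtual flash_if vif);
        this.vif = vif;
      endfunction

      task drive(flash_txn t);
        @(posedge vif.clk);
        vif.cmd_valid <= 1'b1;
        vif.cmd_op    <= t.op;
        vif.cmd_addr  <= t.addr;
        @(posedge vif.clk);
        vif.cmd_valid <= 1'b0;
        foreach (t.data[i]) begin
          vif.wr_valid <= 1'b1;
          vif.wr_data  <= t.data[i];
          @(posedge vif.clk);
        end
        vif.wr_valid <= 1'b0;
      endtask
    endclass

    // A test then stays at the transaction level:
    //   flash_txn t = new();
    //   void'(t.randomize() with { op == WRITE; data.size() == 16; });
    //   drv.drive(t);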
There were actually two testbenches that shared some of this code and among them at least three different code paths from high-level to low-level and back again that gave me trouble. If I had unit tests isolating each of those layers it would have been a lot easier to find which layer was dropping a byte of data here or adding extra padding there.
EDIT: I just noticed the "small incremental changes" part of your question. That's the other thing about not having unit tests. I had to get the whole thing to compile and run before I could test any change, and that meant changing every layer to handle the variable sized data. Part way through I did stop to write unit tests for the trickiest (lowest-level) layer.
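Something like the following is what I mean by a unit test that isolates the lowest layer (again, hypothetical names, reusing the sketch above): drive one transaction into the driver against a dummy interface and check the wiggles directly, with no other layers involved.

    // Isolate the bottom layer: one transaction in, count the data beats out.
    module driver_unit_test;
      logic clk = 0;
      always #5 clk = ~clk;

      flash_if vif (clk);

      initial begin
        flash_driver drv;
        flash_txn    t;
        int          beats = 0;

        drv = new(vif);
        t   = new();
        void'(t.randomize() with { op == WRITE; data.size() == 16; });

        fork
          drv.drive(t);
          forever @(posedge clk) if (vif.wr_valid) beats++;
        join_any
        #1;  // let the final sample settle before checking

        assert (beats == 16)
          else $error("expected 16 data beats, got %0d", beats);
        $finish;
      end
    endmodule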
Yes! And in this case, you “cursed” yourself for not making the prediction that this change would be necessary. One discounts the 100 other wrong predictions that would have accompanied this right one.
The incredibly interesting thing about this example is that it sounds like the design actually had to change in two ways: 1) the value had to be configurable and 2) it had to be configurable on the fly.
Anticipating and designing for (1) is simple enough - the costs that the article refers to are quite low - but to think ahead and design for (2) is a completely different task that might radically complect the code.
You caught the subtlety that I should have emphasized more. Number 2 was the really difficult part. We had some hard-coded constants, as others guessed, which also made this really hard, but just fixing that problem wasn't enough.
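In miniature, the two steps looked something like this (made-up names, not our real code):

    // Before: one hard-coded size, baked into every layer.
    class txn_fixed;
      localparam int DATA_BYTES = 512;
      byte data [DATA_BYTES];     // every layer can assume exactly 512 bytes
    endclass

    // Change (1): make the size configurable. Still one value per run, so
    // most code only needs to read a parameter instead of a literal.
    class txn_cfg #(int SIZE = 512);
      byte data [SIZE];
    endclass

    // Change (2): let the size vary per transaction, on the fly. Now every
    // layer that packs, unpacks, pads, or checks the data has to carry the
    // length along with it instead of assuming it; that's the sweeping part.
    class txn_var;
      rand byte data [];                                 // dynamic array
      constraint c_len { data.size() inside {[1:4096]}; }
    endclass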