Something this article largely ignores is that often you want to build the support for the presumptive feature now because the act of implementing it affects architectural decisions. It came close to this by saying
> it does not apply to effort to make the software easier to modify
But there's a difference between making software easier to modify (where the same effort could be expended later to retrofit the software) versus making architectural decisions. Often, trying to implement something teaches you that your architecture isn't sufficient and you need to change it. If you can learn that while you're building the original architecture, then you've avoided a potentially huge amount of work later trying to retrofit it.
To use the example from the article, what if supporting piracy risks reveals that the fundamental design used for representing pricing decisions isn't sufficient to model the piracy risks, and the whole thing needs to be reimplemented? If you can learn this while implementing the pricing in the first place, then you haven't lost any time reimplementing. But if you defer the piracy risks by 4 months, and then discover that your pricing model needs to be re-done, now you need to throw away your previous work and start over, and now your 2-month feature is going to take 3 or 4 months instead.
As I get older, both as a person and as a dev, I have come to the same realization as you: that a lot of the time I end up building things that I don't need "right now", but which I have to at least plan for at the architectural level, lest it become a nightmare later on.
And then it hit me: THIS is one of the areas of software development where I don't think reading more articles, techniques, "mantras", etc. will help. Only more practice will help. It ends up boiling down to how much experience you have as a developer making those particular decisions.
I.e., it's a craft, not a science.
So as you grow as a developer you suddenly find that you start to have a certain "intuition" as to when you should actually plan for something you don't need yet, versus ignoring it (a real "YAGNI"), but you just can't explain it in terms of generalizations. You end up saying, "Well, it just feels like I should do this, because I have been burned in the past when I ignored this intuition."
And I don't think I'm alone in this train of thought.
This sounds like the journey from "realization of the cost of hacking" to its "cure", the grand solution.
I had the same epiphany in my 14th year of software development when I realized that programmers Must Be Given Solid Platforms of Well Architected Object Oriented Code or they will Make A Mess.
However, now in my 34th year, my view is more "Plans are useless, but planning is indispensable".
I have "grand" architectures, but my architectures and their implementations are designed with the assumption that they will be wrong in a week's time.
And my advice to anyone reading this is YAGNI. More practice helps. But practice the right thing.
However, I am still left with the feeling that you have reached this conclusion by means of more than three decades of practicing both the right thing and the wrong thing (we all make mistakes, etc.).
So basically, how did you realize that "plans are useless, but planning is indispensable", if not precisely by trying to plan and then observing its uselessness?
I could almost bet that you now assume that the architecture will be wrong in a week's time and yet you always find a way to make it resilient to this and make it work; otherwise I would have to wonder how you ever accomplished anything if indeed 100% of your architectures were wrong within a week.
I.e., this is your experience in your craft. Flowing like water, Bruce Lee would say.
Now, the point I did not get, though, is: what exactly is the difference between it being a craft and it being war?
If it's a craft I kind of assume, like you, that I'll never be able to design the perfect architecture, but only the best architecture that I could have produced at the time. And just strive to get better every time.
If it's war, then it basically means that at any time anything can explode, but then I would be filled with despair all the time, I guess. Maybe not. Is this what you refer to? If it's war, do you have to design assuming it will break very shortly and then work around that? Or "make do with what I have", as they say?
Again, very interesting points and I appreciate the discussion since this has been haunting me almost my entire career.
Edit: By the way, I hope you don't mind if I quote you. Those are some nifty points right there.
I have done my fair share of over-engineering. =) It's not over-engineering if you need it, but it turns out that, most of the time, you ain't gonna.
A team that just hacks and does YAGNI by accident, or a team that does YAGNI but doesn't plan, still often has a choice of how to build the feature right now. With a little bit of planning, we can see past the first few meters and choose the first step to be in the direction of the bigger plan. Without planning, the first step might be in the wrong direction. The hard part for my teams is to plan, and then still only implement the first step. They're getting better =)
It's war because if you're doing something worthwhile, there's someone else trying to get your customers. I'm not suggesting your stuff should break. The opposite. Combine YAGNI with TDD and push working, tested features to production ASAP.
> With a little bit of planning, we can see past the first few meters and choose the first step to be in the direction of the bigger plan.
I had a suspicion that despite the black-and-white way you have expressed your opinion elsewhere, particularly in your exchange with eridius, there probably was actually some common ground here. I don't believe it's possible to avoid the failure modes eridius describes without thinking ahead to some extent; this passage suggests you actually agree.
I can't speak to your experience with your team, but I can say that I've spent a lot of time fixing or working around other people's poorly-considered design decisions. In many cases I understand there wasn't time to do it better; the product had to be shipped. But in some cases I think a little reflection would have shown a better way which wouldn't have taken any longer to implement.
I feel exactly the same way. It turns out everything practical I do with software depends more on the "spider sense" part of it rather than some well-defined knowledge (which is easy to look up when needed, so maybe that has something to do with it). The spider sense is not something you can teach or memorize, and it takes a while to develop. Programming is exactly like a craft in that way.
But there is plenty of better-understood theory and experiment out there, which is, if anything, the science part. From the theory and experiment I've dabbled in, it's very much a different thing.
And it has a lot to do with the existence of radically different schools of programming. There's a joke: an engineer is showing a laser pointer to a scientist. The scientist asks, "Why didn't you get an ultraviolet laser, when those output far higher energies?" and the engineer says, "It's for shining a dot on the wall."
I think what I was trying to say is not really that developers are divorced from science, but that the day-to-day tasks we do end up being far from whatever paper or thesis taught us how to do them. The example that comes to mind is REST. There is a lot more in that PhD thesis than what is actually used and implemented outside of the lab.
That's what I meant by craft. In the end I don't see anyone implementing "all the science" regarding REST, only those parts that are needed, and even then... as best as some engineers understood it (myself included, of course). That's also the reason why we end up calling "RESTful" a lot of stuff that Roy Fielding clearly states is not.
At least it's good knowing that I'm not alone and other people feel the same. So in the end I know that I can do my best... but not more.
There's an interesting thing that happens when you teach software development: many times you're forced to use heuristics in order to get something of value in the student's head. The 90 or 95% solution that somebody understands and uses is much better than the 99% solution that nobody pays attention to. KISS.
I think everybody needs to build 2, 3, 4, 5 or more systems that they over-design in order to feel the pain and learn that YAGNI makes a lot of sense. Yep, for any given project there might be a couple of YAGNI items that make sense, but good luck trying to get a team of seven to agree on them. Most of the time, instead of agreeing, everybody gets their 2 or 3 added (or they add them on their own without asking anybody) and you're back to astronaut architecture land.
A similar thing happens in C++. I firmly believe that a coder needs a bunch of projects where they shoot themselves in the ass with C++ before they finally realize that every little bit of added genericity and redirection is a ticking time bomb of maintenance work later on. Eventually you realize that brutally maximizing simplicity is as important as actually solving the problem. Probably more so.
Interestingly enough, I see more of this in high-level OO languages than I do in low-level or FP solutions. I think that's because "manage the complexity" is a key part of those worlds, whereas "click to make a class" is a key part of the other. (Just speculating.)
I believe what you're describing is the ability to predict the future. Nobody can do it perfectly, but experience can increase your odds of being correct.
1. My team will not have to throw away our previous work. It works for storm pricing. It is deployed. Customers are using it and we are making money.
Your company has yet to deploy a storm pricing model because it got caught in the complexity of the piracy model.
2. When it comes time to write the piracy pricing system, my team has already gained a huge amount of relevant experience from writing the storm pricing system. In my experience a team that already has developed a simpler system will be better able to develop the more complex system.
3. If piracy pricing is a significant risk then it should be prioritized higher. If it is late because you thought it would take two months and it took you four, then you failed to assess risk. A really good way to assess risk is to implement something similar (may I suggest storm pricing).
However, in my experience, even if we know we want to ship storm and then piracy immediately after, I will still ask my team to develop storm first to completion. And I will stand any one of my teams up against a team that is going to "do it all in one go", and we'll get them both done before the other team.
4. Basically when you make an argument that is "Suppose you have this situation...", then it is equivalent to saying "This thing happened in the past, and if we had known then what we know now...."
They are equivalent because no matter how you frame your "thought experiment", it is guaranteed never to happen in real life. The whole point of YAGNI is that you cannot know the future, and no one ever has. The only way to know for sure that "this is how it's going to turn out" is to actually be in the future.
So, yes, if you had a time machine, you could argue that big upfront design would work.
Of course, if you were to start your argument with "Suppose I had a time machine and I could go back in time and tell my team that the piracy model is a perfect superset of the storm model, but is twice as complex because of x, y and z, but there are no other unknowns, and customers love it, and the Navy doesn't kill the pirates, and so therefore we should do it all in one go" then it would be much more obvious that your argument was flawed.
You're still making some pretty strange assumptions here. An imperfect prediction of the future does not render the prediction meaningless, and making architectural decisions to support expectations of the future does not necessarily mean a significant increase in complexity; it often merely means some upfront "critical thinking" time. In fact, this upfront time may very well speed up the rest of the implementation, even if the predicted future requirements never come to pass, because it produces a more well-thought-out and well-understood architecture that may be cleaner and more powerful without making the actual implementation any harder.
But again, there is a large continuum of possible approaches here, and insisting on reducing it all down to either "do everything" or "YAGNI" is really kind of bewildering.
"May very well speed up the rest of the implementation".
Experience tells us that the rare cases where it does are vastly outweighed by the cases where they do not.
"But again, there is a large continuum of possible approaches here, and insisting on reducing it all down to either "do everything" or "YAGNI" is really kind of bewildering."
This is the "I'm being more reasonable" logical fallacy.
There is a large continuum of possible approaches to the standard prisoner's dilemma, but it turns out that there is an optimal solution. In my experience, YAGNI is an optimal solution to software engineering. Sure, there are plenty of times where it doesn't work out. But in any given situation, without a time machine, YAGNI is the sanest strategy.
If this is bewildering to you then you don't have enough data.
> In my experience, YAGNI is an optimal solution to software engineering.
Of course it isn't.
For example, it is suboptimal any time you have two features where the total cost of implementing both together is less than the total cost of implementing one and then the other separately, and it turns out that you do need both in the end. You might not have known it up front, but your outcome was still worse in the end.
Moreover, the greater the overhead of implementing the two features separately, the lower the probability of eventually needing both that is required for doing both together to be the strategy with the better expected return.
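To make that trade-off explicit, here is a rough sketch of the expected-cost comparison. The symbols and the simple cost model are my own invention, and it deliberately ignores things like cost of delay and cost of carry:

```python
# Rough sketch of the break-even condition, with invented symbols:
#   c_together - cost of building features A and B in one go
#   c_a        - cost of building A alone now
#   c_b_later  - cost of retrofitting B later
#   p          - probability that B is eventually needed

def build_together_has_better_expected_cost(c_together, c_a, c_b_later, p):
    expected_cost_yagni = c_a + p * c_b_later
    return c_together < expected_cost_yagni

# Break-even probability: p* = (c_together - c_a) / c_b_later.
# The larger the retrofit overhead (c_b_later relative to c_together - c_a),
# the smaller p has to be before building both together wins on expected cost.
```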
> If this is bewildering to you then you don't have enough data.
Or just different data that leads to a different conclusion. Some of the YAGNI sceptics here have also been doing this stuff a long time, but apparently with quite different experiences to you.
> Or just different data that leads to a different conclusion. Some of the YAGNI sceptics here have also been doing this stuff a long time, but apparently with quite different experiences to you.
Purely a single data point, but I've been doing this on and off for 30 years, mostly enterprise, and agree that YAGNI is axiomatic to _efficient_ software delivery (most isn't built efficiently of course). And the bigger the project, the more pronounced this becomes.
This leads to questions about what architecture must look like, since it needs to be extremely malleable to support YAGNI. RESTful and microservices based architectures seem to be optimal approaches based on this assumption.
FWIW I've seen traditional enterprise solutions architecture approaches fail extremely hard against this worldview, to the point that what looks like best practice in one world (e.g. stateful, transactional, end-to-end design) looks like worst practice in the other. (I don't think this is resolvable, and notably it causes huge friction for technical folk from 'the new world' going to work on giant programs populated by veterans who have spent long careers in a BDUF paradigm.)
> This leads to questions about what architecture must look like, since it needs to be extremely malleable to support YAGNI.
Precisely. I think this is where some of us might be talking at cross-purposes. Given that implementing robust architecture typically requires work, even if the expectation is that it will be efficient in the long term, I don't see how one can reasonably argue for a flexible architecture to support future developments without considering what sorts of development are most likely to be necessary. Whether you choose to assume general programming principles or something more domain specific is just a matter of degree -- even a simple modular design is more work than spaghetti code in a trivial case, but I imagine most of us would agree that trying to keep things modular is highly likely to pay off for any project of non-trivial scale.
What about the next 37 cases, wherein you didn't need the second feature?
Yagni says that more often than not, you won't need it. Every single case may not prove out according to that dictum, but that doesn't mean that the overall approach is suboptimal.
YAGNI refers to _average_ ROI (as well as time to market, reduced complexity/bugs, 2/3 of features never being needed).
If you want to argue that the ROI might not hold true in your particular case, you not only need a time machine but, even worse, if you view your work in the context of an ongoing program, you have a halting problem to contend with.
Now as a business owner commissioning software, a) potentially shaving a few dimes is much more expensive to me than not having knowable sums going in and out over the next quarter and b) I may take a financial view that is alarmingly short term to many engineers :)
>It says, "You aren't going to need it". There's no hedging about more-often-than-not
Taking it that literally is silly though, right? I mean, no one is stating that in every single case you couldn't possibly need an anticipated feature. How could anyone argue that? Such a literal interpretation actually reduces the entire discussion to indefensible nonsense.
YAGNI is an approach, not a statement of fact. Even in the article that is the subject of this thread, the author states the following in the opening paragraph:
>It's a statement that some capability we presume our software needs in the future should not be built now because "you aren't gonna need it".
Well, I'd certainly say so, but I seem to encounter plenty of people who would disagree. Even in this HN discussion, we seem to have some people who are arguing for always assuming YAGNI despite also arguing that YAGNI is an average/on-balance/overall kind of deal and they subjectively assume that the odds will always favour doing no unnecessary work straight away. I also see some people who appear to support the YAGNI principle yet make an exception for refactoring, with an acknowledged open question so far about when refactoring is justified and why it should deserve special treatment in this respect.
Note the word "some".
Unfortunately, the statement you quoted can be parsed at least two ways, with very different meanings, so I'm not sure that really furthers the debate here. Certainly there are some in this HN discussion who do seem to be explicitly arguing that it's very much an all-or-nothing proposition.
My argument is pure logic, based on a direct comparison of the total cost of doing it both ways. No fudge factors are required.
My conclusion fails, and YAGNI always gives the best possible result, only if there is no real-world situation where the initial assumptions in my argument hold.
Clearly there are actually numerous situations where those assumptions do hold, so YAGNI cannot be trusted to give the best final outcome in all cases.
There are no real world situations where the initial assumptions in your argument hold because you do not own a time machine. A time machine is necessary for you to distinguish between the situation you describe vs. any one of the countless ways that your situation could differ from your expectation as time unfolds. Such as the pirates getting wiped out by the Navy.
Suppose instead of a pair of features, you have twenty pairs of features. For most of those pairs:
a) The second item isn't really needed.
b) The second item is vastly more complicated than the first, and delays revenue
c) The se
If you'd read to the end of the article, you would have seen that the author takes a more balanced view:
> Having said all this, there are times when applying yagni does cause a problem, and you are faced with an expensive change when an earlier change would have been much cheaper. The tricky thing here is that these cases are hard to spot in advance, and much easier to remember than the cases where yagni saved effort. My sense is that yagni-failures are relatively rare and their costs are easily outweighed by when yagni succeeds.
My point is that the initial assumptions never hold because we can never say "We know this is how it is going to turn out". We can say "This is how we hope it is going to turn out."
YAGNI embraces this lack of knowledge explicitly, as you quote from the article, and my quote from five posts up.
We're not saying that YAGNI is always better in any specific sample, we're saying YAGNI is an optimal strategy across the entire range of samples.
I don't understand this obsession with the mysterious "time machine".
You don't know for sure whether a feature will ever be implemented or not. Neither side has a definitive answer, or there would be nothing to debate.
So the best you can do is estimate the costs of doing the necessary work in each scenario (do it now but don't need it later, do it now and do need it later, do it later when known to need it), the likelihood that it will in fact be required at some point, and therefore the expected benefit of doing it now vs. later.
It's a straightforward risk assessment and cost/benefit analysis, and you make the best choice you can given the information available at any given time.
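For what it's worth, a toy version of that calculation might look like this. Every number is invented; only the shape of the decision matters:

```python
# Toy cost/benefit sketch for "build it now" vs "build it later if needed".
# A fuller model would also include cost of carry, cost of delay, and so on.

cost_build_now = 5        # extra effort to build the capability today
cost_retrofit_later = 15  # effort to add it later, once it is known to be needed
p_needed = 0.4            # estimated probability it will actually be needed

expected_cost_build_now = cost_build_now              # paid whether or not it is needed
expected_cost_defer = p_needed * cost_retrofit_later  # paid only if it turns out to be needed

print("build now:", expected_cost_build_now)  # 5
print("defer:    ", expected_cost_defer)      # 6.0
# With these made-up numbers, building now has the lower expected cost;
# change p_needed to 0.2 and deferring wins. The decision hinges entirely
# on the estimates, which is exactly the point of contention here.
```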
>So the best you can do is estimate the costs of doing the necessary work in each scenario (do it now but don't need it later, do it now and do need it later, do it later when known to need it), the likelihood that it will in fact be required at some point, and therefore the expected benefit of doing it now vs. later.
So you do these estimations, and sometimes you are wrong. When you are wrong it costs you. It costs you opportunity cost, cost of carry, cost of delay, cost of building, cost of repair.
With YAGNI, we build feature A, then possibly feature B. Let's assume that YAGNI and TDD offer no benefit, and the cost of doing A then B is twice the cost of doing A and B together, but sometimes we only do A, or we do A and then later do B, and in between A makes two months of revenue.
The question is, on average, that is to say, over a large number of features, which method costs less?
If your estimates and predictions are accurate, then clearly your solution is the lowest cost over the whole project.
If, however, your estimates and predictions are poor, then in fact you cause waste and increase costs. There is a point at which YAGNI offers the lower cost over the whole project.
It is my experience, and, it appears, that of Martin Fowler, that in fact we are very bad at making estimations, and very poor at predicting the future.
So this is obviously counterintuitive and not something you want to believe: on average, your "best choice given the information available at the time" will cost more money than just implementing feature A now and worrying about B later.
You have to do a risk assessment and cost/benefit analysis of your risk assessment and cost/benefit analysis. It turns out that your risk assessment is highly risky and your cost/benefit analysis is too costly for the supposed benefits. It's cheaper to YAGNI.
> So you do these estimations, and sometimes you are wrong. When you are wrong it costs you. It costs you opportunity cost, cost of carry, cost of delay, cost of building, cost of repair.
But unless you are very lucky, always disregarding any expectations about the future that turn out to be true also has a cost. YAGNI seems to assume that such costs are negligible, but in reality both false positives and false negatives can cost you.
As I noted in another comment, this is the danger of generalising from a single person's experience or a small set of anecdotes in such a large and diverse field. I have also been around the industry for a while, but personally I'm still waiting to run into these projects that incur crippling costs because they are so bad at predicting future needs that they write loads of code that is never used, just as I'm still waiting to see a project where refactoring fails catastrophically if you don't have 100% test coverage, or whatever other absolute metric someone wants to argue for this week.
Sure, sometimes you make the wrong call, because as everyone keeps saying, for most projects you can't predict the future with 100% accuracy in this business. But my personal experience has been that much of the time, either you have a reasonable idea of generally where things are going or you know you don't have enough confidence in future directions yet to be worth building specifically. And when you do have a reasonable idea, there are plenty of projects where failing to take advantage of that knowledge really will hurt you because you can't just conveniently refactor it out later -- many fields with specific resource or performance constraints will fall into this category for example.
> This is the "I'm being more reasonable" logical fallacy.
No it's not. It's me outright rejecting the claim that "YAGNI is an optimal solution to software engineering". You can certainly say YAGNI in specific situations, but as a general approach of never building anything that's not immediately needed this very moment, I strongly disagree.
At this point, I'm starting to suspect this comes down to the difference between "programming" and "software engineering" (which is to say, the difference between solving specific problems vs making core architectural decisions).
I agree, it is "programming" vs "software engineering".
You are looking at the problem as a programmer, and making reasonable assumptions about how to solve the problem using objects and graphs and interfaces and models. If we had a better model, our project will go faster and be better.
I am looking at it as an engineer, and seeing what actually happens in real life when real humans build real code. If we had working code for simple problem A, then when we get to more complex problem B, our developers will have more experience, better morale, and better support from management who have happy customers.
Your response is actually very surprising. I'm looking at the problem as a software engineer, who has to make architectural decisions about the applications I'm building. You're looking at it like a programmer. YAGNI is very much not an "engineering" response, it's a "programming" response. Engineering is all about careful, methodical design and architecture and planning for the future maintenance of the product. YAGNI is pretty much the antithesis of this.
I'm curious what kind of software you work on. Most of the software I do is applications / frameworks development. Planning ahead is very important when developing the architecture of an application, and is doubly-so when doing any kind of API design (both when developing frameworks and when developing reusable components inside of applications). You certainly don't need to implement everything up-front, but you do have to understand the future requirements and design appropriately. Failure to do so leads to blowing out time / budget estimates later when you realize your code isn't maintainable anymore and all the "quick" implementations people did to avoid having to make architectural decisions are now so fragile and dependent upon assumptions the implementor probably didn't even realize they were making that you can't change anything without having large ripple effects.
I am the lead architect of a public facing data platform for a fortune 500 company. I influence over 60 engineers through my team of architects, and indirectly influence countless other teams deploying to our platform. My background is start-ups.
We do not suffer the failure modes you describe in your second paragraph.
I take it "data platform" means server-side work? I was guessing that was your line of work. In my experience, server-side programming and native client software (applications, frameworks, or systems) are pretty radically different in a lot of non-obvious ways. I think this might be one of them.
After reading this thread (and others from lowbloodsugar) I would say that I disagree with you here and what I do is similar to what you do. Specifically, I think that architectural choices are only important in a planning phase when they affect how people will collaborate. Architectural choices such as data or system design within that collaborative framework are irrelevant to anything you don't have to build right away.
> Experience tells us that the rare cases where it does are vastly outweighed by the cases where they do not.
I do not agree with this.
My experience does not match this. As I tell my clients regularly, "Yes, I spent one day implementing that feature. But I spent 4 days talking to your folks trying to figure out what you actually needed, and 3 more days getting ready for what they are about to ask for."
Startup folks are fond of thinking that they have to move too fast for thinking or they are going bankrupt. That's rarely true. And, if it is, you're probably hosed anyway.
" The only way to know for sure that "this is how its going to turn out" is to actually be in the future."
But then what kind of "time resolution" do you use for "in the future"?
I mean, of course if you talk about weeks ahead, then yes, why should I plan for something that I may or may not need in 4 weeks?
What is the cutoff then? Next week? Tomorrow? In an hour? 10 minutes?
After 34 years of experience, you have fine-tuned this. But there IS a line, right? Maybe not fixed, but there should be one, otherwise you won't do anything unless it's needed like right this nanosecond.
Reductio ad absurdum, of course, but I'm curious how you know where to draw the line.
We do TDD. The cut-off line is "this test". Anything more than fixing "this test" is waste. We fix "this test" and write another test. Fix it. Maybe refactor. Repeat.
So not next month, next week. More like "next 30 seconds".
Even if we are scheduled to do the next feature right after this feature, we still wouldn't start writing a unified solution. We'd write a test for A, then fix it, then write another test for A, and then fix that, and keep going until A was done and now we're writing a test for B.
That said, if we really don't know what we're doing then we spike. Then we throw it away and rewrite it with TDD. Sounds mental, right?
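For anyone unfamiliar with the rhythm being described, a minimal sketch (hypothetical names, Python's unittest) of "write a test, fix this test, repeat" might look like:

```python
import unittest

# Step 1: write one failing test for feature A (storm pricing).
class TestStormPricing(unittest.TestCase):
    def test_storm_risk_adds_premium(self):
        self.assertAlmostEqual(price(base=100, storm_risk=0.1), 110)

# Step 2: write just enough code to make that one test pass -- and no more.
def price(base, storm_risk):
    return base * (1 + storm_risk)

# Step 3: maybe refactor, then write the next test. Nothing is written for
# feature B (piracy pricing) until a test demands it.

if __name__ == "__main__":
    unittest.main()
```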
This still seems to suggest some up-front design though. How do you know that the piracy pricing is going to be a significant risk (as pointed out in 3) unless you think through some of its design?
Why wouldn't the engineers then go into developing the Storm feature with this knowledge, so that they avoid making any decisions that might cause significant rework when it comes time to implement piracy?
I can't see how they could assess the risk of the piracy feature without having thought it through and then choose not to take any of that thought into account when implementing the storm pricing feature.
So the thing is, yes, this hypothetical thing will affect your architecture. That's true. But what's also true is that a) this other thing over here you're not aware of will ALSO affect your architecture; and b) the thing you're planning ahead for might never happen.
Which is to say the really obvious: The future is unknowable.
Some people react to this by saying "okay, let's make everything as general as possible then, so we can be prepared for an unknowable future" and architect up a forest of abstractions.
Some people (the YAGNI crowd) react to this by saying "okay, let's make everything as simple as possible, so it's easy to change in the future when we know things" and rigorously eschew unnecessary abstraction.
In my experience (which includes creating, maintaining, and managing a single large application for a decade), the YAGNI crowd has the right of it. Complexity is a cost, and you should only incur it if you absolutely, no-question need it.
As I stated in another comment[1], this is a false dichotomy. There is a large continuum of various ways you can make decisions in between the two extremes of "YAGNI!" and "be as general as possible". The correct solution almost always lies somewhere in the middle.
YAGNI is appropriate if you think there's a very good chance that you won't need to do anything in the future, or if the future is so ill-defined that you can't make any sort of reasonable prediction of what you might need. If you're writing something that will be deployed once and never updated again, well, YAGNI is fine. But in most cases you're writing something that will need to be maintained and have more features added over time, and in that case, you have a variety of architectural decisions to make that influence what sort of changes can be easily made in the future. Determining which decisions to make in which ways is a hard problem, and it's something you basically just have to develop an intuition for by working on a lot of projects over years.
>There is a large continuum of various ways you can make decisions in between the two extremes of "YAGNI!" and "be as general as possible".
But, isn't the point of Yagni that there is not such a large continuum? Isn't the argument that attempting to anticipate the future to any degree is generally a waste of time?
Once you go beyond solving the problem at hand, you've jumped the shark. My experience has shown me time and again that future planning seldom pays off, and even then in small ways. I come out far better keeping things as simple as possible, as future changes also tend to be simpler.
>But in most cases you're writing something that will need to be maintained and have more features added over time, and in that case, you have a variety of architectural decisions to make...
I don't know that it requires making "architectural decisions" as much as following generally good design principles and keeping your couplings reasonably loose.
> But, isn't the point of Yagni that there is not such a large continuum? Isn't the argument that attempting to anticipate the future to any degree is generally a waste of time?
That is the argument, but that doesn't make it true.
> I don't know that it requires making "architectural decisions" as much as following generally good design principles and keeping your couplings reasonably loose.
Aren't these going to give much the same result in practice? But if the YAGNI advocates are correct and we can't usefully predict the future at all, how can you judge things like where to put your module boundaries and limit coupling?
>That is the argument, but that doesn't make it true.
My comment was addressing its parent's assertion that Yagni fits on a "continuum". It's a "kind of pregnant" notion in my view. Either you're following Yagni or you're not.
>Aren't these going to give much the same result in practice?
It's probably more the same in theory. But, in practice, it's the difference between, say, simply minding separation of concerns versus attempting to develop a full "framework" based on attempts to predict the future, then implementing the solution to the current problem in that framework.
>how can you judge things like where to put your module boundaries and limit coupling
Concepts like DRY, loose coupling, encapsulation of business logic, MVC, and other design principles stand independent of any particular application.
For instance, I can't recall an application I've developed wherein it wasn't clear where to separate responsibilities (i.e. impose boundaries) for the current problem. These separations are generally applicable to future iterations.
That much I agree with. What I would dispute is the implication, typical of many pro-YAGNI posters in this thread and elsewhere, that the alternative to YAGNI is somehow diving in and developing everything up-front without reference to the relative risks of requirements changing vs. incurring additional costs by delaying. That is a false dichotomy.
> For instance, I can't recall an application I've developed wherein it wasn't clear where to separate responsibilities (i.e. impose boundaries) for the current problem. These separations are generally applicable to future iterations.
I suspect this is where our experience differs. To me, one reasonable guideline for modular design and separation of concerns is that each module should roughly correspond to a unit of change, in the sense that a change in requirements would ideally affect a single module without interfering elsewhere in the system. However, if your basic premise is that you can't tell anything in advance about what your future requirements might be, you might model the current known situation in all kinds of different ways, but some will be much more future-proof than others.
Consider the old chestnut of modelling bank accounts. If you only have to model a balance on a single account, you can have some data structure that stores the balance and some functions to increase or decrease it. As soon as you need to model transfers between accounts, it turns out that the above is a very unhelpful data model, and your emphasis on single accounts was a poor choice. Even the most basic assumptions about likely future applications would have led to a more useful path, but if you follow YAGNI you have to start with a single-account model and then follow an onerous migration procedure precisely when you need to work with multiple accounts for the first time.
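To make the chestnut concrete, here is a rough sketch of the two starting points (illustrative Python, not anyone's production design):

```python
# Starting point 1: model exactly what is needed today -- one balance.
class Balance:
    def __init__(self, amount=0):
        self.amount = amount

    def deposit(self, value):
        self.amount += value

    def withdraw(self, value):
        self.amount -= value


# Starting point 2: assume money will move *between* accounts, so give
# accounts identities and make transfers a first-class operation.
class Account:
    def __init__(self, account_id, balance=0):
        self.account_id = account_id
        self.balance = balance


def transfer(source, target, value):
    # A real system would make this atomic and record a transaction;
    # the point is only that the data model already has a place for it.
    source.balance -= value
    target.balance += value


# Retrofitting transfers onto the first model means introducing account
# identity, plumbing it through every caller, and migrating stored data --
# the onerous migration described above.
```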
>implication...the alternative to YAGNI is somehow diving in and developing everything up-front without reference to...
I don't think that's the dichotomy that's being presented, nor do I think it would matter if it was. That is, one doesn't have to go to that extreme to incur the downside of a "non-YAGNI" approach. It's very easy to have contemplation of future features negatively impact a project.
>one reasonable guideline for modular design and separation of concerns is that each module should roughly correspond to a unit of change, in the sense that a change in requirements would ideally affect a single module without interfering elsewhere in the system
Wow. I think that's exceedingly difficult to pull off and trying to design in such a way itself seems tremendously burdensome to the project out of the gate. It also seems that it would tightly couple the code with business requirements in such a way that change actually guarantees maximum impact to the code. Because, rules don't change in a neat, stovepiped way. So, when they change, cross-cut, overlap, etc., then all of your modularization goes right out of the window.
So, interestingly, given that approach, it probably would make it more important to anticipate future changes, because your code will be less insulated from those changes!
>modelling bank accounts
Thanks for bringing this down from the abstract.
But, this is where generally good design can help. If you have your debit and credit functionality neatly encapsulated, plus a good overall model for chaining/demarcating transactions within your app, etc., then you don't need to rip apart your entire model to support transfers. In fact, I'd say you have a good head start.
> It's very easy to have contemplation of future features negatively impact a project.
But it's also very easy to have failure to contemplate future features negatively impact a project. This doesn't get us anywhere.
> I think that's exceedingly difficult to pull off and trying to design in such a way itself seems tremendously burdensome to the project out of the gate.
But again, you have to design some way, unless you're proposing literally a totally organic design where even things like basic modular design are considered completely unnecessary unless justified by changes required right now. As soon as you are designing some specific way, you are necessarily making choices, and I would argue for making the best choices you can given the information you have available at the time. That may mean you don't have enough confidence in some particular requirement to justify working on it yet, or it may not.
> If you have your debit and credit functionality neatly encapsulated, plus a good overall model for chaining/demarcating transactions within your app, etc., then you don't need to rip apart your entire model to support transfers.
But where did that good overall model you mentioned come from if you weren't anticipating potential future needs to some extent?
>But it's also very easy to have failure to contemplate future features negatively impact a project.
Perhaps, but in the former case you guarantee an impact to the project. And, empirically speaking, the odds are that impact will be negative. Trying to design for some unknown future is more likely to get you off the rails than designing well to known requirements.
>unless you're proposing literally a totally organic design where even things like basic modular design are considered completely unnecessary
Well, "modular" is such an amorphous word. Building a domain model and other functionality around current requirements will yield some degree of modularization. I'm suggesting that such modularization should tie back into the actors and objects dictated by current requirements, and is more at the programmatic level. It's horizontal (relative to requirements). This, as opposed to a vertical approach that attempts to stovepipe individual use cases into modules. The latter scenario can lead to more pain when requirements change.
I sincerely believe that may be why you find it so important to anticipate future changes--because you've totally pegged your design to a modularization scheme that demands your requirements stay within neat boundaries. So, it's really important that you define those boundaries well from the outset, or you may face some serious re-work.
>But where did that good overall model you mentioned come from if you weren't anticipating potential future needs to some extent?
But that's really my point: good design practices in themselves do anticipate the future to a significant extent. That is, a system that is well-modeled with good separation of concerns is more flexible, less coupled and thus, more extensible. But, one need not anticipate any specific future requirements to achieve this. Just design well based on known-requirements.
I wonder whether we're still slightly talking at cross-purposes here. You seem to have the idea that I am somehow advocating always trying to anticipate or emphasize future requirements at the expense of what I'm doing right now, or that I'm arguing for some sort of fixed architecture or design where you need to magically know everything up front. This is certainly not what I'm trying to say. On the contrary, I adapt designs and refactor code all the time, just as many others here surely do.
But I still feel that there is something rather selective in the arguments being made for any near-absolute rule about not taking expected future developments into account when designing. Whenever we talk about modular design or domain models or use cases, and whichever words we happen to use, we are always implicitly talking about making decisions of one kind or another, as evidenced by the fact that even those who are supporting YAGNI in this HN discussion are advocating things like refactoring to keep new work in good condition.
Of course we can and should revisit those decisions later if we have better information and of course sometimes we will change things as a result. The only point I'm trying to make here is that I'd rather start from the position most likely to be useful, not the minimal position. To me it is all about probabilities, and perhaps unlike some here, I don't find that predicting what I'm going to need my code to do next week is some inhumanly challenging task that has a 105% failure rate with project-ending costs. On the contrary, the vast majority of the time on a real project I find things will in fact turn out exactly the way we're all expecting on those kinds of timescales, and probably pretty close a month out, while looking six months or two years ahead we probably have at best a tentative plan and just like the YAGNI fans we don't want to invest significant resources catering to hypothetical futures with a high probability of changing before we actually get there.
In this context, I find it is often premature pessimisation and willful ignorance that create the work which will almost always turn out to be unnecessary, not the other way around. The idea that I should discard knowledge of what is almost certainly coming next week and create more work for myself on Monday, just in case something totally unexpected happens by the end of this week, is bizarre to me. Maybe we've just worked on very different kinds of projects or had very different standards for the management/leadership who make decisions on those projects.
Well, when you start bringing the timeline in as close as the following week, then you're talking about something very different (or at least vs. what I've been considering). Because, even with iterative/agile development, a week out can likely be considered in-scope (or close enough).
So, narrowing the timeline so dramatically fundamentally changes this entire discussion. YAGNI's statement implies that there's a reasonable degree of uncertainty with regard to the features you're considering. However, there is generally much less uncertainty about what you'll be building in the following week. The requirements are essentially known for all intents and purposes. So, at that point, it's more like, "I Know We Need This", because you're essentially driving it from the current requirements.
So, I think the real determinant is whether you're looking at current requirements, vs. trying to anticipate future requirements. If you're doing the latter then, I think we just disagree.
And, if your experience is that extrapolating requirements far off into the future has been helpful on average, then we have definitely worked on different kinds of projects.
> Well, when you start bringing the timeline in as close as the following week, then you're talking about something very different (or at least vs. what I've been considering).
Perhaps this is where the crossed wires happened, then.
To me, this is all a matter of degrees. I know exactly what I need to build immediately -- what code I'm writing this morning, what tests I'm currently trying to make pass, or however you want to look at it. I also have a very clear idea of what I need to build later this week. I have some idea of what I need to build by the end of the month. I have a tentative idea of what I'll need to build in three months. On most projects, I probably have very little confidence in what I'll be building a year from now.
When I'm designing and coding, the amount of weight I give to potential future needs is based on that sliding scale of confidence. If I'm writing two cases for something right now and expect to need the third and final possibility next week, I'll probably just write them all immediately, so that whole area is wrapped up and I don't have to shift back to look at this part of the code again in the immediate future. For something that I will probably need in a few weeks, it's less likely that I'll implement it fully right now, but I might well leave myself a convenient place to put it if and when the time comes if that doesn't require much work. For me, the latter is sometimes a bit like deliberately leaving the final step of a routine task unfinished at the end of a working day so I can get going the next morning with a quick and easy win -- it's as much about the positive mindset as any expectation that doing something immediately vs. very soon will make any practical difference to how well it gets done.
Obviously as I look further ahead, confidence in specific needs tends to drop quite sharply on most projects. For something tentative that is being discussed as a possible future requirement for later this year but with no clear spec yet, it's unlikely that I would write any specific code for it at all at this stage. However, I might still take into account likely future access patterns for data in my system if I'm choosing between data structures that are equally suitable for the immediate requirements. I might take into account the all-but-certainty that we will need many variations of some new feature when planning the architecture for that part of the system and design in a degree of extra flexibility, even though we have no clear idea of exactly which variations they will be yet and I'm not actually writing any concrete implementations beyond the first one at this stage.
> And, if your experience is that extrapolating requirements far off into the future has been helpful on average, then we have definitely worked on different kinds of projects.
That's not really how I look at it. I'm not so much extrapolating (a.k.a. guessing) specific requirements far ahead. I'm just allowing for the possibility that I may have some useful knowledge about future needs without necessarily having all the details yet. If I do, I will take advantage of that to guide my decisions today to the extent that confidence justifies doing so. The amount of actual change in designs or code that results will vary with both the level of confidence I currently have in whatever potential requirements we are talking about and in the assessment of how much effort is required to allow for them now vs. how much effort will potentially be saved later if the expectation is accurate.
I don't think YAGNI ignores this so much as explicitly rejects it. I've participated in the type of rewrites you mention here. Yes, they suck. Yes, they happen far too often. But I've also observed that they seem to happen regardless of how much you plan ahead to future requirements: even if you are exhaustively brainstorming possible future directions and are absolutely sure you're going to want something, you're very often wrong. And the inevitable rewrite that happens is a lot more painful when you have to carryover requirements that you don't actually need.
YAGNI fundamentally is a statement about costs and benefits. And it's a statement about personal experiences of costs and benefits, with a counterintuitive conclusion. I've found, however, that the teams I've worked on that just accept that they'll have to rewrite or throw away 90% of what they write end up performing at a much higher level (in terms of their impact on the broader industry) than the teams who figure "But if we could just get that 90% down to 50%, we'll be 5x more effective than other teams!"
Another way to look at this is in terms of external vs. internal drivers of success. YAGNI makes sense when the primary drivers of success are external and you need to quickly react to changing market conditions or customer requirements. It doesn't when the primary drivers of success are internal and you need to quickly act to get from known point A to known point B as efficiently as possible. Some engineers are lucky (unlucky?) enough to work on the latter, but typically it only happens when you either have a monopoly or you're deep in the bowels of a corporation and only need to report up to an executive who never changes her mind.
To use the example from the article - if the biggest risk or change in the external environment you'll face is your software, go ahead and build the feature into it. But who knows? You may be able to close a round of venture funding in 2 months and then hire the Gondor navy to eliminate piracy. Or Gondor may enter into a trade agreement with Rohan and redoing all your contracts takes primacy. Or Aragorn may arrive with the Army of the Dead and suddenly piracy is not a problem anymore, but a lucrative business in life insurance may pick up.
I think there's a difference between going ahead and implementing piracy risk immediately, vs determining all the requirements of piracy risk and using them when designing your fundamental architecture, with hooks left in place for extensibility that simply aren't actually implemented yet. Maybe the Gondor navy will destroy the pirates, but then you have to worry about Corruption risk because you now may need to bribe the navy or risk having your cargo impounded (so you have to balance the risk of impounding vs the cost of bribing). Sure it's not the same thing as piracy risk, but it's similar, and because you designed your architecture from the get-go to enable piracy risk and other such extensions, you can now implement navy bribes pretty easily.
Meanwhile, if you'd said YAGNI to piracy risk and just implemented support for storm risk, you may find that you can't easily implement navy bribes without re-doing much of the work you already did for storm risk.
As saganus said[1], this is more of a craft than a science. You need to plan ahead with your architectural decisions, and they need to be made using the actual requirements you expect to encounter (as opposed to theoretical requirements, which aren't really much of a use to anyone), but that doesn't mean you need to actually implement everything immediately. Just enough to be satisfied that your architecture will suffice.
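As a sketch of what "hooks left in place but not actually implemented yet" could look like for the article's example (the names and the shape of the interface are my own invention, not anything from the article):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Voyage:
    base_price: float


class Risk(ABC):
    """Hook for any risk that can add a premium to a voyage's price."""

    @abstractmethod
    def premium(self, voyage: Voyage) -> float:
        ...


class StormRisk(Risk):
    """The only risk actually implemented and shipped today."""

    def premium(self, voyage: Voyage) -> float:
        return voyage.base_price * 0.10  # made-up factor


def quote(voyage: Voyage, risks: list[Risk]) -> float:
    """Base price plus the premium of each applicable risk."""
    return voyage.base_price + sum(risk.premium(voyage) for risk in risks)


# Today: quote(Voyage(1000.0), [StormRisk()])
# Later: piracy risk (or a navy-bribe risk) is one new class, not a rework of
# the pricing engine -- assuming the Risk interface turns out to be the right
# shape, which is exactly what the YAGNI side of this thread disputes.
```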
And this is why a lot of software written by the agile teams in the companies I've worked for looks like a messy accumulation of ad hoc solutions, with too little shared code and too much duplication.
Even if you are not implementing a feature right now, you still need to have as much information as possible about what might be needed in the future and how it will fit in in your solution.
No. You need to have information about what you're implementing right now, and the willingness to write it well.
What you generally see is that if you have a complex code base with lots of plan-for-the-future abstractions in it, refactoring it in any non-trivial way is really hard[1], so people don't, and you end up with hacks.
Whereas if the system is as simple as possible, you can make bigger changes more easily, and the code can stay cleaner.
That's no guarantee that it will -- code quality still requires discipline and sound engineering -- but it's a lot more likely with YAGNI than without.
[1] "Hey, we need to make our pricing system handle multiple currencies."
"Oh geez, that's going to wreak havoc with our risk plugin system, that change will take at least three weeks."
"Oh yeah, and we'll definitely need that risk plugin system when we get to the piracy risk feature later this year... what if we just kind of hack currency systems up by [doing something awful]?"
"Sure, we can do that in a week. I don't like it, but it's the only choice given our deadline right now."
The software I'm working on now is a messy accumulation of ad-hoc solutions but not because of YAGNI - it's because the real world of selling a system to multiple customers who get to demand things like "but we want our adverts to be blue on the second Tuesday of every third month" tends to makes software a mess.
But on the gripping hand, deferring an implementation often means you understand the problem better by the time you get to it, so that your actual implementation is better than the naïve version you would have written earlier.
Fair point. Making these decisions correctly is hard. I'm pretty sure this is one of the things that you slowly learn how to do with experience, that no amount of being smart or research can make up for.
I think you had it right first time. Much like you can define an interface without a concrete implementation, or a unit test without working code.
One rather hypothetical approach to the problem is separation of concerns - make pricing composable - starting with storm risks, and supporting other unknown, future risks. The obvious follow-on I see too often is to use some type of dependency injection, which is almost always (my experience - YMMV) a bad move - pricing risk components probably don't change often enough to warrant the configuration nightmare that ensues. Just recompile, and redeploy. Or use reflection to load available pricing modules, assuming the perf hit is acceptable.
You may not need to know the intricacies of pirate risk when delivering storm risk, but you do know that you'll be dealing with another risk after delivering storm risk. So plan for it.
Because what'll happen is, you'll find out that the type of risk you actually encounter ends up being tied to (say) the time of year in a way that you were not expecting, and now you've got this elaborate risk-plugin architecture sitting there, and there's no way to get the time of year down to the risk calculator, so you need to elaborately hoist out and redesign this entire gigantic apparatus, instead of just adding a parameter to a function call.
And meanwhile, as you were daydreaming about your future hypothetical risks and trying to guess what they might need to know, you imagined that one might need to know what currency the thing is priced in to calculate currency-fluctuation risks, and so you're passing currencies all over the place, to be prepared for general extensibility and so on, but it turns out you never need them in your risk calculators anyway, so it's just this pile of unnecessary nonsense of setting currencies everywhere, and you have to maintain all that.
And if you read those paragraphs and think "hmm, you'd probably want to make it extensible in terms of which fields are passed into the risk calculator to prevent that kind of problem" then realize that you are now solving a hypothetical problem that was caused by the "solution" to your first hypothetical problem, and you're three levels removed from delivering any actual value to anyone.
Forget about it. Don't plan for it. Plan for what you need today, because you will not be good at anticipating what you need tomorrow.
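For contrast, the simplest thing that could possibly work is a plain function, and the "just add a parameter to a function call" change really is that small. A toy Python sketch with invented names:

    # Today's requirement only: a plain function, no plugin registry.
    def storm_surcharge(base_price: float, storm_season: bool) -> float:
        return 0.02 * base_price if storm_season else 0.0

    # If time of year turns out to matter, you add a parameter and fix the
    # handful of call sites your tests point you at -- no apparatus to hoist out.
    def storm_surcharge_seasonal(base_price: float, storm_season: bool, month: int) -> float:
        rate = 0.03 if month in (8, 9, 10) else 0.02   # invented seasonal bump
        return rate * base_price if storm_season else 0.0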
You seem to be dividing the world into two possibilities:
1. Don't plan ahead. Only implement what you need right at this moment.
2. Plan for everything that's even remotely imaginable.
#2 is quite ridiculous. But the alternative to that is not #1. There is a huge amount of middle ground here, for assessing likely future needs and designing your architecture appropriately, as well as for identifying what future requirements affect architecture and which ones can be safely ignored as something that can be easily implemented on top of the current architecture.
The problem with #1 is that it either leads to re-implementing large portions of your app (possibly many times if you persist with this) as you discover your architecture doesn't work, or, more likely, to adding hack on top of hack to implement new functionality without having to rearchitect, which leads to massive technical debt and results in an unmaintainable product.
But, what are you defining as "architecture" here?
Having a design that requires you to completely re-implement your app whenever changes are required (even significant ones), seems more a problem of not following proven design principles than one of poor "architecture".
>* If you can learn this while implementing the pricing in the first place, then you haven't lost any time reimplementing.*
But, isn't this the same kind of thinking that necessitated a precept like Yagni?
I wonder if the very notion that we are "architecting" vs. simply building software to solve a clear and present problem is at the heart of the tendency to overengineer.
Every "architectural decision" I've ever seen has been wrong. It's always resulted in a worse product than simply writing the code and allowing the architecture to happen.
(This doesn't mean writing code with no layering, just deferring those decisions to the point where you actually have the code that makes use of them)
I have found the opposite. Building things you don't need yet just increases the mass of code that needs to change when the real requirements hit. It pays off to build software as simply as possible, and modify / generalize later, as opposed to designing an up-front "architecture" that is bound to become obsolete and a hindrance in 6 months.
If you know that you are going to need piracy risk support, then of course it makes sense to prepare the architecture for it, even if you only have to deliver the feature four months down the line. But YAGNI does not apply if you know a feature is needed. If you don't know but are just guessing about possible future development there are a million things in various directions you could also prepare for, and all preparations have a cost.
I dealt with this recently. We wrote a system that had a hard-coded data size in it. Later, that data size had to change to a flexible value chosen on the fly, and I was the one that had to go through all the code to make all the changes so we could handle that. It took a long time. Partly because we had no unit tests (a whole 'nother discussion), partly because I wasn't familiar with the area (all the areas) of the code that needed to change, and partly because we had said, "YAGNI" and not made our code configurable enough from the get go.
I started to curse YAGNI in the middle of that chore, but then I paused and thought about the many months of productive use of this code that we had been enjoying before this point. And even though it took me significant time to make that change, the production code was still running along just fine during all that time, still bringing us value. I decided that I was glad we had said, "YAGNI" at the start.
It sounds like most of your pain came from the code not being DRY. That is, this data size constant was duplicated in many places, rather than defined in one central place.
Unless I'm misreading you, that's not an appropriate YAGNI case, as Fowler writes:
"Yagni only applies to capabilities built into the software to support a presumptive feature, it does not apply to effort to make the software easier to modify"
"Yagni only applies to capabilities built into the software to support a presumptive feature, it does not apply to effort to make the software easier to modify"
That's a very convenient distinction. It lets you No True Scotsman anyone who challenges your position, yet provides little if any practical guidance about the best thing to do in the real world.
That is an argument for refactoring only immediately prior to implementing a new feature in order to support development of that feature. In itself that is reasonable enough, but it becomes less effective as a strategy if the cost of just-in-time refactoring prior to implementing each new feature turns out to be significantly higher than the cost of setting up the same design at an earlier stage.
When to refactor/clean up code is an interesting topic. My rule is to only refactor old code when the bad design gets in my way. If we have some bad code that just keeps working, there is not much reason to clean it up.
New code I try hard to factor into tip top shape.
This is entirely separate from YAGNI in my dictionary.
That all sounds perfectly reasonable, but please answer me this: how do you decide what "tip top shape" is for your new code?
If YAGNI is an argument for not making any sort of advance judgement about future requirements until it's clearly necessary, then it is necessarily also an argument that as soon as any code meets its immediate requirements you should stop working on it and move on to the next sure requirement, without wasting any time on refactoring that might never prove useful for future development.
I suspect that many here who would say they agree with YAGNI do in fact edit their code beyond just working no matter how much it looks like spaghetti, in which case I would argue that the difference between our positions is merely a matter of degree, not a difference in the underlying principle.
Yeah, some/many people forget about the Ruthless Refactoring part of XP. Or they're just not good at it. Like how some decide to not write documentation and declare themselves "agile".
The successful XP teams I've been on probably spent 1/4 of their time refactoring. Once your code works, you clean it up, and refactor anything it touched. THIS IS THE DESIGN PHASE! Without it, you're just another pasta merchant. What truly blew my mind was that designing/architecting the code after you write it is so much easier and effective.
> If YAGNI is an argument for not making any sort of advance judgement about future requirements until it's clearly necessary, then it is necessarily also an argument that as soon as any code meets its immediate requirements you should stop working on it
That is not the YAGNI I know. It applies to external requirements only. Keeping your code base well designed, readable and bug free is an entirely separate concern.
> it becomes less effective as a strategy if the cost of just-in-time refactoring prior to implementing each new feature turns out to be significantly higher than the cost of setting up the same design at an earlier stage.
Or rather, the cost of setting up the best design you could at an earlier stage, knowing what you knew at the time, and then of modifying that design to be the design you want now.
But yes, if that turned out to be cheaper than just-in-time refactoring then that would be a better way to proceed. (IME it never is cheaper though).
> But yes, if that turned out to be cheaper than just-in-time refactoring then that would be a better way to proceed. (IME it never is cheaper though).
This is always the danger of proof by anecdote or personal experience in a field as large and diverse as software development. I could just as well tell you that I have seen numerous projects get into trouble precisely because they moved ahead too incrementally and consequently went down too many blind alleys that were extremely expensive or even impossible to correct later.
It's true that I have not often seen this in something like the typical CRUD applications we talk about a lot on HN. However, if for example you're working on some embedded software with hard real time constraints, or a game engine that will run on a console and has to give acceptable frame rates with the known processing horsepower you have available, or a math library that needs to maintain a known degree of accuracy/error bounds, or a safety-critical system that requires a provably correct kernel, I suggest to you that trying to build the whole architecture up incrementally is not likely to have a happy outcome.
You missed the point that I probably made too subtly, that it went from being a set size to a variable size. Yes, we did have some terrible non-DRY code and it would have been a little easier to make the change if not for that. But going from the same size all the time to a different size every time was never going to be as easy as changing the value of a constant.
>It sounds like most of your pain came from the code not being DRY.
Exactly this. Yagni does not preclude following generally good design principles, which almost always saves future pain; unlike attempting to predict and design for the future.
> Partly because we had no unit tests (a whole 'nother discussion)
I think that is the primary point. If you have good test coverage at each level, it's actually really easy to make large sweeping changes to a codebase. If you don't have those tests, it's hell. This is a big part of why I think unit tests are OK, but not nearly as important as good E2E and perf tests, as painful as they can be to write and wait for to run sometimes. Good high-level tests let you tear the heart out of the codebase, replace it, and still be confident it works.
I agree that having good test coverage across different levels/purposes makes sweeping changes far quicker, easier, and even feasible to consider doing.
Building and shipping the software is perhaps half the work - the other half is growing and maintaining the scaffolding to rapidly, automatically and reliably measure the properties of the software that you wish it to have.
It seems no-one here is arguing that YAGNI should be applied to justify not building automated tests.
Interesting. I've found myself working almost entirely without unit tests (at least when I have an advanced type system) - making small incremental changes means that if an end-to-end test starts failing there's only a very small number of places that could have caused that failure. What kind of problems did you face? Why were they hard to diagnose?
I definitely did not have an advanced type system helping me out. This was a SystemVerilog testbench. SystemVerilog has a type system akin to Java's or C's. Actually it's probably worse than either of those. It's a real Frankenstein's monster of a language.
The general idea with SystemVerilog is you have a low-level model of the hardware made up of language primitives like wires and logic functions. Then you wrap that in a few layers of testbench code. Each layer ups the abstraction level until at your top level you can write tests in terms of high level transactions (in this case it was a flash controller, so you could do simple commands like write, read, erase). The layers all take care of converting those high-level transactions into (eventually at the bottom layer) wiggling the input wires of the hardware at just the right times. It goes the other way too, monitoring the output wires of the hardware and converting those wiggles into high-level response transactions (such as, "here's the read data you requested"). The net result is you should be able to write succinct tests at a high-level of abstraction that exercise a lot of the hardware (wiggles a lot of the wires a lot of different ways).
There were actually two testbenches that shared some of this code and among them at least three different code paths from high-level to low-level and back again that gave me trouble. If I had unit tests isolating each of those layers it would have been a lot easier to find which layer was dropping a byte of data here or adding extra padding there.
EDIT: I just noticed the "small incremental changes" part of your question. That's the other thing about not having unit tests. I had to get the whole thing to compile and run before I could test any change, and that meant changing every layer to handle the variable sized data. Part way through I did stop to write unit tests for the trickiest (lowest-level) layer.
Yes! And in this case, you “cursed” yourself for not making the prediction that this change would be necessary. One discounts the 100 other wrong predictions that would have accompanied this right one.
The incredibly interesting thing about this example is that it sounds like the design actually had to change in two ways: 1) the value had to be configurable and 2) it had to be configurable on the fly.
Anticipating and designing for (1) is simple enough - the costs that the article refers to are quite low - but to think ahead and design for (2) is a completely different task that might radically complect the code.
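A rough illustration of why (2) is the expensive one (hypothetical Python; I'm guessing at the shape of the real code): a value read once from config barely changes the code's shape, while a per-call value has to be threaded through every layer between the caller and the lowest level.

    import os

    # (1) Configurable at startup: read once, everything downstream is unchanged.
    DATA_SIZE = int(os.environ.get("DATA_SIZE", "512"))

    def encode_fixed(record: bytes) -> bytes:
        return record.ljust(DATA_SIZE, b"\x00")

    # (2) Configurable on the fly: the size now travels with every call, so every
    # layer between the caller and this function grows an extra parameter.
    def encode_variable(record: bytes, size: int) -> bytes:
        return record.ljust(size, b"\x00")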
You caught the subtlety that I should have emphasized more. Number 2 was the really difficult part. We had some hard-coded constants as others guessed, which also made this really hard, but just fixing that problem wasn't enough.
In my years of experience (both in big corporate R&D as well as startups) I've never seen a project fail because of the end product being too simple or not having enough features.
However, on the other hand, I've seen countless projects suffering delays, staff departures, emotional team arguments and eventually a bad end product because of overly complicated software architecture that overwhelmed its creators, and because of obscure features that well-meaning engineers built in because "it'll save time in the long run".
It's so, so hard to resist the siren song of premature optimization, though. It's even harder in a corporate environment where "Well, why didn't you plan for that?" is a question lurking behind every failure or delay. Overplanning isn't punished.
In startups, almost everything is punished by company failure.
Another example is failing to plan far enough ahead to maintain your first-mover advantage when a competitor comes along who already has the benefit of learning from your mistakes so far.
This is becoming less true over time. As enterprises start adopting Lean practices, it becomes obvious to everyone why building something now when you won't use it for months is waste.
If there's one thing enterprises don't give a flying fsck about, it's waste.
After a couple decades of mostly-enterprise development, I've learned something really important - it's better to not be wrong than to be right. Thus the overbuilding and risk aversion. Frankly, this is the only real advantage startups have over the enterprise - the ability to take risk.
My limited experience in that world was that there often wasn't a really good way of identifying when something was in fact 'right', only that it wasn't broken (whenever it was reviewed/inspected). New info could always come along which would call in to question previous decisions, and new people would come in with different (or no) understanding of previous stages.
Couple that with organizations in which everyone is free to say 'no' to things, or they're free to chime in with demands without having to actually commit resources for dev or support, and you get weird situations (I've got a few war stories, as I'm sure many people here do too).
I disagree. If overbuilding or over-planning is wrong, then the question arises as to why you spent $200k building a feature that just got thrown out. Or worse, they try to use an unneeded feature just to justify the fact that it was built.
I didn't say it isn't wrong. I said it isn't punished. That's a very, very different thing.
Enterprise environments encourage toxic behavior in numerous ways. This is one of them. Risk aversion is generally more important than cost control in the enterprise. Conway's Law in action, if you think about it.
These days, I think "Yagni" is often used as an excuse for sloppy code and as a justification to avoid thinking about design.
I think the principle as Fowler describes it is valuable, but it can be easily abused. Taking the application of "Yagni" at the microlevel to its logical conclusion can be used to justify a mindless, just move on it and fix it later style of coding into which good architecture cannot be retrofit.
Hi lmm, perhaps the word "architecture" has bad associations for you that I don't intend to imply. If you code anything non-trivial, you are doing architecture whether you like it or not. The only question is how well you do it.
Over-engineering is a real danger, to be sure, and terrible ideas can be justified in the name of "architecture," just as they can in the name of YAGNI.
Fwiw, I have personally seen costly decisions made in the name of YAGNI, literally. So I don't think it is essentially the "safer" default.
My view is that you can't always wish away complexity with heuristics. Often, you have to think through your special situation, weigh the factors, and make tough judgment calls without knowing if they'll be right.
I'd be interested to hear the specifics, or examples of the kind of "architectural decision" you're talking about. Let me tell my story of how architecture went wrong:
* We defined the data structures early on in a generic way, when we thought we were building a product for multiple regions. In fact we tried two regions and discovered it only sold in one of them, so we were producing a product that only actually needed to support a single region, with a massively overengineered generic data structure
* We defined component boundaries, saying that certain functions would live in certain modules. In fact these were not the correct boundaries, and simple operations involve several rounds of back and forth, exacerbated by:
* We built a "SOA" style system with multiple components on the grounds that we would need horizontal scalability. We never got to the level where we needed that kind of scale, and the architecture massively slowed down development/debugging.
* We decided that certain interfaces would use Java datatypes because we thought the system would primarily be written in Java. As we built the system it became clear that Scala was a better choice for many components, so we ended up with a lot of code that was converting scala objects to java just to send them through an interface, and then the system on the other side was converting them back to scala objects.
* We tried to make a choice of RPC system early on. We ultimately went through three iterations of different RPC systems, as the various approaches failed.
You could say these are just wrong choices, and there's an element of this. But I think in all of these cases we'd have made the correct decision had we been driven by use cases (YAGNI style) and deferred making decisions until we actually needed them.
We have one rule when planning product features: Everyone may shout YAGNI at every time. Saved us so much time and complexity, it's incredible.
On the other hand, I have a love/hate relationship with YAGNI, as it requires careful weighing - additional features that need to be reflected in architectural changes which would not only take refactoring code but API and data model changes as well, possibly with data migrations, coordinated across multiple teams are a completely different picture. The costs of building it now may be an order of magnitude cheaper than doing it later. Further down the road, startup mechanics and traction apply as well. If you have an order of magnitude more resources to fix YAGNI later, it might be worth the additional speed right now. I feel that putting all of these into context and doing something that makes sense is just freaking hard every single time.
Except that Fowler used YAGNI to produce the "Chrysler Comprehensive Compensation System" that FAILED MISERABLY. I still don't understand why people continue to listen to him--he has yet to demonstrate a large project he was in charge of that succeeded.
YAGNI gets you the easy 80%. The problem is that YAGNI means you didn't prepare for the hard 20%, and now you're going to get killed.
Of course, YAGNI is really good if you're an ambitious manager since you'll be gone when the 20% comes home to roost.
Except that there was already a working system. So, the C3 system was totally wasted money, and never achieved any significant subset of functionality of the old system.
A "proper" use of agile would have been to gradually convert chunks of the legacy system to a unit-tested system. Use bug reports and feature requests to prioritize the chunks which need to be tested. Once that happened, you could begin evolving the old system and adding features without worrying about breaking it.
But that wouldn't get noticed. Much better to have a BIG, NOTICEABLE failure that you blame others for than a less noticed success. That's how you get promoted, don'tcha know.
Such failures are more often because of bad developers than anything else. It's not really fair to blame a single person on a big project with lots of people on it.
He is preaching "Follow my religion and your project will succeed! Pay me money to teach you."
Given that, it is entirely fair to blame him and his system for the failure.
Just so you know: I do agree with you that big projects rarely fail because of a single person. But Fowler pisses me off because he preaches his religion with zero empirical evidence it is any better than anything else.
> But Fowler pisses me off because he preaches his religion with zero empirical evidence it is any better than anything else.
And people take it as truth rather than opinion. Look at the way he was all-out in favour of microservices, then backpedalled recently. It's right for a clever man to think out loud and change his mind; but not for people to act on it as if it's fact.
I think this is trying to simplify an already simple idea, succinctly: "Optimize your time intelligently".
Rules of thumb (like YAGNI) may not be correct, depending on the situation. Estimating time complexity is hard, so deferring development defers mistakes in estimation. Great.
However, if you know that you have a 90% chance of needing a feature in 6 months, and it will be 2x easier to build now when you have a team actively engaged on a similar project, then YAGNI is the wrong choice. Counter examples exist, like you are in such an extreme environment where (like first month of a startup) the opportunity cost of those programmers is very high.
Regardless, I would ask "Is this the best use of time now" rather than "YAGNI".
The whole point of the article is that you're wrong. And I tend to agree. Not trying to predict feature needs down the road is extremely empowering. Building your unnecessary feature now is just as likely to increase the complexity of everything in the future as it is to be 2x easier to build now. YAGNI dramatically simplifies the answering of your question: if it's not needed now, don't do it.
Making decisions by flipping a coin also simplifies them dramatically -- it doesn't mean it's optimal.
I think your reply captures both the appeal and the danger of YAGNI well. While I think it's often a good heuristic in practice, it is just a heuristic and it can be badly abused in cases where future needs can be accurately predicted today. Saying "But can they really ever be predicted?" is a cop out imo, because in some cases the answer is yes.
Not really. Sometimes you know the cost right now will be less than a few hours. The future cost of the same changes will be at least that, with a very high likelihood (sometimes, a near certainty) of being 2-10x that.
Are you really arguing that this rule of thumb leads to optimal decisions 100% of the time?
Because that is clearly untrue. If you want to make the argument that allowing exceptions to the rule opens up a pandora's box that is ultimately worse than just always following the rule blindly, okay. I think I'd still disagree, but it's at least a defensible argument.
Using your example:
Let's say I know it will cause a complexity burden in the future. Shouldn't we ask if that complexity burden and opportunity cost are outweighed by the expected benefits, rather than simply relying on a simplistic rule of thumb of always deferring?
People are capable of making good calculations as to what is a waste of time (which, to your point, is not the same as being consistently good at it).
And yet, you might take longer to finish the project than someone else. Just because something is freeing doesn't mean it's a good idea. Nor does it mean it is a bad idea, it just means it's an uninformed idea.
Obviously it's suboptimal to ponder an unanswerable question. That doesn't mean you shouldn't spend 1 minute thinking about it.
Then you need better management on your development team. A project manager who can't give at least an intelligent assessment of the risk/likelihood of key features changing some way ahead is about as useful to a programmer as a programmer who can't give an estimate of how long it will take to implement a simple feature within an order of magnitude to a project manager.
I am still waiting to encounter this hypothetical project where the future always deviates significantly from any reasonable prediction with negligible overhead. On the contrary, my experience has been that any successful project team probably has at least a reasonable idea of where the development is going some way ahead and that confidence usually increases the closer you get. While that level of confidence may not justify immediate full development of future features that aren't 100% known to be required, it does often let you make reasonable allowances for likely future work in terms of project architecture so you don't back yourself into a corner unnecessarily.
As chris_va has been saying, it always comes down to risks and confidence and cost/benefit analysis, like any business decision. You can repeat "You ain't gonna need it" until you're blue in the face, but that doesn't make it true. Much of the time, you will in reality need a feature that everyone has been talking about for six months already and your sales guys have been promising to customers for seven of them. In fact, any development team that really does dwell extensively on development work they aren't actually going to need later should probably question the general competence of their management. Responding effectively to changing requirements does not require sticking your head in the sand, and it certainly doesn't require making ad-hoc changes to design/architecture at any level you feel like for every new feature.
It's possible you have worked in a world where there really is requirements-level (which means business-process level) stability over a year. I never have, and I don't think that describes most businesses out there.
Given that, writing software that is responsive to the needs of the business must accept that change in unknown directions is a certainty.
Now, does that mean that you literally never know anything at all about what will come up? No, of course not. Sometimes you'll see a thing coming for nine months, and it'll actually land just like everyone thought.
But the point of YAGNI -- and the point Fowler made so well in his essay -- is that you still don't need to do your chin-stroking calculations and elaborate plannings on whether you should build for that now. Because building for it now has a real guaranteed cost (the direct cost of building it, the opportunity cost of the value you didn't deliver by building something more immediately necessary, the technical debt of maintaining it in the face of other evolution until you get to the time when it actually is needed) that strongly argues against building it even if it will happen.
In any other field, this is really obvious. If a contractor is building a house, and they say "you know, we've already got the cement-pouring team here, we know we're going to build a dozen houses this year, why don't we just build all twelve basements right now" they're going to be slapped down quickly -- that's absurd, don't tie-down all those resources speculatively, and anyway if you do that, the house you're building is going to be late, let's focus on what's needed. And that's true even if the contractor was totally right about needing those dozen houses (and exactly which dozen, and exactly what they should look like).
That you're probably going to be wrong to one degree or another (even features that end up happening often look significantly different when they happen than they did in prospect, months out) makes the YAGNI equation that much more compelling; but even without that, the numbers rarely are going to come down in favor of speculative construction for hypotheticals.
So, yes, you need to weigh the benefits against the costs. It just turns out that this is nearly always a calculation that leads inexorably to one result: Don't do it until you actually need to do it.
I think the difference between the pro-YAGNI crowd and the sceptics here is one of absolutism.
You have immediately translated my position, which was about risk and confidence and making a cost/benefit judgement based on the best information at any given time, into an absolute one, where we know the full requirements for a project a year in advance. But that isn't what I said, or what several of the other sceptics here are saying either.
To put it in your terms, we're not saying you should build 11 spare basements because you've got the concrete guys on-site today. We're saying if you're building a concrete basement then you can reasonably assume this house will also soon have external walls that will roughly match the signed-off, legally-binding planning consent, so if you don't put the rebar in before the concrete sets you're going to have a much harder time finishing the house later or may even find it is no longer cost-effective to do so because the cost of doing that preparatory work only when you're certain you need it will be very much higher.
a) The boss has changed his mind and now we're building a pool, so we need to cut off all this f* rebar sticking out. More time to remove it, plus the cost of adding it in the first place.
b) It's software, so adding the rebar is as easy as F6-refactor.
You are scaring the children. Stop it with the FUD.
You can always come up with exceptional counter-examples in any argument about probabilities and making cost/benefit judgements. As I noted elsewhere, my experience has apparently been very different to some here, because I'm still waiting to see the project where the entire direction suddenly shifted so fundamentally that it wound up writing off significant amounts of code that was now useless. (Just for the record, I'm also still waiting to see the house with the rebar already in that was changed to require a pool where the planning consent stated a house was to be built, though I've seen a lot of houses with the rebar in where the walls then went up a short time later without having to rebuild the entire foundation, because the rebar was already in.)
I've written elsewhere in this HN discussion about a few different types of project where you can't just refactor your way out of trouble if you go too far down the wrong path. The cost of doing so can easily become more expensive than just throwing out the whole thing and starting again.
This isn't FUD, it's a few decades of professional programming experience talking. They just happen not to be the same few decades that some of the other posters here have had. And as once again I seem to keep writing now, that is why it is unwise to generalise too far from personal experience or a small anecdotal base to an entire field as vast as programming.
I don't find having varying levels of data and confidence about the likely future directions of a project to be "clutter", nor do I find they tend to complicate whatever I'm trying to develop right now. On the contrary, I often find it useful to have more context and frame of reference about what I'm doing, even though often I may choose not to act on that knowledge immediately for much the same reasons that others here are arguing for always following YAGNI.
Please remember that my argument is not that you should always try to anticipate future requirements or over-engineer designs to cope with every hypothetical you can think of. My position is merely that you should weigh the costs of acting unnecessarily now against the costs of not having acted when it turned out to be useful, and make a decision according to your best estimate of how likely it is that you will benefit from following either path.
YMMV, of course, and certainly sometimes the decision will be different to others.
In principle I'd agree. In practice, I think people consistently overestimate the value of planning, and my experience is so overwhelmingly in favour of not planning that I don't think it's worth spending even a small amount of time (which is a nonzero cost) trying to calculate whether this is one of the one-in-a-million cases where it would be worthwhile.
What if features X, Y, and Z needed in < 6 months each are 2x harder to code because of the planning of feature B that has a 90% chance of being needed after them? You potentially "saved" time by doing feature B but depending upon the total time to code feature X, Y, and Z, 10% of the extra difficulty in features X Y and Z may still outstrip 90% of the time saved in feature B.
Nevermind that as feature B goes unused for 6 months bugs and incompatibilities may creep in. Eventually what you wrote for feature B may be mostly useless when you finally get around to actually needing it. I have had this happen on projects before.
I agree. Projects should not be scheduled in isolation. Delay may be a great idea, or a terrible idea. Sometimes prediction is easy, sometimes it is not. Trying to oversimplify the decision making is a mistake.
I think it is important to differentiate between "early building" of features you don't yet need, and building in a way that doesn't require much refactoring in order to extend/add those features on.
More common than overbuilding features is overbuilding function or library capability. The right balance, IMHO, is _thinking_ about the extensibility & designing for it but leaving out the actual details. This tends to lead toward good, SOLID designs that are easy to work with later, rather than a blind alley you need to spaghetti-mofongorate to get to work.
"Refactor later" is fine for folks like Martin Fowler who know _how_, and who haven't painted themselves into a corner with bad early decisions. It is a bit dodgy to tell the average mid-level dev who just wants to "get it done" so his Project Manager can check off their Excel spreadsheet that they are on time.
Extensibility is itself a feature, so YAGNI would imply it shouldn't be assumed until proven necessary.
I personally use the "zero, one, many" rule (i.e. those are the only counts of things, there is no such thing as 2 or 3, etc.) as my proof of necessity. If I ever get to the point that I need two of something, I make it useful for many of something.
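As a toy sketch of the rule (my own invented example): the first requirement is written for exactly one, and it only becomes "many" on the day a second instance actually appears.

    # First requirement: exactly one notification channel. Write exactly that.
    def notify(message: str, send_email) -> None:
        send_email(message)

    # The day a second channel is genuinely required, generalise to "many":
    def notify_all(message: str, channels) -> None:
        for send in channels:          # zero, one, or many -- same code path
            send(message)

    # notify_all("shipped", [print])   # works for a single channel too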
I've found that designing for extensibility is wrong more often than right. The code doesn't stretch the way you expect, and then it strains against other boundaries you didn't realize you were setting.
saying, "I think it will need feature X, which will require parameter x_id, so I'll add x_id = 0 now" is wrong.
writing (just as an example) a function that takes a config array as its param so you can later build out, instead of passing a_id, b_id, c_id...."oh, crap, how long a list will this be?" in the early build is the type of thing I mean.
Writing extensible code more often than not causes you to write smaller, tighter, more testable components. Writing in this style is what I'm trying to say. Many less experienced devs would not do this and end up with long procedural code because they feel it is wasted time for something they aren't anticipating.
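Something like this is the kind of thing I mean - a toy Python sketch with invented names, not anyone's real API:

    # Rigid: every new id means another positional parameter and an edit at
    # every call site.
    def build_report_rigid(a_id, b_id, c_id):
        return {"a": a_id, "b": b_id, "c": c_id}

    # Room to grow without speculation: callers pass a small mapping; only the
    # keys today's feature needs are validated, and new keys cost nothing later.
    def build_report(config: dict) -> dict:
        missing = {"a_id"} - set(config)     # only what's required right now
        if missing:
            raise ValueError(f"missing config keys: {missing}")
        return dict(config)

    # build_report({"a_id": 1, "b_id": 2})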
The only thing I design to extend is the data. That is, I get suspicious now when I take an existing method and try to parametrize it, because what I'm doing if I do that is building up a richer all-purpose code edifice - and that is a Bad Idea when we know that code is disposable and data isn't. By default, all programmers start by planning and testing code towards one use-case, and then they add the other cases one by one. Code that is arbitrarily made "generic" or "extensible" imposes a testing regime that programmers do not arrive at within the natural edit-test-debug cycle, but instead have to consider separately.
With data this isn't really the case. Achieving both YAGNI and DRY at the same time is _easy_ with data. Data tends to become more generic and more fungible as it becomes more crude and simplistic, and less so as a defined schema builds up - for example, a Point class with "x" and "y" fields, vs. a numeric array of 2 or more entries. With the former, you can end up with your Point not being the same type as someone else's completely equivalent Point. With the latter, agreement mostly rests on which kind of numeric type you're using(which can be enforced by the language), and the ability to operate on many points as well as just one point is straightforward to support yet not strictly required. And so if the goal is reuse, you design downwards; if it's type safety, you design upwards. Achievement of both imposes an enormous code burden as you try to enumerate all the types and provide various translation layers and fallbacks - in turn creating more code paths, more situations where you can throw an exception or return a null value, etc. That can add up to a lot of lines of code, and a performance tax, and more failure modes, in exchange for sometimes eliminating some runtime errors.
I think we naturally tend towards overdefining our data, though, because it looks more "correct" to tightly define each block of code so that it says "point.x" instead of something like "x = point[index * 2]".
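A crude Python illustration of "designing downwards" for reuse (my own example, with all the caveats above about what you give up in type safety):

    import math

    # Points are just interleaved floats; the same flat list serves one point,
    # many points, or anyone else's "completely equivalent Point".
    def path_length(coords: list) -> float:
        # coords = [x0, y0, x1, y1, ...]
        return sum(
            math.hypot(coords[i + 2] - coords[i], coords[i + 3] - coords[i + 1])
            for i in range(0, len(coords) - 2, 2)
        )

    # path_length([0.0, 0.0, 3.0, 4.0])  -> 5.0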
You say so much that I agree with, except your core thesis.
YAGNI must be applied with DRY, yes. Code is disposable (maybe-leaning-to-yes), data is not, yes.
But not properly defining your data is a huge mistake. Torvalds has been noted to say: "git actually has a simple design, with stable and reasonably well-documented data structures. In fact, I'm a huge proponent of designing your code around the data, rather than the other way around, and I think it's one of the reasons git has been fairly successful […] I will, in fact, claim that the difference between a bad programmer and a good one is whether he considers his code or his data structures more important."
Well-documented data structures. Schema is data structure documentation.
Your Point example is good. A Point is not a Size, but an array of numbers could be interpreted as either, or as a Latitude/Longitude pair, or as anything else. How do we know, looking at an array of numbers, what it is meant to be?
But if we knew it was a Point (because the type expressed it as such), then we'd also get to assume we can do point-like things to it.
Of course, YAGNI would suggest not creating Point and Size classes until you need to differentiate between multiple types of ordered pairs. That's fine. If your system only ever handles point data, then it only ever handles point data and we state that up front, allowing us to assume "oh, an array of numbers? Must be a point".
But being able to handle different types of data is a feature, and if you prove you need it, then you need it. Write that Point and write that Size class.
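And a small sketch of what writing that Point and that Size class buys you once the need is proven (again my own toy example, not from the thread):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Point:
        x: float
        y: float

    @dataclass(frozen=True)
    class Size:
        width: float
        height: float

    def translate(p: Point, by: Size) -> Point:
        # A bare pair of numbers could be either; the types document the intent,
        # and a type checker rejects translate(size, point) mix-ups.
        return Point(p.x + by.width, p.y + by.height)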
There's a lot of times where you know that something needs to be extensible, but you only know the immediate requirement.
From Martin's example, the team needs to support storm risk, and think they need piracy risk in the future. I'd suggest that they are 100% sure they need to support multiple risk calculations, they just only have the requirements for the first type now.
It's at this point you have several options, but choosing something that is a bit more generic than the base case, without being an over-architected mess, will save you a lot of time in the future.
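"A bit more generic than the base case" can be as small as this (hypothetical sketch; the article's domain, but my own names):

    from typing import Callable, List

    # A risk calculation is just "shipment -> surcharge". Today there is exactly
    # one, but the list makes "add piracy later" an append rather than a redesign.
    RiskCalculator = Callable[[dict], float]

    def storm_risk(shipment: dict) -> float:
        return 0.02 * shipment["value"] if shipment.get("storm_season") else 0.0

    RISK_CALCULATORS: List[RiskCalculator] = [storm_risk]

    def total_risk(shipment: dict) -> float:
        return sum(calc(shipment) for calc in RISK_CALCULATORS)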
Except "over-architected mess" is what any generic thing is, if you don't need it.
It's very, very easy to convince yourself that a particular extensibility point is "obviously" going to be needed down the road. It's very, very easy to be wrong about this, and extraordinarily common to find out that your attempt at making it generic now makes it harder to extend in the way you actually need it extended.
It turns out if you make things bone-dumb to begin with, they're never really that hard to extend later on. Anyone can refactor really ridiculously simple stuff.
I'm always a bit surprised when we treat "cost" as something binary.
Successful people, in my experience, are those who take calculated risks and are right often enough to differentiate themselves. Software is no different.
When faced with a YAGNI situation, you need to make a judgement call. There will be costs in building it now. Those costs might be less now vs later (sometimes just because the context is fresh). The cost may vary depending on how much work you do.
There will also be "expected benefits". The estimated (judgement call) likelihood of actually needing it times the cost savings (future cost - current cost). Compare the expected benefits between projects and pick the one you expect to add the most value.
As pointed out by others, this often means simply drawing the abstraction in the right spot to make future development easier. Even if future development never happens, it's likely you built a good abstraction that was well thought out. The cost is low (you had to frame the abstraction somewhere) and the expected benefits are high.
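As a back-of-the-envelope version of that judgement call (all numbers invented, just to show the shape of the calculation):

    # Invented numbers -- the point is the shape of the calculation, not the values.
    p_needed      = 0.6    # estimated chance the capability is ever needed
    cost_now      = 3.0    # person-days to build it today, while context is fresh
    cost_later    = 8.0    # person-days to retrofit it later
    carrying_cost = 2.0    # person-days of extra maintenance/complexity meanwhile

    expected_benefit = p_needed * (cost_later - cost_now) - carrying_cost
    print(expected_benefit)   # 1.0 here: marginally worth it; vary p_needed and see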
Yagni only applies to capabilities built into the software to support a presumptive feature, it does not apply to effort to make the software easier to modify.
And here is where the actual frontlines of the battle are fought--because it's needlessly verbose and abstract architectures (or what people think are them, anyways) that are railed against by folks chanting "yagni yagni yagni".
This article neatly sidesteps the actual thorny issue of "well, when is designing extra actually a good idea?" by ignoring one of the biggest use-cases of yagni.
I hear you, I think this article is really good for someone who isn't familiar with the term YAGNI. However as the above poster mentioned it sidesteps a big issue which is when YAGNI is wrong.
Generally I feel like YAGNI is appropriate for features but not appropriate for architecture. I feel like the purpose of architecture is to try to plan ahead, or to determine how far ahead one is attempting to plan. It's perfectly fine to architecturally choose not to support something, but it should be a conscious decision.
If I may extend your metaphor, the problem then becomes the scaffolding that lies between architecture and features. Another attempt at a rule of thumb is to ask the question: can I describe how this likely-but-not-yet-needed change would be addressed in the code? If there's no answer (or the answer is not satisfactory), the question becomes: what do I need to do to have a good answer to the previous question? And so on.
This is just my experience, but "YAGNI" is often shorthand for "I don't want to think about it." I'm not talking about criticisms of wasteful design/features, but somebody actually saying "YAGNI" with little elaboration. It's imprecise and lazy.
I especially hate hearing it from well-meaning folks who, say, bring it up whenever talking about architecture decisions and devops stuff, but apparently see no reason not to spend one or two weeks fiddling around making The World's Best Angular Form Field Input Directive, or painstakingly tweaking the secondary and tertiary colors in their alert boxes or some damn fool thing.
EDIT:
At this point, especially in the web, I'm now considering Diarrhea Driven Development: shit out as much as fast as you can and make it as viral and sticky as possible.
I think the ability to use YAGNI scales with the experience and abilities of the developer and the team. I have walked into wonderful lean codebases where an experienced hand has kept featuritis down to a minimum, but I have also walked into codebases where multiple kitchen sinks were not only present and plumbed in, but a pile of new ones were ready and waiting by the side to join them, and it was a massive distraction.
These comments seem to illustrate something I have felt for a while: yagni is an incredibly divisive topic, and for every person praising it there is another who is at the very least cautious of its application.
It makes me wonder where the divide is.
Perhaps it's the large 'enterprise' shops, where projects can spin out of control into what they 'must have', versus the small shops where you need to get a product out ASAP?
Or maybe age of developer, with the young eager guy confident he can whip out all n features in 3 weeks so we might as well add that feature too vs the experienced dev who knows better and we will add it once it is needed?
I have put some time and effort into asking people about this very question, but still am no closer to understanding why some really appreciate it and others hate the phrase.
Personally, my guiding principle after being bitten by yagni is that it is a wonderful principle to use when making product/feature decisions but should be exercised very carefully when making architecture decisions. Which makes sense: features come and go, but architecture is generally something you need to live with. In other words, whenever yagni is used as an argument against what are really good coding and design practices rather than practical problems, it may be more a case of 'no really, we are going to need it'.
Funny, I think of the age divide running the other way. Younger devs, who came up during the age of agile, take YAGNI as a given, while older devs have seen the results of too many hasty decisions.
The advice I was given when I started out was, "it's cheaper to fix something upstream". Catching a potential problem during design is always cheaper than catching it during development, which is cheaper again than catching it after release.
This argument always seems to get lost in the YAGNI discussion.
YAGNI only makes sense in the context of other extreme programming[1] practices, such as making code safe to modify through unit tests[2] and easy to modify because you refactored mercilessly[3] to produce a nice, modular design.
I suspect the divide in reactions to YAGNI comes between people who have embraced those practices and those who haven't (and probably don't even know what they would look like, and can't imagine an environment that uses them).
In the context of a project where making changes is hard and dangerous, and where you have few automated tests, you will be tempted to change a lot at once, so that you can then manually check everything works once, rather than having to do that multiple times. And that might well be the better strategy.
Consider the question "What if we need extra fields in our 'product' table?". The Extreme Programming answer is both 'embrace change' and YAGNI. That means 1) we absolutely definitely require some general mechanism to be able to add columns to tables in the future, so we must have that in place, and not assume the product table schema is fixed forever 2) we have no idea what columns or how many columns will be needed in the future, so we don't add any now speculatively.
But many people work in environments where adding a column to a table would be a massively expensive task (thousands of stored procedures that need rewriting etc.). In this context, the 'YAGNI' response which just refuses to think about future does sound stupid, because it is.
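The "general mechanism" half of that can stay very small; something like an ordered list of migrations is enough (sketch in Python/sqlite, not tied to any particular tool):

    import sqlite3

    # Each migration is plain SQL. New requirements append to the list; nothing
    # speculative is added up front, but adding a column later is routine.
    MIGRATIONS = [
        "CREATE TABLE IF NOT EXISTS product (id INTEGER PRIMARY KEY, name TEXT)",
        # future: "ALTER TABLE product ADD COLUMN colour TEXT",
    ]

    def migrate(conn: sqlite3.Connection) -> None:
        (version,) = conn.execute("PRAGMA user_version").fetchone()
        for i, statement in enumerate(MIGRATIONS[version:], start=version):
            conn.execute(statement)
            conn.execute(f"PRAGMA user_version = {i + 1}")
        conn.commit()

    migrate(sqlite3.connect(":memory:"))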
I think it works beautifully if you're applying it along with "refactor mercilessly", backed by good unit tests. It's a principle that doesn't necessarily stand on its own; it needs to be part of a healthy, iterative system of development.
If refactoring becomes difficult or dangerous for whatever reason (eg, designing an API for long-term, widespread usage), then YAGNI may not work.
The use case in these discussions is never about framework or shared code.
Taking the example of the lookup tables for error messages, imagine if you shipped your framework and lots of teams started using it, then you iterate to change your APIs in a breaking way and suddenly the cost of that change is multiplied by all the teams refactoring.
It is also worth considering the cost of changing existing production code, this is typically where most bugs are introduced, especially when someone else is implementing the fix without the domain knowledge of the original author. Unit tests won't help if they also have to be re-written to match the new API, as they are by definition new/different tests.
I think writing malleable code is the key takeaway from this article, and if thinking about future possibilities helps with this, then it should be encouraged.
Being unable to predict the future doesn't mean you shouldn't anticipate and prepare for change.
There's an extensive literature on opportunity costs in economics, but you wouldn't know it from reading this article, even though that's basically what it's about. Discussions of programming methodology often seem to involve reinvention of the wheel. Has there been any work done to quantify these theories?
I used to play Age of Empires. It's a real-time strategy game where a common match is for two teams of three to try to kill each other. A feature of the game is that each player can spend resources (gold, wood, etc.) to reach new "ages". Players start in the stone age, and advance to the tool, bronze and iron ages. Each new age offers better weapons, better technology, greater efficiency.
Consider two games. In the first game, we observe that the players take on average 30 minutes to get to tool age, and only one player ever makes it to iron age before a team is victorious after about two hours.
In a second game, we observe that the players average ten minutes to bronze age and all but one player makes it to iron age before the game is won after about an hour. All players invest in technology before fighting.
I can only assume that the point you're trying to make is that there's not nearly enough information for a reasonable answer. It's certainly hard to come up with solid ideas without a clear understanding of the game's pacing.
In RTS games, teching and building armies compete for resources, so I'd hazard guessing that Game 1 happens because somebody opted for early aggression, which forces everybody to spend money on units to fight off the early threat. Looks like a well balanced game where both teams kept the aggression level high, which justifies the long duration, and the low-tech end-game.
Second game looks like there was little to no early-game aggression, which meant that everybody got to funnel resources into teching up early. I'm not sure if late-tech battles are more all-or-nothing, or whether one team was just clearly stronger than the other -- though if that's the case, why didn't they push for an advantage earlier? AoE is much more symmetrical than, say, Starcraft IIRC, so one team having a weaker early game but a stronger late game doesn't seem likely.
I'm curious, though: how does this relate to the article?
The players in the first game are likely experts, and if a team from the first game plays against a team from the second, the game will be over in less than ten minutes. It's not merely that "someone opted for early aggression"; it's that when playing at the elite level all players opt for aggression. It's not rock, paper, scissors. It's a game of rock, paper.
The point is that deployed features trump undeployed "it'll be great when its ready" technology.
I'm happy for my competitors to opt for spending money attempting to achieve "good architecture" over "working code".
Ok, so the analogy is kind of shitty. Unlike Age of Empires, spending money on "good architecture" is actually less effective at creating good architecture than writing working code.
It depends on the market conditions. If you are the only player you should spend your time investing so when the competition comes out with pointy sticks they can't get through your electrified graphene perimeter.
If you don't then there is a risk you get iPhoned, Facebooked, Googled etc.
Other than the navy wiping out all the pirates, I foresee a couple of other unforeseen scenarios:
1. Within the next few months, a new risk emerges with higher priority than piracy. Implementing this feature will delay the piracy addition, but only by 1 month because much of the basic "support multiple risks" problem will be solved by this other feature.
2. Within the next few months, a new product or tool will become available/known that makes it trivial to support storms, piracy, and a dozen other risks, making this whole effort redundant. It may be impossible to migrate to the superior alternative if all the resources are already tied up in a half-finished project that has generated no benefits to date.
I love how these types of articles always bring out the architecture astronauts with all their pronouncements of "it depends" and whatnot. How hard is it to understand the notion of "building what you need, and not what you don't"?
> How hard is it to understand the notion of "building what you need, and not what you don't"?
Very hard, because it's in fact a tradeoff. Let's say you are building a house. Will you have wife and kids in the future? YAGNI says you can always expand your house once you have kids. But do people really build houses that way?
To take a more extreme example, the first thing you expect from housing is protection from the elements. So according to YAGNI, you should build the roof first and see if it's good enough for you (never mind, for instance, that you can only stand up in the middle of the room). Again, no one really does that with real houses.
The whole art of engineering is to pick the reasonable tradeoffs, and this includes hedging against the risk of future expansion. That's why simple pronouncements such as YAGNI are not of much help.
> To take a more extreme example, the first thing you expect from housing is protection from the elements. So according to YAGNI, you should build the roof first and see if it's good enough for you (never mind, for instance, that you can only stand up in the middle of the room). Again, no one really does that with real houses.
Maybe they should. I once stayed in a house with no internal walls. It seemed weird at first, but then I realized I actually quite liked it.
The extreme examples are just to illustrate the tradeoff.
Some architectural decisions have to be made in advance, because they are hard to take back. In your example, size of the property (if any) on which your house is standing is such a decision. Good luck building a pool in a 4-bedroom apartment.
(Interestingly, you said, "like pretty much everyone does" - isn't that actually sounder advice than YAGNI?)