Or rather, senior devs have learned to care more about having clear code than about (over-)applying principles like DRY, separation of concerns, etc., while juniors haven't (yet)...
I know it's overused, but I do find myself saying YAGNI to my junior devs more and more often, as I find they go off on a quest for the perfect abstraction and spend days yak shaving as a result.
Yes! I work with many folks objectively way younger and smarter than me. The two bad habits I try to break them of are abstractions and what ifs.
They spend so much time chasing perfection that it negatively affects their output. Multiple times a day I find myself saying 'is that a realistic problem for our use case?'
I don't blame them, it's admirable. But I feel like we need to teach YAGNI. These days I feel like a saboteur, polluting our codebase with suboptimal solutions.
It's weird because my own career was different. I was a code spammer who learned to wrangle it into something more thoughtful. But I'm dealing with overly thoughtful folks I'm trying to get to spam more code out, so to speak.
I’ve had the opposite experience before. As a young developer, there were a number of times where I advocated for doing something “the right way” instead of “the good enough way”, was overruled by seniors, and then later I had to fix a bug by doing it “the right way” like I’d wanted to in the first place.
Doing it the right way from the start would have saved so much time.
This thread is a great illustration of the reality that there are no hard rules, judgement matters, and we don't always get things right.
I'm pretty long-in-the-tooth and feel like I've gone through 3 stages in my career:
1. Junior dev where everything was new, and did "the simplest thing that could possibly work" because I wasn't capable of anything else (I was barely capable of the simple thing).
2. Mid-experience, where I'd learned the basics and thought I knew everything. This is probably where I wrote my worst code: over-abstracted, using every cool language/library feature I knew, justified on the basis of "yeah, but it's reusable and will solve lots of stuff in future even though I don't know what it is yet".
3. Older and hopefully a bit wiser. A visceral rejection of speculative reuse as a justification for solving anything beyond the current problem. Much more focus on really understanding the underlying problem that actually needs to be solved: less interest in the latest and greatest technology to do that with, and a much larger appreciation of "boring technology" (aka stuff that's proven and reliable).
The focus on really understanding the problem tends to create more stable abstractions which do get reused. But that's emergent, not speculative ahead-of-time. There are judgements all the way through that: sometimes deciding to invest in more foundational code, but by default sticking to YAGNI. Most of all is seeing my value not as wielding techno armageddon, but as solving problems for users and customers.
I still have a deep fascination with exploring and understanding new tech developments and techniques. I just have a much higher bar to adopting them for production use.
We all go through that cycle. I think the key is to get yourself through that "complex = good" phase as quickly as possible so you do the least damage and don't end up in charge of projects while you're in it. Get your "Second System" (as Brooks[1] put it) out of the way as quick as you can, and move on to the more focused, wise phase.
Don't let yourself fester in phase 2 and become (as Joel put it) an Architecture Astronaut[2].
Heh, I've read [2] before but another reading just now had this passage stand out:
> Another common thing Architecture Astronauts like to do is invent some new architecture and claim it solves something. Java, XML, Soap, XmlRpc, Hailstorm, .NET, Jini, oh lord I can’t keep up. And that’s just in the last 12 months!
> I’m not saying there’s anything wrong with these architectures… by no means. They are quite good architectures. What bugs me is the stupendous amount of millennial hype that surrounds them. Remember the Microsoft Dot Net white paper?
Nearly word-for-word the same thing could be said about JS frameworks less than 10 years ago.
Both React and Vue are more than 10 years old at this point. Both are older than jQuery was when they were released, and both have a better backward compatibility story. The only two real competitors aren't that far behind either. It's about time for this crappy frontend meme to die.
Even SOAP didn't really live that long before it started getting abandoned en masse for REST.
As someone who was there in the "last 12 months" Joel mentions, what happened in enterprise is like a different planet altogether. Some of this technology had a completely different level of complexity that to this day I am not able to grasp, and the hype was totally unwarranted, unlike actual useful tech like React and Vue (or, out of that list, Java and .NET).
> The focus on really understanding the problem tends to create more stable abstractions which do get reused. But that's emergent, not speculative ahead-of-time.
I think this takes a kind of humility you can't teach. At least it did for me. To learn this lesson I had to experience in reality what it's actually like to work on software where I'd piled up a bunch of clever ideas and "general solutions". After doing this enough times I realized that there are very few general solutions to real problems, and likely I'm not smart enough to game them out ahead of time, so better to focus on things I can actually control.
> Most of all is seeing my value not as wielding techno armageddon, but solving problems for users and customers
Also later in my career, I now know: change begets change.
That big piece of new code that “fixes everything” will have bugs that will only be discovered by users, and stability is achieved over time through small, targeted fixes.
> The focus on really understanding the problem tends to create more stable abstractions which do get reused. But that's emergent, not speculative ahead-of-time.
Thank you for putting so eloquently my own fumbling thoughts. Perfect explanation.
Here is an unwanted senior tip: in many consulting projects, without “the good enough way” first, there isn't anything left for doing “the right way” later on.
Why inflict that thinking on environments that aren’t consulting projects if you don’t have to? That kind of thinking is a big contributor to the lack of trust in consultants to do good work that is in the client’s best interests rather than the consultants’. We don’t need employers to start seeing actual employees in the same way too.
The important bit is figuring out if those times where "the right way" would have helped outweigh the time saved by defaulting to "good enough".
There are always exceptions, but there are typically order-of-magnitude differences between globally doing "the right thing" vs "good enough" and then going back to fix the few cases where "good enough" wasn't actually good enough.
Only long experience can help you figure this out. All projects should have at least 20% of their developers with more than 10 years there, so they have the background context to figure out what you will really need. You then need at least another 30% who are intended to be long-term employees but have less than 10 years so far. In turn, that means never more than 50% of your project should be short-term contractors. Nothing wrong with short-term contractors - they can often write code faster than the long-term employees (who end up spending a lot more time in meetings) - but their lack of context means they can't make those decisions correctly and so need to ask (in turn slowing down the long-term employees even more).
If you are on a true green-field project - your organization has never done this before - good luck. Do the best you can, but beware that you will regret a lot. Even if you have those long-term employees you will do things you regret - just not as much.
I don’t like working in teams where some people have been there for much longer than everyone else.
It’s very difficult to get opportunities for growth. Most of the challenging work is given to the seniors, because it needs to be done as fast as possible, and it’s faster in the short term for them to do it than it would be for you to do it with their help.
It’s very difficult for anyone else to build credibility with stakeholders. The stakeholders always want a second opinion from the veterans, and don’t trust you to have already sought that opinion before proceeding, if you thought it was necessary to do so (no matter how many times you demonstrate that you do this). Even if the senior agrees with you, the stakeholder’s perception isn’t that you are competent, it’s that you were able to come to the right conclusion only because the senior has helped you.
In many cases, we didn’t deliver sooner than we could have, because my solution had roughly equivalent implementation costs to the solution that was chosen instead. In some cases the bug was discovered before we’d even delivered the feature to the customers at all.
Ah, but that’s assuming the ‘right way’ path went perfectly and didn’t over-engineer anything. In reality, the ‘right way’ path being advocated for will, statistically, also waste a lot of time; over-engineering waste can and does grow exponentially, while under-engineering frequently wastes only linear and/or small amounts of time, until the problem is better understood.
Having witnessed first-hand, on more than one occasion, over-engineering waste millions of dollars and years of time at the hands of people advocating for the ‘right way’, I think tallying the time wasted upgrading an under-engineered solution is highly error prone. We need to assume that some percentage of the time we’ll have to redo things the right way, and that this isn’t actually a waste, but a cost that has to be paid to find out whether the “right way” solution is actually called for, since it often isn’t. That waste might be the lesser waste compared to something much worse, and it’s not generally possible to do the exact right amount of engineering from the start.
Someone here on HN clued me into the counter acronym to DRY, which is WET: write everything twice (or thrice) so the 2nd or 3rd time will be “right”. The first time isn’t waste, it’s necessary learning. This was also famously advocated by Fred Brooks: “Plan to Throw One Away” https://course.ccs.neu.edu/cs5500f14/Notes/Prototyping1/plan...
> In reality, the ‘right way’ path being advocated for, statistically will also waste a lot of time, and over-engineering waste can and does grow exponentially, while under-engineering frequently only wastes linear and/or small amounts of time, until the problem is better understood.
The “right way” examples I’m thinking of weren’t over-engineering some abstraction that probably wasn’t needed. Picture replacing a long procedural implementation, filled with many separate deprecated methods, with a newer method that already existed and already had test coverage proving it met all of the requirements, rather than cramming another few lines into the middle of the old implementation that had no tests. After all, +5 -2 without any test coverage is obviously better than +1 -200 with full test coverage, because 3 is much smaller than 199.
You make a strong case, and you were probably right. It’s always hard to know in a discussion where we don’t have the time and space to share all the details. There’s a pretty big difference between implementing a right way from scratch and using an existing right way that already has test coverage, so that’s an important detail, thank you for the context.
Were there reasons the senior devs objected that you haven’t shared? I have to assume the senior devs had a specific reason or two in each case that wasn’t obviously wrong or idiotic, because it’s quite common for juniors to feel strongly about something in the code without always being able to see the larger team context, or sometimes to discount or disbelieve the objections. I was there too and have similar stories to you, and nowadays sometimes I manage junior devs who think I’m causing them to waste time.
I’m just saying in general it’s healthy to assume and expect imperfect use of time no matter what, and to assume, even when you feel strongly, that the level of abstraction you’re using probably isn’t right. By the Brooks adage, the way your story went down is how some people plan for it to work up front, and if you’d expected to do it twice, then it wouldn’t seem as wasteful, right?
This isn't meant to be taken too literally or objectively, but I view YAGNI as almost a meta principle with respect to the other popular ones. It's like an admission that you won't always get them right, so in the words of Bukowski, "don't try".
Your documentation will tell you when you need an abstraction. Where there is something relevant to document, there is a relevant abstraction. If it's not worth documenting, it is not worth abstracting. Of course, the hard part is determining what is actually relevant to document.
The good news is that programmers generally hate writing documentation and will avoid it to the greatest extent possible, so if one is able to overcome that friction to start writing documentation, it is probably worthwhile.
Thus we can sum the rule of thumb up to: If you have already started writing documentation for something, you are ready for an abstraction in your code.
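To make that rule of thumb concrete, here is a contrived sketch (the function name and scenario are invented): the moment a few inline lines need a comment explaining what they do, that comment is the documentation, and the lines are an abstraction waiting for a name.

    #include <cctype>
    #include <string>

    // "Strips whitespace and upper-cases an order code so '  ab12 ' and 'AB12'
    // compare equal." If you're writing that sentence anyway, give it a name.
    std::string normalize_order_code(const std::string& raw) {
        std::string out;
        for (char c : raw) {
            if (!std::isspace(static_cast<unsigned char>(c))) {
                out += static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
            }
        }
        return out;
    }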
C++ programmers decided against NULL, and for well over a decade, recommended using a plain 0. It was only recently that they came up with a new name: nullptr. Sigh.
That had to do with the way NULL was defined, and the implications of that. The implication carried over from C was that NULL would always be a null pointer as opposed to 0, but in practice the standard defined it simply as 0 - because C-style (void*)0 wasn't compatible with all pointer types anymore - so stuff like:
void foo(void*);
void foo(int);
foo(NULL);
would resolve to foo(int), which is very much contrary to expectations for a null pointer; and worse yet, the wrong call happens silently. With foo(0) that behavior is clearer, so that was the justification to prefer it.
On the other hand, if you accept the fact that NULL is really just an alias for 0 and not specifically a null pointer, then it has no semantic meaning as a named constant (you're literally just spelling the numeric value with words instead of digits!), and then it's about as useful as #define ONE 1
And at the same time, that was the only definition of NULL that was backwards compatible with C, so they couldn't just redefine it. It had to be a new thing like nullptr.
It is very unfortunate that nullptr didn't ship in C++98, but then again that was hardly the biggest wart in the language at the time...
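For anyone who wants to see it compile, a minimal sketch of the overload-resolution difference (C++11 or later for nullptr; what foo(NULL) does depends on how your implementation defines NULL, as discussed above):

    #include <iostream>

    void foo(void*) { std::cout << "foo(void*)\n"; }
    void foo(int)   { std::cout << "foo(int)\n"; }

    int main() {
        foo(0);        // exact match: picks foo(int), even when 0 was "meant" as a null pointer
        foo(nullptr);  // std::nullptr_t converts to any pointer type: picks foo(void*)
        // foo(NULL);  // a plain-0 NULL picks foo(int) as described above;
                       // other definitions may warn or be ambiguous
    }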
There is a human side to this which I am going through right now. The first full framework I made is proving to be developer-unfriendly in the long run; I put more emphasis on performance than readability (performance was the KPI we were trying to improve at the time). Now I am working with people who are new to the codebase, and I observed they were hesitant to criticize it in front of me. I had to actively start saying "let's remove <framework name>, it's outdated and bad". Eventually I found it liberating; it also helped me detach my self-worth from my work, something I struggle with day to day.
My 'principle' for DRY is: twice is fine, thrice is worth an abstraction (if you think it has a small to moderate chance of happening again). I used to apply it no matter what, so I guess it's progress...
I really dislike how this principle ends up being used in practice.
A good abstraction that makes actual sense is perfectly good even when it's used only once.
On the other hand, the idea of deduplicating code by creating an indirection is often not worth it for long-term maintenance, and is precisely the kind of thing that will cause maintenance headaches and anti-patterns.
For example: don't mix file system or low-level database access with your business code; just create a proper abstraction. But deduplicating very small fragments of same-abstraction-level code can have detrimental effects in the long run.
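A rough sketch of the first kind, in case it helps (all names invented, C++17 for std::optional): the abstraction is worth having even if it's used only once, because it separates the I/O plumbing from the business rule.

    #include <optional>
    #include <string>

    // Hypothetical storage interface: business code depends on this,
    // not on the file system or database driver directly.
    class OrderStore {
    public:
        virtual ~OrderStore() = default;
        virtual std::optional<std::string> load(int order_id) = 0;
        virtual void save(int order_id, const std::string& payload) = 0;
    };

    // The business rule stays free of I/O details and is easy to test with a fake store.
    bool archive_order(OrderStore& store, int order_id) {
        std::optional<std::string> payload = store.load(order_id);
        if (!payload) return false;
        store.save(order_id, "[archived] " + *payload);
        return true;
    }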
I think the main problem with these abstractions is that they are merely indirections in most cases, limiting their usefulness to a handful of use cases (sometimes to things that are never going to be needed).
To quote Dijkstra: "The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise."
I can't remember where I picked it up from, but nowadays I try to be mindful of when things are "accidentally" repeated and when they are "necessarily" repeated. Abstractions that encapsulate the latter tend to be a good idea regardless of how many times you've repeated a piece of code in practice.
Exactly, but distinguishing the two requires an excellent understanding of the problem space, and can’t at all be figured out in the solution space (i.e., by only looking at the code). But less experienced people only look at the code. In theory, a thousand repetitions would be fine if each one encodes an independent bit of information in the problem space.
The overarching criterion really is how it affects locality of behaviour: repeating myself and adding an indirection are both bad, the trick is to pick the one that will affect locality of behaviour the least.
twice is fine... except some senior devs apply it to the entire file (today I found the second entire file/class copied and pasted over to another place... the newer copy is not used either)
As someone who recently had to go over a large chunk of code written by myself some 10-15 years ago I strongly agree with this sentiment.
Despite being a mature programmer already at that time, I found a lot of magic and gotchas that were supposed to be, and felt at the time, super clever, but now, without the context or a prior version to compare against, they are simply overcomplicated.
I find that it’s typically the other way around, as things like DRY, SOLID and most things “clean code” are hopeless anti-patterns peddled by people like Uncle Bob who haven’t actually worked in software development since Fortran was the most popular language. Not that a lot of these things are bad in principle. They come with a lot of “ok-ish” ideas, but if you follow them religiously you’re going to write really bad code.
I think the only principle in programming that can be followed at all times is YAGNI (you aren’t going to need it). I think every programming course, book, whatever should start by telling you to never, ever abstract things until you absolutely can’t avoid it. This includes DRY. It’s a billion times better to have similar code in multiple locations that are isolated in their purpose, so that down the line, two hundred developers later, you’re not sitting with code where you’ll need to “go to definition” fifteen times before you get to the code you actually need to find.
Of course the flip-side is that, sometimes, it’s ok to abstract or reuse code. But if you don’t have to, you should never ever do either. Which is exactly the opposite of what junior developers do, because juniors are taught all these “hopeless” OOP practices and they are taught to mindlessly follow them by the book. Then 10 years later (or like 50 years in the case of Uncle Bob) they realise that functional programming is just easier to maintain and more fun to work with because everything you need to know is happening right next to each other and not in some obscure service class deep in some ridiculous inheritance tree.
The problem with repeating code in multiple places is that when you find a bug in said code, it won't actually be fixed in all the places where it needs to be fixed. For larger projects especially, deduplicating is usually a worthwhile tradeoff versus having to peel off some extra abstraction layers when reading the code.
The problems usually start when people take this as an opportunity to go nuts on generalizing the abstraction right away - that is, instead of refactoring the common piece of code into a simple function, it becomes a generic class hierarchy to cover all conceivable future cases (but, somehow, rarely the actual future use case, should one arise in practice).
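A contrived sketch of that contrast, with everything invented for illustration: the boring function that actually solves the duplication, next to the speculative hierarchy it tends to turn into.

    #include <string>

    // The deduplication most codebases actually need: a plain function.
    std::string display_name(const std::string& first, const std::string& last) {
        return last + ", " + first;
    }

    // ...versus the "generic" version built for future formats that rarely materialize.
    class NameFormatter {
    public:
        virtual ~NameFormatter() = default;
        virtual std::string format(const std::string& first, const std::string& last) const = 0;
    };

    class LastCommaFirstFormatter : public NameFormatter {
    public:
        std::string format(const std::string& first, const std::string& last) const override {
            return last + ", " + first;
        }
    };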
Most of this is just cargo cult thinking. OOP is a valid tool on the belt, and it is genuinely good at modelling certain things - but one needs to understand why it is useful there to know when to reach for it and when to leave it alone. That is rarely taught well (if at all), though, and even if it is, it can be hard to grok without hands-on experience.
We agree, but we’ve come to different conclusions, probably based on our experiences. Which is why I wanted to convey that I think you should do these things in moderation. I almost never write classes, and even more rarely use inheritance, as an example. That doesn’t mean I wouldn’t make a “base class” containing things like “owned by, updated by, some time stamp” or whatever you would want added to every data object in some traditional system and then inherit from that. I would; I might even make multiple “base classes” if it made sense.
What I won’t do, however, is abstract code until I have to. More than that, as soon as that shared code stops being shared, I’ll stop doing DRY. Not because DRY is necessarily bad, but because of the way people write software, which all too often leads to a dog that will tell you dogs can’t fly if you call fly() on it. Yes, I know that is ridiculous, but I’ve never seen a “clean” system that didn’t eventually end up like that. People like Uncle Bob will tell you that is because people misunderstood the principles, and they’d be correct. Maybe the principles are simply bad if so many people misunderstand them, though?
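Roughly what I mean by that kind of base class, sketched out (the field names are just an example):

    #include <chrono>
    #include <string>

    // Shared bookkeeping fields that every persisted record carries.
    struct AuditFields {
        std::string owned_by;
        std::string updated_by;
        std::chrono::system_clock::time_point updated_at;
    };

    // Concrete data objects inherit the shared fields instead of repeating them.
    struct Invoice : AuditFields {
        int id = 0;
        double total = 0.0;
    };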
good devs*, not all senior devs have learned that, sadly. As a junior dev I've worked under the rule of senior devs who were over-applying arbitrary principles, and that wasn't fun. Some absolute nerds have a hard time understanding where their narrow expertise is meant to fit, and they usually don't get better with age.
I had this problem with an overzealous junior developer and the solution was showing some different perspectives. For example John Ousterhout's A Philosophy of Software Design.
The sibling comment says "fire them". That sounds glib, but it's the correct solution here.
From what you've described, you have a coworker who is not open to learning and considering alternative solutions. They are not able to defend their approach, and are instead dismissive (and using an ageist joke to do it). This is toxic to a collaborative work environment.
I give some leeway to assholes who can justify their reasoning. Assholes who just want their way because it's their way aren't worth it and won't make your product better.
To be honest, at the point where they are being insulting I also agree firing them is a very viable alternative.
However, to answer the question more generally, I've had some success first acknowledging that I agree the situation is suboptimal, and giving some of the reasons. These reasons vary; we were strapped for time, we simply didn't know better yet, we had this and that specific problem to deal with, sometimes it's just straight up "yeah I inherited that code and would never have done that", honestly.
I then indicate my willingness to spend some time fixing the issues, but make it clear that there isn't going to be a Big Bang rewriting session, but that we're going to do it incrementally, with the system working the whole time, and they need to conceive of it that way. (Unless it's the rare situation where a rewrite is actually needed.) This tends to limit the blast radius of any specific suggestion.
Also, as a senior engineer, I do not 100% prioritize "fixing every single problem in exactly the way I'd do it". I will selectively let certain types of bad code through so that the engineer can have experience of it. I may not let true architecture astronautics through, but as long as it is not entirely unreasonable I will let a bit more architecture than perhaps I would have used through. I think it's a common fallacy of code review to think that the purpose of code review is to get the code to be exactly as "I" would have written it, but that's not really it.
Many people, when they see this degree of flexibility, and that you are not riding to the defense of every coding decision made in the past, and are willing to take reasonable risks to upgrade things, will calm down and start working with you. (This is also one of the subtle reasons automated tests are super super important; it is far better for them to start their refactoring and have the automated tests explain the difficulties of the local landscape to them than a developer just blathering.)
There will be a set that do not. Ultimately, that's a time to admit the hire was a mistake and rectify it appropriately. I don't believe in the 10x developer, but not for the usual egalitarian reasons... for me the problem is I firmly, firmly believe in the existence of the net-negative developer, and when you have those the entire 10x question disappears. Net negative is not a permanent stamp, the developer has the opportunity to work their way out of it, and arguably, we all start there both as a new developer and whenever we start a new job/position, so let me soothe the egalitarian impulse by saying this is a description of someone at a point in time, not a permanent label to be applied to anyone. Nevertheless, someone who insists on massive changes, who deploys morale-sapping insults to get their way, whose ego is tied up in some specific stack that you're not using and basically insists either that we drop everything and rewrite now "or else", who one way or another refuses to leave "net negative" status... well, it's time to take them up on the "or else". I've exaggerated here to paint the picture clearly in prose, but, then again, of the hundreds of developers I've interacted with to some degree at some point, there's a couple that match every phrase I gave, so it's not like they don't exist at all either.
You mean they literally say "ok boomer"? If so they are not mature enough for the job. That phrase is equivalent to "fuck off" with some ageism slapped on top and is totally unacceptable for a workplace.