
I find that it’s typically the other way around: things like DRY, SOLID and most of “clean code” are hopeless anti-patterns peddled by people like Uncle Bob, who hasn’t actually worked in software development since Fortran was the most popular language. Not that these ideas are all bad as principles; they come with plenty of “okay-ish” advice, but if you follow them religiously you’re going to write really bad code.

I think the only principle in programming that can be followed at all times is YAGNI (you aren’t going to need it). Every programming course, book, whatever should start by telling you to never, ever abstract things until you absolutely can’t avoid it. This includes DRY. It’s a billion times better to have similar code in multiple locations, each isolated in its purpose, so that down the line, two hundred developers later, you’re not sitting with code where you need to “go to definition” fifteen times before you reach the code you’re actually looking for.

Of course the flip side is that, sometimes, it’s OK to abstract or reuse code. But if you don’t have to, you shouldn’t do either. Which is exactly the opposite of what junior developers do, because juniors are taught all these “hopeless” OOP practices and to follow them mindlessly, by the book. Then ten years later (or about fifty in the case of Uncle Bob) they realise that functional programming is just easier to maintain and more fun to work with, because everything you need to know sits right next to the code that uses it and not in some obscure service class deep in some ridiculous inheritance tree.




The problem with repeating code in multiple places is that when you find a bug in that code, it won’t actually get fixed in all the places where it needs to be fixed. For larger projects especially, deduplicating is usually a worthwhile tradeoff, even if it means peeling off some extra abstraction layers when reading the code.

The problems usually start when people take this as an opportunity to go nuts generalizing the abstraction right away: instead of refactoring the common piece of code into a simple function, it becomes a generic class hierarchy meant to cover all conceivable future cases (but, somehow, rarely the actual future use case, should one arise in practice).
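
To make that contrast concrete, here’s a minimal hypothetical sketch (the format_price name and the currency example are made up for illustration, not from any real codebase): the first version just lifts the duplicated snippet into one plain function; the second generalizes it into a hierarchy before a second use case even exists.

    # Simple: the duplicated snippet becomes one plain function.
    def format_price(amount_cents: int, currency: str = "USD") -> str:
        return f"{amount_cents / 100:.2f} {currency}"

    # Over-general: an abstract hierarchy plus a factory for cases nobody has yet.
    from abc import ABC, abstractmethod

    class PriceFormatter(ABC):
        @abstractmethod
        def format(self, amount_cents: int) -> str: ...

    class UsdPriceFormatter(PriceFormatter):
        def format(self, amount_cents: int) -> str:
            return f"{amount_cents / 100:.2f} USD"

    class PriceFormatterFactory:
        def create(self, currency: str) -> PriceFormatter:
            if currency == "USD":
                return UsdPriceFormatter()
            raise ValueError(f"unsupported currency: {currency}")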

Most of this is just cargo cult thinking. OOP is a valid tool on the belt, and it is genuinely good at modelling certain things - but one needs to understand why it is useful there to know when to reach for it and when to leave it alone. That is rarely taught well (if at all), though, and even if it is, it can be hard to grok without hands-on experience.


We agree, but we’ve come to different conclusions, probably based on our experiences. Which is why I wanted to convey that I think you should do these things in moderation. As an example, I almost never use classes, and even more rarely inheritance. That doesn’t mean I wouldn’t make a “base class” containing things like “owned by, updated by, some timestamp” or whatever you would want added to every data object in some traditional system, and then inherit from that. I would; I might even make multiple “base classes” if it made sense.
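
For what it’s worth, a rough sketch of the kind of “base class” I mean might look like this (the field and class names here are my own assumptions, just to illustrate the shape):

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AuditedRecord:
        # Bookkeeping fields shared by every data object in the system.
        owned_by: str = ""
        updated_by: str = ""
        updated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @dataclass
    class Invoice(AuditedRecord):
        # Domain fields stay in the concrete class; only the audit fields are inherited.
        number: str = ""
        amount_cents: int = 0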

What I won’t do, however, is abstract code before I have to. More than that: as soon as that shared code stops being shared, I’ll stop doing DRY. Not because DRY is necessarily bad, but because of the way people write software, which all too often leads to a dog that will tell you dogs can’t fly when you call fly() on it. Yes, I know that’s ridiculous, but I’ve never seen a “clean” system that didn’t eventually end up like that. People like Uncle Bob will tell you that’s because people misunderstood the principles, and they’d be correct. But maybe the principles are simply bad if so many people misunderstand them?
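
The fly() failure mode, as a tiny hypothetical sketch: an over-broad base class forces a subclass to break its promise at runtime.

    class Animal:
        def fly(self) -> str:
            raise NotImplementedError

    class Bird(Animal):
        def fly(self) -> str:
            return "flap flap"

    class Dog(Animal):
        def fly(self) -> str:
            # The hierarchy promises an ability this type doesn't actually have.
            raise RuntimeError("dogs can't fly")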





