Old Code Gets Younger Every Year (medium.com/bellmar)
240 points by mbellotti on June 8, 2020 | 188 comments



> The real horseman of the legacy apocalypse is the depth of the dependency tree.

I won't speak for Java or Python. For Node/JS, I do not see this being a problem for the future. The reason is that npm is so fragile that the entire thing falls over today if you just look at it wrong. The very second you run "npm install" you have a mountain of instant tech debt. The primary reason for this is that the people writing JavaScript do not know what they are doing. They have not cut their teeth on libraries in C/C++ or other languages and have no clue how backwards compatibility or versioning works. If you doubt what I'm saying, then I invite you to do your own quality inspection of any major JS library or framework out there. I'm not going to name names, but the vast majority of it is pure shit.

In my experience, JS code has a life expectancy of about a year and a half. Give or take. After that, it is rewritten. Which also contributes to the lack of quality in the JS ecosystem. There is a built-in assumption that this code won't survive past the time your average dev gets bored of it.

People today do not realize there are JS frameworks that predate React/Angular/etc. and have already died. Once-popular frameworks. And I'm not talking jQuery. You don't see them because JS development = churn. You don't even see CoffeeScript mentioned today, and that was just a few years ago.


This sounds like the sort of thing I hear from folks who have only passively dealt with the Javascript ecosystem but somehow still seem to think they have a really great handle on the entire situation, how it got there, what exactly the root of the problems are, and (sometimes) how it can be fixed.

And while I partially agree in some cases, I don't think this is in any way a problem specific to the JS ecosystem. There is definitely a lack of quality in a whole lot of JavaScript code out there. But I've also seen TONS of horribly written Ruby gems, .Net libraries, Java apps, even Dockerfiles.

It is just much more prevalent when those ecosystems are still "maturing" - which I think JS, Node, NPM, etc still very much are. Even the early Android days were plagued with tons of horrible options for networking libraries, SQLite helpers, or whatever else wasn't fleshed out well. And that was just the _platform_ maturing with a language that was already quite mature. With JS you have not only the language itself maturing and changing rather quickly, but also all the underlying platforms, package management systems, security issues, etc.

Patterns, best practices, and standards often take quite a while to surface and become established. And they seem so obvious in hindsight! Which is why I think we get a lot of people coming from other ecosystems who seem to have selectively forgotten about that happening in their own established tech niche.


> This sounds like the sort of thing I hear from folks who have only passively dealt with the Javascript ecosystem but somehow still seem to think they have a really great handle on the entire situation, how it got there, what exactly the root of the problems are, and (sometimes) how it can be fixed.

Never in the last 11 years have I not been professionally developing JavaScript to some degree, and I fully stand by what was said in the post above.

> It is just much more prevalent when those ecosystems are still "maturing" - which I think JS, Node, NPM, etc still very much are.

Then why are things getting worse?

> Patterns, best practices, and standards often take quite a while to surface and become established.

The best practice for JavaScript at this point is to use the few good libraries (in most cases, that's literally only React) and beyond that, do as little in JavaScript as you can to minimize its impact on your system. User experience comes before everything, so that means you still have to do a lot of JavaScript.


It's very limited to think that the user experience comes before everything. If I focus purely on the user and not my business, I don't have a business. If I do something good for the user experience that in fact breaks the law, then I am no longer in business. The user is incredibly important, but it's all a balancing act of resources and where to dedicate them.


If your business doesn't serve its users, I am not particularly concerned that you don't have a business.

It's my impression that most relevant laws are geared toward protecting users from businesses, but I'd be open to hearing if you know of any exceptions (GDPR isn't one--secretly collecting data on users without their consent isn't serving users, it's harming users).


User experience is a broad term. People here generally think of speed, but in fact that can actually result in a worse user experience. I believe users react better to a loader that takes longer for certain tasks like calculating tax returns because they like to think the computer worked hard to get the best result. It’s not always black and white.

I have had to implement features that are awful for the users, but are required by law. Others to protect the business from liability.

I’m not saying don’t serve users; the original comment said “user experience” and was specifically mentioning JavaScript in terms of a presumably slow site. User experience is many things, and all of them need to be considered. I totally agree that sites have gotten way too bloated, but the JS community is recognizing that and trying to work on it; lazy loading and improvements to module bundling are going to help. I think the craze of tiny packages is coming to an end. Libraries post proudly when they have zero dependencies. As the community evolves it’ll improve, though some sites will probably continue to get worse as higher-ups demand more features.


> I believe users react better to a loader that takes longer for certain tasks like calculating tax returns because they like to think the computer worked hard to get the best result. It’s not always black and white.

This is a concern with creating the impression that you're serving the user, rather than actually serving the user.

> I have had to implement features that are awful for the users, but are required by law. Others to protect the business from liability.

This is basically what you said in the previous post, and remains lacking in concrete examples.


> awful for the users, but are required by law

I am really curious what sort of law does this. (Though maybe I'm better off not knowing...?)


> Never in the last 11 years have I not been professionally developing JavaScript to some degree

I know a designer who has also been professionally developing JS to some degree for over 15 years. Still very much a JS novice.

The terms "passively dealt with the Javascript" and "to some degree" sound like the same thing to me.


How’s it getting worse, exactly? I must admit I am intrigued by this contrarian view, but it feels light on specifics, so I don’t have any way to address it on a point-by-point basis. Could you elaborate on your positions here?


Classes: JavaScript developers couldn't be arsed to learn prototypical inheritance, so they added class-based inheritance. Except they didn't really--classes in JavaScript are just an abstraction built around prototypical inheritance. So now you have to understand prototypical inheritance to understand how the class abstraction leaks. It's fewer lines of code, but it brings in a bunch of implicit complexity.
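A minimal sketch of what I mean (names invented for illustration) - the class form and its prototype-based equivalent desugar to the same thing:

    // A "class" and its prototype-based equivalent. The class syntax is
    // sugar: both produce a constructor function plus a prototype object.
    class Animal {
      constructor(name) { this.name = name; }
      speak() { return this.name + ' makes a sound'; }
    }

    function ProtoAnimal(name) { this.name = name; }
    ProtoAnimal.prototype.speak = function () {
      return this.name + ' makes a sound';
    };

    // The abstraction leaks: a class is still just a function with a
    // prototype, so understanding it requires knowing the prototype chain.
    console.log(typeof Animal); // "function"
    console.log(Object.getPrototypeOf(new Animal('cat')) === Animal.prototype); // true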

Promises: this is just syntactic sugar around callbacks, and does nothing to address the underlying problems of why callbacks were painful. It does introduce two ways of doing the same thing though, so now if you want to write any sort of abstractions you have to handle both ways.
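To make that concrete, here's a minimal sketch (invented names) of the same deferred operation written both ways, and the awkwardness of an abstraction that must support both:

    // The same async work as a Node-style callback and as a promise.
    function addLaterCb(a, b, cb) {
      setTimeout(() => cb(null, a + b), 100);
    }

    function addLaterPromise(a, b) {
      return new Promise((resolve) => setTimeout(() => resolve(a + b), 100));
    }

    // Any generic wrapper now has to handle both calling conventions:
    function addLater(a, b, cb) {
      if (typeof cb === 'function') return addLaterCb(a, b, cb);
      return addLaterPromise(a, b);
    }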

Leaning hard into global frameworks that actively resist reusability and composition: Angular, Ember, etc. just start from the assumption you're writing an SPA. React is a rare gem of an exception, and is an awesome project, but there you have Flux/Redux/etc. trying to turn every React app into an SPA. This is probably the clearest example of JS projects being unsustainable: if you sacrifice reuse and composability in order to have a framework that does a lot for you, you can spin up projects quickly, but the project will slowly grind to a halt. React projects are great as long as you can fight the fight to keep Redux et al. out, but eventually a Redux fanboy gets it in and it quickly consumes your project. Global state is bad, folks. And don't tell me about reducers: I know, and if you think that means there isn't global state, that's only because you don't understand what global state and/or reducers are. I also am not interested in hearing about the other library you use to clean up your Redux code--instead of piling on more libraries to fix the problems you introduced, don't introduce the problems in the first place.


The classes thing is even worse than what you are describing since it is the only way to create correctly functioning sub-types of built-ins (e.g. `Error`). So not only is it still syntactic sugar for prototype-based inheritance for most things, there are also these edge cases where there’s actually no fully functional prototype-based equivalent.
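A quick sketch of the edge case (illustrative names; exact behavior varies slightly by engine):

    // The class form works as expected:
    class AppError extends Error {
      constructor(message) {
        super(message);
        this.name = 'AppError';
      }
    }
    console.log(new AppError('boom').message); // "boom"

    // The closest ES5 approximation quietly loses the message, because
    // Error.call(this, ...) creates and discards a new Error object
    // instead of initializing `this`:
    function OldError(message) {
      Error.call(this, message);
      this.name = 'OldError';
    }
    OldError.prototype = Object.create(Error.prototype);
    console.log(new OldError('boom').message); // "" - lost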


So, I see small bits here that all boil down to developers not understanding enough about their language of choice. The web being the vast platform that it is, and JavaScript eating so many domains, may very well be a situation where not having full mastery of a language will still get you very far - which says a lot of positive things about the platforms, imo - but that's the heart of what I'm seeing here.

So, perhaps the issue here is lack of education, not that the ecosystem itself is unquestionably broken?

Also, curious, how do you handle stateful actions without some kind of global? Genuinely curious


> The web being the vast platform that it is, and JavaScript eating so many domains, may very well be a situation where not having full mastery of a language will still get you very far

Sure, it gets you into a project far enough to commit a lot of resources to it, and then falls over.

> So, perhaps the issue here is lack of education, not that the ecosystem itself is unquestionably broken?

The problem may be a lack of education, but that becomes a problem with the ecosystem when the uneducated are determining the direction of the language. Classes are a feature of JavaScript now: that's irreversible damage done to the language. We can educate people about prototypical inheritance and explain to them why hacking classes on top of it is a bad idea, but that's too little too late: we can't remove classes from the language because there's already too much code that depends on them. And we can't even flag classes in linters and say "don't use them", because as another commenter pointed out, core language features now depend on them. At this point, we have to embrace the suck.

And there's a fundamental opposition in the JS community to education. Education starts with admitting you don't know something, and you can see in this thread a lot of people jumping in to defend the JS ecosystem. But the type of defenses you see are telling: instead of discussing the technical merits of the decisions made, people are shooting the messenger[1] or taking it personally[2]. The only defense of JS that's vaguely technical is brundolf's post, which isn't so much a defense of JS as a strategy for writing good code in a crappy language. Not a single post here is actually defending the JS ecosystem on technical grounds.

> Also, curious, how do you handle stateful actions without some kind of global? Genuinely curious

Let's say you are writing a configuration interface for virtual servers. You have three attributes on slider bars: memory units, CPUs, and solid state drives. There's also a total cost field. The Redux global-state way looks something like:

1. Have the CPUs, Memory Units, SSDs, and Total Cost in a global store.

2. Create 3 actions and corresponding reducers that update the Total Cost when any of the slider bars change.

3. Create 3 slider bar components, which dispatch to the global store with the proper actions.

4. Create a base component that renders the whole thing.
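Roughly, with invented action names and made-up prices, the store side of that looks like:

    const COSTS = { cpu: 5, memory: 2, ssd: 3 }; // made-up prices

    const initialState = { cpus: 1, memory: 1, ssds: 1, totalCost: 10 };

    const totalOf = (s) =>
      s.cpus * COSTS.cpu + s.memory * COSTS.memory + s.ssds * COSTS.ssd;

    function serverReducer(state = initialState, action) {
      switch (action.type) {
        case 'SET_CPUS': {
          const next = { ...state, cpus: action.value };
          return { ...next, totalCost: totalOf(next) };
        }
        case 'SET_MEMORY': {
          const next = { ...state, memory: action.value };
          return { ...next, totalCost: totalOf(next) };
        }
        case 'SET_SSDS': {
          const next = { ...state, ssds: action.value };
          return { ...next, totalCost: totalOf(next) };
        }
        default:
          return state;
      }
    }

    // Each slider component dispatches into the global store, e.g.:
    // store.dispatch({ type: 'SET_CPUS', value: 4 });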

This approach already runs into some problems:

1. First, you've got three slider components that are basically the same. You can pull out the rendering code into a Slider superclass, and that reduces the repetition, but your reusability is limited: every time you want to add a slider elsewhere in the app you have to write a whole new component. And when you look at those components, you've got a lot of implicit code being pulled in by the superclass, which means you have to switch between the component and the supercomponent to understand what's going on.

2. Your reducers operate on changes in different pieces of data, but each ultimately has to know about all three data sources to calculate the total. You can check out "Reusing Reducer Logic"[3] and that's certainly a better idea than repeating yourself three times, but you can see that there's a complexity explosion happening here.

3. In actuality, what happens too often is the duplication is just left in the three slider classes, and you have three reducers, since reuse is too complex. Even if you're very disciplined about it, the complexity of achieving reuse sometimes means you don't have time or energy to do it. In short: this design punishes reusability and rewards duplication.

4. If you want to have multiple virtual servers, you now have to change literally every single one of these components, because every single one needs to know in some way which server it's operating on.

Let's design the same functionality, without Redux:

1. Have a slider component which accepts a min, max, initialValue and onValueChanged callback as props.

2. Have a parent component that holds the three values in its state, and renders the three sliders with onValueChanged callbacks that update the parent component's state. The total can be rendered without being stored in state, but if you want to have multiple servers on the page, you might want to put the total in state.
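A minimal hooks-based sketch of that design (ranges, prices, and names invented; I've made the slider fully controlled via a value prop rather than initialValue):

    const COSTS = { cpus: 5, memory: 2, ssds: 3 };

    // Fully reusable: everything the slider needs arrives via props.
    function Slider({ min, max, value, onValueChanged }) {
      return (
        <input type="range" min={min} max={max} value={value}
               onChange={(e) => onValueChanged(Number(e.target.value))} />
      );
    }

    // The parent owns all server state; nothing else knows servers exist.
    function ServerConfig() {
      const [config, setConfig] = React.useState({ cpus: 1, memory: 1, ssds: 1 });
      const set = (key) => (value) => setConfig({ ...config, [key]: value });
      const total = config.cpus * COSTS.cpus
        + config.memory * COSTS.memory
        + config.ssds * COSTS.ssds;

      return (
        <div>
          <Slider min={1} max={32} value={config.cpus} onValueChanged={set('cpus')} />
          <Slider min={1} max={64} value={config.memory} onValueChanged={set('memory')} />
          <Slider min={1} max={8} value={config.ssds} onValueChanged={set('ssds')} />
          <p>Total cost: {total}</p>
        </div>
      );
    }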

At first glance, each of these components is slightly larger, and it seems annoying to have to pass all the info into the slider components. But consider the benefits:

1. All your server state is in one place. No one else stores information on servers, and nowhere else even knows that servers exist. It's decoupled.

2. Because of this decoupling, the slider is a reusable component. You can use the same slider to modify literally anything with a min/max/value, favoring composition over inheritance[4].

3. The only thing you have to understand here is React components. The complexity is greatly reduced.

4. If you want to have multiple servers on the page, the parent component is already most of the way to being a reusable component. All you have to do is expose props for the initial server configuration, and an onServerChanged handler. There's one gotcha here: changes to the sliders need to propagate up to the parent of the parent component and still trigger the proper events and re-rendering. Pitfalls can be avoided here by using an immutability library--I prefer Immer, but I imagine ImmutableJS can do the same things. You could even roll your own, since you really don't need much to handle this--a simple set of copy-on-write functions would be adequate, although they would stop being performant if your data structure gets more complex.
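For what I mean by rolling your own, a minimal copy-on-write setter (object-only; Immer et al. handle the general case):

    // Returns a new object with the value at `path` replaced, sharing
    // all untouched branches with the original.
    function setIn(obj, path, value) {
      const [head, ...rest] = path;
      return {
        ...obj,
        [head]: rest.length ? setIn(obj[head], rest, value) : value,
      };
    }

    const servers = { web1: { cpus: 2, memory: 4 } };
    const updated = setIn(servers, ['web1', 'cpus'], 8);
    console.log(servers.web1.cpus); // 2 - original untouched
    console.log(updated.web1.cpus); // 8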

[1] "This sounds like the sort of thing I hear from folks who have only passively dealt with the Javascript ecosystem"

[2] "Almost every comment thread, regardless of topic, has someone shitting on front-end developers and it's extremely demoralizing."

[3] https://redux.js.org/recipes/structuring-reducers/reusing-re...

[4] https://en.wikipedia.org/wiki/Composition_over_inheritance


Just to say, I was taking a neutral position on this, genuinely not trying to get caught in the lack of details I was seeing in the back and forth, so I appreciate the detailed response.

I'll say this with redux: precisely. You can achieve reusability if you build with that in mind, but even on fresh projects I've seen people get caught duplicating code irresponsibly here. Making this a core part of the redux experience - I'm referring specifically to the abstraction named `createNamedWrapperReducer` in the example on that page - by simply allowing you to pass a unique identifier you can associate with a component (which it could then offload from the state container if the component is removed from the page, saving memory), or even requiring it as a prop (people might grumble, but it'd catch on regardless), would have been a much more effective way to do this. I'm shocked that five versions in we don't have this as the default... leaving it up to the user to create a very basic abstraction around this just irks me, as I don't see how consumers benefit from that.

Yet, I don't want to create _yet another library_ since redux is so well entrenched. Internally where I used to work, we had a library that basically was just what I described. You specified a special prop on a component and any reducers were created from a factory that respected the prop, worked well enough.

Now, with that said, I have to say this:

I think JavaScript is a great language. I think TypeScript is an even better manifestation, despite some quirks.

Now, onto some technical bits.

Classes are more than just fancy wrappers on prototypical inheritance. The way it's going with the standards committee, and the way I've interpreted the point of having classes, is to have formal encapsulation[0]. With things like the private fields & methods proposal[1] already well on their way to formal standardization, classes are starting to make more sense. They also provide a way to extend builtins in a way the engine understands (I feel this is also the logic behind Web Components that everyone misses, but that's another story).

While I know there are plenty of object-oriented pitfalls everyone has seen, like abuse of inheritance, I find that the very common pre-class way was just as, if not more, prone to pitfalls. Extending classes is pretty obvious to most programmers - I mean this in terms of syntax and typical expected behavior - and I coded up a simple example here to illustrate just how gnarly ES5 "classes" are.

https://gist.github.com/ScottAwesome/67d2d10761bd4993eec9d9d...

I also think classes provide the correct abstraction for exposing public and private APIs. Beyond the private fields and methods proposal I already mentioned, it's more conventionally straightforward to encapsulate this even using a WeakMap to make private methods/fields unexposed to the consumer (one of WeakMap's primary use cases when it was introduced). Plain objects, without some sort of system to manage their creation (like factories) and their visibility (Object.seal, Object.freeze, Object.preventExtensions, Object.defineProperty - all of which work on classes too, of course), are otherwise going to leak their members, and it's all too easy to find private APIs on object members. I know you can leak them with classes too - sans using a WeakMap or the new private fields API. I find it feasibly easier, however, to integrate classes when I need to encapsulate a well-defined interface with properties & methods, in large part because I can keep the private APIs private more easily, via an internal WeakMap, without sacrificing extensibility.
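For illustration, the WeakMap pattern I'm referring to looks roughly like this (invented example):

    const privates = new WeakMap(); // keyed by instance, invisible to consumers

    class Counter {
      constructor() {
        privates.set(this, { count: 0 }); // private state lives off the object
      }
      increment() {
        privates.get(this).count += 1;
        return this;
      }
      get value() {
        return privates.get(this).count;
      }
    }

    const c = new Counter();
    c.increment().increment();
    console.log(c.value);        // 2
    console.log(Object.keys(c)); // [] - nothing private leaks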

I know you can do this with objects; however, it's much more clear that you get a discrete instance with classes. When that sort of thing is important, I think classes are useful.

I also think there is a lot of value to the mixin pattern.

I also think this reaches into what most developers just expect out of a language. Arguably, the complete lack of usage of the Reflect[2] APIs I see in the wild and where I've worked tells me most people just don't reach for these kinds of APIs. Therefore from a language development point of view, I can't argue classes are bad, either.

Now I understand you can get there with plain objects, and I think this is an under-utilized mechanism of the language. In fact, I think plain JavaScript objects are amazing, and very few languages have such a powerful construct. I'm a big fan of composition over inheritance (to which end I have leveraged the stamp specification and its toolkit[3]). I think it would have been nice to see more specialty objects (with a static `Object` factory to create them) over having to just introduce classes. Alas, they are here to stay.

Truth be told, I don't use classes that often, preferring pure functions & plain objects most of the time, but sometimes there just isn't a better way to encapsulate behaviors than a class, in fewer lines of code, that's easy to reason about and maintain.

Now, one trend I don't like is that a lot of things are becoming class-only APIs (Web Components being a major one). This complicates things, but you can certainly 'hide' the implementation details via factories if you don't want to expose classes directly; so far, in these cases, that's pretty straightforward to do.

Worth noting: I'm just not strongly opinionated enough to say you're right or wrong on this. We already acknowledged a massive education problem in the industry, one that, I will admit, seems true in my personal experience: not enough people take the language seriously enough to spend the time to learn how to work with it appropriately. Until that's solved, I don't think it's worth fighting over other minutiae in this particular discussion. Solving the problem of good developer evangelism in the JS ecosystem is a discussion for another time, though, and one that will ultimately be futile if we can't get the major stakeholders to have some agreement on what that means.

[0]: https://en.wikipedia.org/wiki/Encapsulation_(computer_progra...

[1]: https://github.com/tc39/proposal-private-methods#private-met...

[2]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

[3]: https://github.com/stampit-org/stamp-specification


That could be the case. It could also be the case that those deeply rooted in the Javascript ecosystem aren't listening to domain experts.

The numerous security problems with how NPM installs dependencies were pointed out as problematic from the get-go by language maintainers, package-manager maintainers, & security experts from other projects.

I think it's possible that Javascript is getting better & more responsive here, & I bet at least some of the defensiveness is because it has also been unfairly criticized, which makes it hard to separate signal from noise.


> It is just much more prevalent when those ecosystems are still "maturing" - which I think JS, Node, NPM, etc still very much are. Even the early Android days were plagued with

Node is about half a year younger than Android, and npm another half year. Javascript is, of course, over a decade older.

If JS, Node, NPM, etc. are "still maturing" after all that time, then that is a pretty strong indication that there is something wrong with that ecosystem.


You compare it to Android as if Android dev isn't full of its own problems. Here are some Android complaints that I agree with from the other day: https://news.ycombinator.com/item?id=23438732

Android is definitely still maturing. So is iOS/macOS dev.

I guess client development isn't as easy as we told ourselves.


It's not a scenario where you can look at "absolute years from inception". It's all relative. With the JS ecosystem, you have rapidly changing Web APIs and standards, varying degrees of browser support and Node versions. And no real "central authority" to guide things in a specific direction. With Android you have a much more managed platform, with Google providing at least some guidance on best practices (explicitly or not). I'm sure the recent introduction of Kotlin added some complexities and confusion to that system. But I still don't think things are changing as rapidly as they are in the JS world, and from so many different angles.


> With the JS ecosystem, you have rapidly changing Web APIs and standards, varying degrees of browser support and Node versions. And no real "central authority" to guide things in a specific direction.

All of these (except Node versions) are constraints of the original problem domain: running user interface code in a browser. These are unavoidable for client-side Javascript.

But these are not constraints of server-side or command line code, so it's absurd to write server-side code under these constraints. Server-side JavaScript should not exist. Command line utilities written in Javascript should not exist.

What part of having a bunch of completely different ways to bind to events to read a file makes any sense? https://stackoverflow.com/questions/9168737/read-a-text-file...
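For instance, just within Node's standard library (a sketch; the config.json filename is hypothetical):

    const fs = require('fs');

    // 1. Node-style callback
    fs.readFile('config.json', 'utf8', (err, data) => {
      if (err) throw err;
      console.log(data);
    });

    // 2. Promise-based API (fs.promises, in recent Node versions)
    fs.promises.readFile('config.json', 'utf8').then(console.log);

    // ...plus the synchronous fs.readFileSync, and event-based reading
    // via fs.createReadStream.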


> But these are not constraints of server-side or command line code, so it's absurd to write server-side code under these constraints. Server-side JavaScript should not exist. Command line utilities written in Javascript should not exist.

This is a myopic view entirely disconnected from reality. The purpose of code isn't to be perfect or pretty; its purpose is to do useful work. The most relevant constraint is always available resources, which are dominated by human capital and experience. Productivity is measured using useful output, not code quality.

> What part of having a bunch of completely different ways to bind to events to read a file makes any sense?

The part that puts food on the table. I can't name a single language I have any experience with that has One True Way™ of binding to events except for C and assembly which can only pass callable pointers around.


> The most relevant constraint is always available resources, which are dominated by human capital and experience.

Which is why it's particularly puzzling that the JavaScript community ignores vast swaths of human capital and experience. Global state? Awesome! Principle of least astonishment? For the birds. Caring about security of your dependencies? What on earth for?

> Productivity is measured using useful output, not code quality.

Useful output is strongly correlated with code quality, and that correlation grows stronger as a project grows.

> The part that puts food on the table.

Sure, my billable rate is the same no matter what language I'm writing and if that's all you care about, okay I guess. But I've rewritten a server-side JavaScript application in Python, and my clients wouldn't hesitate to say that they got more for my billable hours than they did for the team of three JavaScript developers who wrote the original application.

> I can't name a single language I have any experience with that has One True Way™ of binding to events except for C and assembly which can only pass callable pointers around.

You need to take a bigger step back to understand what I'm saying. It's not about having One True Way, it's about keeping your code simple enough that you can read it when you come back to it a year later, you can break it into composable chunks, and reuse it. Every one of the ways on the page I linked forces you to think about an event loop in a process that's fundamentally procedural. It's fundamentally the wrong abstraction for the situation, but JavaScript literally can't represent a better abstraction.


I don't think the GP was saying JS is bad because there's not 'one true way' to read from a file. I think the GP was saying it's bad because we're trying to get it to read from a file in the first place. ('Server-side JavaScript should not exist')


SSJS has existed more or less from the beginning of JS.

https://docs.oracle.com/cd/E19957-01/816-6410-10/816-6410-10...

Any general purpose language will be used in ways and places that it was not expected to be used. Both Netscape and MS offered some level of SSJS, and MS added a JScript Windows Script Host.


I mean, if Javascript is eternally stuck in a state of "still maturing", or its environment makes maturation take much longer than in other ecosystems, then wouldn't that make it just a bad ecosystem?


npm is not 'maturing'. It is now regressing.

There is no spec, there is no feature list, and so patches keep breaking behavior and then it gets declared as being on purpose.

I am using several flags for CD builds that are not documented on the npm page (or at least were not when I started using them).

If you are thinking of deflecting by talking about how maybe my builds are too complex then you can save your breath, because I don't want to hear it.

Tinker toy apps won't have any maintenance problems; they aren't the sort of things you get paid to work on for years, and can be replaced instead of mucked with. And quite frankly, bringing up happy path applications just dates you as a young developer with no field experience (or worse, an idiot).


I think you replied to the wrong person.


People seem to forget how many people program in JavaScript. Just because the average is low doesn't mean there aren't just as many really good ones out there as in any other language. It might be common to have lots of dependencies, but I can tell you first hand you only need a few, with a really shallow tree, to build a full-featured web app (both server and client).


It’s not just the average that’s low. It’s the median as well. There’s an incredible long tail of skill with JS and the related ecosystem.


It's possible to write good code in bad languages.


> The primary reason for this is that the people writing JavaScript do not know what they are doing.

Hello. I'm a JavaScript programmer who cut my teeth on Java twelve years ago, followed by C++ two years later and throughout the course of my Computer Science program, and over the years I've done personal work in half a dozen other languages including Rust, Clojure, Python, C#, etc.

Despite all of that, TypeScript remains my go-to general-purpose language of choice for any arbitrary project without particular constraints.

Take your flame-bait elsewhere, please.


It's always neat to read reasoned replies to comments like these.

> > The primary reason for this is that the people writing JavaScript do not know what they are doing.

I think GP has a point here, though - it feels (to _me_, someone with only a small amount of programming experience) that there are all too many beginner tutorials that jump straight into framework-of-the-week.js and leave huge knowledge gaps. If people start with node.js and angular and so on, they might not know that they can do xyz in a vanilla install of JS, thus making their core knowledge of the language dependent on a particular framework.


I think there's some truth to this, but the same is true in, say, the Java world where you're almost not even writing things in Java-the-language, you're writing things in Spring. That said, it's probably more true in the JS world because it is so feasible to teach someone to make something in a JS framework before they know the fundamentals. Which is, honestly, an enormous compliment to the language, not an indictment! But it does come with some interesting side-effects.

I do feel the need to make one small correction (the OP actually made the same mistake): Node.js is not a framework, it is a runtime. If you want to run JavaScript the way you run Python, you use Node (or, very recently, an alternative called Deno). You also don't have to do anything special with your code to run it in Node, like you would with a framework, unless you want to use its system APIs which of course do not exist in the browser. But even those are really more like libraries.

But yes, your overall point is valid


I personally (anecdotal) feel that the Java world is a lot more formal; there are some Strong Opinions held in what "good" Java code is, which is missing in Javascript. This is in part due to great books like Effective Java, and in part due to editor / IDE support that will warn you and help you write great code - keep in mind that good IDE support has been missing in Javascript for years because of its lack of support for e.g. modules or namespaces, and its lack of Opinions.

Java's community has been Opinionated, whereas the JS community has been a lot more freeform. jQuery and NodeJS have been very influential in adding an opinion to the language; jQuery by adding a kind of namespacing and scoping (also thanks to its predecessors / peers like YUI), NodeJS by adopting a module system (CommonJS) and centralized dependency management (NPM).

I came into JS via Java and while the lack of standards and rules / expectations was liberating, I was also missing it. It took a while before the best practices of Java made it into Javascript; in my experience, AngularJS was a big push for that, in that it was a much more opinionated way to structure your components and application - with a strong emphasis on (unit) testability.


This is mostly because there is not really a formal way of learning webdev, which is both a strength and a curse. JS's closest thing to a prescriptive and authoritative guide is MDN, which isn't very prescriptive at all; there's certainly nothing like The C Programming Language, or even contenders equivalent to something like a Head First Java book. If you want strong direction and strong opinions, JS does not have an overly dominant one unless you pick something that starts off with its own ecosystem, like React.

Asking for something on vanilla JS is like asking for an "original" English copy of the Bible, whatever that means.


I think the inclusiveness of the statement is the issue. He didn't say "the majority of the people writing JavaScript", he implied all of them.

I'm sure there's a lot of people who are writing decent JavaScript. But running JavaScript is easy. Every single computer comes with a JavaScript development environment. Because all you need is a text editor and a web browser. The barrier to entry is so low it might as well be non-existent.

So a lot of people with no clue of what they're doing will get in there and write some atrocious code.


Vanilla JS for UI work is like using Windows Forms (the Microsoft tech from 2002 for building windows UI in .NET). Unless you want something pretty basic, you are going to be writing your own frameworks. Maybe not a bad thing from a learning perspective, but I'd prefer a battle tested framework in general.


I'm interested in your take wrt the dependency hell GP mentions.


Personally, I:

1) Keep my direct dependency list slim and mostly limited to "too big to fail" projects backed by major companies

2) Avoid frameworks in favor of libraries

I don't stress about indirect dependencies. If one of React's dependencies stops being maintained, or has a security problem or a bug, Facebook isn't going to not address it. Far too much money is riding on React as a whole, so whether they have to switch to a new dependency, or update the version, or reimplement the functionality themselves, they will. On top of this, the left-pad catastrophe can't happen again, because NPM publications are now impossible to remove after 72 hours: https://docs.npmjs.com/cli/unpublish

On the other hand I'm not going to add a direct dependency that I don't truly need. If it's a one-off function that I can write myself, or copy, I'll do so. One time I copied and pasted Lodash's debounce function into my codebase because that was all I needed, and I didn't want to bring in the whole library.
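For reference, the heart of a debounce is only a few lines (a simplified sketch, not Lodash's actual implementation, which also supports leading/trailing edges and cancellation):

    function debounce(fn, wait) {
      let timer = null;
      return function (...args) {
        clearTimeout(timer);                                  // reset on every call
        timer = setTimeout(() => fn.apply(this, args), wait); // fire after quiet period
      };
    }

    // Usage: collapse a burst of resize events into one handler call.
    window.addEventListener('resize', debounce(() => {
      console.log('resized');
    }, 250));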

Libraries, compared with frameworks, are easier to predict, to fill gaps in, and to replace if necessary. And even some things that we call ""frameworks"" like React can really be used as libraries. If you only use React as a rendering layer - what it's best at anyway - it actually has a very thin and elegant boundary with your code: you construct a data structure, and you hand it off. Others, like Angular and Aurelia, are truly frameworks and I tend to avoid them as a result.

I wrote a blog post recently about the libraries/frameworks thing which got some attention/discussion here on HN: http://www.brandonsmith.ninja/blog/libraries-not-frameworks


There is an NPM package, lodash.debounce, if you only want this feature from lodash. Lodash is available split into many small packages.


The point remains, though, that I avoided an unnecessary dependency. I copied that function once and never needed to mess with it again, because it was 15 lines and I could verify personally that it did what it needed to do.

For contrast, we wanted to be able to incorporate new versions of React, because it isn't a simple enough thing that it can be written once and be done. It evolved, and offered real improvements in terms of performance and useful features.

As in all things, there are costs and benefits to making something an external dependency or not. The key is to realize that there are indeed tradeoffs, and that outsourcing every single thing you can isn't necessarily ideal.


Right, but his point about that was essentially:

"Why depend on that as a dependency rather than just have a copy directly in my project that I know will never behave any differently?"


Because it's one line to a dependency with a name, a documentation, a bug tracker, many users, and that could be improved in the future.

If you are scared about packages behaving differently, you should know that npm has lock files that describe your full dependency tree with versions and checksums.


"Could be improved in the future" is a risk when it comes to a simple utility function that does a simple thing that you don't want to change.

In the same way that someone constantly tweaking your utility functions in your own project would be a risk - and you'd understandably want to code review all that - the more you choose to (or have to) outsource "trivial" stuff, the more brittle your project is.


Hence the lockfile. On the flip side, if you've copied and pasted source across and it later gets a security update, you'll most likely not realise.


In any other package ecosystem you'd be laughed out of the room for suggesting to add a dependency for a 15 line function, instead of just copying it into your codebase. And the 15 line function here is a fairly major function. People add dependencies for 2 or 1 line functions!


"one of React's dependencies"

Which one? There are... several. (someone posted this here a while ago, and it's lovely to look at)

http://npm.anvaka.com/#/view/2d/react-native


That's react-native. React (http://npm.anvaka.com/#/view/2d/react) does have... several: 5 of them.


Guess I'm wrong then! Thanks for the correction.


The point is that it's not my problem. Those things, together, comprise "React" as far as my application is concerned. React being a huge project which millions of dollars have been poured into, and which billions of dollars probably ride on, is going to be kept working, one way or another. As long as it continues to provide a functioning interface for my code, that's all I care about.


The problem isn't just what you see; the problem is sustainability for the ecosystem as a whole. FB has a lot of money, but how do you know they won't determine at some point that this is not sustainable, and in two years you have to learn some other framework?


I don't know if you realize how massive the React user-base is at this point. That would be like saying, "how do you know they won't determine at some point python is not sustainable and in two years you have to learn some other language?" I mean, I guess. But even in that scenario, I'm sure the hundreds of companies who depend on it would pick up the slack.


It happened to AngularJS. It didn't die, it just stopped evolving in the 1.x branch. Everyone had to accept 2.x, move on, or stop evolving.


That React has a lot of dependencies isn't a problem in itself. At this point, React is stable. It's backed and relied upon by a gigantic company called Facebook. It's used by millions of developers around the world.

If there's a major problem with one of React's dependencies, it will be promptly fixed.


React doesn't have a lot of dependencies. The person you are responding to linked to React Native, which is an entirely different thing.

I think it's a bit misguided to think that code quality won't be visible to users. As someone who has used both React and React Native, the quality of these two projects from a user perspective is worlds apart, and I have little doubt that their different approaches to dependencies are partly to blame for that.


That's not React, that's React Native.


I cut my teeth on VB6. Then PHP, .NET and finally Java.

I look back at all the code I wrote.. And it's all bad. Except anything I wrote 2 weeks ago, that was my crowning achievement. In 2 months I'll hate it, but right now - it's perfect.


Another perspective: It's nice that there are many nicer languages that target JS, besides the fairly new TypeScript there's ClojureScript, ReasonML, Elm, CoffeeScript, Dart, Haxe etc etc.


But, but, but, if you don't write in C and manage your memory you are not a real programmer! Says slashdot/HN...


Recently came back to a React GUI at work. Doing the Docker build is almost comical as the npm red text rolls by :). You would think the thing is surely going to explode. Of course, when you run the app, React reminds us of all the new best practices which were nonexistent 6, 8 months ago.


I’ve got a 5 year old react app that still gets worked on just fine. It doesn’t need every new state management library.

The problem with JS isn’t the major frameworks. It’s people that complain about how many packages exist. This is a consequence of its popularity. Take some time to use the language more, and you won’t be installing packages that are out of date in a month.


The packages aren't directly installed by you. They're installed as dependencies of the frameworks you use.


Almost every comment thread, regardless of topic, has someone shitting on front-end developers and it's extremely demoralizing.


If you read that GP post as a personal attack, you might be identifying with your tools. We all do it sometimes, but it's something we should keep an eye out for because it's IMO the most toxic part of programmer culture, and a big part of what's holding us back.

Almost all of my programming time these days is JS, and I've been doing JS as long as anybody. I know what I'm on about, and I can tell you the JS+NPM ecosystem as a whole is a hot mess. GP is not some crazy person with no idea what they're on about. Well, they might be, but you can't tell from that post :)


Please re-read the GP carefully: "The primary reason for this is that the people writing JavaScript do not know what they are doing".

If that isn't a personal attack on a particular group of developers, then I'm not sure what constitutes personal attacks.

Also, as a top-flight engineer working at a multi-billion-dollar company (and also having worked with Java and .NET professionally for many years), I can say the JS/TS ecosystem as a whole has never been better than it is today.

And TS remains my lang of choice today, for professional and personal projects :)


I think it's important to realize that these criticisms are primarily leveled at Javascript and its ecosystem, not at front-end developers, who have to work in that ecosystem. It's possible to write good code in a crap ecosystem. You aren't your tools, don't take it personally when your tools get criticized.

I've written some scathing things here and elsewhere about JavaScript. Does that mean I think everyone who writes JS is a crappy programmer? Of course not--I write JavaScript myself every day. I criticize Javascript because I write a lot of it.


On the other hand, true criticisms of Javascript, the language, are boring. Yeah, there are problems. You learn to deal with those problems. They aren't really that horrible. I worked with a lot of less convenient languages than JS (especially ES6, which is actually not bad). The standard library is pretty bad, but the language itself is easy enough to use that it's pretty trivial to implement what you need yourself. There are also a few pretty well written third party libraries with no other dependencies which you can use. The build environment is horrible, but not really any more horrible than some other environments I've had to deal with.

No, the real problem is that a lot of Javascript developers choose to stick hot pokers in their eyes. They don't read the code of their dependencies. They don't care how many ridiculous dependencies of dependencies they use. They refuse (absolutely refuse, to the point of calling you an imbecile if you even suggest it) to write their own tools. They choose the build tools that are the most wonky and are built on the most insane internal code -- because they don't care to ever look at that code. They look at the "box features" and say, "Oh, everyone is using that and it has all the features we want. You are crazy if you want that stupid boring thing that barely does anything (and yet works)". They don't do any planning for configuration management. They don't think about how they want to upgrade their dependencies, and especially don't dare think about inspecting the code in the dependencies. "Latest is best! If it breaks, we'll deal with it then".

Javascript is not really that bad. It really is that the community does not have a particularly good grasp on how to minimise risk in large projects. On the other hand, it's a common refrain on other platforms. While Javascript is not really that bad, other platforms are considerably better and you can get away with really poor practices for a lot longer. Not that they won't absolutely kick your ass eventually -- it's just going to be a couple of years away when you have moved to another company at a higher pay scale.


Yeah, I think you're hitting the nail on the head here. I'd go so far as to say that the language itself is actually better than C (in the sense that it's more intuitive and less error-prone). C has the excuse of age. But the C community has a culture of carefully designing architectures to minimize pain points and bugs, and avoid other pitfalls--and that was true 20 years ago when I started programming and C was a lot closer to as old as JS is now.

As I've said elsewhere, you can write good code in bad ecosystems, and some people are doing amazing things in JS.


I think it's because people grow to hate languages they merely dislike but are forced to use for a long period of time.

At least, that's why I hate Javascript. I don't care for it, and thanks to the web I basically had no other option for around a decade.


Yup ignore it. Front-end dev is super complex these days! I started when it was just HTML/CSS. JavaScript dev these days is much more complex than some 'real' programming languages!


JavaScript wasn't too complex back in the HTML/CSS days though. Let's be honest, we viewed it as a toy to sprinkle in some slick UI. Then some jQuery scripts started getting really out of control. Then I think ember and angular came out, or maybe it was backbone. Then things just kind of got more and more complex. Webpack is insane to learn. I finally took the time to learn it in isolation with es6, away from the heavy frameworks, and it wasn't so bad. But then when I look at using it with the heavy frameworks it makes me want to run for the hills.

I was playing around with puppeteer recently, which is a fantastic tool. It's perfect for what I needed. After I put together an ugly procedural proof of concept, I wanted to figure out how to do things right and move it into classes. It's an adventure figuring out the right way to lay out a project or even the right way to build a class. The fact that there isn't an authoritative answer tells me the ecosystem is not mature, or maybe I just didn't Google it right.

Meanwhile, PHP has https://www.php-fig.org/psr/. So anytime I can't remember how to name an Interface or an Abstract or anything really, I just google $blah + php psr. JavaScript is a shitshow that I have to live with, unfortunately. I know it's funny coming from a guy that just used PHP as an example of good language design, but at least the community has coalesced around standards... and composer > npm.

It's to the point now that I actively avoid making dynamic pages and SPAs AT ALL COSTS. It reduces the complexity of the code a ton that way. That's not to say I haven't. I used VueJS for a big SPA because marketing wanted feature sets that required it. I fought it and lost. VueJS wasn't so bad, at least the learning curve was less than React's, but SPAs are still terribad. Yuck.


I would suggest ignoring them. Once you've read it 10 times you are fine; it's always the same.


> it's always the same

They're usually the same. This one forgot to flame about how the JS community is supposedly constantly reinventing things real programmers discovered decades ago. I think there's still time to edit that comment into conformance with the JS-bashing standard.


That's what it was like for VB developers in olden times.


Do you get upset at other examples of people confirming their biases? That's all comments like that are.

Anytime you see someone take a complex problem and flatten it down into a simple, single platitude that neatly solves the entire issue like this (and also happens to reinforce a common belief), turn on your bullshit meter. Cause that's what it is.

There are absolutely issues with the node ecosystem and the dependencies and the quality of the software in that ecosystem. Is it because millions of engineers all don't know what they're doing and they're all idiots and they all happen to be using the same language, and they're all too stupid to realize it? Probably not; it's more likely that the person saying that is making an immature, emotional argument and lacks context.


Same happens with every article about Perl. Or PHP. Or any other technology that people love to hate.


Shhh! This keeps everyone away from frontend work, and the rates for frontend freelancers high!


The rates for frontend are not high though.


Really? Expert React developers are paid top of band in my current company. Someone who is -really- good at React is very hard to find.


They are hard to find but not hard to create :-)


Maybe a US thing then?


I know that it's a thing in Berlin, where a big chunk of programming demand comes from creative agencies and similar, where frontend generally sells better to their clients.


I've been programming since the late 90s and as recently as one or two years ago I would have agreed with a lot of this. However, having recently come back to JS and currently in the midst of developing a large, complex non-frontend app in it; I have to say the language, tooling, standards and ecosystem have jumped forward significantly.

It has honestly been a pleasure building this out. Yes, you still need to be selective about dependencies but that's the case for many languages. Beyond that, JS/Node in 2020 (in the right hands and for the right project) is pretty awesome.


The parent is not decrying developing things right now.

They are suggesting that what is built will not survive a very long time.

It would be more interesting if you had picked up a 3yr old project and could tell us your experience with that; failing that, it would only be interesting if you come back in a few years to tell us how it went.


Honestly, what the post author is describing sounds about like my experience with the Java + Maven deps world about 10 years ago when I last used it. A giant trail of dependencies of questionable quality (and how many problems with commons-logging?).

I have no desire to write JS, really, but it doesn't sound unfamiliar.


>They are suggesting that what is built will not survive a very long time.

I suspect that thanks to the recent jumps forward (language itself, common best practices, tooling) that this argument doesn't hold as much weight as it once did. I see a lot of JS library code today that looks eminently supportable and survivable beyond a ~3yr time horizon.


I would like to agree, but given that we have no evidence for this, and a history that has not supported this assertion, we cannot make that claim.

Only time will tell.


> In my experience, JS code has a life expectancy of about a year and a half.

Maybe for toy projects. I’m working on projects that take years to build and are around for many years.

There are problems with JS but with TypeScript and React I find it more maintainable and robust than other languages.

The worst project I’ve worked on in the last few years had 800k lines of C# around a C++ core from the early 90s. It was terrible to work with and far worse than anything I’ve seen in the Javascript world.


Probably because it's nearly 30 years old now.... Find me when we have the first large-scale JavaScript project that is thirty years old.


I think "a few years" is a bit of an understatement. CoffeeScript came out one year after Google Chrome, in the heyday of Ruby on Rails. The entire web development landscape has shifted a lot since then.


That kind of supports the parent comment. RoR is not even that old and, "the development landscape has shifted a lot since then."


It would support the parent comment if the point of the parent comment had been something like "The state of the web has changed a lot since 2005," but it wasn't. The parent comment was specifically castigating the JavaScript ecosystem for having so much "churn" "because the people writing JavaScript do not know what they are doing." Ruby on Rails is not a JavaScript framework, so if there's been lots of change throughout the web ecosystem in same timeframe, how does showing the same trend reflect negatively on JavaScript developers in particular? It seems a bit like pointing to somebody in an earthquake and yelling at them to stop moving around.


Yes, it was bad then, too. Doesn't seem like much of the Node ecosystem learned from that, though.

Rails projects were pretty famously brittle and painful to deal with for doing major upgrades.

I've moved to JVM-land and don't miss that part of working with Rails, though there are still things I miss about Ruby.

But everything my coworkers tell me about Node projects sounds like it's even worse than Rails was.


Ha. I've only had one experience with 'npm' (user requested install), and it went just like that. It seemed to want to install about 500 modules and just exploded half-way through.


Not sure I understand this post. You say it isn't a problem for the future but then go on to say it is a problem.


He’s saying it’s not a problem for the future, because calling it a future problem would imply it’s not a problem now - which he asserts it is. It’s humor.


I used to think backbone was kind of cool! RIP I guess.


It was, at the time. This was right before Angular and then React made a splash; before those became mainstream, Backbone was a really good framework to be using.

Plus, both Angular and React kinda relied on the performance of JS engines becoming what they are nowadays.


I totally disagree but I just want to compliment you on a delightfully written comment. I laughed out loud multiple times.


marionette, backbone, knockoutjs! lol the list is endless.

in 5 years, react??? who knows.


In fairness, a lot has also changed/churned in Windows desktop UI development, iOS development, and Android development over the same time period. (I know a lot less about the latter two, but didn’t they at least both change languages and see the significant rise of PhoneGap/Titanium and then React Native and then Flutter?) And those are all ecosystems that, for better or worse, have a single dominant corporation lording over it.


I love seeing these opinions on HN, it means my job as a qualified JS dev is very, very secure. If you work at a place that has problems with npm deps, just understand that the problem is your team, not the ecosystem. Choose which tools you use, and choose carefully my friends.


Software developers have been rewriting legacy codebases since forever. My early career was spent bouncing in and out of projects that the customer deemed in need of a rewrite. If it was perl, it needed to be replaced with Java. When it was Java, it had to be replaced by Ruby. And now all the Ruby needs to be replaced with JS. Now that we are stuck with JS, devs are just finding new reasons to rewrite stuff.

PHP projects never seemed in need of a rewrite.


That last line! Oh boy have I seen some PHP projects rushed out that sure do live by the skin of their teeth. They really do need a rewrite.


There's a fundamental question here which no one ever seems to ask: Why is the modern software industry in such a constant state of flux? Are we really becoming so much collectively smarter every year, and if not, why did the previous version of Software X make the wrong decision? Is there an end point where we all actually figure out how to develop software properly?

As I see it, truly mature software should introduce breaking changes twice a decade at most, and ideally less often than that. I don't doubt that most software updates provide a net benefit, but what of the inherent cost of change? Changes require every single user to put in extra work, to adapt to the new version's functionality. Why are software projects so cavalier with their users' time?

And this applies to end-user software too, by the way. Every time Slack redesigns its interface, users need to relearn where all the buttons are. Nothing can possibly justify rolling out ten million redesigns; if a facelift is in order, leave it in the oven for long enough to get it right, and then be done.


There's a simple arithmetic to this: Computer Science is a growth industry (still), and has been on an exponential curve for decades. It doubles roughly every 5 years, which means that every 5 years half of all programmers have less than 5 years of experience, by definition.

More importantly, the absolute number of these inexperienced programmers (and architects, and admins, and ops people, etc...) is growing, so the "total ignorance" is increasing.
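
A toy model makes that arithmetic concrete (a minimal sketch, assuming a clean doubling every 5 years; real growth is of course lumpier):

    # Toy model: if headcount doubles every 5-year period, the newest
    # cohort always equals everyone who came before it, combined.
    total = 1.0
    for period in range(1, 5):
        newcomers = total   # the cohort added this period
        total += newcomers  # the population doubles
        print("after %d years: %.0f%% have under 5 years of experience"
              % (period * 5, 100 * newcomers / total))
    # prints 50% every period: half the field is always junior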

I've just gotten involved in a project to "modernise" a "legacy" ASP.NET project. By legacy, they mean ASP.NET 4 running on .NET Framework 4.8 with SQL Server 2017. Not exactly ancient! This apparently is no longer the hot new thing and "has to be" rewritten using cloud microservices or something.

The app actually runs just fine in its current state. But none of the programmers involved know the pre-core version of ASP.NET or anything but Azure. So it's time to revamp it.


> Why is the modern software industry in such a constant state of flux?

I see huge software frameworks sort of like political parties. The community around each of them promotes the benefits of using their framework over someone else’s, and we “vote” by hitching our products to these tools.

If you didn’t choose wisely, or the framework didn’t develop in a way that benefited you or your org, you’re probably keeping an eye out for what’s next.

The other component to all of this is figuring out how to stay relevant in the industry, and I see more and more frameworks making design decisions around this fact.

Take Gatsby, for example. I still can’t figure out why they use GraphQL to populate templates other than developers wanting GraphQL experience, ergo let’s use Gatsby. The whole point of GraphQL was to minimize data transfer from server to client, but somebody thought this was somehow more beneficial for generating a static page than a vanilla JS object.

> Every time Slack redesigns its interface, users need to relearn where all the buttons are. Nothing can possibly justify rolling out ten million redesigns;

I think this partly falls on engineering orgs with development teams that need to justify their existence by constantly rolling out these kinds of changes. Because if Slack decides to freeze the UI, how many front-end devs are going to need to be reallocated?

This rant was somewhat aimless, but I think ultimately that many of your rightful criticisms are the product of the politics of engineering companies and communities.


> I think ultimately that many of your rightful criticisms are the product of the politics of engineering companies and communities.

I don't doubt that. I would, however, like to see some people in those organizations try to enact changes, particularly those nearer the top. The business benefits of doing less unnecessary work are relatively obvious.

Separately, a lot of the attitudes towards legacy software use ought to change. Most techies regard organizations still running Windows 7 in 2020 with a mix of amusement and scorn. "Windows 7 came out X years ago, why haven't they upgraded by now?"

Well, why haven't they? Maybe because they already paid for the development of Custom Software X, and aren't particularly keen on redoing all of that work just because Microsoft decided to change things. I don't feel great about Microsoft holding security patches hostage just because more Windows 10 customers would improve their brand or some such. I understand needing to charge for said patches past a certain period, but why not continue providing them for a reasonable price, as long as there are paying customers?


It might help to think of the entire field of software development as a single living organism that's clumsily, through trial and error, trying to grow into maturity, to find the best way for software to go from concept to realization.

It's very young. Thousands of years passed between the earliest mathematics and the first calculus. Software has had, at most, a hundred years.

Things like microservices, SOLID, dependency hell, npm are all examples of the software organism trying to figure out what works in reality and what doesn't. None of the individual cells -- us developers -- can get a big enough view of the whole thing to understand ahead of time exactly what will work and what won't in large scales and over long timelines. We might argue from some ideology or personal experience, but we don't really know.

Software wants to be inexpensive, both in initial development and in ongoing maintenance. But what does inexpensive really mean? Does it mean that it starts with a formal proof that takes weeks to develop and then is cast into code and never modified again, or does it mean a function that's banged out in minutes and then has an entire suite of test cases written against it and is maintained in minutes and hours here and there for years?

The software organism has learned some things it shouldn't do, but that required lots of developers trying it out, and other developers wailing and gnashing their teeth about it and altogether saying, "this is bad and it doesn't work and we must do it another way!"

We aren't going to see software development reach maturity in our lifetimes. I don't think it will happen for another hundred years at least. I think, eventually, software might settle down to a handful of purpose-built languages, and somehow all the problems of code reuse and distribution and documentation will gradually get solved, probably over some massive and absurdly complex network. Developers might become more like immune system cells, wandering around the network and fixing up the little bits here and there that they specialize in. I dunno. In any case, before any of that happens, we first have to find out why every other approach to software development is worse.


Besides the fact that software is a relatively young field, it also aims to exploit hardware devices which have been evolving ridiculously quickly.


AI will be writing the code by the end of the century. You'll long for these debates when writing code is relegated to what hobbyists do in their garages with wood. There will be few admirers and even fewer buyers for your carpentry then.


Without taking the time to make a data-driven argument, I'll just hypothesize that it's a natural result of lowering the barriers/costs of entry to making changes.

30 years ago, if you worked in software, then you were almost certainly building shrink-wrapped products that had to "go gold" before being printed onto 1000s of CDs at a factory. How quickly/easily could you change anything? Not for years, typically. Hence, you were very careful to try and build things right the first time, but the ironic thing is, with such long feedback cycles (i.e. you never really got feedback), all this waterfall-style effort to plan everything and do everything right the first time rarely ever materialized as perfect/seamless UX for the end user anyway. It turns out, you need to iterate to improve.

Nowadays, the pendulum may have swung too far in the opposite direction... it could legitimately be said that, some things are too easy, tinkering with changes is too easy, and thus nobody expects to have to maintain something for a long period of time. If the cost of rewriting everything is low enough, people will be tempted to just do that whenever they change their minds about something, rather than forcing themselves to predict the future upfront and make durable decisions that will survive for 5-10 years out in the wild with no changes.

IMO, there's no single right answer for all scenarios, all products, all forms of engineering, etc. But it's explainable why such trends may emerge over time.


> it could legitimately be said that, some things are too easy, tinkering with changes is too easy

This is certainly the case in modern game development. It's now routine for a game to receive a major patch within 2 weeks of release.

> there's no single right answer for all scenarios, all products, all forms of engineering, etc.

Sure. A search engine for cat photos can go ahead with a 'minimum viable product' release, and improve incrementally. Not so for avionics.


To be fair, the minimum viable product for avionics is more complex than anything most of us have written, so it's not implicitly wrong.


I don't see how avionics' complexity is relevant. Game engines are complex too.

Avionics cannot be developed iteratively, for domain-specific reasons. The costs of re-testing, re-certifying, and deploying would be crushing. Besides that, how can an aircraft be delivered if its avionics are unfinished? What's the gap between 'viable' and 'finished' here? Games are of course a different domain.


I think that frequent changes and experimentation are great qualities for new software, which is still figuring things out. However, this should be a temporary state, and it should be clearly communicated to users with some sort of extended "beta" moniker. After some number of years, the software's creators should recognize that their product has matured, and the pace of change should slow down drastically.


I'd say 50 years is the magic number.


Just to be clear, by "some number of years" I was thinking closer to 3–5 years, although I of course understand where you were going with that. :)


> Why is the modern software industry in such a constant state of flux?

Not all of it is. C is remarkably stable, for instance. C++ is much quicker to add language features, but has a real emphasis on backward-compatibility. I think Java and C# have done a pretty good job there too. Web front-end frameworks have a higher churn rate, yes, but for now, Angular and React are king, and have been for a while. Maybe they'll be toppled by some other near-identical framework in time.

> Is there an end point where we all actually figure out how to develop software properly?

Depends on the domain I suppose. Embedded code, operating systems, and drivers have been written mostly in C for decades now. Perhaps languages like Rust will make some inroads, but the 'normal way' has been pretty stable. I imagine other non-software fields have seen more radical changes than that over the same interval.

> Every time Slack redesigns its interface, users need to relearn where all the buttons are. Nothing can possibly justify rolling out ten million redesigns

I once read a rather cynical take on this same question: these companies hire a UI team, and that team isn't about to admit that they have no meaningful work to do.


> Why is the modern software industry in such a constant state of flux?

Is it though? Or is it barely trying to catch up with development of hardware and new fields of application?

Or should we just do AI, web, and real-time distributed databases in Fortran and COBOL?

Maybe people who feel the flux is excessive just lack the capacity to notice how fast the environment changes? And that's why they feel the flux is excessive rather than borderline inadequate?


This might be cynical, but maybe it's because most individual pieces of software don't matter too much.

Software is much easier to change than other things that humans build. It's easier to replace, sometimes with success, than what other disciplines have to work with. A lot of software isn't used in safety critical areas either so failing is acceptable. Lots of companies on the web have lots of downtime and still make money.

Even poorly written software can absolutely print money with huge margins. And old software that's still around and kicking is often the most successful!! A form of survivorship bias.

Sure, _products_ often matter a lot. But within them and as long as they still solve their problems, it doesn't really matter what's inside. If a product I use as an end user has a lot of internal churn, but still solves my problem adequately, I don't really care for the most part.

Sure, if it's medical software, or aviation software, or software to replace an elevator panel, I might care a lot. But your average web app or desktop program?


I think software engineering is depressingly, suboptimally stable. The space of all possible ways to write programs is so, so large and almost completely unexplored. Lisps, Hindley-Milner type systems, Cox, neural networks with autodifferentiation ... whenever we substantially depart from the local optimum around C-like programming, we find cool shit.

However, the vast majority of programming is done to build nearly indistinguishable CRUD SaaS apps. If we could just spend half our time doing real, interesting exploration of programming-space to satisfy our curiosity, maybe we could all agree to build apps with a more boring tech stack. However, that would require a large scale change in incentives that is hard to imagine shy of a Space Race style event.

... or Jonathan Blow was right and C was more or less the global max, and everything else was a mistake. I really hope that isn’t the case.


It's because not even the foundations are properly engineered, mostly due to lack of funding (because they can't be sold, and anyway the market doesn't reward proper engineering beyond a basic level, and governments don't massively fund open source), as well as lack of accountability for broken and insecure software.

For instance, look at programming languages: to write provably correct code you need dependent types, but none of the dependently typed languages produce optimal code or have a viable ecosystem. This problem has been unsolved for more than 30 years. We also didn't even have a barely decent production language until Rust in 2015.

Or databases: the best database is PostgreSQL, and it doesn't even have things as basic as unsigned integers in core (!!!) and doesn't support multi-master in core (and of course, it's written in a horrible language, which is C). Again, a problem unsolved for more than 30 years.

Or operating systems: all OSes are fundamentally insecure, because they are not written in a proper programming language, and at least the desktop ones don't generally use a hierarchical architecture, but rather have a single gigantic kernel-user boundary (plus they have horrible APIs, written in horrible languages, etc.).

Most other programs have no chance of even being provably correct to a given spec, since the foundations are missing, and thus even less chance of provably implementing the optimal solution to a problem. They are not even engineered with a reasonable expectation of correctness, because there is no liability if you release an incorrect program.

Of course it's not clear if this is fixable without developing a general purpose AI to write the software or massive societal changes.


> Why is the modern software industry in such a constant state of flux?

This is true of both language design and software practices. It's painfully obvious. There are no standards for what constitutes "better" or even "good". Is strict typing better? To what degree? What's it worth as a tradeoff? There are almost no quantitative analyses (https://quorumlanguage.com/ is an attempt), which is lamented by some (https://vimeo.com/9270320).

Even ROLES are not well defined (https://www.youtube.com/watch?v=GhfVK_ubk8U) because roles are dependent on known concerns, which are also fuzzy in this industry. Saying development is immature is so understated, it's laughable.


For python 2/3, it was because the BDFL made a wrong call. But we are past that now, and we collectively learnt something from the debacle, including the BDFL himself:

> As for "Machiavellian Python maintainers", that's real. There have been explicit attempts to apply pain to Python 2.7 users. This has decreased since van Rossum got stuck maintaining Python 2.7 code at Dropbox as his day job.

(https://news.ycombinator.com/item?id=13020680)

For java 8/9, it was because Brian Goetz still has a boss. We can just compare https://www.youtube.com/watch?v=2y5Pv4yN0b0 to https://www.youtube.com/watch?v=wHoRBvt3U6o. Hopefully some lessons were learnt as well.


Uncle Bob used to blame it on the doubling of developer counts every 5 years. Which means half of us have less than 5 years experience. Which makes democratic decisions a little awkward.


Arrogance. Younger programmers or people just getting into the industry assume they can automatically just do it better without taking the time to understand what is already out there. Reading and maintaining existing code is harder than writing your own.

Laziness. Rather than work from the lowest common denominator (what can vanilla JavaScript do for you?), do a quick google, see some random comment somewhere on the internet that uses library X, and just add it to the lumbering Frankenstein's monster of dependencies without a care for the future.

React + Redux + TypeScript is nice in terms of frontend development, but the sheer number of packages required for a small list of direct dependencies is staggering. Really, it is absurd. I would say it was created by people with the above two tendencies.


While I think there are plenty of people not learning from past problems, I think it's unfair to say it's all laziness and over-complication. Business solutions are hard, as they're designed by people through a game of telephone until you get to the engineer intended to build them. Maybe the company put the burden on a junior engineer to save money. Should they tell their boss they need to spend a couple of months researching? Okay, so the company hired a senior engineer, who has been using X tech for 15 years, a tech that has seen a large decline in usage. The senior builds the product, but gets a better offer and leaves. Who now maintains this? Hiring for X tech is hard; you bring someone on to learn it, but they kinda lied about how much they knew, and since no one in your org knew much about it, they passed through. Now they're making a mess of the beautiful architecture the senior laid out.

Stop blaming this all on junior programmers and JavaScript. That's just saying those damn kids and their lack of respect for their elders are what's causing it. The truth is this is a relatively young industry that's being used in just about every single aspect of life, where the cost to make a change is minimal, where the ability to learn is one of the most accessible skills, and where business demands change on a dime.

Stop your own arrogance and laziness in understanding the landscape and the people in it.


Apparently you didn't need any answers after all as you have it all sorted out for yourself already!


Human factors. Developers ship a new OSS framework Z because it makes an extremely subjective improvement on framework X in their opinion. And the process of writing Z satisfies an intellectual curiosity to write something from scratch and it also looks great on your resume to have written your own framework. Z framework actually gets popular, other developers, kind of bored at work from using the same tools for ages try out framework Z. And it's nice to try out something new, it makes your work day feel fresh and different and you learn something. Framework Z slowly gets more popular and now it's the hot thing to have on your resume because the smart bleeding edge developers are using Framework Z. So more devs start using Framework Z, try it out on a side project, or a new project. Eventually even the skeptics about Framework Z have at least tried it out and put out a blog post with their thoughts on Framework Z.

Framework Z feels legitimized, it's a boon to a lot of careers. Lots more people use it. Recruiters start to ask for it as a requirement.

I don't want this to sound cynical, because I think it can sound cynical. I suspect the industry starts from good first principles of wanting to learn new things and try new things and make improvements, and grow themselves. But there is also serious capitalist pressure on the whole system. And a lot of real money on the line, and careers and Twitter followers and conference speaking engagements.

So between the good reasons and the bad reasons, in web development in particular, things move way way way too fast for little gain.


Quoting the article:

Around 60% of packages on npm have not been updated in a year or more. Despite the lack of maintenance these packages are still downloaded billions of times.

This is a problem across many other languages/frameworks as well. Many popular packages have a single maintainer and the entry in the package manager index is accessible only by that maintainer. If that maintainer stops paying attention, the problems could be worse than the package just bitrotting, as we've seen from supply-chain attacks like event-stream[1].

There are volunteer orgs like Jazzband[2] which take group ownership of popular packages to ensure ongoing maintenance, but I've not seen many of those so far.

[1] https://www.hillelwayne.com/post/stamping-on-eventstream/ [2] https://jazzband.co/


Yeah, this is a big problem with pip. If you can't get the old maintainer to add you as a contributor, you can't update a package on pip.

So now you have to fork it, come up with a new name, and then publish that instead. And then hope everyone who depended on the old package switches to your package.

Reddit had this same problem with abandoned subreddits. They instituted a policy where you could apply to take over an abandoned subreddit, so if there had been no mod activity you could take over.

Pip needs a process like that. They need a way to take over an inactive project. Safety would be a big concern, you don't want someone malicious taking over a popular but abandoned package and then hacking it. But safety the other way is important too.

There is also the odd situation where you could end up owning the code and not the package on pip. When my company acquired another, we got all their code. We only realized later that we didn't get the credentials for pip. Luckily the old owner was kind enough to just give us the credentials, but we could have been stuck owning the code but not the pip package.


There is such a process, PEP 541: https://www.python.org/dev/peps/pep-0541/

It was adopted in March 2018, and many projects / names have been claimed under this process.

You can see a list of some of the projects claimed or in the process of being claimed here: https://github.com/pypa/pypi-support/issues?q=label%3A%22PEP...


It's surprising what an email from the PyPI admins can do. After many emails from different people and countless GitHub issues over the past year, the only thing that got a reply from the maintainer was an email from the PyPI admins.


There's now a PEP for that that seems to have had some work on it recently: https://github.com/pypa/warehouse/issues/1506

(I got an email referencing it a couple days ago from an old project I asked to take over).


That issue was last updated in July 2019, did discussion continue elsewhere?


Isn't there a bit of irony there though?

Lack of maintenance could mean abandonment, which is the implication... or it could mean the package is complete, does what it needs to, and there hasn't been any reason to change it. It's especially ironic when talking positively about COBOL in the same post.


Honestly I don't know why a lot of open source is so centralized. I think Github can make a big difference there.

I think open source projects should have redundancy in their owners. They should also have a bigger group of people that can decide (either independently or democratically / by consensus) on merging and making a release, which is already assuming that "master = release" hasn't been automated yet.

And it should be easier or possible to take over a project, on both source code hosting sites like github and dependency publishing organizations like NPM. In theory anyone can take a library and fork it, but in practice the original library has a monopoly on the name still and will be installed by default for a long time.

NPM could instate a deprecation warning on libraries that haven't seen a release or any activity in a year. Mind you, a lot of libraries will be 'done', but those could be marked as such. If anything, the people responsible for the library should be pinged every once in a while and indicate that they are still responsible for the library, and will be able to act in case of e.g. a security vulnerability.


> I’m just not inclined to agree that civil society can’t continue to run on millions of lines of COBOL for another 60 years. It certainly can.

> Java 8 and Python 2 on the other hand are a far more serious threat. When systems can’t get off end of life technology they miss security updates, performance enhancements, and new features.

This seems contradictory. Is it not also a problem that COBOL hasn't been getting security updates for years (I assume)? Are COBOL systems typically isolated from the outside world in some way?


Sort of... COBOL programs are usually batch programs, and grab input from tape rather than the net. Interactive stuff tends to only be run on internal networks (3270 terminals used to be the norm).


Yeah, I wondered if it was something like that. Sounds like input is sanitized before it even touches that system.

Which raises an interesting train of thought: could we make systems that last longer by doing a better job of isolating pieces of them from the outside world? If malicious input never reaches your code, does it even need security updates?


The rubber has to meet the road. Users have to interface with the software.

"Backend" software tends to be very long lived because it's not client facing. Clients don't care that the software was written in Java2 and uses some weird-ass collection of korn shell scripts to keep things moving. These scripts usually just need to eat data from one location, and poop it into another. And because these systems are old, they are well tested and battle hardened.

But that backend is no good without some form of user interface. This is almost always the place where things break down. There are many more people involved in the decision making for UX, and we have more and more devices with different use-cases. Most tech-ish companies can't afford to ignore completely new computing paradigms for decades.


At the risk of stretching the metaphor, what I'm talking about are bearings that separate the wheel from the rest of the car. Maybe you have a thin, outward-facing server that does nothing but authenticate, validate, and sanitize input data, and then redirects that to some internal system that knows for sure exactly what it's going to get, and doesn't get to talk to the outside world directly, and therefore doesn't have to worry about attacks.
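
A minimal sketch of that shape, in Python with only the standard library; the internal endpoint, token check, and field schema here are all hypothetical placeholders:

    import json
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    INTERNAL_URL = "http://10.0.0.5:8080/ingest"  # assumed internal-only service

    class Gateway(BaseHTTPRequestHandler):
        def do_POST(self):
            # Authenticate at the edge (placeholder check).
            if self.headers.get("X-Auth-Token") != "expected-token":
                self.send_error(401)
                return
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            # Validate and sanitize: only a known shape gets through.
            try:
                data = json.loads(body)
                record = {"id": int(data["id"]), "name": str(data["name"])[:100]}
            except (ValueError, KeyError, TypeError):
                self.send_error(400)
                return
            # Forward the cleaned record; the internal system never sees raw input.
            req = urllib.request.Request(
                INTERNAL_URL,
                data=json.dumps(record).encode(),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req)
            self.send_response(204)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8000), Gateway).serve_forever()

The internal system behind it can then stay as old and crusty as it likes, so long as the contract at this boundary holds.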


Congratulations, you just invented "middleware".


IBM still maintains the platform: I’m sure there’s still system updates and the like, IBM just has done a better job maintaining backwards compatibility than “modern” stacks: e.g. the AS/400 has gone through a couple different CPU architectures without requiring that programs be recompiled because it was designed to decouple the code output by a compiler from the machine code of the underlying platform.


I think it's probably because COBOL has almost no dependencies. Correct me if I'm wrong.

> The real horsemen of the legacy apocalypse is the depth of the dependency tree. Modern software development stacks abstraction on top of abstraction.


Some companies that have large COBOL code bases don't hire "software developers" -- they hire people who have worked in other careers and want to switch to software development and then they extensively train them internally. This could explain the median age of COBOL developers remaining constant.


What other careers do they prefer to draw future COBOL developers from?


I worked with a former construction worker who was going through the training. He saw this as his opportunity to get out of that kind of work. A lot of the hires had non-technical and non-domain backgrounds.


A guess: something with domain experience, like banking/finance?


I met one who was trained out of school in Austria. He was in his 20s.


> The real horsemen of the legacy apocalypse is the depth of the dependency tree. Modern software development stacks abstraction on top of abstraction.

I think Python2 will be 2050's COBOL.


I totally agree. I think that we are headed for a "Dependecalypse," with Jurassic-scale disasters over the horizon.

Dependencies are not per se bad, but some dependencies are just bad.

Also, we can get these hideous cocktails, where dependencies are mixed together, with alarming results.

A fairly common issue with me is that if I am setting up a Web site, I need to be very careful what theme I use (assuming I don't write my own), because themes tend to come with a fair bit of "baggage" that does not play well with others.

I'm the original author of a fairly ambitious system that has been getting a lot of traction, lately (partly because I stepped back, and let some "new blood" take the reins).

I am constantly reading reports of dependency collisions. When I wrote the original framework, it had zero dependencies, and fit everywhere.

But I can't argue with the results. When these new folks came in, and started adding dependencies (sometimes, under protest from me), the utility of the system skyrocketed.

So did the problems.

These guys are pros. I trust them implicitly, and they have been anything but reckless, yet they have still had some issues.

I cannot say the same for a lot of folks out there. I see people throw together massive systems, with hardly a thought to the dependency debt.


I’m going with the author on this: npm, pip, and friends will be 2050’s COBOL, but supporting them will be an order of magnitude harder.

Startup idea: snapshot package managers every 6 months or so. Provide access to the snapshots for free. Once enterprises are “hooked”, and fall off the nightly upgrade train, charge out the nose for backported security updates.

I guess this is similar to the RHEL model, but I suspect it’s even stickier.


> I think Python2 will be 2050's COBOL.

I expect otherwise. My company has various tools and applications that weren't migrated to Python 3, and it's getting harder and harder to support them as dependencies keep dropping support for them (someone had the bright idea of not versioning dependencies). Sooner or later a nasty bug is going to be found that makes continuing to use it untenable, and you'll be forced to upgrade. Python is still under development enough to make upgrading attractive.

If a language manages to become hugely popular and then stops development, that's how you end up with Perl or COBOL.


> If a language manages to become hugely popular and then stops development, that's how you end up with Perl or COBOL.

Perl development didn't stop. In fact, Perl 5.32 will be released in a few days, and it will introduce several major new features, such as operator chaining, the 'isa' operator, and the ability to disable the indirect object syntax.


I know of at least 3 banks that use Python 2 inside their trading platforms (the big two use Python as the built-in scripting language) -- which would be a multi-year, multi-tens-of-millions project to migrate to 3. We won't be saying goodbye for a while...

For standalone stuff, yeah, it's not so bad. Upgrading between node releases has been more painful to me than python 2->3...


Wow! What are the problems you encountered upgrading Node releases? I noticed how they removed tail-call optimization, which broke Lamdu.


Pinned dependency X stops working in Node X+1. Upgrading dependency X to work on Node X+1 breaks code using dependency X. Sorting out the mess stops development and leads to cascading breakage.

So just say "fuck it" and wrap the project up in a Docker container running Node X. Now work can continue. Luckily the project is internal, so missing security fixes is less of a problem than for something customer-facing.


>Upgrading between node releases has been painful to me

This is surprising to me. What kind of deps do you have that are changing? From my view, starting around Node 4, only Buffer really went through big API refactors, and otherwise going LTS -> next LTS is seamless.


> Python is still under development enough to make upgrading attractive.

> If a language manages to become hugely popular and then stops development, that's how you end up with Perl or COBOL.

This contains two notions I am not sure are right.

First, you argue that Python is still under development. Sure, from a language designer's view, people who use Python2 can relatively quickly learn Python3, and the language designers have interest in people migrating, so they might sell it as the same language.

But from the point of view of a language user who has a lot of legacy code and has no interest (or not even the resources) to rewrite and re-test it all, Python2 and Python3 are actually different languages. A program written in Python2 will not run with a Python3 interpreter, and there is no switch or option which would make it work. So, they're different languages (which unfortunately share the same file extension for their source code files).
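
Two lines are enough to show the break (the first is valid Python 2 but a SyntaxError under any Python 3 interpreter; the second parses under both yet silently computes different results):

    # Python 2 only: print is a statement, so Python 3 rejects the file outright.
    print "hello"

    # Parses under both, but 3 / 2 is 1 in Python 2 and 1.5 in Python 3.
    print(3 / 2)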

The second argument is somehow that organizations and people keep using Python2 "in spite of" or "although" it is not developed further. But I think many of these organizations might also choose it (perhaps not even consciously) because Python2 is stable and is not going to have breaking changes, which is identical to one of the reasons why COBOL is still used. For these users, stability is far more important than the addition of fancy new libraries, or new syntactic features which make the code actually harder to read.

Sure, there is the argument about security fixes. For web services, or software on desktops, this absolutely matters. But there are at least two domains where security fixes are far less relevant than stability and backwards compatibility: large enterprises, and scientific research. In the first case, the code runs well shielded inside the corporate systems, and does not encounter untrusted input, so it does not matter whether it is secure. It matters more that the developer understands the code well and many edge cases are fixed. In the second case, security is, so far, a non-concern. And in addition, the people who wrote the code, which is almost always PhD students and researchers who had to move on to another temporary job, are not there any more in most cases, and there are no resources available to hire new people just to port the code. If new people are hired, they will just write new code, to produce new research results. Because this is what brings in the money which keeps the whole ship floating.

(There are interesting exceptions to that, some research projects are that big and long-lived, and use so much software, that they employ people who actually know both science and software engineering, but that's more the exception).

As a result, stability and backwards compatibility beat new features, and some people and organizations simply will not port because it does not meet their needs.

There is another language which is in a slow but steady process of fragmentation: C++. So far, there have been (almost) no backwards-incompatible changes. But the main selling point of C++ in the '80s was that it was compatible with C, and now the official C++ Core Guidelines (https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines) explicitly discourage the use of C constructs like pointers (https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines...). At the same time, large users, such as Google, have their own C++ guidelines, which discourage C++ exceptions (https://google.github.io/styleguide/cppguide.html). The C++ Core Guidelines in turn require use of RAII, which in many cases requires exceptions, especially in algorithms.

A foreseeable effect of these opposed forces is that C++ is splitting into several communities which use significantly different idioms, and it is well imaginable that some part of the different user communities gives up on backwards compatibility as well.


I've very much started to trade expediency for limiting dependencies. Of course, since I have to reinvent the wheel, some claim that has many of the same costs as any dependency. But since the code use is all in-house and doesn't have to satisfy the needs of a very broad user base or too wide a spectrum of use cases, it's not much of a problem at all, compared to finding that your code depends on something someone else supposedly maintains, but doesn't really, or who decides that backwards compatibility is not terribly important.

All kilometer-age will vary, of course.


I'm sure Java will be, given how it's being applied in a lot of large back-ends for e.g. banking, administration, etc.

That's my making-some-quick-money-before-retirement plan, at least. I'm not a great Java developer, but it'll probably be good enough.


Python’s standard library is essentially a Swiss army toolkit for manipulating data. It has email and imap parsing built into it! I’m sure this doesn’t completely explain the dependency tree size but it does explain the proliferation of simple packages in JavaScript. Even data structures are difficult to find in JavaScript. Try looking on npm for a max heap. Python has heapq built in. JavaScript has ten libraries that are all equally unpopular.
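
For what it's worth, even heapq only gives you a min-heap; the usual trick for a max heap is to negate values on the way in and out:

    import heapq

    nums = [3, 1, 4, 1, 5, 9, 2, 6]
    max_heap = [-n for n in nums]   # negate so the largest becomes the smallest
    heapq.heapify(max_heap)

    print(-heapq.heappop(max_heap))  # 9, the largest element
    print(-heapq.heappop(max_heap))  # 6, the next largest

But at least it's in the standard library and will still be there in ten years.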


This is one of the better tech articles I've seen on here recently. The analysis of why the Python vs. node ecosystems vary in dependency depth and breadth is good food for thought. Python has long been a "batteries included" language, whereas node has not. Node makes it easy to publish packages, whereas my experience with Python is that it is quite difficult to get a handle on doing PyPI (versus Anaconda!) the right way. Publishing wheels versus source is confusing, and it took me a while to understand the nuances. There's also the idea of supporting python2 vs python3, which people from different quarters will criticize no matter what approach you take.

Publishing binaries on Python is also difficult and a highly skilled endeavor requiring additional knowledge of compiling shared objects, docker, and the baroque manylinux concept. Once you have a system down, it becomes easy, but that took me about 6-12 months of accepting increasingly complex requirements before I could get a decent handle on it. I have not tried publishing binaries on npm so I can't say what the relative experience is.


Kinda surprised to not see Perl in there in some form, since it's still around as the glue holding a lot of older systems together (and for new stuff, but that's not the point of this article). Then again, it's got a little bit different of a story, since most (almost all) of the old Perl from decades ago will run fine on a brand new Perl interpreter released recently.


I have a Perl problem right now. It's not the Perl interpreter at all, which as you say runs the old code fine. It's the hierarchy (read as "dungheap") of package dependencies that have changed since the last primary software version was installed.


Well, the good thing is all those old versions of modules are archived[1] and you can still get them if you really need to, and there are utils to spider the dependency chain and tell you the packages and versions (and I think stuff that will make sure those versions get fetched) on CPAN itself, so it shouldn't be too hard to get the prior stuff up and running if you go that route (even if it might have security implications). The only stuff it would be hard to back down in versions would be core stuff, but I'm not aware of any core Perl modules that have had breaking API changes.

1: http://backpan.cpantesters.org/

Edit: Cleaned up an ambiguous sentence in the middle there.


Perl 5 is still actively supported, unlike python 2.


As I once told an interviewer, "Legacy code is the future!"


I can't help but look at this from the other direction: why does code constantly have to be updated? There's something to be said for something that, once built, simply works.

People love plenty of old things: antique furniture, vintage cars, classic books from centuries ago, monumental buildings. But software needs to be constantly rewritten and updated, and that takes a lot of work.

On the one hand I don't want to make the case for outdated languages and systems, but on the other, we are spending a lot of effort just keeping things up to date with the latest technologies. Sometimes that's really necessary of course; security holes need to be fixed, and frequently new features are necessary. Better ways of doing things have been discovered or developed. But man, there's a lot of effort going into these legacy systems.

Although I love learning new things, I also kinda hope that some day we'll reach systems and languages that are so well-designed that they don't need to be changed much, or at least keeping them up to date will become trivial.


You can build simple things that work forever (or as long as the underlying platform remains backward compatible). It's just rather costly, and most software isn't important enough for that.

As the owner of a 40 year old motorcycle, I can assure you that such things also need a lot of maintenance, using increasingly rare parts. Leave it sit for too long and you need to clean the carbs. Even my 2 year old motorcycle needs new oil, brake fluid, tires, brake pads and clutch cables, not to mention insurance, plates and a biannual inspection. Keeping those things on the road is an active job.

Of course, objects with few moving parts and no external dependencies will last longer.


It's because the landscape moves on. Take Windows applications; plenty of apps were built for Windows 95 or Internet Explorer 6, but they no longer work in newer versions of Windows or IE.

Companies are holding back on updating their users' browsers because they depend on these applications. This leaves them with two options: invest to have the application updated or rebuilt for modern browsers, or take the risk of outdated and insecure software.

I guess it can work with sandboxing and virtualization and the like (how they nowadays keep old mainframe applications working on modern hardware), but it's a patch.


I know why it happens, but I still regret it when that means old things become inaccessible. Old games become unplayable, or websites stop working (not to mention that they are frequently unfindable on search engines), etc.

We treasure the writings of the distant past, but the 1990s are already becoming surprisingly inaccessible.

For games, there's fortunately gog.com that tries to keep old games playable, and archive.org of course tries to preserve old web content, but that only really works for static content. And then there's all the cool stop-motion lego animation I found before Youtube came along, and now I can't find them anymore.

The internet sometimes feels like a city bulldozing its entire city center to rebuild everything according to the latest standards. It's certainly convenient, but you lose so much history.

But now I'm talking more about the loss of content than about the fact that everything requires constant updating and is never done. These are two different concerns, but I feel them both.


I'm happy I found Marianne Bellotti. Good stuff. I'm following her now.


She has so many great pieces on Medium. It's worth looking through them.


I know the Rust Evangelism Strike Force is a meme, but Rust genuinely has a good solution to language rot with its "edition" concept, where the language can be updated and the compiler just converts all code to a mutually-interoperable internal representation. It’s not dissimilar to using something like Babel for JS, except that because this internal representation is extremely simple, it’s far easier for the language to change more, add restrictions, remove restrictions, and so forth, while still being interoperable. Sadly, Rust has no good equivalent when it comes to libraries, and it’s not unheard of to have two libraries that should be interoperable fail to compile because there’s no way to find a single version of a shared library that satisfies both of their version constraints, even though the actual types in question are identical. I’m not sure if the error messages still look like this, but they used to say something like `expected foo::Bar, got foo::Bar`.


A bit of fun - "That's what I love about this code, man. I get older, it stays the same age”. “Alright, alright, alright.”


> In all likelihood the reason the average age of COBOL programmers is stable is because COBOL programmers develop their depth of experience and expertise in other languages before moving over to COBOL later in their career.

No, it's probably that there's a steady but small influx of new developers coming in and the bell curve is throwing off the 'average'. Lies, damned lies and statistics.

Answering questions with statistics is a rookie mistake.


Young programmers do not have the option of learning COBOL.

The university I went to still offers COBOL classes.


I'm banking on Java still being needed when I'm 65 (~20 years from now) and close to retirement. When my 401k is worthless, I'll still be able to make a living until I'm dead and buried. Seriously though, Java is the COBOL of tomorrow.


Meanwhile, Android still doesn't fully support java 8.


> From TFA: The end of life for Java 8 was supposed to be 2019

Nope, that's the EOL of free support for Oracle JDK 8.

Currently, OpenJDK 8 is maintained mostly by Red Hat, so the EOL of Java 8 is at least 2026 [1]

[1] https://access.redhat.com/articles/1299013


My schadenfreude regarding Android is that some cool Kotlin library on the JVM will end up requiring Java 8+ features, and then all those #KotlinFirst heads on Android will finally get why compatibility with newer versions is a must.

However, most likely what will happen is some fork to make it somehow work on Android, even if with fewer features.


She found plenty to talk about without even mentioning the Lava Flow anti-pattern.

Her other essays are also insightful. The one about Steve Jobs is the best about him I have read, even considering Isaacson.


Couldn't read the article because I've "read all of your free stories this month". I would prefer a few ads rather than Medium's walled garden.


I suggest you get a "Cookie Autodelete" browser extension. Not for Medium per se, but if you have one this problem also goes away.


there's a trivial workaround to medium's paywall:

open a private/incognito browser window



