
Not all code runs enough times for that trade-off to always be justified.

It is very hard to know whether your software will be popular enough for compute costs to matter at all, and even if it would be, it is hard to know whether you can survive as an entity long enough to afford the extra delay: a competitor might ship an inferior but earlier product, or you may run out of money.

You would rather ship the quick-and-dirty version and see whether there is enough demand to be worth the cleaner effort.

There is no limit to that either: more optimization keeps becoming a good idea as you scale. At, say, Meta or Google levels it makes sense to spend on building your own ASICs, something the rest of us wouldn't dream of doing today.




> Not all code runs enough times for that trade-off to always be justified

If you're running a web server, it definitely is. Compute and memory literally translate into money.


Not if you don't have users; that is the point.

The vast majority of applications that get built, in both the enterprise and consumer space, do not come close to even, say, 10k monthly active users.

You can stick several of those apps, in crappy PHP or Node.js code, onto the cheapest $50/month VPS and nobody will know the difference, or do it even cheaper by running them on serverless stacks such as Firebase or Lambda.
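To make the serverless option concrete, here is a minimal sketch of the kind of handler such an app boils down to (AWS Lambda style; the event shape and field names are assumptions, not from the thread). At low traffic, pay-per-invocation pricing for something like this rounds to pennies a month:

```javascript
// Hypothetical Lambda-style handler: the platform owns the server,
// scaling and TLS; the app is just a function from request to response.
const handler = async (event) => {
  const name = (event.queryStringParameters || {}).name || "world";
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `hello, ${name}` }),
  };
};
```

Nobody consuming the endpoint can tell whether this runs on a $50 VPS or a serverless stack, which is exactly the point being made above.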

There is a usage threshold below which the significant cost driver is developer time and the skill level needed. 95%+ of professional developers never write code that exceeds this threshold.

These economics are what drive:

- so many poorly written plugins for CMSes, and apps in easy-to-start languages like PHP or Node.js;

- no-code solutions as diverse as Retool and Shopify getting so much revenue and such high valuations;

- Copilot-style AI assistants having a market even when they were not good enough for skilled developers.

These economics work for startups too. All of us here use the cloud, and it will keep making sense both technically and economically until we hit perhaps tens if not hundreds of millions of users. Until then we don't care that it's somebody else's DC with network-mounted disks and shitty I/O performance, or that we pay for metered bandwidth, or which language we use.

Only a few pieces of code truly cross the threshold where infra costs (RAM, disk, bandwidth, CPU, etc.) are so much greater than the cost of developer time that it makes sense to optimize.
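The threshold argument above can be put as back-of-the-envelope arithmetic (all numbers here are illustrative assumptions, not figures from the thread):

```javascript
// Break-even for an optimization: how many months of infra savings
// does it take to recoup the developer time spent on it?
const infraSavedPerMonth = 50;   // $ saved, e.g. one fewer cheap VPS
const devHourlyRate = 100;       // $ fully-loaded developer cost
const devHoursSpent = 40;        // one week of optimization work
const breakEvenMonths = (devHoursSpent * devHourlyRate) / infraSavedPerMonth;
// 80 months: below the usage threshold, the optimization never pays off
```

Flip the numbers to Meta/Google scale, where the same week of work might save thousands of servers, and the inequality reverses, which is why only code past the threshold is worth optimizing.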

Facebook invested in the Hack runtime rather than immediately rewriting away their PHP stack; Twitter famously rewrote their Ruby stack after they kept hitting its limits. Products only have to worry about this problem IF they scale, and most don't.


That's a fair point, but I think the fallacy here is the assumption that choosing a less-performant stack makes it easier or faster to deliver features. I don't think that's necessarily true.

When Facebook started, PHP was a good choice. These days, though, you'd probably be better off going with Spring Boot or .NET: they are more performant and much more "batteries included". I would say the same goes for Node.

Node.js is... usable. It's certainly not nice to write JS on the backend, and you need a LOT of libraries to make it work. The problem is they don't all interop perfectly with each other, and you won't have great tooling. I think most people, even startups, would be better off going with a backend framework that has good tooling. If you can do a bunch of codegen and you have all the batteries included, I would imagine developer velocity would be faster.


Batteries-included frameworks and languages such as .NET, RoR, or Spring Boot were perhaps the optimal choice 5 or 10 years back.

They balanced developer velocity against the learning curve needed to use them. The learning curve matters because the steeper it is, the more experienced/skilled the developers you need, and that translates into a higher $/hr cost of dev time. Simpler learning plus codegen tools was the pitch that RoR, .NET, and all the frameworks they inspired made in the late 2000s.

Today the model has shifted to stacks like Firebase, Supabase, or Hasura with Next.js or similar, using GraphQL, gRPC, or occasionally RESTful API-generation workflows instead of the boilerplating tools that .NET, Spring Boot, et al. provided. These platforms come with hosting services and are typically language-agnostic, though TypeScript/JavaScript is the dominant choice. The developer now focuses only on business logic, without worrying about organizing code, running it, or standard components like auth. As a result, frontend teams are more likely to own this layer, and they will write TS/JS more often than not.
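A hypothetical sketch of what "only business logic" looks like in this model (the resolver shape and field names are assumptions; the platform, Firebase/Supabase/Hasura-style, is presumed to own auth, routing, hosting, and persistence):

```javascript
// The team ships plain resolvers; everything around them is generated
// or hosted by the platform.
const resolvers = {
  Query: {
    // Compute an order total from line items the platform already fetched.
    orderTotal: (_parent, { items }) =>
      items.reduce((sum, item) => sum + item.price * item.qty, 0),
  },
};
```

Everything that a Spring Boot or RoR app would scaffold around this function (controllers, auth middleware, deployment config) is what these platforms are selling.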

Even runtimes like Deno are getting into the game: instead of just writing the runtime and making money on consulting, Deno wants to make the DX of running code in their managed runtime so simple that a lot of small teams will just use it out of the box.

Until the app is at tens-of-millions scale, non-Node.js and unmanaged stacks won't make economic sense. People will of course build software in any system, because it is what they know, not because it is the rational decision.


Note that ASP.NET Core is significantly faster than Spring. The closer alternative, in both UX and performance, is going to be Vert.x instead.



