If every developer obsessed over efficiency and performance, though, development would become slower and more expensive. People don't write bad-performing code because it's fun, but because it's easier. If hardware is cheap enough, it can be advantageous to quickly write slow code and buy a big server instead of spending days optimizing to save $100 on servers. When scaling up, the tradeoff has to be reconsidered, of course.
We should all think about optimization and performance all the time, and make a conscious decision to pursue it or not given our time constraints and the level of performance we want.
People write bad-performing code not because it's easier, but because they don't know how to do better or don't care.
Repeating things like "premature optimization is the root of all evil" and "it's cheaper to get a bigger machine than dev time" is harmful, because people stop caring about performance and stop practicing it, and if we don't practice it, it will always be a hard and time-consuming task.
It is even worse for widely deployed applications. To pick on some favorites, Microsoft Teams and OneDrive have lousy performance and burn up a ton of CPU. Both are deployed to tens or hundreds of millions of consumers, squandering battery life and electricity globally. Even a tiny performance improvement could lead to a fractional reduction in global energy use.
I wish they would do this. But my experience is that building efficient software is hard, and it gets much harder the larger the team gets or the longer the product exists.
Even Zoom used to be very efficient, but it has gradually gotten worse over time :-(
I would find this more compelling if we were not discussing a trillion-dollar company that employs tens of thousands of programmers. OneDrive's performance is so bad I cannot imagine anyone has put any effort into prioritizing efficiency. A naive first attempt was packaged up and never revisited.
While that is true, it's really not easy to do without rewriting from scratch and scrapping a load of features, which is organisationally difficult.
What large piece of software with a user interface do you work with that is actually fast and stays fast? For me, it's probably just Chrome / Firefox. Everything else seems to get slower over time.
I doubt that it would be good business for Microsoft, though. The people who use these products and the people who buy them and force others to use them are two separate groups, and anyone who cares even a bit about user experience and has the power to make the decision has already switched to something different. It's also the users, not Microsoft, who pay for the wasted power and lost productivity.
Strongly disagree with this sentiment. Our jobs are typically to write software in a way that minimizes risk and best ensures the success of the project.
How many software projects have you seen fail because they couldn't run fast enough or used too many resources? Personally, I've never seen it. I'm sure it happens, but I can't imagine it's a common occurrence. I've rewritten systems because they grew and needed perf upgrades to keep working, but this was always something the business knew about, planned for, and accepted as a strategy for success. The project may have been less successful if it had been written with performance in mind from the beginning.
With that in mind, I can't think of many things less appropriate to keep in your mind as a first class concern when building software than performance and optimization. Sure, as you gain experience in your software stack you'll naturally be able to optimize, but since it will possibly never be the reason your projects fail and presumably your job is to ensure success of some project, then it follows that you should prioritize other things strongly over optimization.
I see it all the time: applications that would be very usable and streamlined from a UI perspective are frustrating and painful to use because every action requires a multi-second request, so the experience is mostly reduced to staring at progress spinners.
This is doing a lot of heavy lifting. Just because an app is slow doesn't mean Rust would've made it faster. It may just be slow because of a bad query or otherwise poor architecture, especially in web development. The commenter is asking what projects you've seen fail because the language itself hit a performance limit that couldn't be worked around.
This honestly sounds like you're describing the most successful software on the market. I can't think of many social media sites slower than Facebook or Instagram, a chat app slower than Slack, or an email client slower than Gmail.
I'm not so sure this "race to the bottom" is a characteristic I want to avoid. The most competitive markets in the world are always in first-world countries or in countries quickly becoming one. Take South Korea, for example: they had sweatshops maybe 70 years ago, and now they arguably have the best healthcare system in the world.
I'll take a fast food job for 40 hours a week any day over most of the options in poor countries. Sure, nothing but the best, fulfilling jobs for everyone is ideal, but until I see that exist, I'm not informed enough to know whether it's possible outside of someone's political ideals.
It also depends on where the code is running. To put it simply: nobody cares how much RAM the server is using, but they do care if their client-side application isn't responsive. A performant, responsive UI should take priority over everything else.
Are you sure? Is my one PHP server running with 50% more electricity gonna outweigh the 12 developers with beefed rigs just to get Rust compile times somewhere reasonable? Or how much longer they will be using their computers because it will take longer to write the code itself? Especially when I have 1000 monthly users and a $6 VPS is more than enough anyway?
I'm not so sure. I use Rust for simple web services now, when I would have used Python or JS/TS before, and the development speed isn't much different. The main draw is the language/type system/borrow checker, and reduced memory/compute usage is a nice bonus.
Not the other poster but I moved from Go to Rust and the main packages I use for web services are axum, askama, serde and sqlx. Tokio and the futures crate are fleshed out enough now that I rarely run into async issues.
That's pretty much where I'm at, plus a few basic packages for auth, caching, job/queue stuff. I can't remember the last time I had to care about async, but it does occasionally come up when dealing with things like background tasks.
I'm not totally happy with sqlx and the logging situation, but most issues that come up are the "solve once and never worry about it again" type.
I have to agree: despite using it a lot, I think async is the worst part of Rust.
If I had to do some of my projects over again, I'd probably just stick with synchronous Rust and thread pools.
The concept of async isn't that bad, but its implementation in Rust feels rushed and incomplete.
For a language that puts so much emphasis on compile time checks to avoid runtime footguns, it's way too easy to clog the async runtime with blocking calls and not realize it.
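For what it's worth, the "synchronous Rust and thread pools" alternative mentioned a couple of comments up can be sketched with nothing but the standard library, no async runtime at all. This is a minimal illustration with made-up names, not code from any real project:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

/// Square each input on a fixed pool of worker threads and return the sum.
/// Blocking inside a worker only ties up that one OS thread, unlike a
/// blocking call sitting on a shared async executor.
fn parallel_sum_of_squares(inputs: Vec<u64>, workers: usize) -> u64 {
    let (job_tx, job_rx) = mpsc::channel::<u64>();
    let (res_tx, res_rx) = mpsc::channel::<u64>();
    // mpsc receivers are single-consumer, so workers share one via a Mutex.
    // (A real project might use crossbeam's multi-consumer channels instead.)
    let job_rx = Arc::new(Mutex::new(job_rx));

    let handles: Vec<_> = (0..workers)
        .map(|_| {
            let job_rx = Arc::clone(&job_rx);
            let res_tx = res_tx.clone();
            thread::spawn(move || loop {
                // Take the lock only while pulling the next job.
                let job = { job_rx.lock().unwrap().recv() };
                match job {
                    Ok(job) => res_tx.send(job * job).unwrap(),
                    Err(_) => break, // job channel closed: no more work
                }
            })
        })
        .collect();

    let n = inputs.len();
    for job in inputs {
        job_tx.send(job).unwrap();
    }
    drop(job_tx); // close the job channel so workers exit their loops
    drop(res_tx); // drop our clone so only workers hold result senders

    let total = res_rx.iter().take(n).sum();
    for h in handles {
        h.join().unwrap();
    }
    total
}

fn main() {
    // 1 + 4 + 9 + 16 = 30
    assert_eq!(parallel_sum_of_squares(vec![1, 2, 3, 4], 2), 30);
    println!("ok");
}
```

It's more boilerplate than `async`/`.await`, but there's no executor to clog: a slow or blocking job delays one thread and nothing else.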
Code is usually run many more times than it is written. It's usually worth spending a bit of extra time to do something the right way the first time, so you can avoid having to rewrite it under pressure after costs have ballooned. This is proven time and time again, especially where inefficient code can be so easily identified upfront.
Not all code is run enough times for that trade-off to be justified.
It is very hard to know whether your software will become popular enough for costs to be a factor at all, and even if it will, it is hard to know whether you can survive as an entity through the extra delay: a competitor might ship an inferior but earlier product, or you may run out of money.
You'd rather ship the quick-and-dirty version and see if there is enough demand for it to be worth the cleaner effort.
There is no limit to that: more optimization keeps becoming a good idea as you scale. At, say, Meta or Google levels, it makes sense to spend money building your own ASICs, which most of us wouldn't dream of doing today.
The vast majority of applications that get built, in both the enterprise and consumer space, never get close to even, say, 10k monthly active users.
You can stick several of those apps, in crappy PHP or Node.js code, onto the cheapest $50/month VPS and nobody will know the difference, or run them even more cheaply on serverless stacks such as Firebase, Lambda, etc.
There is a threshold of usage below which the significant cost driver is developer time and the skill level needed. 95%+ of professional developers never write code that will exceed this threshold.
These economics are what drive:
- so many poorly written plugins for CMSes, and apps in easy-to-start languages like PHP or Node.js;
- no-code solutions as diverse as Retool and Shopify getting so much revenue and such high valuations;
- Copilot-style AI assistants having a market even when they were not good enough for skilled developers.
These economics work for startups too: all of us here use the cloud, and it will keep making sense both technically and economically until we hit perhaps tens if not hundreds of millions of users. We don't care that it's somebody else's DC with network-mounted disks and shitty I/O performance, or about paying for metered bandwidth, or about the language we use.
Only a few pieces of code truly cross the threshold where infra costs (RAM, disk, bandwidth, CPU, etc.) are much greater than the cost of developer time, making it worth optimizing.
Facebook built the Hack runtime before optimizing away their PHP stack; Twitter famously rewrote their Ruby stack after they kept hitting limits. Products only have to worry about this problem IF they scale, and most don't.
That's a fair point, but I think the fallacy here is that choosing a less-performant stack means it's easier/faster to deliver features. I don't think that's necessarily true.
When Facebook started, PHP was a good choice. These days, though, you'd probably be better off going with Spring Boot or .NET: they are more performant and much more "batteries included". I would say the same goes for Node.
Node.js is... usable. It's certainly not nice to write JS on the backend and you need a LOT of libraries to make it work. Then the problem is they don't all interop perfectly together and you won't have great tooling. I think most people, even startups, would be better off going with a backend framework with good tooling. If you're able to do a bunch of codegen + you have all the batteries, I would imagine developer velocity would be faster.
Batteries-included frameworks and languages such as .NET, RoR, or Spring Boot were perhaps the optimal choice 5 or 10 years ago.
They balanced developer velocity over time against the learning curve needed to use them. The learning curve matters because the steeper it is, the more experienced and skilled the developers needed, which translates to a higher $/hr cost of dev time. Simpler learning plus codegen tools was the pitch that RoR, .NET, and all the frameworks they inspired made in the late 2000s.
Today the model has shifted to Firebase, Supabase, or Hasura, and stacks like Next.js, using GraphQL, gRPC, or occasionally RESTful API generation workflows instead of the boilerplate tools .NET, Spring Boot, et al. provided. These frameworks come with hosting services and are typically language agnostic, though TypeScript/JavaScript is the dominant choice in this model. The developer now focuses only on business logic and doesn't worry about organizing code, running it, or standard components like auth, so frontend teams are more likely to own this now, and they will write TS/JS more often than not.
Even runtimes like Deno are getting into the game: instead of just writing the runtime code and making money from consulting, Deno wants to make the DX of running code in its managed runtime so simple that a lot of small teams will just use it out of the box.
Until the app is at tens-of-millions scale, non-Node.js, unmanaged stacks won't make economic sense. People will build software in any system, of course, because it is what they know, not because it is the rational decision.
Caring about efficiency and performance doesn't have to mean spending all your time on it until you've exhausted every possible avenue. Sometimes using the right tools and development stack is enough to make massive gains.
Sometimes it means spending a couple extra minutes here or there to teach a junior about freeing memory on their PR.
No one is suggesting it has to be a zero-sum game, but it would be nice to bring some care for the craft of engineering back into a field that is increasingly dominated by business-case demands above all.
Yea but we also write the same software over and over and over and over again. Perhaps slower, more methodical development might enable more software to be written fewer times. (Does not apply to commercially licensed software or services obviously, which is straight waste.)
This is a decent point, but in many cases writing software over again can be a great thing, even when replacing some very well-established software.
The trick is getting everyone to switch over and ensuring security and correctness in the newer software. A good example may be OpenSSH. It is very well established, so many will use it, but it has had some issues over the years, and because of that it is actually _very_ difficult now to know the _correct_ way to configure it for the best modern, performant, and _secure_ operation. There are hundreds of options, almost all of them existing for 'legacy reasons' (in other words, options no one should use in any circumstance that requires security).
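For illustration, a minimal hardening fragment of the kind commonly recommended might look like this; treat it as a sketch, not authoritative advice, and verify each option against the current sshd_config man page for your OpenSSH version:

```
# sshd_config fragment: key-only auth, modern crypto only (illustrative)
PermitRootLogin no
PasswordAuthentication no
KexAlgorithms curve25519-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com
MACs hmac-sha2-512-etm@openssh.com
```

The point is exactly the one above: five lines of "what you should actually set" versus hundreds of legacy options you have to know to avoid.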
Then along come things like mosh or dropbear, which seem like they _may_ improve security but still basically do the same thing as OpenSSH, so it is unclear whether they have the same security problems that simply go unreported due to lower use, or whether they aren't vulnerable.
While simultaneously, things like quicssh-rs rewrite the idea but completely differently, such that it is likely far, far more secure (and importantly simpler!), but getting more eyes on it for security is still important.
So effectively, having things like Linux move to Rust (with Rust as a proper foundation rather than some new and untrusted entity) can be great when considering any 'rewrite' of software: not only for removing the cruft we now know shouldn't be used because better solutions exist (enforcing only the best, modern crypto or filesystems, and so on), but also for remodeling the software to be simpler, cleaner, more concise, and correct.
Tempted to say it's learning the language that takes longer, more than the writing part.
From my casual dabbling in Python and Rust, they feel like they're in a similar ballpark, especially if I want the Python code to be as robust as what Rust tends to produce. Edge cases in Python are much more gnarly.
Agreed. When a VC-backed company is in hyper-growth and barely has the resources to scale up its shaky MVP tech stack to support 100+ million users, I doubt anyone thinks it's reasonable to give the engineers 6 months to stop and learn Rust just to rewrite already-working systems.
Adding Rust into your build pipeline also takes planning and very careful upfront design decisions. `cargo build` works great from your command line, but you can't just throw that into any pre-existing build system and expect it to just work.
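As a sketch of what that planning can mean in practice, the CI steps usually end up looking something like the fragment below. The cache-directory variable is hypothetical; the `rustup`/`cargo` flags are standard, but check them against your CI system before relying on this:

```shell
# CI pipeline fragment for a Rust component (illustrative, not drop-in)
rustup toolchain install --profile minimal 1.79.0  # pin the toolchain version
export CARGO_HOME="$CI_CACHE_DIR/cargo"           # hypothetical cache path: reuse registry downloads
cargo fetch --locked                              # fail fast if Cargo.lock has drifted
cargo build --release --locked                    # reproducible build against the lockfile
```

None of this is hard individually, but toolchain pinning, caching of `CARGO_HOME` and `target/`, and lockfile discipline all have to be decided up front rather than bolted on later.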