marcos100's comments

We should all think about optimization and performance all the time and make a conscious decision about whether to do it, given the time constraints and the level of performance we want.

People write badly performing code not because it's easier, but because they don't know how to do better or don't care.

Repeating things like "premature optimization is the root of all evil" and "it's cheaper to get a bigger machine than dev time" is harmful, because people stop caring about performance and stop practicing it, and if we never do it, it will always be a hard and time-consuming task.


It is even worse for widely deployed applications. To pick on some favorites, Microsoft Teams and OneDrive have lousy performance and burn a ton of CPU. Both are deployed to tens or hundreds of millions of consumers, squandering battery life and electricity globally. Even a tiny performance improvement could lead to a fractional reduction in global energy use.


I wish they would do this. But my experience is that building efficient software is hard, and it gets very, very hard the larger the team gets or the longer the product exists.

Even Zoom used to be very efficient, but it has gradually gotten worse over time :-(


I would find this more compelling if we weren't discussing a trillion-dollar company that employs tens of thousands of programmers. OneDrive's performance is so bad that I can't imagine anyone has put any effort into prioritizing efficiency. A naive first attempt was packaged up and never revisited.


While that is true, it's really not easy to do without rewriting from scratch and scrapping a load of features, which is organisationally difficult.

What large piece of software with a user interface do you work with that is actually fast and stays fast? For me, it's probably just Chrome / Firefox. Everything else seems to get slower over time.


I doubt that it would be good business for Microsoft, though. The people who use these products and the people who buy them and force others to use them are two separate groups, and anyone who cares even a bit about user experience and has the power to make the decision has already switched to something different. It's also the users, not Microsoft, who pay for the wasted power and lost productivity.


Strongly disagree with this sentiment. Our jobs are typically to write software in a way that minimizes risk and best ensures the success of the project.

How many software projects have you seen fail because they couldn't run fast enough or used too many resources? Personally, I've never seen it. I'm sure it happens, but I can't imagine it's common. I've rewritten systems because they grew and needed performance upgrades to keep working, but that was always something the business knew about, planned for, and accepted as a strategy for success. The project might have been less successful if it had been written with performance in mind from the beginning.

With that in mind, I can't think of many things less appropriate to keep in your mind as a first-class concern when building software than performance and optimization. Sure, as you gain experience in your stack you'll naturally be able to optimize, but since it will probably never be the reason your projects fail, and presumably your job is to ensure the success of some project, it follows that you should prioritize other things strongly over optimization.


I see it all the time: applications that would be very usable and streamlined from a UI perspective are frustrating and painful to use because every action requires a multi-second request, so the experience is mostly reduced to staring at progress spinners.


> every action requires a multi-second request

This is doing a lot of heavy lifting. Just because an app is slow doesn't mean Rust would've made it faster. It may just be slow because of a bad query or otherwise poor architecture, especially in web development. The commenter is asking what projects you've seen fail because the language itself hit a performance limit that couldn't be worked around.
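For illustration, here's a minimal C# sketch of the kind of data-access shape that makes every action take seconds in any language. The repository interface and the ~40 ms round-trip figure are assumptions made up for the example, not anything from the parent comments.

    using System.Collections.Generic;
    using System.Threading.Tasks;

    public record Order(int Id, decimal Total);

    // Hypothetical data-access interface, purely for illustration.
    public interface IOrderRepository
    {
        Task<Order> GetOrderAsync(int id);
        Task<IReadOnlyList<Order>> GetOrdersAsync(IReadOnlyCollection<int> ids);
    }

    public static class OrderLoader
    {
        // Slow regardless of language: one round trip per order.
        // 50 orders at an assumed ~40 ms each is ~2 s spent waiting on I/O.
        public static async Task<List<Order>> LoadOneByOne(IOrderRepository repo, int[] ids)
        {
            var result = new List<Order>();
            foreach (var id in ids)
                result.Add(await repo.GetOrderAsync(id)); // N sequential round trips
            return result;
        }

        // Same work, one round trip: the fix is in the query shape, not the language.
        public static Task<IReadOnlyList<Order>> LoadBatched(IOrderRepository repo, int[] ids)
            => repo.GetOrdersAsync(ids);
    }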


This honestly sounds like you're describing the most successful software on the market. I can't think of many social media sites slower than Facebook or Instagram, a chat app slower than Slack, or an email client slower than Gmail.


Sure, but it seems like a race to the bottom. Faster development will beat better quality in the market, especially in an unregulated industry like this one.


I'm not so sure this "race to the bottom" is something I want to avoid. The most competitive markets in the world are always in first-world countries or in countries quickly becoming one. Take South Korea, for example: they had sweatshops maybe 70 years ago, and now they arguably have the best healthcare system in the world.

I'll take a fast-food job for 40 hours a week any day over most of the options in poor countries. Sure, nothing but the best, most fulfilling jobs for everyone is the ideal, but until I see that exist I'm not informed enough to know whether it's possible outside of someone's political ideals.


It also depends on where the code is running. To put it simply: nobody cares how much RAM the server is using, but they do care if their client-side application isn't responsive. A performant, responsive UI should have priority over everything else.


Even worse: it's super bad for the environment.


Are you sure? Is my one PHP server using 50% more electricity going to outweigh the 12 developers with beefed-up rigs just to get Rust compile times somewhere reasonable? Or the extra hours they'll spend on their computers because the code itself takes longer to write? Especially when I have 1,000 monthly users and a $6 VPS is more than enough anyway?

This has always been a poor argument.
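A back-of-envelope sketch of that comparison in C#; every number below is an assumption made up for the illustration (except the "50% more electricity" taken from the comment above), not a measurement.

    using System;

    // Back-of-envelope only; all figures are assumed for illustration.
    const double serverWatts        = 20;   // assumed draw of a small VPS slice
    const double serverOverheadPct  = 0.50; // the "50% more electricity" from the comment
    const double devMachineWatts    = 300;  // assumed beefy dev workstation under load
    const int    devs               = 12;
    const double extraBuildHoursDay = 1.0;  // assumed extra time spent waiting on builds

    double extraServerKwhPerDay = serverWatts * serverOverheadPct * 24 / 1000;        // ~0.24 kWh
    double extraDevKwhPerDay    = devMachineWatts * devs * extraBuildHoursDay / 1000; // ~3.6 kWh

    Console.WriteLine($"Extra server energy: {extraServerKwhPerDay:F2} kWh/day");
    Console.WriteLine($"Extra dev-machine energy: {extraDevKwhPerDay:F2} kWh/day");

Under those assumed numbers the dev machines dominate by an order of magnitude; at a different scale the conclusion flips, which is the point.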


We've had Electron for a decade, at least, and we still haven't gotten rid of it.


Do you share part of what you charge with the people who contributed? If not, what you're doing is the same as what others did when they took your work and profited from it.

You've all agreed what can and can't be done with your code based on the license you used.

If you want to make money with software, make it proprietary or dual-license it as (A)GPL with a CLA. Anything else and you'll end up bait-and-switching people.


I believe the future will be feeding all your docs into an LLM for people to query.


As a backend dev, I love the technology. The problem is that you have to choose between a not-so-scalable solution (Server, SignalR) and a minimum 2 MB initial payload (WASM) that can easily grow to 5 MB.

Interested in how many concurrent users you have for Server to be a problem. Can you elaborate on your performance issues?


I may have made a mistake in designing the architecture of our app. Since we're a small team, I opted for a big ol' monolith, hosting our APIs on the same server as our Blazor Server app. We normally serve a few hundred requests per second on our APIs, which is totally fine. However, we sometimes get spikes of thousands of requests per second, which has the unfortunate consequence that our Blazor Server app becomes laggy and starts to drop WebSocket connections frequently. So now we are in the process of moving our API controllers to another project, which will be hosted on a separate server.
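For what it's worth, a minimal sketch of what the standalone API host could look like, assuming standard ASP.NET Core controllers; project and type names are placeholders, not from the actual codebase.

    // Program.cs of the new, API-only ASP.NET Core host.
    // The existing controllers move here unchanged from the monolith.
    var builder = WebApplication.CreateBuilder(args);

    builder.Services.AddControllers();

    var app = builder.Build();

    app.MapControllers();
    app.Run();

The Blazor Server project then keeps only the UI and its WebSocket circuits, so API spikes can't starve them.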


Sounds like you did everything right, then. You started off simple, now your business is taking off, failures aren't catastrophic (they're grey, not black, from the sound of it), and splitting out a component shouldn't be too hard, so you'll be ready for more scale soon. All while maintaining the business!


.NET 8 solves that exact problem as far as I can see. You can opt into auto mode, and it uses server-side Blazor until the client has downloaded all the assets; then on subsequent visits it uses the WASM runtime. Seems like a good compromise.
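Roughly, the wiring looks like this: a trimmed sketch of Program.cs from the .NET 8 Blazor Web App template, assuming the template's App root component; individual components then opt in with @rendermode InteractiveAuto.

    var builder = WebApplication.CreateBuilder(args);

    builder.Services.AddRazorComponents()
        .AddInteractiveServerComponents()        // SignalR circuit for the first visit
        .AddInteractiveWebAssemblyComponents();  // WASM once the runtime is cached

    var app = builder.Build();

    app.UseStaticFiles();
    app.UseAntiforgery();

    // In the full template the client project's assemblies are also registered
    // via AddAdditionalAssemblies; omitted here for brevity.
    app.MapRazorComponents<App>()
        .AddInteractiveServerRenderMode()
        .AddInteractiveWebAssemblyRenderMode();

    app.Run();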


Is 5 MB a real problem? In theory it may look big, but I have seen many websites that are much bigger, not to mention that the videos and images we download already skew download sizes by a lot. Considering the runtime is cached for a long time, I don't see a real blocker. The first page render would be an issue, but SSR solves that.


> Is 5 MB a real problem?

Well, if you want to make small, fast-loading HTML pages with a minimal JS library, ending up at a few hundred KB that you can understand, profile, and optimize, then that is impossible with Blazor. So it's a very real problem.

If you want 5 MB blobs and don't care about what is going on inside, or how to optimize or reduce memory and bandwidth usage, then it's not a problem; it works just as well as the websites you have seen, with their 200 node dependencies.


On a phone, 5 MB is not ideal. On a corporate desktop, not an issue.


Exactly. At the very least, you have to show that you know what you're doing and that it's deliberate. Depending on the seniority, I expect some kind of justification in the comments, like "it's O(n^2), but since the input will be very small, it's OK".

In real life, people write a lot of O(n^2) code without realizing it, and usually it's just some unnecessary loop inside another loop. I want to know that you care about these things.
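A classic example of the accidental kind, sketched in C#: deduplicating with List.Contains inside a loop versus a HashSet.

    using System.Collections.Generic;

    static class Dedup
    {
        // Accidental O(n^2): List.Contains scans the whole list on every iteration,
        // so 100k items means up to ~10 billion comparisons.
        public static List<int> DistinctSlow(IReadOnlyList<int> input)
        {
            var seen = new List<int>();
            foreach (var x in input)
                if (!seen.Contains(x))   // O(n) lookup inside an O(n) loop
                    seen.Add(x);
            return seen;
        }

        // Same result in O(n) expected time: HashSet lookups are O(1) on average.
        public static List<int> DistinctFast(IReadOnlyList<int> input)
        {
            var seen = new HashSet<int>();
            var result = new List<int>();
            foreach (var x in input)
                if (seen.Add(x))         // Add returns false when already present
                    result.Add(x);
            return result;
        }
    }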


Rewriting also costs less, whether it's a typo or a significant part of the code. This allows you to try more things.


Because "we don't have the budget".


That "we" may be too low on the org chart, and the qualified manager may not be aware of the problem.


Exactly! Grandma and my luddite uncle are the most vulnerable to whatever bad actors want to do.


Agree, but for me, maybe because I'm so used to not having them, when I do see one it grabs too much attention. It doesn't feel right.


Here you have selection bias. If you're a US company and you have to bring someone in from outside the US, you'll probably bring in the better ones and, as someone already said, if they are fluent in English, they are probably at the top.

Come work for a Brazilian company and you'll see that we have mediocre people everywhere.



