Hacker News | pepoluan's comments

What's so bad about Python's virtualenv? It's a good way to keep an LKGC (Last Known Good Configuration). When a CVE lands, you spin up a new venv and run smoke tests, canary tests, blue/green deployments, and so on.

If the update breaks things, you go back to the old venv. If the update goes well, then you just delete the old venv.
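The rollback workflow above can be sketched with a symlink swap; a minimal sketch (the `$APP` layout and the smoke-test step are illustrative, not a prescribed setup):

```shell
APP="${APP:-$(mktemp -d)}"   # app root; something like /srv/app in a real deployment

# Build a fresh venv alongside the known-good one
python3 -m venv "$APP/venv-new"

# Install the updated dependencies and smoke-test here, e.g.:
#   "$APP/venv-new/bin/pip" install -r requirements.txt
#   "$APP/venv-new/bin/python" -m pytest tests/smoke/

# Switch via symlink; rollback is just re-pointing the link at the old venv
ln -sfn "$APP/venv-new" "$APP/venv-current"
"$APP/venv-current/bin/python" --version
```

Because the app only ever references `venv-current`, going back to the LKGC is a single `ln -sfn` away, and deleting the old venv once the update proves out is an `rm -rf`.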

Not implementing security updates the right way is just human laziness and has nothing to do with the language in use.


You know, with all the babysitting needed, I wonder whether the effort isn't better spent on just, you know, writing the code yourself.

Can you actually quantify the time and effort 'saved' by letting an LLM generate code for you?


For me, personally, I'm building things that would have been impractical to do as cleanly within the same amount of time: prototypes in languages I don't have the muscle memory for, and algorithms I understand only at a surface level and would need time to deeply understand and implement by hand. At my pace, as a retired dev, the savings are probably measured in years' worth of time and effort.

edit: also, would I take the time to implement LCARS by hand? No. But with an LLM, sure; it took about 3 minutes or less to produce a pretty decent LCARS interface for me.


True words.

I'd say "the fastest" is a side effect of "allowing one to tune their system to their utmost liking": -march=native, throwing away unused bits and pieces, building modules into the kernel, replacing components with faster -- if more limited -- alternatives. And so on.
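On Gentoo, most of that tuning lives in /etc/portage/make.conf; a minimal sketch (the flag values and USE flags are illustrative examples, not recommendations):

```
# /etc/portage/make.conf (illustrative values)
COMMON_FLAGS="-O2 -march=native -pipe"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
MAKEOPTS="-j8"

# "Throw away unused bits and pieces" per package via USE flags
USE="-gnome -kde -systemd alsa"
```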


Gentoo is LFS but with the interdependence between packages mapped out for you (all hail the USE flags!). Or, alternatively, Arch with even more customization knobs to twiddle.

I have had Gentoo on at least one nearby system (physical and/or VM) for about 15 years now. It's always a blast interacting with it.


Safe and living also doesn't match.


We created computing to be fast, not safe. We could have made it safe, but we didn't; it was not a priority.

We can't say the same about living, because we have not created living.

Your comment makes zero sense.


Why, you want to buy a new printer?

If you want inkjets, buy those with ink tanks. More expensive up front, but the operating cost is much lower. And no more "you have to replace a whole cartridge just because Magenta is low": if Magenta is low, buy a bottle of Magenta and refill.

For laser printers, buy those whose toner cartridges are separate from the drum, can be reset mechanically, and are refillable.

My go-to brand for printers is Brother, btw.


Type hints are 100% optional, though.

And to be honest, once you start using them, even just for simple things such as function signatures, a proper IDE helps you catch mistakes.
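A minimal sketch of the kind of mistake a type-aware IDE (or mypy) flags from nothing more than an annotated signature (the function here is a made-up example):

```python
def average(values: list[float]) -> float:
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

# Fine: the argument matches the annotated signature
mean = average([1.0, 2.5, 4.0])

# A type checker flags this call before it ever runs:
# average("123")  ->  error: expected "list[float]", got "str"
```

At runtime Python ignores the hints entirely, which is exactly why they can stay 100% optional.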


So. Another regex problem?


I am one of the maintainers of aiosmtpd [1], and the largest PR I ever made was migrating the library's tests from nosetest to pytest. Before doing that, though, I discussed with the other maintainers whether such a migration would be welcome. After getting their support, I made the changes with gusto. It took weeks, even months, to complete, and the PR is massive [2].
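To give a flavour of what such a migration touches, here's a toy before/after (a hypothetical test with a stand-in class, not code from aiosmtpd): TestCase classes with setUp and assertEqual become plain functions with bare asserts.

```python
class FakeServer:
    """Stand-in for the object under test (hypothetical; not aiosmtpd code)."""
    def banner(self):
        return "220 ready"

# nose/unittest style (before):
#   class TestBanner(unittest.TestCase):
#       def setUp(self):
#           self.server = FakeServer()
#       def test_banner(self):
#           self.assertEqual(self.server.banner(), "220 ready")

# pytest style (after): plain functions, bare asserts, no TestCase boilerplate
def test_banner():
    server = FakeServer()
    assert server.banner() == "220 ready"

test_banner()  # pytest would discover and run this automatically
```

Multiply that pattern by hundreds of tests, plus fixtures replacing shared setUp state, and you get a sense of why the PR took months.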

But still, the crux of the matter is: massive changes require buy-in from the other maintainers BEFORE the changes even start.

[1] https://github.com/aio-libs/aiosmtpd [2] https://github.com/aio-libs/aiosmtpd/pull/202


An LLM will guiltlessly produce a hallucinated 'review', because LLMs do NOT 'understand' what they are writing.

LLMs merely emit a chain of words -- tokens -- that best matches the probability distributions learned from their training data. It's all just a probabilistic game, with zero actual understanding.
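That "probabilistic game" boils down to repeatedly sampling the next token from a learned distribution. A toy sketch (the bigram table and its numbers are entirely made up; real models condition on far longer contexts with a neural network, not a lookup table):

```python
import random

# Toy next-token distributions keyed by the previous word (invented numbers)
bigram = {
    "tests": {"pass": 0.7, "fail": 0.1, "are": 0.2},
    "pass": {".": 0.9, "and": 0.1},
}

def next_token(prev: str) -> str:
    """Sample the next token from the distribution conditioned on `prev`."""
    dist = bigram[prev]
    return random.choices(list(dist), weights=list(dist.values()))[0]

# Generation is just repeated sampling; there is no model of truth anywhere
out = ["tests"]
while out[-1] in bigram:
    out.append(next_token(out[-1]))
print(" ".join(out))
```

Note that "tests pass" is far likelier than "tests fail" purely because of the (invented) weights, which mirrors the argument below about why models claim success.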

LLMs are even known to hide or fake unit-test results: claiming success when tests fail, or skipping the tests entirely. Why? Because, based on the patterns a model has seen, the most likely words to follow "the results of the tests" are "all successful". It tries to reproduce the PRs it has seen -- PRs whose authors actually ran the tests on their own systems first, iterating until the tests succeeded -- so the PRs the public sees almost invariably declare that "all tests pass".

I'm quite certain those LLMs never actually tried to compile the code, much less ran test cases against it, simply because no such ability is provided in their back-ends.

All LLMs can do is "generate the most probabilistically plausible text". In essence, a Glorified AutoComplete.

I personally won't touch code generated wholly by an AutoComplete with a 10-foot pole.

