I really want this general concept to be true in a practical sense, but I'm having trouble seeing it.
Expressed concretely: given a problem whose solution turns out to be timsort with galloping mode, how would you hear the initial problem and then express the solution, timsort, in terms of order theory?
It feels like it would be awesome to do, but it also doesn't feel like the logic of the algorithm would be found quickly simply by knowing order theory.
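For concreteness, galloping mode is essentially an exponential (doubling) search: during a merge, probe offsets 1, 3, 7, 15, ... into the other run until the key is bracketed, then binary-search the small bracketed range. A minimal sketch of that search, not CPython's actual implementation:

```python
from bisect import bisect_left

def gallop_search(key, arr):
    """Return the insertion index of key in sorted arr.

    Probe offsets 1, 3, 7, 15, ... until key is bracketed, then
    binary-search only the bracketed range. This wins when key
    belongs near the front, which is the case timsort exploits
    when one run repeatedly 'wins' the merge.
    """
    if not arr or key <= arr[0]:
        return 0
    last, offset = 0, 1
    while offset < len(arr) and arr[offset] < key:
        last = offset
        offset = offset * 2 + 1
    # key now lies somewhere in arr[last:offset+1]
    return bisect_left(arr, key, last, min(offset + 1, len(arr)))
```

For example, `gallop_search(8, [1, 3, 5, 7, 9, 11])` brackets the answer after two probes and returns 4.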
I think any Linux distro that works out of the box is optimal, so you won't need to fix things right at the start, like an incorrect WiFi driver.
In my opinion, don't look at Arch Linux based distros. Too much configurability, and sometimes you just want to code. That includes Manjaro Linux, since it's almost like Debian, only with pacman and even more bloat.
Corey Schafer has one of the best Python playlists out there, on his YouTube channel. I don't believe anything more than his channel is needed, other than the Python docs.
This article tries to hint that Israel is committing genocide in Gaza, which is not true.
I'm not sure what is wrong with this technology. They barely mention the achievements this technology has produced, and only speak about the bad side.
This article tries to make you think, behind the scenes, that Israel is a technologically advanced, strong country, and that Gazans are poor people who did nothing.
It didn't even mention the big October 7 massacre, in which tens or even hundreds of innocent women were raped because they were Israelis. I'm not sure how this kind of behavior is accepted in any way, and it makes you think that Hamas is not a legitimate organization, but just barbaric monsters.
Rest assured that Gaza's civilians support the massacre: a survey reports that 72% of Palestinians support it[1]. Spoiler: it's much higher.
Because OpenAI focuses on putting out quality models. Efficient execution of ML models is another skill set entirely. Projects like CTranslate2 (which is what faster-whisper uses) are focused on fast model execution and work across all kinds of models from speech recognition to image and speech generation and everything in between.
Also because OpenAI benefits from a certain measure of inefficiency: it keeps models hard for the masses to run without OpenAI in the loop, extracting money and compiling new training data out of every inference that users feed them.
Also, the default of only 20 bits of entropy is not nearly enough, IMO. That's only ~1M options. If you're using this to generate a password that foolishly gets hashed using something like SHA-256, even a mobile phone from 15 years ago could have it cracked in seconds. A modern CPU would crack it before your finger even let go of the Enter key.
Again though, the information above does require that the attacker knows you used this to generate your password. Without that knowledge, the 20+ character passwords it generates are quite secure, especially if you modify them.
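To make the 20-bit arithmetic concrete, here is a back-of-the-envelope sketch. The hash rates are illustrative assumptions, not benchmarks, and the scenario assumes a single fast hash like SHA-256 with no key stretching:

```python
# A 20-bit password space is about a million candidates.
space = 2 ** 20                  # 1,048,576 candidates

# Assumed single-hash SHA-256 rates (assumptions for illustration):
old_phone_rate = 1_000_000       # ~1M hashes/s on an old phone
modern_cpu_rate = 100_000_000    # ~100M hashes/s on a modern CPU

print(f"{space:,} candidates")
print(f"old phone:  ~{space / old_phone_rate:.1f} s to exhaust")
print(f"modern CPU: ~{space / modern_cpu_rate:.3f} s to exhaust")
```

Even with these conservative rates, the full space falls in roughly a second on the old phone and in about ten milliseconds on the modern CPU, which is the "before your finger lets go of Enter" scenario.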
It probably depends on the reference point. If I now use 4 words (2*2 rhyming words) instead of passphrases with 2 words, then the password is more secure than before (even if the attacker knows for sure that I use a rhyming password).
Besides, there is the rhyme scheme, which is not necessarily known to the attacker.
The idea is fun. I'm just not sure it helps much in practice. If I can't remember the exact words, the rhyme makes me only marginally less lost than without it.
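One way to sanity-check the reference-point argument is to compare entropies under stated assumptions. Both numbers below are assumptions, not measurements: a Diceware-sized list of 7776 words, and a rhyme that narrows the second word of each pair to about 50 candidates:

```python
import math

words = 7776                               # assumed Diceware-sized list
bits_per_word = math.log2(words)           # ~12.9 bits per free word

# Old scheme: a passphrase of 2 freely chosen words.
two_words = 2 * bits_per_word

# New scheme: 2 rhyming pairs (4 words). First word of each pair is
# free; the rhyme constrains the second to ~50 candidates (assumed).
rhyme_choices = 50
four_rhyming = 2 * (bits_per_word + math.log2(rhyme_choices))

print(f"2 free words:          ~{two_words:.1f} bits")
print(f"2 rhyming pairs (2*2): ~{four_rhyming:.1f} bits")
```

Under these assumptions the 4-word rhyming version comes out around 37 bits versus about 26 for the 2-word version, so it stays ahead even when the attacker knows the scheme.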
But as far as I can tell, I don't believe there is any math in algorithms such as bubble sort.
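For reference, here is a minimal bubble sort sketch; the closest thing it has to math is its loop invariant, noted in the docstring:

```python
def bubble_sort(items):
    """Sort by repeatedly swapping adjacent out-of-order pairs.

    Loop invariant: after the pass that ends at index i, the
    elements at positions i..end are in their final sorted place.
    That invariant is the small piece of math that proves the
    algorithm correct.
    """
    a = list(items)                        # don't mutate the input
    for i in range(len(a) - 1, 0, -1):
        swapped = False
        for j in range(i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:                    # already sorted: O(n) best case
            break
    return a
```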