yeah, it has lots of advantages, and that's why it was the default decades ago for everything (Windows, BSD, etc).
then people had lots of trouble installing new software or patching security issues, so we invented package managers and took all the time in the world to make the base as small as possible.
its advantages still make sense in some places, like modems with old flash memory. OpenWrt is a static base with overlays. it still carries the same downsides, but given the constraints of that hardware it makes sense there.
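to make the overlay idea concrete, here's a rough Python sketch of the lookup semantics. the names are mine, purely illustrative; OpenWrt actually does this in the kernel with squashfs plus overlayfs, not in userspace:

    from pathlib import Path

    class Overlay:
        def __init__(self, lower: Path, upper: Path):
            self.lower = lower  # read-only base (squashfs in OpenWrt)
            self.upper = upper  # small writable layer on flash

        def read(self, name: str) -> bytes:
            # the writable layer shadows the base; fall back to the base otherwise
            for layer in (self.upper, self.lower):
                path = layer / name
                if path.exists():
                    return path.read_bytes()
            raise FileNotFoundError(name)

        def write(self, name: str, data: bytes) -> None:
            # every modification lands in the writable layer; the base stays
            # pristine, so a factory reset is just wiping the upper layer
            (self.upper / name).write_bytes(data)

the point is that the base image is never written to, which is exactly what old, write-limited flash wants.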
it would make sense for tech-illiterate end users (hence Android, iOS, ChromeOS, the Wii, macOS to a degree, etc) and containers (which already have plenty of tooling to turn packages into a static image). but anything else will literally harm the distro's ability to evolve and adapt to software changes. imagine every change like systemd, a new browser, or a new WM having to ship atomically.
now people have forgotten decades of history. and it's so tiring.
I don't think I understand any of your objections.
When was Windows ever immutable in the sense of current immutable Linux distros? I wasn't able to find any reference to this ever being the case.
What do package managers and making the base as small as possible have to do with immutable distros? Package managers still exist, and the base is pretty much the same size as the non-immutable version of the same distro.
Why do immutable distros make more sense on modems with old flash memory?
How does being immutable harm the distro's ability to evolve?
Either I'm not understanding your position at all, or you have a very different understanding of "immutable" than I do (after using Kinoite as my daily driver for a year).
> but anything else will literally harm the distro's ability to evolve and adapt to software changes.
Comparing nixpkgs to the AUR seems to reflect the opposite trend. Arch is hamstrung by a dependence on outdated and poorly maintained repos that cannot coexist with up-to-date packages. Unless you fully embrace static linking or atomic installs, you'll end up with breakage.
macOS went the static-linking route, and Windows wrote back-compat for most old software by hand. The "decades of history" haven't proven any approach right or wrong. They're all flawed.
macOS does not go the statically linked route. Apple encourages developers to use dynamic libraries (usually in the form of frameworks) in most scenarios.
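for what it's worth, "dynamic" here just means the library is resolved at load/run time instead of being baked into the binary. a tiny illustration using Python's standard-library ctypes module (the lookup works on macOS and Linux alike):

    import ctypes
    import ctypes.util

    # resolve the C library at runtime instead of compiling it into the binary;
    # on macOS this points at libSystem, the dylib nearly everything links against
    path = ctypes.util.find_library("c")
    libc = ctypes.CDLL(path)

    # call a symbol out of the dynamically loaded library
    libc.printf(b"loaded %s at runtime\n", path.encode())

every app bundling its own frameworks dynamically is very different from static linking, even though both avoid a shared system-wide package database.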
Can you share an example of when Windows was immutable? I’ve been using Windows since 3.1, and I can’t recall a Windows version where I wasn’t able to muck around in the system itself. Closed source != immutable.
On an unrelated note, I despise the constant insinuation that using Linux is an indicator of intelligence whereas users on other systems are tech illiterate.
Actually, I believe your entire argument contradicts itself multiple times because you give examples on both sides, and don’t stop to reconcile your views.