I have a 2002 TiBook[1]; it officially supports MacOS 9.2.2, but also every OS X release up to 10.5.8. I've been surprised to find that the retail copy of StarCraft that I bought in 2009 not only includes an OS X build, but also supports PowerPC!
IMHO good riddance. The VGA console is about as useful as the serial console - your escape hatch when everything else fails. If you're allergic to X11/Wayland, the framebuffer console is much more featureful (it displays cute penguins in the top left corner!)
But (again, IMHO) you can also just run alacritty in cage or a patched dwm. Comes in handy when somebody sends you a cat picture.
The Docker client (cli) is easy to port - it's mostly just an elaborate frontend to the socket/API. Every other OS just runs Linux in a VM. Focusing on integrating the VM with the usual development workflow is the lowest hanging fruit, provides the best ROI, and is relatively future-proof.
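To illustrate the "elaborate frontend" point: the daemon exposes a REST API over a Unix socket, and the CLI is essentially a wrapper around it. A quick sketch (assumes the default socket path and a running daemon):

```shell
# Everything `docker version` shows comes from this endpoint:
curl --unix-socket /var/run/docker.sock http://localhost/version

# Listing containers is just another GET:
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```

Porting the client mostly means porting an HTTP client plus argument parsing, which is why it's the easy half.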
Unless you mean running containers in production - I think OCI is a much better target in that case.
I remember discovering all these optimizations The Hard Way when making maps for id tech 3 games as a kid. I always tried to build grand and detailed spaces, and the BSP/vis would choke on all the geometry.
> What doom mapmaking needs is higher level tools. Like a tunnel kit. Just plug together pipes and fittings. Or crank out mazes generatively.
You're describing the contemporary Unreal Engine, which was a "solid by default, carve out the space" kind of thing. GtkRadiant (the id tech 3/4 editor) had all the basic CSG operations built in, so you could theoretically carve out spaces, but Radiant's CSG was widely regarded as "evil" - I never found out why.
Here, this. Small, focused teams usually deliver more output per person (or even overall) than larger ones. Less management overhead, clear goals and responsibilities, tendency to employ people with cross-disciplinary experience, hiring for talent and not checklists, etc.
> [...] can somehow use LLMs more effectively [...]
LLMs are an incredibly effective tool for the few areas where they do fit the problem. But there's so much "AI" hype going on, everyone is trying to cram it into anything and everything, running around with a hammer trying to smash things just in case they turn out to be a nail. Even the old-time players (who should know better) can't resist the urge.
It's almost like oligopolies faced with changing markets tend to start collapsing under their own weight.
Remember that factorio simulates every inserter swing, the exact position of each item on a sushi belt, every dip in the power grid, every biter egg about to hatch.
Some designs in 100% runs use sushi (e.g. red circuits, as they take a lot of space/time but few resources, and must initially be built by hand). You can estimate the rates, but you won't be able to find a bottleneck.
Inserter timing often comes up in plastic builds - on paper it's easy to build the exact amount of chem plants to saturate a blue belt, but if you ignore inserter timing you'll be leaving tiny gaps.
Also remember spoilage is now a thing (,: as well as loops, probabilistic recipes, thruster efficiency, probably some more.
Only Factorio can simulate Factorio. The best way to automatically find bottlenecks is probably with a mod.
> Valve does not officially support NVIDIA gpus atm
Given NVidia's reluctance to support DRM (direct rendering manager) and Wayland, plus the general levels of nightmare that their official Linux drivers are, I wouldn't say this is Valve's fault. They were already a poor choice on Linux 20 years ago. Even Apple has always been uneasy about their relationship (while Radeon was the less powerful choice for "premium" machines).
Also consider the ongoing AI hype. NVidia is right now very busy making their GPUs do exceptionally useful work on Linux - except the money is not in the graphics.
That is for SteamOS 2. The newer SteamOS 3 does not officially support anything other than AMD hardware because the Steam Deck uses AMD hardware and their current focus is on improving support for that.
Nvidia had best in class support for Linux and FreeBSD as a result of essentially using the same driver across all three operating systems. If you wanted good OpenGL support they were the only choice 20 years ago! Valve not supporting Nvidia GPUs at the moment has more to do with SteamOS only shipping on the Steam Deck, which uses AMD hardware.
Yeah, if you wanted a low power x86 device (which pretty much means iGPU) then AMD is the only game in town. The open source drivers probably helped in this regard but if things were different I doubt Valve would have turned their nose up at Nvidia.
> [...] anything you do outside of the user space is subject to being wiped the next time you get an OTA update.
You mean outside of /home? This is actually excellent. Rock solid base OS that can be A/B booted, deliver extra apps via Flatpak/AppImage - this is close to what macOS achieved, minus all the crap Apple has been pulling recently. I'm tempted.
More precisely it would be /home and /var. Persistent changes can in theory also be done with systemd-sysext, but it isn't exactly the most user-friendly thing currently.
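For the curious, a minimal sketch of what a sysext overlay looks like (the extension name "mytools" and the binary are illustrative; `ID=_any` skips the OS-match check):

```shell
# An extension is a directory (or disk image) shipping a /usr subtree:
mkdir -p /run/extensions/mytools/usr/bin
cp mytool /run/extensions/mytools/usr/bin/

# It must carry an extension-release file declaring compatibility:
mkdir -p /run/extensions/mytools/usr/lib/extension-release.d
echo 'ID=_any' > /run/extensions/mytools/usr/lib/extension-release.d/extension-release.mytools

# Overlay it onto the read-only /usr; undone with `systemd-sysext unmerge`:
systemd-sysext merge
```

Extensions under /run don't survive a reboot; /var/lib/extensions is the persistent location.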
It's theoretically viable to run a 16TB ZFS NAS, which would be perfectly respectable for SoHo/homelab workloads.
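For a sense of scale, one plausible layout (device names and dataset are placeholders): four 8 TB disks as two mirror vdevs, giving roughly 16 TB usable with redundancy:

```shell
# Two mirrored pairs striped together (~16 TB usable from 4x 8 TB):
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd

# A dataset with cheap transparent compression:
zfs create -o compression=lz4 tank/media
```

Plenty for SoHo/homelab duty, and mirrors resilver much faster than raidz after a disk swap.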
I've been looking to upgrade my aging PowerEdge T20 (also hate the fan noise), this is looking very interesting. I wouldn't be surprised if Jeff Geerling makes a video about this exact use case.
[1]: https://www.rollc.at/posts/2024-07-02-tibook/
I'm not sure if it can be made to run m68k apps "natively", but on the other hand you can emulate just about any classic MacOS in a modern browser[2].
[2]: https://jamesfriend.com.au/pce-js/