There's this great USENIX talk by Timothy Roscoe [1], which is part of the Enzian Team at ETH Zürich.
It's about the dominant, unholistic approach to modern operating system design, reflected in the vast number of independent, proprietary, under-documented RTOSes running in tandem on a single system, which in turn leads to uninspiring and lackluster OS research (e.g. the Linux monoculture).
I'm guessing that hardware and software industries just don't have well-aligned interests, which unfortunately leaks into OS R&D.
I think making it harder to build an OS by increasing its scope is not going to help people build Linux alternatives.
As for the components, at least their interfaces are standardized. You can remove memory sticks made by manufacturer A and replace them with memory sticks from manufacturer B without a problem. The same goes for SATA SSDs, mice, and keyboards.
Note that I'm all in favour of creating OSS firmware for devices, that's amazing. But one should not destroy the fundamental boundary between the OS and the firmware that runs the hardware.
Building an OS is hard. There's no way around its complexity. But closing your eyes and pretending everything is a file is a security disaster waiting to happen (actually, happening every day).
And furthermore, OS research is not only about building Linux alternatives. There are a lot of operating systems with a much narrower focus than a full-blown multi-tenant GPOS. Building holistic systems with a narrower focus is a much more achievable goal.
> As for the components, at least their interfaces are standardized
That's not true once you step into SoC land. Components are running walled-garden firmware and binary blobs that are undocumented. There's just no incentive to provide a developer platform if no one gives a shit about holistic OSes in the first place.
> But closing your eyes and pretending everything is a file is a security disaster waiting to happen (actually, happening every day).
How so? I can see that Linux's limited access control is an issue, which is why Mandatory Access Control (MAC) mechanisms like SELinux and AppArmor exist to augment it.
But I don't see how the nature of everything being a file is a vulnerability in itself.
If you want to follow the principles of capability security, then a key part of the strategy is to eliminate “ambient authority”, that is, any kind of globally accessible way of obtaining a capability.
In traditional unix-like systems, file descriptors are very close to capabilities. But the global filesystem namespace is a source of ambient authority.
There are a couple of ways to fix this: fine-grained per-process namespaces, as in Plan 9, so the filesystem's authority can be attenuated as necessary and starts to behave more like a capability; or eliminating absolute pathnames from the API, so you have to use functions like openat() to obtain an fd relative to an existing fd.
It was a lame attempt at humor, a roundabout way of referring to the simplifying assumptions that *nix systems generally make of the underlying machine.
[1] https://youtu.be/36myc8wQhLo