The fact that so much hardware these days is running a full real-time OS all the time annoys me. I know it's normal and understandable, but everything is such a black box, and it has already caused headaches (looking at you, Intel).
This isn't even that new of a thing. The floppy disk drive sold for the Commodore 64 included its own 6502 CPU, ROM, and RAM. This ran its own disk operating system[1]. Clever programmers would upload their own code to the disk drive to get faster reads/writes, pack data more densely on the disk, and even implement copy-protection schemes that could validate the authenticity of a floppy.
And all that engineering resulted in a floppy drive that was slower and more expensive than comparable units for other home computers. I'm not sure if there is a lesson there...
Well, it was slower due to a hardware problem. Basically, the hardware serial device had a bug that forced a bit-banged comms channel to the disk drive. Doing that amidst the sometimes-aggressive video DMA is what caused all the slowdowns.
Back in the day I owned machines that did it both ways, but not a C64. My Atari computer also had a smart disk drive. It worked over something Atari called SIO, an early ancestor of modern USB. Back then, the Atari machine was device-independent, and that turned out to be great engineering!
Today we have FujiNet devices that basically put Atari and other computers on the Internet, even to the point of being able to write BASIC programs that do meaningful things online.
The C64 approach was not much different, working via RS-232. But for a bug, it would have performed nicely.
Now, my other machine was an Apple ][, and that disk was all software. And it was fast! And being all software meant people did all sorts of crazy stuff on those disk drives ranging from more capacity to crazy copy protection.
But... That machine could do nothing else during disk access.
The Atari and C64 machines could do stuff and access their disks.
Today, that FujiNet device works via the SIO on the Atari, with the Internet being the N: device! On the Apple, it works via the SmartPort, which worked with disk drives that contained... wait for it!
A CPU :)
Seriously, your point is valid. But, it's not really valid in the sense you intended.
Too late for me to edit, but yes, I did confuse the source of the bug. Can you clarify the source of the C64 drive's slowness? Was it VIC-20 backward compatibility, or something else?
In any case, I maintain the engineering (having a CPU, etc.) wasn't at fault. Fastloaders showed it to be just poor software, and that's a point I did not make clear enough.
Commodore wanted new C64 drives to be backward compatible with the VIC-20 and vice versa. They failed the second goal, and the C64 sold ~10x the number of units the VIC-20 did, making the whole exercise pointless.
All to sell more outdated garbage chips made by MOS, instead of using a proper FDC on the CPU bus with a cheap standard floppy drive.
The slowness was due to a hardware bug in the 6522 VIA chip. The shift register (FIFO) would lock up randomly. Since this couldn't be fixed before the floppy drive needed to be shipped, they had the 6502 CPU bit-bang the IEC protocol, which was slower. The hardware design for the 154x floppy drive was fine, and some clever software tricks allow stock hardware to stream data back to the C64 and decode the GCR at the full media rate.
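To make "bit-bang" concrete: instead of letting the (broken) shift register clock bits out in hardware, the drive's CPU toggles the bus lines itself, one bit at a time. A rough sketch in C, purely illustrative; set_clock/set_data/bit_delay are made-up stand-ins for poking the 6522 VIA port registers, and the real IEC handshake (and the 1541's cycle-counted 6502 ROM code) is considerably more involved:

    #include <stdint.h>

    /* Made-up stand-ins for writes to the VIA port registers; stubs so the
       sketch is self-contained. */
    static void set_clock(int level) { (void)level; }
    static void set_data(int level)  { (void)level; }
    static void bit_delay(void)      { /* timing loop on real hardware */ }

    /* Send one byte a bit at a time: the CPU does in software what the
       shift register was supposed to do in hardware, which is why it's slow. */
    static void send_byte_bitbanged(uint8_t byte) {
        for (int i = 0; i < 8; i++) {
            set_data(byte & 1);   /* put the next bit on the data line   */
            set_clock(1);         /* raise clock so the receiver samples */
            bit_delay();
            set_clock(0);         /* drop clock before the next bit      */
            bit_delay();
            byte >>= 1;
        }
    }

    int main(void) {
        send_byte_bitbanged(0xA5);  /* example payload byte */
        return 0;
    }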
Probably not a fair comparison in some ways but this reminds me of that story of Woz making a disk drive controller with far fewer chips by being clever and thoughtful about it all. I’m probably misremembering this.
You’re talking about the Integrated Woz Machine. It was a custom disk controller that Wozniak created, used in the Apple ][, the ///, and I believe the original Macs. It was cheap, fast, and it worked.
The 1541 was slow because the C64’s serial bus was slow. Data was clocked over the bus 1 bit at a time. Various fastloaders sped up the data rate by reusing the clock line itself as a data line (2 bits at a time); later HW adapters added a parallel port or even USB to overcome the serial bus bottleneck.
Basically, Commodore was going to use an IEEE-488 bus for the drive, then decided late in the design that it was too expensive and switched to this hackish serial bus that bottlenecked everything.
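To illustrate the 2-bits-per-transfer trick mentioned above, here's a rough C sketch. The wait_handshake/read_clock_line/read_data_line helpers are hypothetical; real fastloaders did this in cycle-counted 6502 assembly, typically with the screen blanked so video DMA couldn't wreck the timing:

    #include <stdint.h>

    /* Hypothetical helpers standing in for reads of the serial port bits;
       stubs so the sketch is self-contained. */
    static void wait_handshake(void)  { /* synchronise with the drive */ }
    static int  read_clock_line(void) { return 0; }
    static int  read_data_line(void)  { return 0; }

    /* Receive one byte in four transfers: the clock line is reused as a
       second data line, so each handshake carries 2 bits instead of 1. */
    static uint8_t receive_byte_2bit(void) {
        uint8_t byte = 0;
        for (int i = 0; i < 4; i++) {
            wait_handshake();
            int pair = (read_clock_line() << 1) | read_data_line();
            byte = (uint8_t)((byte << 2) | pair);
        }
        return byte;
    }

    int main(void) {
        (void)receive_byte_2bit();  /* example use */
        return 0;
    }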
The 1541 was set to be a highly capable and performant machine, but an interface/design bug held it back and delivered dismal performance whenever connected to the C64. They tried to fix it but it couldn't be rescued, so speed freaks needed to wait for the 1570 series.
It was partially rescued by fastloaders and later JiffyDOS. Fastloaders tended to max out at 10-13x if the disk format was unchanged, but if you could reformat or recode the files, you could go anywhere from 25x to over 40x (Transwarp) on stock hardware. DolphinDOS gave a 25x speedup by using a parallel connection with the 1541.
Epyx games used the Vorpal format, which gave a 15x load speedup.
The point is, the speed issues weren’t really the 1541’s fault, although GCR coding could have benefited from a HW decoder.
Oh I know it’s been a thing forever. Hell, my NeXT Cube with its NeXTDimension display board was like that. The NeXTDimension board ran its own entire stripped-down OS, using an Intel i860 and a Mach kernel... and it was massively underutilized. If NeXT had done a bit more legwork and made the actual Display PostScript server run entirely on the board, it would have been insane. But the 68K still did everything.
Yes, but ... Commodore did this because they had incompetent management. They shipped products (VIC-20, 1540) with a hardware defect in one of the chips (the 6522), a chip they manufactured themselves. The kicker is:
- the C64 shipped with the 6526, a fixed version of the 6522
- the C64 is incompatible with the 1540 anyway
They crippled the C64 for no reason other than to sell more Commodore-manufactured chips inside a pointless box. The C128 was a similar trick: stuffing a C64 with garbage left over from failed projects and selling a computer with 2 CPUs and 2 graphics chips at twice the price. Before the slow serial devices, they were perfectly capable of making fast and cheaper-to-manufacture floppy drives for the PET/CBM systems.
In the era of CP/M machines, the terminal likely had a similar CPU and RAM to the computer running the OS too. So you had one CPU managing the text framebuffer and CRT driver, connected to one managing another text framebuffer and application, connected to another one managing the floppy disk servos.
I guess I should have clarified more: I dislike everything running entirely separate OSes that you have no control over at all and that are complete black boxes.
The fact they are running entire OSes themselves isn’t that big of a deal. I just hate having no control.
Oh God, the 1541 ran soooo hot, hotter than the C64 itself. I remember using a fan on the drive during marathon Ultima sessions. The 1571 was so much cooler and faster.
There's this great USENIX talk by Timothy Roscoe [1], who is part of the Enzian team at ETH Zürich.
It's about the dominant unholistic approach to modern operating system design, which is reflected in the vast number of independent, proprietary, under-documented RTOSes running in tandem on a single system, and which eventually leads to uninspiring and lackluster OS research (e.g. the Linux monoculture).
I'm guessing that hardware and software industries just don't have well-aligned interests, which unfortunately leaks into OS R&D.
I think making it harder to build an OS by increasing its scope is not going to help people build Linux alternatives.
As for the components, at least their interfaces are standardized. You can remove memory sticks from manufacturer A and replace them with memory sticks from manufacturer B without a problem. Same goes for SATA SSDs or mice or keyboards.
Note that I'm all in favour of creating OSS firmware for devices; that's amazing. But one should not destroy the fundamental boundary between the OS and the firmware that runs the hardware.
Building an OS is hard. There's no way around its complexity. But closing your eyes and pretending everything is a file is a security disaster waiting to happen (actually, happening every day).
And furthermore, OS research is not only about building Linux alternatives. There are a lot of operating systems that have a much narrower focus than a full-blown multi-tenant GPOS. So building holistic systems with a narrower focus is a much more achievable goal.
> As for the components, at least their interfaces are standardized
That's not true once you step into SoC land. Components are running walled-garden firmware and binary blobs that are undocumented. There's just no incentive to provide a developer platform if no one gives a shit about holistic OSes in the first place.
> But closing your eyes and pretending everything is a file is a security disaster waiting to happen (actually, happening every day).
How so? I can see that the limited access control in Linux is an issue, and for this reason MAC (Mandatory Access Control) mechanisms like SELinux and AppArmor exist to augment it.
But I don't see how the nature of everything being a file is a vulnerability in itself.
If you want to follow the principles of capability security, then a key part of the strategy is to eliminate “ambient authority”, that is, any kind of globally accessible way of obtaining a capability.
In traditional unix-like systems, file descriptors are very close to capabilities. But the global filesystem namespace is a source of ambient authority.
There are a couple of ways to fix this issue: fine-grained per-process namespaces as in Plan 9, so the filesystem’s authority can be attenuated as necessary and becomes more like a capability; or eliminate absolute pathnames from the API, so you have to use functions like openat() to get an fd relative to an existing fd.
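As a concrete illustration of the openat() style: instead of opening an absolute path out of the global namespace, a process holds a directory fd and can only open things relative to it. A minimal C sketch; the /srv/app directory and data/config.txt path are made-up examples:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* The directory fd acts like a capability: it only grants access to
           what is reachable beneath it. */
        int dirfd = open("/srv/app", O_RDONLY | O_DIRECTORY);
        if (dirfd < 0) { perror("open"); return 1; }

        /* No absolute path here; authority comes from dirfd, not from the
           global filesystem namespace. */
        int fd = openat(dirfd, "data/config.txt", O_RDONLY);
        if (fd < 0) { perror("openat"); close(dirfd); return 1; }

        /* ... read from fd ... */
        close(fd);
        close(dirfd);
        return 0;
    }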
It was a lame attempt at humor, a roundabout way of referring to the simplifying assumptions that *nix systems generally make of the underlying machine.
Every cell in your body is running a full-blown OS fully capable of doing things that each individual cell has no need for. It sounds like this is a perfectly natural way to go about things.
"Let the unit die and just create new ones every few years" is a brilliant solution to many issues in complex systems. Practically all software created by humans behaves the same way - want a new version of your browser or a new major version of your OS kernel or whatever else - you have to restart them.
"The creatures outside looked from DNA to k8s YAML, and from k8s YAML to DNA, and from DNA to k8s YAML again; but already it was impossible to say which was which."
Death isn’t a solution to maintenance issues; there are some organisms, including animals, that live many hundreds of years and possibly indefinitely. The reason for it seems to be to increase the rate of iteration, to keep up the pace of adaptation and evolution.
It's more of a "sleep mode". There are still a lot of wakeups, and cron jobs running clean up of temporary files, cache management and backup routines. Background services still run.
Poor comparison - DNA is compiled assembly language code. It is meant to be spaghetti to save space and reuse proteins for multiple functions. In that regard it’s the most efficient compiler in the universe.
No idea about dinosaurs but some reptilian red blood cells live much longer as in 500+ days for turtles vs 120 for humans. However, it varies widely with mice and chickens having much faster turnover. https://www.sciencedirect.com/science/article/pii/S000649712...
Five species of salamanders have similar enucleated red blood cells, but I can’t find out how long they last in comparison. https://pubmed.ncbi.nlm.nih.gov/18328681/
One theory is that it’s an adaptation to having unusually large genomes, which would otherwise be an issue, but biology is odd, so who knows.
I think that's the reason why mammals evolved in this way. Red blood cells go everywhere, and I mean everywhere, in your body. Most other cells can't get close enough.
Isn’t the primary purpose of the ME to run DRM and back-door the system? How would it be useful at all as open source? People would just turn it off entirely.
This has already been solved. Modern devices come with low-power general-purpose cores, and OSes can wake up briefly to check for new messages. I just can’t see why you would ever want to remotely manage your own laptop in a case where software installed on the OS isn’t sufficient.
Recent Intel CPUs come with efficiency cores and support S0 standby, aka "Modern Standby", which can periodically wake up and do stuff like check for new emails.
I don't know. This sounds very computer-science-y. We build smaller tools to help build big things. Now the big things are so good and versatile that we can replace our smaller tools with the big things too. With the more powerful tools, we can build even bigger things. It's just compiler bootstrapping happening in the hardware world.
The problem is that there's so much unexplored territory in operating system design. "Everything is a file" and the other *nix assumptions are too often just assumed to be normal. So much more is possible.
Possible, but apparently rarely worth the extra effort or complexity to think about.
The Unix ‘everything is a file’ model has done well because it works pretty well.
It also isn’t generally a security issue, because it allows application of the natural and well-developed things we use for files (ACLs, permissions, etc.), without having to come up with some new bespoke idea, with all its associated gaps, unimplemented features, etc.
Hell, most people don’t even use POSIX ACLs, because they don’t need them.
> ITS (of PDP-10 hacker fame) - processes could debug and introspect their child processes. The debugger was always available, basically. The operating system provided support for breakpoints, single-stepping, examining process memory, etc.
> KeyKOS (developed by Tymshare for their commercial computing services in the 1970s) - A capability operating system. If everything in UNIX was a file, then everything in KeyKOS was a memory page, and capabilities (keys) to access those pages.
In every operating system, the basic unit of abstraction will be a process -- which necessitates a scheduler, some form of memory protection, some way for the process to interact with the kernel, and the notions of "kernel space" and "user space". There is a lot of room for innovation there (see ITS), but I suspect most of the room for innovation is in how an OS abstracts/refers to various parts of the system.
This is a bit like asking "Can you elaborate on particle physics" in 1900. The point is that we don't know because there's been so little experimentation in the space. Not a lot of funding for "build an OS that uses completely different idioms and concepts to what we know, possibly without even a goal other than trying something out".
Same. It's not about the principle, but that generally these OSes increase latency etc. There's so much you can do with interrupts, DMA, and targeted code when performance is a priority.
I sometimes wonder how fast things could go if we ditched the firmware and just baked a kernel/OS right into the silicon. Not like all the subsystems which run their own OSes/kernels, but really just cut every layer and have nothing in between.
You'd find yourself needing to add more CPUs to account for all the low-level handling that the various coprocessors do for you, eating into your compute budget, especially with a high interrupt rate, since you wouldn't have it abstracted and batched in the now-missing coprocessors.