Not very different, since we're basically doing microkernel-y things anyway using various kinds of virtualization.
It would probably be more elegant to just run Hurd or whatever instead of, for example, running multiple JVMs on top of multiple Linux instances on top of a hypervisor (yes, this happens), but them's the breaks.
Well, kind of, but not really. Xen isn't bare metal in any sense of the phrase; it's just at a similar (but different) level of abstraction. In an embedded system you have to write device drivers, respond to hardware interrupts, and so on. On Xen you make hypercalls instead.
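To make that contrast concrete, here's a minimal sketch of the same "print a string" job in both worlds. The UART address and register offsets are made-up placeholders (they depend entirely on your SoC), and the hypercall prototype follows mini-os-style conventions, so treat it as an illustration rather than copy-paste code:

    /* Bare-metal embedded: "printing" means poking a memory-mapped UART
       and spinning on its status bits.  UART0_BASE and the register
       offsets are hypothetical. */
    #include <stddef.h>
    #include <stdint.h>

    #define UART0_BASE   0x10000000UL   /* hypothetical MMIO address */
    #define UART_DR      (*(volatile uint32_t *)(UART0_BASE + 0x00))
    #define UART_FR      (*(volatile uint32_t *)(UART0_BASE + 0x18))
    #define UART_FR_TXFF (1u << 5)      /* TX FIFO full */

    static void baremetal_puts(const char *s, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            while (UART_FR & UART_FR_TXFF)
                ;                       /* spin until the FIFO drains */
            UART_DR = (uint32_t)s[i];
        }
    }

    /* Xen PV guest: you never touch the UART.  You hand the bytes to the
       hypervisor via the console_io hypercall (prototype as exposed by
       mini-os style headers; wrapper names vary by environment). */
    int HYPERVISOR_console_io(int cmd, int count, char *str);
    #define CONSOLEIO_write 0

    static void xen_puts(const char *s, size_t n)
    {
        HYPERVISOR_console_io(CONSOLEIO_write, (int)n, (char *)s);
    }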
So eLua and these projects are similar in roughly the same way that a typical RTOS is similar to running Linux on AWS. They both have kernels, but the environments are vastly different.
Microkernels are not harder to debug than monolithic kernels. I'd even say that they are easier to debug, much easier. (Personal experience in debugging both.)
The problem with microkernel-based OSes is, as Linus Torvalds aptly put it, that they turn well-understood memory-protection problems into not-so-well-studied IPC problems. (The actual quote is "They push the problem space into communication, which is actually a much bigger and fundamental problem than the small problem they are purporting to fix.")
The microkernel itself is not the real problem here; the big issue is debugging faulty IPC sequences between the servers that implement the OS services, a problem that is almost non-existent in a monolithic kernel.
HOWEVER, current monolithic kernels are now facing growth problems of their own, on two fronts: we want fancy remote storage accessed as easily as local storage (do you want to mmap a file stored in a RAID setup built out of ATA-over-Ethernet disks?), and the process model is too leaky, so we need stronger containers like VMs (which are becoming just as leaky an abstraction as processes already are). All these new features require communication between components that were previously thought of, and implemented, as independent. This means that the IPC problems are now creeping into the world of monolithic kernels.
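That mmap point is easy to gloss over, so here's a minimal sketch in C. The path is a made-up example standing in for a filesystem backed by remote ATA-over-Ethernet block devices; the interesting part is that the code is byte-for-byte the same as for a local file, while every page fault may now turn into network I/O:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        /* Made-up path; imagine the filesystem sits on a RAID of
           remote ATA-over-Ethernet block devices. */
        int fd = open("/mnt/aoe-raid/data.bin", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Touching a page here can now stall on the network, not just the
           local disk -- the API gives no hint of the difference. */
        printf("first byte: %d\n", st.st_size > 0 ? p[0] : -1);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }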
There were a number of microkernel efforts at Apple in the 80s and 90s.
- Pink (later known as the money-burning party Taligent) had a 'new kernel' that was message-passing. They spent a lot of time working on RPC efficiency.
- The Newton used a message-passing kernel. Not the most efficient thing in the world, but there was MMU support to do some interesting page-sharing / fault-dispatch policy stuff, so you could get IPC-like behavior with faults. Basically hobbled by a 20MHz processor with minimal cache, and not very much RAM at all.
Btw, I didn't notice the Newton being very hard to debug (except that all of our debugging was printf, or staring at the disassembled output of CFront).
The way you word it (and from my personal bias), it seems like people kicked the can down the road on getting IPC right. I like how Plan 9 handled it: one protocol to rule them all, call into other programs by reading and writing files, and let the filesystem handle the translation. I think in this modern era of extreme hardware juice, this could easily work, and work better than what we have.
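For anyone who hasn't seen that Plan 9 style in the flesh, here's a rough sketch of dialing a TCP connection through the /net filesystem, written as plain C against that convention (real Plan 9 code would use its own libc and dial(); the paths follow Plan 9's ip(3) layout and error handling is simplified):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Open a TCP connection the Plan 9 way: no socket(), just files that
       a file server exports under /net.  addr looks like "example.com!80". */
    int dial_tcp(const char *addr)
    {
        char dir[16], path[64], msg[128];

        /* Opening the clone file allocates a fresh connection; reading it
           back tells you which numbered directory the connection lives in. */
        int ctl = open("/net/tcp/clone", O_RDWR);
        if (ctl < 0) return -1;
        int n = read(ctl, dir, sizeof dir - 1);
        if (n <= 0) { close(ctl); return -1; }
        dir[n] = '\0';

        /* Control messages are just text written to the ctl file. */
        snprintf(msg, sizeof msg, "connect %s", addr);
        if (write(ctl, msg, strlen(msg)) < 0) { close(ctl); return -1; }

        /* Data then moves through an ordinary read/write file descriptor. */
        snprintf(path, sizeof path, "/net/tcp/%s/data", dir);
        int data = open(path, O_RDWR);
        close(ctl);
        return data;
    }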
Then again, I think shared memory is a harder problem to solve than doing IPC through some buffered file mechanism. The synchronization becomes a pain in the butt, and even if you squeeze a smidgen more performance out of the ordeal, it isn't very reusable, so it's only a situational tool. I feel like module injection a la Linux should have been a situational tool as well (at best), but we ended up putting everything in the kernel, and we keep adding more (KMS, KVM, DMI2, etc.).
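As a rough illustration of the setup cost being complained about, here's what it takes just to share a single counter safely between two processes with POSIX shared memory and a process-shared mutex. /demo_shm is an arbitrary example name; the pipe alternative would be a handful of lines with none of the locking questions:

    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    struct shared {
        pthread_mutex_t lock;            /* must be process-shared */
        long counter;
    };

    int main(void)
    {
        /* Link with -lpthread (and -lrt on older glibc). */
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
        if (fd < 0 || ftruncate(fd, sizeof(struct shared)) < 0) {
            perror("shm setup");
            return 1;
        }

        struct shared *s = mmap(NULL, sizeof *s, PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, 0);
        if (s == MAP_FAILED) { perror("mmap"); return 1; }

        /* The attribute ceremony below -- and the questions it raises
           (who initializes? who destroys? what if a lock holder dies?) --
           is exactly what pipes and file-style IPC never make you answer. */
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(&s->lock, &attr);
        s->counter = 0;

        if (fork() == 0) {               /* child and parent both bump it */
            pthread_mutex_lock(&s->lock);
            s->counter++;
            pthread_mutex_unlock(&s->lock);
            _exit(0);
        }
        pthread_mutex_lock(&s->lock);
        s->counter++;
        pthread_mutex_unlock(&s->lock);

        wait(NULL);
        printf("counter = %ld\n", s->counter);
        shm_unlink("/demo_shm");
        return 0;
    }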
Compared to, say, an SOA platform with 200 service endpoints, a database with about 2000 tables, and integration with 15 other providers over XML, CSV, SFTP and some weird shitty non-standard binary protocol. All written entirely by the lowest bidder, without unit tests because those would have knocked the budget up. Oh, and with a toolchain that barely works and no VCS past an old corrupt SourceSafe database full of compiled DLLs for which there is no source any more.
I've done both and am currently stuck doing the latter. Given the choice, I'd rather poke one of my eyes out and debug a microkernel.
Absolute nonsense. Microkernels are much easier to debug than monolithic kernels.
The reasons are obvious to anybody who has spent more than an afternoon working with both: a microkernel is so small that the chances of your problem being in the kernel are minute compared to the chances of it being in user space. That means all your luxury debugging tools are available for the vast majority of your problems.
After it is first written, the kernel code of a microkernel system is hardly ever touched. A good microkernel will limit itself to the most minimal subset that still allows the system to function, and will move everything else into regular processes.
Stallman, the FSF and the free software ideals would probably be much more widely known. Many (most?) users of GNU/Linux call the whole OS "Linux" and honestly think Torvalds started the whole endeavour.