The QNX Operating System (abortretry.fail)
322 points by BirAdam 49 days ago | 142 comments


I had some fun history with this OS.

First, we had ICON computers in my elementary school, we'd all try to spin the trackball as quickly as it would go. Not sure if we ever broke one.

The second is when I worked at BlackBerry. I was building a feature that allowed you to use your QNX BlackBerry as a Bluetooth HID device. You could connect it to any device and use the trackpad + physical keyboard to remotely control a computer. It was fantastic. You could hook your laptop up to a projector and control slides from your BlackBerry.

Then some product manager with questionable decision-making told me to lock it down so it would only work with BlackBerry PlayBooks for "business purposes", rendering it effectively useless (since PlayBooks are all e-waste). I distinctly remember the meeting where Dan Dodge argued that since it's a standard, it should not be locked down.

I respect Dan Dodge for that; I don't think I'd work with that PM again.


QNX became BlackBerry 10, still the best mobile OS. Too bad only a dozen of us ever used it.


My Passport was my favourite phone ever; if apps like WhatsApp hadn't stopped supporting it, I might still be using it.


Oh wow - I didn't realize that ICON was based on this - I remember it was quite advanced in comparison to the generic PC computers at the time.

With one exception - you could crash other ICON systems, or the overall network, just via the machine-to-machine chat functions.


KDE Connect is honestly just that but better in some respects. And honestly, it's great.


I wish we were able to preserve the ICON experience in an emulator. I bet even a browser has enough compute power to emulate a network of them.


Bridge/Blend was cool; shame it was restricted to PlayBooks and desktop software. Slipstream had some pretty solid ideas.


FWIW, I used that HID function you worked on and I loved it too!


I really liked the QNX Photon aesthetic, for a long time I maintained an absurdly complex FVWM configuration designed to look like it.

This was a screenshot of my Gentoo desktop around 2004!

https://lock.cmpxchg8b.com/img/fvwm_desktop.jpg


That looks great even now! Do you still have your dotfiles somewhere?


I used your config as a base for mine and kept it for like 15 years or so.


Man, this easily tops my WindowMaker theme. Your FVWM looks soo cool.


Is that a Winamp ripoff on the screenshot? :)


This is XMMS, an X11 clone of Winamp. We all had it on our desktops back then. And hell yeah, our Linux desktops could look like anything we wanted. What a time!

I also started using Gentoo around that time.


It could load Winamp skins too.


that really whips the llama's ass


I remember the first time I heard it. Was definitely a "did... did I just hear that right?"


I loved the idea of QNX. Got way excited about it. We were moving our optical food processor from dedicated DSPs to general purpose hardware, using 1394 (FireWire). The process isolation was awesome. The overhead of moving data through messages, not so much. In the end, we paid someone $2K to contribute isochronous mode/DMA to the Linux 1394 driver and went our way with RT extensions.

It was a powerful lesson (amongst others) in what I came to call “the Law of Conservation of Ugly”. In many software problems, there’s a part that is just never going to feel elegant. You can make one part of the system elegant, which often just causes the inelegance to surface elsewhere in the system.


> what I came to call “the Law of Conservation of Ugly”. In many software problems, there’s a part that just is never going to feel elegant

This may be an instance of the Waterbed Principle: in any sufficiently complex system, suppressing or refactoring some undesirable characteristic in one area inevitably causes something undesirable to pop up somewhere else. It's as if there is some minimum amount of complexity/ugliness/etc. that the entire system must contain while still carrying out its essential functions, and it has to leak out somewhere.

https://en.wikipedia.org/wiki/Waterbed_theory


The terms I've seen used and prefer to use are "essential complexity" and "accidental complexity".


I have a really neat idea to improve the message passing speed in QNX: you simply use the paging mechanism to send the message. That means there is no copying of the data at all, just a couple of page table updates. You still have the double TSS load overhead (vs 1 TSS load in a macro kernel), but that is pretty quick.
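
(Not how QNX actually does it, but a minimal userspace analogue of the remap idea, assuming Linux's memfd_create: the "receiver" maps the very pages the sender filled, so the payload itself is never copied.)

    /* Userspace analogue of "send the pages, not the bytes" (Linux-only,
     * illustrative): a memfd stands in for the kernel's page-table update.
     * The child maps the same physical pages, so no payload bytes move. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 2 * 1024 * 1024;          /* one 2 MB "message" */
        int fd = memfd_create("msg", 0);       /* anonymous, page-backed */
        ftruncate(fd, len);

        char *send_buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
        strcpy(send_buf, "hello via remapping, not memcpy");

        if (fork() == 0) {                     /* the "receiver" */
            char *recv_buf = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
            printf("receiver sees: %s\n", recv_buf);
            _exit(0);
        }
        wait(NULL);
        return 0;
    }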

But you are right that there is a price for elegance. It becomes an easier choice to make when you factor in things like latency and long term reliability / stability / correctness. Those can weigh much heavier than mere throughput.


This is sort of what Mach does with "out-of-line" messages: https://web.mit.edu/darwin/src/modules/xnu/osfmk/man/mach_ms... https://dmcyk.xyz/post/xnu_ipc_iii_ool_data/

(this is used under-the-hood on macOS: NSXPCConnection -> libxpc -> MIG -> mach messages)
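
(A rough sketch of the sending side at the mach_msg level; port setup and error handling are omitted, so treat the details as illustrative rather than canonical.)

    /* Sketch: sending a buffer out-of-line via mach_msg (macOS).
     * The kernel transfers the region by remapping it copy-on-write
     * into the receiver, rather than copying the bytes. */
    #include <mach/mach.h>

    typedef struct {
        mach_msg_header_t         header;
        mach_msg_body_t           body;   /* announces the descriptor */
        mach_msg_ool_descriptor_t data;   /* the out-of-line region   */
    } ool_msg_t;

    void send_ool(mach_port_t dest, void *buf, mach_msg_size_t len)
    {
        ool_msg_t msg = {0};
        msg.header.msgh_bits = MACH_MSGH_BITS(MACH_MSG_TYPE_COPY_SEND, 0)
                             | MACH_MSGH_BITS_COMPLEX;
        msg.header.msgh_remote_port    = dest;
        msg.header.msgh_size           = sizeof(msg);
        msg.body.msgh_descriptor_count = 1;

        msg.data.address    = buf;
        msg.data.size       = len;
        msg.data.copy       = MACH_MSG_VIRTUAL_COPY;  /* map, don't memcpy */
        msg.data.deallocate = FALSE;
        msg.data.type       = MACH_MSG_OOL_DESCRIPTOR;

        mach_msg(&msg.header, MACH_SEND_MSG, sizeof(msg), 0,
                 MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
    }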


Mach has always been a very interesting project. It doesn't surprise me at all to see that they have this already, but at the same time I was not aware of it, so thank you. This also more or less proves that it may well be an avenue worth pursuing.


I learned of the idea from some paper or other on Barrelfish, which is a research OS that builds on seL4's capability design. Barrelfish is underrated! Aside from its takes on kernel architecture, it also has interesting nuggets on other aspects of OS design, such as using declarative techniques for device management.


I haven't seen it implemented anywhere, but that sounds like the "pagetable displacement" approach described here: https://wiki.osdev.org/IPC_Data_Copying_methods#Pagetable_di...

The same idea occurred to me a while ago too, which is how I originally found that link :)


How performant is that in practice? I thought setting pages was a fairly expensive process. Using a statically mapped circular buffer makes more sense to me at least.

Disclaimer: I don't actually know what I'm talking about, lol


To be clear, since the other replies to you don't seem to be mentioning it, the major costs of MMU page-based virtual memory are never about setting the page metadata. In any instance of remapping, TLB shootdowns and subsequent misses hurt. Page remapping is still very useful for large buffers, and other costs can be controlled based on intended usage, but smaller buffers should use other methods.

(Of course I'm being vague about the cutoff for "large" and "smaller" buffers. Always benchmark!)


You can pretty reliably do it on the order of 1 us on a modern desktop processor. If you use a level 2 sized mapping table entry of say 2 MB, that is a transfer speed on the order of 2 TB/s, or ~32x faster than RAM for a single core, even if you only move a single level 2 sized mapping table entry. If you transfer multiple in one go, or use say a level 3 sized mapping table entry of 1 GB, that would be 1 PB/s, or ~16,000x faster than RAM, or ~20x the full memory bandwidth of an entire H200 GPU.
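
(A quick back-of-envelope check of those figures in C, assuming the ~1 us per remap quoted above:)

    /* Back-of-envelope check of the figures above (~1 us per remap). */
    #include <stdio.h>

    int main(void)
    {
        double t  = 1e-6;                  /* one remap: ~1 microsecond */
        double l2 = 2.0 * 1024 * 1024;     /* 2 MB (level 2) mapping    */
        double l3 = 1024.0 * 1024 * 1024;  /* 1 GB (level 3) mapping    */
        printf("2 MB per us -> %.1f TB/s\n", l2 / t / 1e12);  /* ~2.1 */
        printf("1 GB per us -> %.1f PB/s\n", l3 / t / 1e15);  /* ~1.1 */
        return 0;
    }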


Pretty quick, far faster than an inter-process memory copy. The only way to be sure would be to set it up and measure it, but on a 486/33 I could do this ~200K times per second; on modern systems it should be a lot faster than that, more so if the process(es) do not use FP. But I never actually tried setting up, say, a /dev/null implementation that used this; it would be an interesting experiment.


Passing the PTE sounds great for big messages (send/recv).

For small messages (open), the userspace malloc is going to have packed several small buffers into a single page, so there's a chance you'd need to copy to a fresh userspace page first; at that point the two copies might work out better.


The throughput limitation is really only an issue for big messages, for smaller ones the processing overhead will dominate.


The QNX call to do that is mmap().
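
(For anyone following along, the explicit, application-visible version; a minimal POSIX sketch that works on QNX and Linux alike, with names chosen for illustration and error handling elided.)

    /* Explicit zero-copy setup via POSIX shared memory + mmap().
     * Any process that opens "/msgbuf" and mmap()s it sees the same
     * pages; the "message" is shared, never copied. */
    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = shm_open("/msgbuf", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, 4096);

        char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        strcpy(buf, "zero-copy hello");

        munmap(buf, 4096);
        shm_unlink("/msgbuf");
        return 0;
    }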


Yes, I know. But I rolled my own QNX clone and I figured it would be neat to do this transparently rather than having the application code it up explicitly. This puts some constraints on where messages can be located, though, and that's an interesting problem to solve if you want to do it entirely without overhead.


I have a general distaste for transparent policies, which I always find to fall short for some use case. In this case, the sender would know best what to do with their message. Moreover, for small buffers, page remapping won't be an optimization. I recommend reflecting this as an alternative send interface.

The lower a transparent policy lies in the OS, the worse it contorts the system. Even mechanisms necessarily constrain policy, if only slightly. I strongly believe that microkernels will only be improved by adhering ever closer to true minimality. If backwards compatibility is important, put the policy in a library. But I think transparent policies are generally advisable only when user feedback indicates benefit.


If you want your send/receive/reply mechanism to work transparently across a network then you have already made many such decisions and likely this one will just appear as an optimization in case both client and server are local.


I agree that the decisions will likely be made in the end, but I want to say that the decisions should be made as high/late as possible. It is important to note not just that a decision has been made, but where it has been made, for any dynamic system. This will determine things like overhead or maintenance burden.

Contrary to QNX, I'm not entirely convinced that network transparency by default is ultimately best, though that is a separate concern.


Is "optical food processor" a metaphor, or is this actually a device that would cut up food items based on image feedback?


Usually it's about sorting. Take a lot of whatever (french fries, green beans, etc.), accelerate them to something like 3 m/s, launch them off the end of a belt, scan them looking for defects, and then use air jets to divert the defective items. Look on YouTube for it. It's sort of mind-boggling to see the scale at which french fries alone are produced. You see one line running at load, and then realize there are multiple lines in most plants, and there are hundreds of plants worldwide.

The cooler machines were specialized for fries: they used a rotating knife drum above a belt to cut defect spots out of the fries.

I've not done that for 17 years now; the newer machines are that much cooler.


That's awesome. Thanks for the explanation.

I did find several machines like this on YouTube, and it's amazing to watch. (One of them had little motor-actuated slats that could kick the defective items away, almost like a foot kicking a soccer ball!)


There's an older talk Simon Peyton Jones (IIRC?) gave about some development or other in Haskell, in which he suggested that many software systems have some aspect of the swamp or the marsh into which you must eventually wade - that there's a mucky, sticky, irreducible aspect to the problem that must be dealt with somewhere, regardless of how elegant the rest of the system is.

"that marsh thing" has stuck with me, and been a frequent contributor to my work and thinking. I'll happily take Law of Conservation of Ugly as a _much_ better name for the thought :)


Today though, I'd argue that with full DSMP support and much more capable systems, any overhead from message passing is much less of a concern, or at least outweighed by other benefits.


The ICON picture in this blog post is of my ICON and taken from my blog (with credit, which is a nice gesture). If anyone is interested in reading my two posts about the Burroughs ICON computer running QNX, here are the links (the first post has YouTube links of my QNX demos):

https://jasoneckert.github.io/myblog/icon-computer/

https://jasoneckert.github.io/myblog/lexicon-computer/


Do you have any plans to backup the software and ROMs to archive.org? Lack of access to the ROMs has been a stumbling point for creating an emulator for this system.


Probably around 1996(?), I remember getting this on a floppy disk: a full RTOS with a GUI and a networking stack. I wondered how they could do that with such a small footprint. For reference, I recall having to write stacks of floppies for a basic Slackware install, let alone Windows 95 :)


I remember that. It even included a web browser. There's a copy here:

https://winworldpc.com/product/qnx/144mb-demo


That was for show purposes. It was certainly intended to display OS capabilities while impressing people at the same time. Linux eventually came to dominate the live-CD scene in the early 2000s but, to this day, people still cite this demo as especially incredible. Actually, there was a Linux with X11 and a functional browser that ran from a floppy, and that is something that is really impressive: https://news.ycombinator.com/item?id=28515025


I worked on one of the Linux live CDs (Linuxcare bootable business card / LNX-BBC), which we designed to fit in 50 megabytes. I remember being kind of jealous of the QNX floppy, because I felt that it had about 25% as much software functionality as we did, but in about 3% of the storage space!

I mean, basically we could interact with a lot more hardware, support more file formats, filesystems, and network protocols, and had more high-level scripting languages. But there still seemed to be a huge disproportion where the QNX floppy was just so much more space-efficient for what it did.


I remember getting it around the year 2000 after having fiddled a bit with Linux desktops and being blown away.


Same. I used to use it as my "cybercafé" OS: since cybercafés at the time were untrustworthy (big risk of keyloggers and malware), I'd boot QNX on their PCs and browse the web securely. I used to carry it around everywhere, and everyone I showed it to was blown away.

I even ran the full QNX Momentics desktop OS on my home PC (a PIII 450) and it was very very impressive, way better than Linux and pretty much everything out there. Well, BeOS was also impressive with its multimedia performance, but QNX was just so much more polished and professional.

The late 90s-early 2000s was such an interesting era in computing in general - at one point I was multi-booting something like a dozen different OSes - DOS, Windows, Linuxes, BSDs, QNX, BeOS, MenuetOS... all thanks to a fully graphical boot manager whose name I forget; it even had a built-in partition manager, and mouse support! All these OSes were also quite usable, unlike the niche OSes of today, many of which sadly can't even be installed on real modern hardware because of all the complexity. I really miss those days; it was truly a golden era of computing.


Good times indeed. Knoppix was my best friend then, but I remember that QNX floppy and being pissed off that I didn't have enough geeky friends who were blown away by it.


To this day the single most-impressive tech demo experience I've ever had.


I had to use QNX for realtime applications in the late 1990s before the Pentium came along. Windows, Linux and existing UNIX flavours were not an option as none of them could do the realtime thing in quite the same way that QNX could. That was the strength of the OS and I am glad I knew this before reading the article.

What I also liked about QNX was the petite size. If I remember correctly it came on one floppy disk, and that included a GUI, not that you need a GUI with QNX since the product will be an embedded system of sorts. All of the documentation was clear and, even if you had not read the manual, the overlap with UNIX meant that the system was far from intimidating, as most of the commands that I knew would work fine, albeit with different options.

I had not fully realised how QNX had gone from strength to strength in automotive, and I didn't even know Harman owned them for a while.

Given that we have gone from single core, 32 bit 386/486 to today's sophisticated SOCs that are thousands of times more capable, the question has to be asked, how important is QNX's superpower of realtime goodness, particularly if it is just for automotive applications such as turning on the A/C?

Surely a modern CPU that goes so much faster can do a better job without having to care about realtime performance? Or maybe Android Auto and Automotive Linux have those bases covered? Regardless, I am sure that if you want realtime embedded applications then you hire the guys that know QNX and reject those that haven't a clue.


It's not all about the realtime. It's also about the process isolation. If a Linux driver crashes, it takes down the entire system. If a QNX driver crashes, it's not part of the kernel so it can just be restarted. It's not good to wait for 5 or 6 seconds for your car to reboot so your brakes will start operating again while you careen towards that retaining wall on the highway.


As good and reliable as QNX is, I really hope it controls only car infotainment and not brakes!


It's everywhere. It's in the ECMs that control your brakes.


Keep in mind there's a distinct possibility that the Android you see as a driver is running as a guest on a QNX Hypervisor system ;)


It's such a trip to learn that QNX might be the hypervisor under the Android OS running in my car.

I've seen my car's infotainment fail and restart, but I didn't think about what was handling it underneath.

Is there a chance that QNX has a podman-type application to run containers?


Yes! Like podman, we have an OCI-compliant container solution [0]. I haven't tried to play with it myself yet, but details are at the link.

[0] https://blackberry.qnx.com/en/products/foundation-software/q...


I worked with QNX 4 at uni and we built a robot system based on 2-4 networked 486/Pentium CPU cards in a rack with it [1]. We fully used the OS to make our robot system both hard real-time and completely network based using QNX's native capabilities. This gave me a deep understanding of those issues in my later career in robotics systems and I basically recreated - tediously - most of its features with UDP, TCP/IP and various IPC (inter process communication) features on vxWorks, SunOS and Linux.

One feature of the OS I fondly remember was that the most basic system calls (send/receive/reply) were implemented as about 3 inline assembler instructions each directly in the header file (qnx.h ?).

[1] https://herbert-janssen.de/paper/irini97-12.pdf
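
(On that send/receive/reply remark: a hypothetical reconstruction of what such a stub could look like; the interrupt vector and register assignments are invented for illustration, not copied from the real qnx.h.)

    /* Hypothetical sketch of a QNX 4-style kernel-call stub: the whole
     * "syscall layer" is just loading registers and trapping. The
     * vector (0x28) and register usage here are invented. */
    static inline int Send(int pid, const void *smsg, void *rmsg,
                           unsigned slen, unsigned rlen)
    {
        int status;
        __asm__ volatile ("int $0x28"          /* trap into the kernel */
                          : "=a" (status)
                          : "a" (pid), "b" (smsg), "c" (rmsg),
                            "d" (slen), "S" (rlen)
                          : "memory");
        return status;
    }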


QNX used to be the gold standard for microkernels: the one doing supposedly impossible things that was actually used in the real world.

Nowadays I'm not sure how it compares to other microkernels with wide field experience, like the Nintendo Switch's Horizon, seL4, and more recently HarmonyOS NEXT.


The HongMeng kernel, performance-wise, seems indeed to be in the same ballpark as seL4 or QNX, but it doesn't seem that it will be open-sourced.


Perhaps it won't be open sourced, but having read one of the papers written for it, the principles behind its advancements can easily be added to the microkernel repertoire.


What's the core concept compared to other kernels?


At its core, it's not too distinct from, say, seL4, but some of the distinctions are useful. I think Hongmeng's work on isolation classes (particularly in transferring components across classes), a performance-motivated partial alternative to capabilities, OS service coalescing and partitioning, and porting Linux drivers is valuable (see sections 4.2-4.4 and 5 [0]). It's not that these changes should be accepted wholesale, but I think they are a useful data point for alternate designs. I think the emphasis on access control (capability) performance and driver coverage is relevant for any production-grade microkernel.

I don't like the paging optimization described in section 4.5 [0]. It seems like a lot of added complexity for unequal gain.

In general, the authors make many good observations on the current designs of microkernels, particularly how the proliferation of small processes harms performance. Based on my reading of this paper and many others, I think there are some pragmatic considerations for building microkernel-based systems. The granularity of processes should be curtailed when performance is critical. Security is a spectrum, and such a system can still be more secure than the status quo. Limited kernels should be colocated next to processes again, not always across address spaces (since Meltdown), deferring to a cross-address space kernel on the harder-to-secure paths. If a process has a timer capability, and likely will for its remaining lifespan, an optimization could have a stub kernel accepting timer syscalls and forwarding the rest. Lastly, and this is a broader problem in most software, both code and state must be located in their proper places[1]. Use Parnas' criteria [2] for modular programming. If you believe in the power of the concept of microkernels, I have this to sell you; I believe it's even more basic and necessary. It's probably one of the most fundamental concepts we have on how to write good code.

[0] https://www.usenix.org/system/files/osdi24-chen-haibo.pdf

[1] https://dl.acm.org/doi/10.1145/3064176.3064205

[2] https://wstomv.win.tue.nl/edu/2ip30/references/criteria_for_...


That's interesting, thanks for the explanation and the references. Of course I agree with Parnas' modularization principles, and having spent a lot of time with different versions of the Oberon and Active Object systems, I think that a microkernel is a natural fit. You seem to be a scholar of microkernels; are you also developing microkernels?


An HNer after my own heart! If only Parnas' work had gotten more mindshare. Everyone knows about the benefits of modularity and layering, but most examples are unconvincing. Even simpler than "modularity", in a manner unifying it with the lessons of Hongmeng and Theseus (whose team wrote the state spill paper I linked), I now think of "putting code/data where it belongs". Where two units are properly separated, they interact rarely. As with parallelism in computing, lines that never touch won't interfere. Total parallelism is not possible for any useful program, because coordination is necessary, but the right arrangement of knots and crossings will make things go as smoothly as they can. A modular program should also be a fast program. The only real obstacle is developer headache.

> You seem to be a scholar of microkernels; are you also developing microkernels?

Nothing professional, and I haven't even gotten to actually developing. But I have a general design and many half-baked specifics. I like to push the limits of what's been done. Developer practicality is secondary to bare minimalism, especially because convenience can be built back up (if painstakingly). I'm mainly inspired by seL4 and Barrelfish.

My most radical idea is making the kernel completely bare, without even a capability system or message passing. Similar to Barrelfish, I'd have a trusted userspace process (monitor). If I place it in the same address space as the kernel, every privileged interaction adds two mode switches, which I think (for I have not demonstrated it yet!) is well worth the greater programmability of kernel functionality. seL4's use of CNodes is elegant in one sense, but in another, it hamstrings both the user processes (fine, good, even) and the kernel itself (bad). seL4's approach is undeniably a better target for formal verification, but it restricts how efficient capabilities can be. Barrelfish, which targets multicore machines in a distributed manner, makes the capability system (as the load bearing core of these kinds of microkernels) even more contorted. The kernel is the multiplexer of last resort, standing in for the hardware. The sooner the kernel is not involved, the easier everyone breathes. Instead of trying to build a framework/foundation and the building itself all at once, the framework itself is plenty valuable. The monitor gets the control of the kernel but without the dependence on hardware or the rigid interface to userspace. This partition presents a meaningfully different level of multiplexing, where the kernel and the monitor each play their own part. The monitor's view of the virtual hardware offered by the kernel is much improved.

Security and trust are not black and white, and the kernel itself should be flexible to adaptations. I could just implement seL4 or Barrelfish in the monitor instead, or diverge more and investigate the new tradeoffs. Capabilities are load-bearing here, too, so there is every reason to play around with them. How the capability system works will determine how the entire operating system works. (As an aside: I was pleased in noticing that object capabilities have a close relation to Parnas-style modules, being their interfaces. But I think what object capabilities are can be played with too.) How might capabilities be stored, or accessed, more efficiently? I think there's definitely a lot of room for improvement there. Composite offers some ideas there, though I still lean towards Barrelfish's ideas. And I imagine specialized kernels, paired with userspace processes in their address spaces (like the "true kernel" and monitor), reifying the capabilities granted to those processes. Traditional microkernel wisdom could be interpreted as requiring as little code running in kernel space as is feasible. However, I have many other parameters I wish to allow people to optimize for, not even just performance, so I offer this: the core kernel will be so minimal to the point it hurts, and the monitor picks up the slack. Then, if security is paramount, only the obviously safe, minimally augmented kernels will be exported to other processes. Programmatic generation of specialized kernels, coordinated on capabilities, even restricted to only some processes. But if willing, much more daring ventures can be tried. I even have the suspicion that one could place what amounts to Linux as a specialized kernel, as the ultimate mode of bona fide virtualization. No VirtualBox, no personality servers, or even syscall emulation. I wonder how hard the task would be. Although I should probably learn more about user-mode Linux, and similar works in other operating systems (DragonflyBSD, and seemingly future-Redox?) to just run them in user space. That's still a pipe dream for now.

Having mentioned so much about seL4, and given this thread is originally about QNX, I should mention that I don't think my dream microkernel should put so much emphasis on kernel-facilitated message passing. I really am just offering a context switch this time. There isn't even a scheduler in the "true kernel". For all of the argumentation I've seen from the seL4 team for why any form of IPC less minimal than theirs is likely suspect, I don't see a good reason to not shoot seL4's IPC in the face too. Although some care is necessary, I could make it possible for seL4 IPC to be built exactly as-is, in the aspect of maximizing register use. The other main concern of seL4's IPC, that of threading (particularly thread priorities), I find even more suspect. No threads in my kernel either! I will take scheduler activations instead, please and thank you. I think people have been misguided into believing that "threads of execution" should be supported specifically by the kernel, when in reality, they are a much higher-level abstraction. The presence of an ongoing sequence of execution is another of those concepts that must be carefully captured in our design of software, a logical concept that informs how we should write code. Kernel threading is like supposing that a person on a smartphone doesn't view the multiple app boundary crossings and plethora of UI actions as one unified whole. The entire course must be mapped out, studied, and integrated. Kernel threading gives the illusion that we can manifest threads independently of programs, but the program determines the threading. Work instead from the hardware resources, the physical cores present, offering an interface above them, and meet the program as its developers distill its abstract formulation. The kernel's task is to bring the hardware from the bottom up to the developers, because that is necessarily how developers must interact with hardware. Otherwise, we really could invent more cores and memory to accommodate all those threads. Certainly, by removing threads from the kernel, I don't claim to have solved concurrency, or priority inversion, or anything like that. I merely want the hardware to be exposed as-is, but a bit friendlier, and people can build ever more friendly abstractions as they can and will, depending on the tradeoffs.

All things should reside in their proper places. Push down accidental complexity, bring up the essential complexity, letting everything that bears the burden of supporting things above itself (chief among them are the primary multiplexers of the kernel and system services) only do so to the extent it needs to. In the kernel's case, being simply the trampoline between the hardware and the program, Liedtke's minimality principle is perfect. Putting anything else in the kernel can only be beneficial for performance, if even that, so the tradeoff is quite plain. Even trust is not gained; it may seem horrific to have a trusted userspace process such as the monitor, but really, does the first process of any operating system not have such privilege? My monitor simply has a more defined responsibility, but given that the kernel proper is naked, the overall trust has been preserved, I think. And so on, the investigation can go. In the end, I may make the edges somewhat sharper, but they were sharp to begin with, and I offer tools to dull them. But please do note if you disagree with my conclusions! This is still just my own thinking, developed without dialogue.

</rant>


I appreciate the effort that went into this rant a lot (I need to reread it after coffee has fully kicked in), and it touches some ideas I’ve accumulated over the years.

If you still feel in a sharing mood, feel free to post links to interesting papers or proofs of concept in the space for further education.

EDIT: I quite like your idea of making the kernel unaware of threading, though I'm not sure how to go about implementing that. This is more radical than the other great idea of moving the scheduler and the concept of time(sharing) itself to userspace (I've seen a few talks about it on YT, I forget the name of the project that explored this avenue). So effectively ring 0 should only have to deal with enforcing capability security, while everything else lives in userspace.


I certainly can't claim to have discovered these ideas, though perhaps I am one of the earliest to propose gutting the kernel as heavily as I am (the riskiest manoeuvre out of my proposals).

> This is more radical than the other great idea of moving the scheduler and the concept of time(sharing) itself to userspace

The idea of userspace scheduling has been explored widely. Hydra took the plunge, but the L4 community is still reluctant. For good reason, since this typically increases latency on a latency-critical path. This is one of the strongest motivations for optimizing context switches by increasing kernel-user colocality in the same address space.

> I quite like your idea of making the kernel unaware of threading

See scheduler activations[0]. Even seL4 has kernel threads, which I think developed mainly due to being used to it, when the alternative would be better for formal verification.

> So effectively ring 0 should only have to deal with enforcing capability security, while everything else lives in userspace.

That's the idea, but unlike seL4 and Barrelfish, I think wholly implementing the capability system is very inflexible. The capability representations are rigid, which fixes (i.e., makes static) performance and fixes policy (all mechanisms restrict policy somewhat). It defies programmability. That's why I want to move most of the work to the trusted userspace process, though for the specific architecture I'm thinking of, it could be another module in kernelspace instead.

Further reading:

[0] https://homes.cs.washington.edu/~tom/pubs/sched_act.pdf | Scheduler activations involve scheduling by physical cores instead of kernel threads. Applications will be notified when losing or gaining a scheduler activation (a context in which to execute code), such as on preemption or initiating a blocking I/O call, or the reentry from those actions. This makes user-level threading more powerful, as well as any concurrency model, since the hardware is more accurately exposed.

[1] https://barrelfish.org/documentation.html | https://barrelfish.org/publications/barrelfish_sosp09.pdf | https://barrelfish.org/publications/barrelfish_hotos09.pdf | Barrelfish is a research OS that extends seL4's capability system to multicore machines in a principled manner. It also addresses hardware heterogeneity and increased complexity in hardware by using declarative techniques broadly.

[2] https://www.usenix.org/legacy/events/osdi99/full_papers/hand... | The Nemesis operating system focuses on interactive media applications. A major limitation of many OSes is that memory management is not well treated as a latency-inducing subsystem. Self-paging means making each application handle its own page faults, clarifying time usage for memory management.

[3] See the papers I linked in https://news.ycombinator.com/item?id=45522131

[4] https://dl.acm.org/doi/10.1145/2517349.2522720 | A great overview of the technological distinctions of the L4 microkernel family, emphasizing seL4

[5] https://dl.acm.org/doi/10.5555/2685048.2685051 | A Barrelfish paper that modularizes the kernel further, allowing superb flexibility such as easily swapping out the kernel running on a core


That all sounds very interesting, but it goes far beyond the scope of my current concern with microkernels. At the moment, I am satisfied with extending a well-documented approach such as Minix with seL4 (or other candidates) in such a way that the performance meets contemporary expectations. With regard to Parnas' approach, there is still considerable scope for possible solutions, as he formulated conceptual ideas rather than prescriptions and policies. I can understand your fascination with new, previously untried approaches, but considering that Tanenbaum's book is already twenty years old, there is obviously a gap in the literature on the proven state of the art that should be closed.


This is mostly untrodden territory, true, but I think going to extremes is instructive for moderation. In this kernel or others, abstractions can be created to build back up to what we are used to, but going far afield is necessary to remake the core. I was also thinking about the process of proving out solutions, and my arguments largely appeal to the existing groundwork of seL4 and Barrelfish, though of course they are still fairly unproven. I use many other works as inspiration for smaller parts of the design, such as scheduler activations or Nemesis' self-paging, and they're easier to apply to other OSes. It is great to see Linux gradually proving technologies that have long been imagined, such as increasing userspace agency in handling segfaults, and the longstanding question of modular schedulers. The nice thing about principles such as what Parnas described is that they are always worth considering. My ideas must be concretely implemented, but the true benefit will lie in how they inform our understanding of the abstract solution space. I'll be happy if my work leads to more microkernel-like work in Linux!


For folks who want to experiment and have a spare rPi:

https://carleton.ca/rcs/qnx/installing-qnx-on-raspberry-pi-4...


I'll do you one better -- a free QNX 8.0 image ready to flash on a Pi 4! Can also be adapted to Pi 400 and CM4. https://www.qnx.com/developers/docs/qnxeverywhere/com.qnx.do...


I honestly zoned out when I saw how many steps were required before just wget'ing an image file.

Sorry, nope.


Yeah, I get that and appreciate it. I feel the same way. We're working on changing it to be a 1-click process.


Or grab an (albeit older) version if you want to play around in a VM:

https://archive.org/details/qnxnc621_202306


> Don't misunderstand us. We at Quantum have a great deal of respect for Unix. It was a major force in moving operating systems out of the 60's and into the 70's. QNX however, was designed in the 80's and will be a driving force of the 90's. Over 20,000 systems have been sold since 1982.

Things they weren't anticipating included GNU, the internet, Microsoft Windows, third-party development, the Windows applications barrier to entry, the World-Wide Web, shareware, BBSes, VARs, and the free-software movement. They didn't understand how operating systems were a winner-take-all game, so pricing your OS at hundreds of dollars was a losing strategy.

But it was 01986, so who could blame them? Their 01987 ad does try to reach out to VARs.

Still, they were certainly aware of Unix, and you'd think that would mean they were aware of uucp. They just didn't anticipate its significance. Again, though, who did?

They also don't seem to have appreciated the importance of GUIs until version 2.0 in 01987, despite the popularity of the Macintosh, the "Jackintosh" Atari ST, and GEOS on the C64. The article says that the "Photon" GUI everyone remembers wasn't until QNX 4.1 in 01994.


Their OS was fairly unique in that it was a POSIX OS that could scale up to being used on desktop/servers, was extremely stable, and could handle real-time control. From a desktop perspective, the real-time aspect also meant that using the system was a joy, everything was super snappy. They never managed to become mainstream, but in the 001980s through the 002000s that was a good enough niche that the company managed to stick around. The 1.44 MB demo QNX put together in 001999 with a full GUI, networking stack, web browser, file browser, utilities, demo programs, etc. was super impressive and I've never seen anything with that level of functionality squeezed into anything close to that size.

Of course most of this advantage has gone away, both because real-time Linux has become good enough to compete with QNX for a lot of use cases, and because QNX stopped supporting self-hosted development with QNX 6.6 in 002014. From a business standpoint of course it makes sense to focus on the automotive and other embedded markets where all the paying customers are, but from a tech enthusiast standpoint it makes me a little sad. Given the licensing cost and competition from real-time Linux on the high end, and Zephyr/FreeRTOS on the low end, I'm not sure why anyone would choose QNX for a new project today. If anyone reading this has chosen QNX for a new project relatively recently, I'd love to hear your perspective.


It wasn't a POSIX OS until much later, at which point they did appreciate the importance of most of those things.

Have you checked out Oberon? It has a full GUI, networking stack, web browser, file browser, utilities, demo programs, etc., in a similar size. It isn't suitable for real-time control at all.

I'm also interested to hear from people choosing QNX for new projects.


There was a super amazing one-man show of Real-Time Oberon at ETH Zürich [0]; we used it for a (failed) RoboCup attempt. I wonder what Roberto is up to these days...

[0] https://www.ifr.mavt.ethz.ch/research/xoberon/


Interesting, I'd never heard about this! "XOberon is loosely based on the Oberon System, and it is written in the Oberon-2 programming language. XOberon in its most recent incarnation takes advantage of the computational power of the PowerPC processor architecture. (...) The 68K target version of XOberon is free to download and use. The PowerPC version is available on request."


Not new in design but in assembly/integration: RMS Instruments (Canada) sells equipment for "real-time compensation" in airborne surveying. Analog-to-digital conversion of, primarily, magnetometry data.

Uses the Photon desktop environment.


I used ICONs in school growing up in Ontario, Canada, they were so cool. It was a sad day when Windows PCs replaced them in the computer lab.

All but a few of these computers were destroyed by the ministry of education. And without the LEXICON server that accompanied them, they're basically useless.

For a bit of fun, I ran the DOOM shareware demo using the official QNX4 port on a 486SX with 8M of RAM.

https://brynet.ca/video-qnxdoom.html

I picked up QNX6 again as a hobbyist later in life... until self-hosted QNX was killed, no bootable .ISOs after 6.5. Then they killed the hobbyist license, killed the Photon desktop GUI, dropped any native toolchain support in place of a Windows/Linux-hosted IDE. Porting software became difficult, pkgsrc no longer maintained.

They are completely noncommittal as a company; nothing short of actually open-sourcing it under an MIT/BSD license would convince me to use it again, and not another source-available effort that they inevitably rug-pull again.

https://www.osnews.com/story/23565/qnx6-is-closed-source-onc...


I don't get the sense you are their target market.


Nice article, interesting read.

The Neutrino 6.4 version, which was made accessible as "openQNX" to the public, can still be downloaded from e.g. https://github.com/vocho/openqnx.

Here is an AI generated documentation of the source: https://deepwiki.com/vocho/openqnx


> The Neutrino 6.4 version, which was made accessible as "openQNX" to the public

From memory: the source was made freely available to anyone who wanted to download it, but not under an open source license, under an individual non-transferable proprietary license; so, legally speaking, anyone who downloaded the source back then (before this program was terminated) is allowed to keep it and use it forever (under some usage restrictions, I forget the details), but they aren't licensed to share it with anyone else.

So this is somewhat comparable to all those leaked Microsoft Windows source code repositories on GitHub – technically illegal, but the copyright holder obviously doesn't care to try to stop it (especially ironic for Microsoft, given that as GitHub's owners, they could put an end to it very easily, if they could be bothered)


See https://www.openqnx.com/node/471

"Access to QNX source code is free, but commercial deployments of QNX Neutrino runtime components still require royalties, and commercial developers will continue to pay for QNX Momentics(R) development seats. However, noncommercial developers, academic faculty members, and qualified partners will be given access to QNX development tools and runtime products at no charge."


You are quoting an old press release, not the actual license agreement.


It clearly states that the company made the referenced source code version available for free.


Yes, but it doesn't tell you the precise legal terms and conditions under which they made it available.

Which is the whole point – legally speaking, press releases count for very little, the actual text of the license agreement is far more important.


Are you a lawyer? USA law recognizes promissory estoppel, European laws provide protection by the good faith principle. The press release explicitly states that developers would have access to download "direct from the dev team's subversion repository". The press release also clearly articulates QNX's strategic intent: making source code available to "accelerate innovation," enable developers to "get up to speed more quickly," and allow the community to "learn from QNX's extensive OS experience". The CEO explicitly stated the goal was helping developers "learn the technology". So there is even a strong fair use case for non-commercial educational study.


Are you a lawyer?

"Promissory estoppel" doesn't work that way... it doesn't mean "I don't need to read the legal fine print, I can just go by my interpretation of the press release"


> "it doesn't mean "I don't need to read the legal fine print, I can just go by my interpretation of the press release"

That's not what I said. Anyway, I don't know where you are located, but at least in my country it is no problem to download the code from github for non-commercial educational study, especially given the listed facts. I think we can leave it at that.


In the US, the fact that a given copy is/was available at no-charge is not concomitant with the right to produce and distribute unlimited copies on one's own. Those remain the exclusive rights of the copyright holder.


> In the US, the fact that a given copy is/was available at no-charge is not concomitant with the right to produce and distribute unlimited copies on one's own.

Still not what I said. Let me be more precise. QSS management has clearly stated its intentions in its press release and actions (or inactions). And obviously, neither QSS nor its legal successor saw any need to take action against the referenced repository or the approximately 200 forks. Such action would very likely conflict with the current legal situation. Owners and users of the repositories could defend themselves based on fair use (educational purposes and based on QSS's own statements), implied License (also based on QSS's original statements), equitable estoppel (because of 16 years of toleration, and because of QSS's own statements, possibly a servitude created by estoppel), and even acquiescence (due to deliberate inaction). This has nothing to do with "the right to produce and distribute unlimited copies".

17 U.S.C. § 512(f) makes rights holders liable for knowingly false takedown notices. In the case of Online Policy Group v. Diebold, a court ruled that no reasonable copyright holder could believe the content constituted an "unlawful infringement" due to fair use. Diebold had to pay 125'000 USD in damages. In the case of QSS, a DMCA takedown after 16 years of tolerance with documented knowledge would be actionable as misrepresentation under § 512(f). A reasonable rights holder cannot claim that there has been an infringement if they have knowingly stood by and watched for 16 years.


You are not representing the facts of the Diebold case accurately or the court's findings.

None of the defenses you're citing here would hold water in court.

(We're not in court here, and there's nothing more to say, so this will be my last response about this.)


Unsubstantiated claims like "none of these defenses would hold" just spread FUD. I don't see a reason why anyone would discourage HN users from accessing QNX sources under QSS's free educational access. The 2007 press release explicitly allows non-commercial downloads and studying; fair use supports personal research too.


A more up-to-date fork? https://github.com/onlinedj/QRV (last commit 4 yrs ago)


Thanks for the hint. vocho/openqnx has about 200 forks, and many seem to have been maintained. The original vocho/openqnx repository is the result of the MONARTIS (MONitoring Application for Real-Time Industrial Systems) project conducted in 2009 at HEIG-VD (https://heig-vd.ch/), Switzerland. The _NTO_VERSION define in <sys/neutrino.h> indicates that it was based on QNX version 6.4.1. I've created my own fork and am currently studying and minimizing it.


Do you know what tool was used to generate that documentation? I would be interested in trying it out on some other repos.

EDIT: oh I see, that's what DeepWiki itself is.


QNX is a really cool OS (it's fast AND elegant AND extremely reliable) and QNX dude Dan Dodge gave the only conference keynote so far that I greatly enjoyed. It was basically fun stories from over 30 years (at the time) of OS development. It's sad to see QNX use, apparently, decline.


I have some awesome recordings of Dan Dodge talks.. they are definitely entertaining :) FWIW we're growing, hiring, and expanding, these days!


This series is quite interesting for understanding and playing with QNX 8.0:

https://devblog.qnx.com/tag/from-the-board-up-series/


Thank you! :)


I worked at QNX twice as a software engineering intern during my college days, in 2019 and 2021, on their OS security features and testing. I can say it was an unforgettable experience and I learned a lot there. The senior devs there are amazing.


Hey fellow QNXer! =) Agreed, I regularly learn so much from our dev talent.


Is there an effort to get this stuff out under a license people would recognize and be comfortable with? Asking because the question came up almost a year ago and it sounded like you were going to look into it.


Yes there is. It's a big priority here. Changes like this take time and a careful balance, but we're working on it!


Happy to hear it.


What a great summary. I was reminded of QNX through the BlackBerry acquisition, but I had forgotten its history went back so far. (I should have remembered, I was around in those early PC days.) With so many things these days having an operating system running them (including the mentioned cars, rockets and robots), QNX seems to have a bright future ahead doing what it does best: being the solid core to build upon.


It's funny.. We've spent a lot of time at universities this year, and that's the same reaction we sometimes get there when we say we're over 45 years old and have nearly 1000 staff members. "Yo, bro, I thought you were like a little startup bro!"


This is not the same Gordon Bell as the early DEC programmer and later VP.

https://en.wikipedia.org/wiki/Gordon_Bell


Around 2002 I found out that I could access my cable modem via telnet. And guess what: QNX ran on the modem. Exciting times.


Interesting to see this a couple of days after my post. I wonder if there is any link, but in case there isn't: QNX is well worth studying, it is in so many ways an OS done right.



It's somewhat refreshing to see this OS going strong in 2024. I briefly used it for some ill-fated project around 2008, and that's when I learned to appreciate its design and well-written documentation (including a warning that a timer would overflow after 400-odd years of continuous uptime).


A current Ada 2022 compiler is available for QNX. I think it’s used in autonomous vehicle applications.


QNX 6 was the first non-Microsoft non-Apple OS I ever used, even before Linux, and after trying and failing to pirate OS/2 Warp 4. It came on the Maximum CD with the March 2001 issue of Maximum PC alongside the “Alt OS” article in the same issue: https://books.google.com/books?id=yAEAAAAAMBAJ&lpg=PT53&dq=%...

So much '90s anime in those screenshots — super nostalgic!


I used QNX in the 2000s at NIH to run experiments! We eventually replaced it with Linux and Windows and dedicated "experiment" hardware to handle the "real time" needs.


Oh this brings back some fun memories. I worked with QNX for the ICON computer at Cemcorp and ESP Educational Software Products.

The OS was so clean but it lacked a lot of basic tooling. Back then there was no GUI or even a graphics library. We had to build or port a lot of things, including a VCS, from scratch. My editor of choice was JOVE (I couldn't get Emacs to build). I remember digging up various papers on graphics and creating our first graphics library.


Still have the 1990s demo distribution somewhere at my folks' place, tucked away in a box.

It was a great experience, especially for those of us who appreciate microkernels.


I was involved in porting some software to Qt back when Photon was deprecated, and I always found the system very interesting. This is the first time I'm actually learning more about its history. Thanks for the great read.

I was also a huge fan of BlackBerry phones (having used Q5 and Z10 as daily drivers). The system was solid and had some really cool ideas. Too bad it didn't work out...


Until recently, QNX was the OS used for the infotainment/head units in VW/Audi including navigation, voice interaction, etc.


Who else remembers hacking on QNX from the i-opener and 3com Audrey era? ;)


Hell yeah, iopener represent. :)

I was a contractor to Netpliance Inc early in my student days. They kept charging people for service that slowly degraded to the point of clients not getting their email for months and being told to try getting a Hotmail account. Watched the shares go to pennies, then the company imploded, and then everyone on my contract got laid off. Important early life lessons about how loyal to be to your job and keeping your resume fresh. A priceless education you can't get in college.

Anyway, I _liberated_ an RMA'd Iopener, built a handmade IDE cable to connect to the funky pinout, added a disk and ran it at home as a music server and internet device (with a hacked Netzero dialup account, of course). Ah, those were the days.


Yep, remember the 3com Audrey, probably still have it in a box in my basement!


Used QNX for an audio server.. got MP3 encoding and playback ported to it. Visited the headquarters, and Dan Dodge instructed his team to help us in any way we needed.

We needed the help. Thank you Dan!! We eventually ported to linux about 6 years later, but you helped our startup get up and going.


I love this. We're bringing that spirit back with QNX Everywhere -- free access and as much support in the community as we can!


Man I still remember booting up the 1.44 MB disk image demo and being amazed how well it worked at the time.


I had a shareware floppy of QNX. I still remember how I admired it, but unfortunately I had no use case.


It was extremely impressive that they had managed to fit a whole running OS on a floppy. I don't remember Linux ever running on a single floppy; it was at least two (bootdisk + rootdisk).


I wonder whether the QNX 4.25 source code will ever be released under a FOSS license, or at least a non-commercial license that allows distribution of modified versions. If so, it might be worth running on less capable hardware for educational purposes.


What a fun blast from the past! I fondly remember real-time at Waterloo. If nothing else, it taught you how to project-manage/time-manage so you could write a project of tens of thousands of lines of code in a four month term.


Some of my colleagues and I visited the class with the UW CSC in the Spring.. They said something like half of the students drop out in the first week after learning how much work that class is going to be!


I have a couple of iOpeners that I use as text terminals. Full QNX once you root them.


The BlackBerry QNX implementation on their tablet was rock solid; I had high hopes. That was the last time I knowingly used QNX.


From someone who worked on PlayBook: THANK YOU! It was a tight little machine with a great browser and awesome audio for the time. And it slid right into my back jeans pocket. Those were good times!


VW (and others) was using QNX as the base for its infotainment UI. Snappy and responsive. Got the job done.


We're in 270+ million vehicles on the road today, 1 in 7 globally (and growing!). It's awesome going to shows and reminding people that every time they turn their key (okay, okay: _push the ON button_), there's a good chance they are booting up QNX under the hood!


During the 90s, I seem to recall that the "SkyTrain" light rapid transit system in the Lower Mainland (Vancouver, BC, Canada) used QNX as its RTOS.


Love some of the juicy pieces out of the story,

> The name QUNIX was a bit too close to the name UNIX for AT&T. The name of the system was changed to QNX in late 1982 following a Cease and Desist by AT&T.

Already not as nice as in the early days.

> While RV1 was limited to just C and x86 assembly language, the company was hard at work on BASIC, FORTRAN, and Pascal compilers that would utilize common code generators allowing for the mixed-use of languages without losing optimization.

Yet another example of previous polyglot compiler stacks attempts.

> UNIX systems come in more flavours than ice cream.

That was a fun one.


This ICON was a hunk of junk. The only value it provided was to the students with any sort of curiosity about how this frankensystem worked. It was only later that it became clear it took advantage of procurement processes in the most extreme sense. A pure embarrassment of technology, grifters, and government. We learned more from the PETs, Commodores, and after that the PS/2s.


The Watcom compilers on it were pretty good. I started with Pascal, then moved to C, and finally Assembly.

I do remember compiling being slow; Turbo C under DOS was much faster.



