I don't fully understand why. The point of microkernels is to mitigate the instability and insecurity that come from a lack of memory safety, which is exactly what Rust provides.
"In short, the implementation is proved to be bug-free (see below). This also implies a number of other properties, such as freedom from buffer overflows, null pointer exceptions, use-after-free, etc.
There is a further proof that the binary code which executes on the hardware is a correct translation of the C code. This means that the compiler does not have to be trusted, and extends the functional correctness property to the binary.
Furthermore, there are proofs that seL4's specification, if used properly, will enforce integrity and confidentiality, core security properties. Combined with the proofs mentioned above, these properties are guaranteed to be enforced not only by a model of the kernel (the specification) but the actual binary that executes on the hardware. Therefore, seL4 is the world's first (and still only) OS that is proved secure in a very strong sense.
Finally, seL4 is the first (and still only) protected-mode OS kernel with a sound and complete timeliness analysis. Among others this means that it has provable upper bounds on interrupt latencies (as well as latencies of any other kernel operations). It is therefore the only kernel with memory protection that can give you hard real-time guarantees."
It is. But that will still only prove that the type system correctly prevents memory-safety problems. As many security people correctly point out, memory safety doesn't guarantee the security of the programs you write in Rust. seL4 has a stronger security proof that guarantees not only that seL4 doesn't have (for example) use-after-frees, but also that it actually works right (for a formally specified notion of "works right").
The reason you'd want to combine seL4 with Rust is to gain confidence that (a) the core part of the system (the microkernel) works right and that (b) the less core parts of the system (drivers, libc) are at least free from nasty memory-safety problems. To a first approximation, it's a compromise between a project like the OS in the "Writing an OS in Rust" series, which guarantees memory safety but doesn't guarantee that any part of the system actually does the right thing, and a hypothetical fully formally verified OS, which is (with current technology) far too expensive and time-consuming to be practical.
Exactly. It's the same reason high assurance requires source-to-object-code traceability even for SPARK Ada. Of course, it's nice that the tooling helps with that. Rust needs the same thing, plus either a proven runtime or a zero-runtime target. Unless it has that already.
I have plans slated for doing proofs about Rust programs in 2017, if not sooner (I'm interning at Data61 on the seL4 verification team starting in two weeks, and if I don't get a chance to work on Rust verification then, I'll start after I'm done).
Oh hell, you're going to be working with top talent. You might learn some high-assurance skills after all. That makes your project 2x as interesting. Good luck on the new job. :)
Rust has long been capable of running without a runtime (that's how all OS development in Rust is expected to operate) and formal verification of the language spec is ongoing (though full verification is still probably years out, and who knows if the compiler will ever be formally verified).
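For the curious, the runtime-free mode mentioned above looks roughly like this. This is a freestanding-binary sketch in the style of the "Writing an OS in Rust" series; it assumes a bare-metal target (e.g. `x86_64-unknown-none`) and will not link as-is on a hosted OS, since there's deliberately no libc or OS underneath it.

```rust
// Freestanding Rust sketch: no standard library, no C runtime.
#![no_std]  // only `core` is available: no heap, threads, or OS services
#![no_main] // no conventional main(); the bootloader jumps to _start

use core::panic::PanicInfo;

// With no runtime, we must supply our own panic handler.
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}

#[no_mangle]
pub extern "C" fn _start() -> ! {
    // Kernel or driver code goes here; bring your own allocator
    // if you want `alloc`.
    loop {}
}
```

The point is that "no runtime" is an ordinary, supported configuration: drop `std`, provide the two hooks the language requires (`panic_handler` and an entry point), and everything else is plain Rust.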
Hell yeah! Appreciate the updates. My impression that there was a runtime or overhead came from an early OS attempt in Rust where they couldn't get past real mode. They had to use non-Rust code for that, and it was using too much space IIRC. They've gotten past those obstacles now?
A security-focused microkernel does three things (simplified): isolation of partitions in time and space; mediated communication between them; and management of hardware resources. Note that nothing on this list prevents damage within a partition, only between partitions. So one still needs to secure a partition's code and data against attack if that partition must be trusted.