The list is mostly tactical mitigations of the kind that private individuals and academics bypass regularly. Clever people write one, clever people break one, rinse, repeat. The untested theory is that they would be really hard or impossible to bypass in combination. The people who could test that have mostly been attacking targets like Chrome or mobile platforms, where there are enough users to justify the effort in fame, money, etc. The real mitigation here is the obscurity of using a platform hardly anyone uses that is also harder than average to target. On top of that, the security measures add some benefit. Obscurity is the main reason attacks aren't attempted much, though.
A better list would start with the design techniques and assurance activities that produced systems with few to no vulnerabilities during pentests by well-funded, knowledgeable attackers, on top of what survives in the field under heavy attention. In the '80s and '90s, those techniques included:

- precise specifications of behavior or security policy
- ways of proving or testing those properties in the code
- hierarchical layering with simple coding to facilitate analysis
- small kernels with most code deprivileged
- memory-safe languages where possible
- verification that object code matches source, with no compiler errors or subversions
- partitioning of GUIs, filesystems, and networking to limit apps' effects on each other
- covert channel analysis of the entire system
- secure repos containing these artifacts, with secure transfer to users
- the option for users to re-run the analyses or rebuild the kernel themselves for independent replication
Each of these techniques found or prevented many vulnerabilities in the systems it was applied to. They even became mandatory requirements under the first security certification, the TCSEC; Trusted Xenix in 1990 used some of them for that reason. Unlike often-bypassed mitigations, each of these methods still works today, and some work even better thanks to tooling improvements. The BSDs largely ignore these methods in order to maintain legacy compatibility with an insecure architecture, unsafe code, and configuration scripts that can be just as risky. That's unsurprising, given that early attempts at applying strong methods to UNIX, like UCLA Secure UNIX, showed the UNIX design had covert channels and similar problems built in. You couldn't fully secure a UNIX without breaking legacy compatibility in lots of ways, on top of a significant performance hit from memory safety and context switching. That led high-security projects to just virtualize UNIX/Linux on top of a secure isolation kernel. Projects attempting to follow some of these lessons in low-privilege architecture or language choice include GenodeOS, the Muen separation kernel, seL4, JX OS, and ExpressOS for mobile. EROS was an interesting older one that added persistence on top of a capability-based kernel.
I figure someone should mention the methods that stopped NSA's hackers in various evaluations since they're strangely not on the list.
I would advise taking any claims from HardenedBSD about HardenedBSD with a huge grain of salt. The main code author behind HBSD has a history of having his patches to FreeBSD rejected for very poor implementation, coding errors, and a lack of understanding of how OS and security features are designed and why, as well as for not accepting reviews or listening to his peers. He is, however, very good at getting himself and HBSD a lot of PR attention for bold security claims, despite a very poor record when it comes to code quality.
The list itself is not very accurate either: his ASLR patches to FreeBSD were rejected due to quality issues, then applied to HBSD anyway. The missing mark for base sandboxing is another example: FreeBSD has had the Capsicum sandbox available for a few years now, a lot of the base system is already Capsicum-sandboxed, and more is converted with every release. I could go on, but that should give you the picture.
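For anyone unfamiliar with Capsicum, here's a minimal sketch of the model, assuming a FreeBSD system; the file path is just an example:

    /*
     * Capsicum sketch: limit an already-open fd to read-only rights,
     * then enter capability mode, after which the process can no
     * longer open files or sockets by global name.
     */
    #include <sys/capsicum.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <err.h>

    int
    main(void)
    {
        cap_rights_t rights;
        char buf[512];
        int fd = open("/etc/motd", O_RDONLY);

        if (fd == -1)
            err(1, "open");
        /* From here on, fd supports CAP_READ and nothing else. */
        if (cap_rights_limit(fd, cap_rights_init(&rights, CAP_READ)) == -1)
            err(1, "cap_rights_limit");
        if (cap_enter() == -1)          /* enter capability mode */
            err(1, "cap_enter");

        read(fd, buf, sizeof(buf));     /* still allowed */
        /* A fresh open(2) here would now fail with ECAPMODE. */
        close(fd);
        return (0);
    }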
Take this advice with a grain of salt as well - I'm a FreeBSD developer, so I might be biased.
Sounds a bit like the grsecurity situation on Linux: someone wants to patch the system to be more secure, but in a very specific way, with a tendency to break existing programs, and with questionable code quality. Mainline refuses to merge the patches without major reworking, so the developer makes the patches their own thing while loudly proclaiming that mainline is terrible and insecure.
I've never heard of HardenedBSD before. I thought OpenBSD was generally considered the best choice for serious security, but I'd be interested in reading more about how they compare. As someone who's not an expert on the subject, it would help to have some extra info explaining why certain features improve security.
It's not clear that all these features impact real-world security, but maybe I'm just inexperienced and naive. For example, OpenBSD has "Most of base sandboxed", which seems like a huge deal to me. Knowing how many of the security issues we've seen in the last 10 to 20 years relate to each feature would help a lot in understanding their impact.
Some immediate questions that popped up from clicking through their pages: Who uses this OS? There are a few company links, but I had a hard time figuring out what some of them even do. Maybe this is a really dumb question, but why FreeBSD over OpenBSD? Finally, are changes being upstreamed? Can we expect these improvements to eventually make it into FreeBSD?
Not only sandboxed, but many are pledge(2)'d now also.
One downside with OpenBSD is the result of lacking resources: they only support the latest two releases (one every six months), with only the most critical patches being backported. I don't believe syspatch(8) changes this, although it's certainly easier to apply kernel patches now (please correct me if I'm mistaken).
Same with packages: unless you use mTier to get binary updates, security fixes and updates for packages need to be compiled yourself. Not the worst, but it depends on how much free time you have to keep up with it.
FreeBSD has a larger ecosystem and seems to be more performance-oriented.
In a way, just not in the traditional filesystem sense; e.g., jail, chroot.
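A minimal sketch of the chroot approach, assuming root privileges; the path and the uid/gid of 1000 are placeholders:

    /*
     * chroot(2) confinement as daemons like OpenBSD's httpd use it:
     * chdir("/") is essential, and privileges must be dropped
     * afterwards or the process could chroot back out.
     */
    #include <unistd.h>
    #include <err.h>

    int
    main(void)
    {
        if (chroot("/var/empty") == -1) /* an empty directory */
            err(1, "chroot");
        if (chdir("/") == -1)
            err(1, "chdir");
        if (setgid(1000) == -1 || setuid(1000) == -1)   /* placeholder ids */
            err(1, "drop privileges");
        /* From here, the filesystem outside the new root is unreachable. */
        return (0);
    }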
Pledge allows a program to 'promise' which calls it expects to make. For example, a program promising only 'stdio' will be killed with SIGABRT if it tries to open a socket, fork, exec, or do anything else not part of the stdio group (as defined by pledge).
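A minimal sketch of that behavior on OpenBSD:

    /*
     * After pledge("stdio", NULL), stdio keeps working, but a call
     * outside the promised group kills the process with SIGABRT.
     */
    #include <sys/socket.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <err.h>

    int
    main(void)
    {
        if (pledge("stdio", NULL) == -1)
            err(1, "pledge");

        printf("stdio still works\n");      /* covered by "stdio" */

        /* socket(2) is not covered: the kernel aborts the process. */
        socket(AF_INET, SOCK_STREAM, 0);
        return (0);
    }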
That comparison doesn't seem to include OpenBSD's biggest security feature: their track record of robust code review, which has eliminated countless security bugs before they were ever exploited.
His observation on code review was that it did help find low-hanging-fruit bugs (int overflows and signedness bugs), but that bugs were still easy to find in OpenBSD, even though he felt the code quality was much better than in the other BSDs.
If you are just reading the slides, they're missing a lot of context.
I have wanted for a long time to see some quantification of these security features, e.g. "What percentage of attacks does feature X prevent in the real world?" and "With all these features combined, how much longer can the OS survive attack versus without them?", but I don't think I've ever seen it.
Random number generation, including arc4random() and /dev/*random:
OpenBSD: ChaCha20
FreeBSD: RC4
NetBSD: NIST CTR_DRBG using AES-128
HardenedBSD: ?
DragonFly BSD: XOR of the outputs of ChaCha20 and IBAA
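Whichever generator sits underneath, the consumer-facing API is the same arc4random(3) family on all of these; a minimal, OS-agnostic sketch:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>     /* arc4random(3) on the BSDs */

    int
    main(void)
    {
        uint8_t key[32];

        arc4random_buf(key, sizeof(key));   /* fill buffer with CSPRNG bytes */
        printf("d6: %u\n", arc4random_uniform(6) + 1);  /* unbiased 1..6 */
        return (0);
    }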
Also on OpenBSD, major 3rd party packages often come with security improvements. For example, nginx and Apache run chrooted, and Chrome is pledged.
I don't have the time to analyze privsep in base among the BSDs, but I can say it's extensive in OpenBSD.
I would love to see pledge support in HardenedBSD. I think they're doing good work. Hopefully FreeBSD can import some of it.
Also, DragonFly is a first-class BSD citizen and should be included in comparisons. As a recent example, the project leader, Matthew Dillon, did excellent work relating to speculative execution and collaborated with the other BSDs, which helped everyone. Also, I'm pretty sure OpenBSD has imported a number of hardware drivers from DragonFly.
Kernel random (read_random_uio(9) / sysctl kern.arandom / /dev/*random / getrandom(2) / getentropy(3)) has been Fortuna since FreeBSD 10, and Yarrow before that.
Well, this does have an overwhelming number of "the things you could want". It serves the same purpose while attempting to appear more credible: showing a tiny handful of shortcomings to convince the reader that the authors can be trusted.
Regular code reviews like the OpenBSD folks do. Also, they appear to be back on OpenSSL again[0], not using LibreSSL as the list claims. If they can't even accurately state which SSL library they are using, what else on the list is incorrect?
> Restrictions on mprotect to prevent switching pages between writable and executable.
No JIT then?
I could see a locked-down server process wanting to drop access to this, but I feel like it's unreasonable to have it on by default. (Also looking at you, Apple.)
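For context, a minimal sketch of the JIT-style operation such a restriction blocks; on an enforcing system the mprotect(2) call fails instead of flipping the page executable:

    #include <sys/mman.h>
    #include <stdio.h>

    int
    main(void)
    {
        size_t len = 4096;
        /* The usual JIT pattern: map writable, emit code, flip to exec. */
        unsigned char *page = mmap(NULL, len, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);

        if (page == MAP_FAILED)
            return (1);
        /* ...emit machine code into the page here... */
        if (mprotect(page, len, PROT_READ | PROT_EXEC) == -1) {
            perror("mprotect");     /* denied under strict W^X */
            return (1);
        }
        return (0);
    }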
HardenedBSD has per-application and per-jail exploit mitigation toggles. If an application violates an active exploit mitigation, the user can choose either to go without the application or to disable that mitigation for just that application.
"Adobe's Flash plugin is distributed in binary form only, and they do not provide a native OpenBSD version. Considering their security record, we thank them for this neglect."
What are some examples of the entries that put DragonFly BSD far behind the others?
I just finished setting up a number of simple dev/test environments for a simple jdk8-based backend service on DragonFly 5.2.2, using default DF configs, so I wanted to understand what I might need to look at for hardening (I am still rather far from production, though).