Porting OpenBSD Pledge() to Linux (2022) (justine.lol)
194 points by untilted on Oct 30, 2023 | 32 comments



One of the things I feel OpenBSD has gotten right with pledge and unveil is the placement of responsibility. It's the developers who know and understand the code best, so they are the most qualified to lock it down.

AppArmor and SELinux are perhaps better suited for "easy" retrofitting, and their rules are frequently shipped by the developers, but both are much harder to use and somewhat error-prone. I've seen more clients simply turn off SELinux or AppArmor than I have clients dedicating time to developing proper configurations.

Pledge and unveil just work: no mucking around in an obscure configuration language, no wondering whether the error is in your configuration, in AppArmor/SELinux, or in the code.


The comparison with SELinux is a bit off, because SELinux is very different from RSBAC and other ACL-type mechanisms for limiting the attack surface of software. SELinux is much more complex because it enforces policy decisions like "this software is a Word Processor, it is not allowed to open files of the type Sensitive Document".

That is very useful for those with that particular need; for others it is unnecessarily complex. A more common use case for regular users would be "run the word processor without network access". That's where simpler systems shine. There used to be many access control mechanisms for Linux, but they seemed to struggle to retain mindshare and remain a bit obscure.

I would guess that's because so much is achieved the old way, by running software under its own uid and in a chroot, so anything beyond that is already a special case.


> One of the things I feel OpenBSD has gotten right with pledge and unveil is the placement of responsibility. It's the developers who know and understand the code best, so they are the most qualified to lock it down.

Most qualified, certainly. But if all application developers were responsible, we wouldn't need security mechanisms in the first place. I may be misunderstanding how pledge works, but it seems to be actually a very poor security mechanism, since you could just write software which... doesn't pledge to only use certain capabilities. It seems like a great bug prevention mechanism, but not so much for security.


> But if all application developers were responsible, we wouldn't need security mechanisms in the first place.

That's not true. I mean, if developers were always infallibly perfect, then yes, we wouldn't need this kind of security mechanism.

But acknowledging that code always has bugs, it's a nice additional layer of protection. If someone finds a code-injection bug and tries to use it to run attack code that then gets blocked by an earlier pledge, that's an improvement.


SELinux assumes that the operating system is the trusted downstream of less trustworthy software. By design this implies that someone downstream has to (re-)define all acceptable behaviour of the packaged black box, causing lots of duplicated work and a fragile interface that is broken by upstream changes.

Pledge assumes that developers or maintainers will add pledge() and unveil() calls to the code. The software is no longer a black box, but is given the tools to communicate intent to the kernel (e.g. "I won't fork, exec, or create new sockets, and will only access files in my /var/db/$name and /var/run/$name directories"). It doesn't change the intended usage of the existing APIs, but to use the tightest sandbox permissions you have to acquire capabilities as early as possible. This allows retrofitting useful pledge()/unveil() calls to existing code, getting quick feedback, and restructuring the code over time.

Another interesting design to compare pledge()/unveil() against is FreeBSD's Capsicum. It's a fine-grained capability mode (e.g. disable/keep specific ioctl()s on a file descriptor) and can be used by normal FreeBSD processes as such, but the real sandbox mode is used by acquiring all file descriptors, restricting what is allowed on them, and entering the restrictive capability mode. Once inside capability mode there is no way back, and you're only allowed to use existing capabilities to derive equal or weaker capabilities, e.g. openat() relative to an open directory file descriptor instead of open() with an absolute path. It puts the burden purely on the developer. It's a very clean design, but correct to a fault. It's no harder to write new software to work inside it, but it's very hard to port software that wasn't written with it in mind, because it's all or nothing. As a consequence, little software is written to take advantage of it.

OpenBSD's pledge()/unveil() is a pragmatic defense-in-depth tool. It works together with privilege separation and chroot. A good example is that new child processes are intentionally unrestricted and trusted/expected to apply their own restrictions, whereas Capsicum's capability mode is inherited by child processes.

Pledge() and unveil() are useful because they can provide additional safety and security at low cost, but they are less expressive. Porting them to Linux has the additional problem that Linux considers system calls, not libc, to be the stable interface to the kernel. Different libc implementations use different system calls on different architectures (e.g. sbrk() vs mmap()). To make matters worse, some of them even depend on compile-time flags (e.g. stat() vs stat64()). Each libc would have to implement its own pledge()/unveil() on top of a more flexible (read: complex and error-prone) kernel interface.


I think security-wise, it's a much worse mechanism though.

SELinux has the better approach -- it's applied outside the application's control so it can't be subverted by some sort of environment manipulation. It's also modifiable, while pledge/unveil is hardcoded into the binary.

I really don't like the thought of hardcoding paths into the binary; it makes things extremely inflexible. E.g., say you have something that stores data. Under this model you can't run the program against an alternate data set, because it'll refuse to work with anything but the official location.

I also think pledge/unveil has issues with process invocation. Either you can only restrict every process further than its parent, or an invoked process gets free rein. Neither is good.


There is no need to hardcode paths with unveil. With a CLI program, you could first parse arguments and then restrict yourself to paths based on that.

The same applies for pledge.


> SELinux assumes that the operating system is the trusted downstream of less trustworthy software.

Yes, it's perhaps not reasonable to compare pledge/unveil and something like SELinux. The tools really are defending against, or mitigating, different attack vectors.


> One of the things I feel OpenBSD has gotten right with pledge and unveil is the placement of responsibility. It's the developers who know and understand the code best, so they are the most qualified to lock it down.

You mean those developers who put the telemetry code (or back door, or security vulnerability) in to begin with?

What do the application developers know about the user's organization's security policy?


They know how the program is intended to behave and what its core functionality is. They have a model of how the software should work. A tool that allows them to encode that model into a policy is ideal.

The point is that they don't have to understand organizational policies, they just have to understand their program: does it use the network, talk to the filesystem, etc.

If you want to enforce broader policies SELinux is not going anywhere.


The only thing you need to know to use pledge is what kernel features you access (file system, network, etc).


So what about using libraries that don't use pledge?


I thought pledge() works at the process level? Theoretically it shouldn't matter, because your app is defining the rules.


They just work. They aren't pledged, though, so no guarantees can be made about what they actually do.


If they're so important, people will fork them to make them work with pledge.


As I understand, pledge()/unveil() are not declarative, and therefore difficult to review for users.



>Theo states that there are only 7000 users of OpenBSD.

While he's certainly in a position to give this estimate, I'm curious to know the factual basis of this opinion. That's a shockingly low statistic.


That is quoting an article from 2002, 21 years ago.

https://everything2.com/title/BSD+is+dying

(I've been using OpenBSD as my primary OS since 2000 so I guess I'm one of the 7000 OGs.)


Yikes, that's embarrassing for me. Thanks. I had checked the link but I didn't notice the post was 2002 as opposed to 2022.


If you click that link, you’ll see it’s a reference to an ancient and prolific Slashdot troll post (“Netcraft confirms it—BSD is dying…”).


>Theo states that there are only 7000 users of OpenBSD.

Just one metric. /r/openbsd has 17k subscribers. Not sure with DaemonForums.


OpenBSD has long been the most active section on DaemonForums, but that's pretty skewed since there's https://forums.freebsd.org – before that opened, FreeBSD was much more active, but that was a long time ago.


For all those wondering about users/usage of this, it was discussed a little at https://news.ycombinator.com/item?id=38000824


Seeing this was 2022, is this actually used in the wild?


Depends on what you mean by "in the wild". AFAIK it's not packaged by any major distro, but it has a user base.


There's been a more recent discussion and indeed, a fair few people are using it in the wild as they don't need to deal with SECCOMP or whatever.


Are seccomp and/or eBPF flexible and accessible enough to implement pledge and unveil as a wrapper usable by unprivileged processes (even if it requires the help of a privileged helper daemon)?



There is also a Pledge Nomad driver. https://github.com/shoenig/nomad-pledge-driver


System-level security seems to be the real solution to memory safety. Declaring invariants for an entire program gives you much more peace of mind than hoping there are no bugs in the JVM.

Process separation is already a success story that eliminates whole classes of exploits.


The issue is one of attack surface. A program written in a memory safe language is extremely hard to exploit, especially remotely. But if you're talking about an attacker with local execution (who has already taken over a program) the attack surface is much larger.



