
I've been saying for years that containers are nothing more than an ugly hack to get around the fact that OSes are broken. More specifically, modern OSes still carry 1980s assumptions about multi-tenancy, system administration, application management, state management, etc.

In the 1980s a "server" was a big expensive special snowflake. Each server had a "system administrator." The system administrator would "install" software "on" the machine.

I put all those words in quotes to highlight concepts that are no longer valid.

Boxes are ephemeral now. Systems have more than one administrator, sometimes belonging to different organizations or departments. Applications are not installed "on" the system in the sense that they root into it and modify its state; if they do, that makes them a terrible hassle to manage. Applications should be ephemeral things that can float freely between machines and store their state in some kind of distributed or remote data store or database, or at least can be easily migrated between hosts. Everything is mobile. Everything is commodity. Everything is liquid.

OSes are just not designed this way, so we have an ugly hack called a container that basically amounts to "let's tar up images of operating systems and just treat them like giant static binaries." Onto this is bolted a whole array of other ugly hacks to make that easy to deal with, but the concept is fundamentally ugly as hell.



I agree, though I think containers are a very useful incremental approach towards solving the problem. Declare bankruptcy on fixing the current OS, stick it in a tar, and surround it in a new encapsulating system (Docker/k8s) that has isolation/deployment properties closer to what the ideal OS would have. Over time, we can shrink and eventually remove the "old-world OS" in the tar, and develop the new encapsulating system further. We can easily experiment with new features in the encapsulating system without needing to first reinvent everything in the old-world OS. Maybe we'll eventually just shrink the old-world OS out of the picture entirely, or we'll create a brand new purer system in the image of the encapsulating system without the old-world OS involved.

Something that really bugs me about a lot of criticisms of containers is that you often get two groups of people agreeing that containers are bad for opposite reasons: one group is fine with the old-world model, and the other thinks containers don't go far enough in squeezing out the old-world model and replacing it with something better. It makes me worried that it's very easy for the audience to conclude "containers bad, old-world plain Linux server good", which is detrimental to the second group's goal.


Well said. I see a lot of value in containers as a means of distributing software, but I agree that it's heavier and hackier than it feels like it ought to be.

I recently wrote about this here: https://jonathan.bergknoff.com/journal/run-more-stuff-in-doc...


Not sure why you are getting downvoted here - these are all extremely valid points about why we feel change is necessary.

Things have changed dramatically since the 1970s (Unix) and even since 1991 (Linux) yet we are still using the same constructs.


Those are valid points.

Have you seen or heard of any alternatives? I mean, software is getting more and more complicated, maybe even more complicated than necessary, but running apps natively has its own pitfalls (we've all been there). It's like choosing the lesser evil, and if there is a better way of doing things we should evaluate it and talk about it more loudly.


The only attempts I've seen are advanced package managers for Linux, like the stuff used by NixOS:

https://en.wikipedia.org/wiki/NixOS

Unfortunately these tend to be overly complex with too steep a learning curve, and they don't address all the issues.

The closest is probably apps on mobile, but again that doesn't address all the issues and comes with a whole set of restrictions that are very problematic outside small single-user devices.


I'd say NixOS is probably not the right entry point if you are looking for a softer learning experience. NixOS is built on top of the Nixpkgs package collection, which itself could be thought of as a side effect of Nix - the DSL and runtime for building software from source in a reproducible manner. If you start by learning how to employ Nix-the-build-tool in your daily software development practice, you will be amazed how little you need to learn in order to be productive with it, because there's essentially just one new concept - the derivation [1]; the rest is community-driven infrastructure and utility methods on top of it.
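The core idea behind a derivation is simple enough to model in a few lines. A rough sketch, purely illustrative (the hashing scheme and store layout here only mimic Nix's, they are not the real algorithm): a build is a pure function of its complete description, and the output path is derived from a hash of that description.

```python
import hashlib
import json

# Toy model of a Nix-style derivation: the output path is determined
# entirely by a hash of the build description, so identical inputs
# always yield the identical store path.

def derivation(name, builder, inputs):
    desc = json.dumps(
        {"name": name, "builder": builder, "inputs": sorted(inputs)},
        sort_keys=True,
    )
    h = hashlib.sha256(desc.encode()).hexdigest()[:32]
    return f"/nix/store/{h}-{name}"

p1 = derivation("hello-2.10", "gcc -O2 ...", ["glibc-2.31", "hello-src"])
p2 = derivation("hello-2.10", "gcc -O2 ...", ["glibc-2.31", "hello-src"])
assert p1 == p2        # same description -> same store path
p3 = derivation("hello-2.10", "gcc -O3 ...", ["glibc-2.31", "hello-src"])
assert p3 != p1        # any change to the description -> new path
```

Everything else (channels, nixpkgs, NixOS modules) is layered on top of that one property.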

[1] https://nixos.org/nixos/nix-pills/our-first-derivation.html#...


Have you seen project atomic?

https://www.projectatomic.io/

Seems like it would align pretty well with your vision


What is your proposed solution?


Everything needs to be namespaced and there needs to be a scalable and robust model for sharing resources.

Containers do the first part but not the second. Classical OSes do the second part but not the first.

Here's what I would do if I were working on this problem:

(1) Shared libraries can be linked by name and version constraints or by specific cryptographic hash of the library. The former uses some kind of name resolution system to get a cryptographic hash from that name and version constraint. This is not terribly unlike NPM or modern Go module versioning. It would allow sharing of resources without security or "DLL hell" type issues. The OS lazily grabs libraries by cryptographic hash off a configurable list of sources that could include URLs, IPFS, BitTorrent, peers on the same LAN, etc. This could perhaps also be extended to cover binaries, but that would require more thought.
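A toy sketch of what that resolution pipeline could look like (hypothetical API; in-memory dicts stand in for the name resolver and the content-addressed cache):

```python
import hashlib

STORE = {}        # hash -> library bytes (stand-in for a local cache)
NAME_INDEX = {}   # (name, version) -> hash (stand-in for a resolver)

def publish(name, version, blob):
    h = hashlib.sha256(blob).hexdigest()
    STORE[h] = blob
    NAME_INDEX[(name, version)] = h
    return h

def resolve(name, version):
    # Step 1: name + version constraint -> cryptographic hash.
    return NAME_INDEX[(name, version)]

def fetch(h):
    # Step 2: lazily grab the blob by hash; a real OS would try URLs,
    # IPFS, BitTorrent, LAN peers, etc., then verify the hash.
    blob = STORE[h]
    assert hashlib.sha256(blob).hexdigest() == h, "integrity check failed"
    return blob

h = publish("libfoo", "1.2.3", b"\x7fELF...libfoo-code")
assert fetch(resolve("libfoo", "1.2.3")) == b"\x7fELF...libfoo-code"
```

Because the hash is the real identity, two apps depending on the same library version share one verified copy, with no trust placed in the name resolver.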

(2) Kill the concept of installing software "on" a machine, but in a finer grained way than containers. Segment applications and services the way users are segmented: each app or service has a home directory and a UID/GID. Get rid of /bin, /usr/bin, etc. in favor of some system infrastructure for finding binaries where they live. MacOS has some of these things but isn't consistent about them.
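To illustrate the segmentation idea (all names, paths, and the UID range here are hypothetical; a real implementation would live in the OS, not a dict):

```python
import itertools

# Apps are segmented the way users are: each app gets its own UID and
# home directory, and there is no shared /bin or /usr/bin to root into.

_next_uid = itertools.count(20000)   # arbitrary range for "app users"
REGISTRY = {}                        # app name -> (uid, home)

def install_app(name):
    uid = next(_next_uid)
    home = f"/apps/{name}"           # the app's entire world lives here
    REGISTRY[name] = (uid, home)
    return uid, home

def find_binary(app, binary):
    # System infrastructure for finding binaries where they live,
    # instead of a global search path like /usr/bin.
    _, home = REGISTRY[app]
    return f"{home}/bin/{binary}"

install_app("postgres")
assert find_binary("postgres", "psql") == "/apps/postgres/bin/psql"
```

Uninstalling then becomes "delete the home directory and the UID" rather than hunting down files scattered across the filesystem.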

(3) Lock down the system a lot harder in terms of different users being able to see anything outside things they own. For example, a user should not be able to "find /" at all. This would be similar to containers but a little simpler.

(4) Extend namespacing and user/group ownership to network identifiers like IP addresses and other things. I think you could do this with existing kernel namespaces and cgroups plus some userland support.

(5) Get rid of system level service management (except for truly system level services) in favor of per-user services. Each user can run their own services/daemons that bind to IPs/ports in their own namespace.
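A minimal model of points (4) and (5) together: if port bindings are keyed by (user, port) rather than by port alone, per-user services never collide and need no system-level service manager. Everything here is hypothetical:

```python
# Ports live in per-user namespaces, so two users can each run a
# service on "their" port 80 without conflict and without root.

BINDINGS = {}   # (user, port) -> service name

def bind(user, port, service):
    key = (user, port)
    if key in BINDINGS:
        raise OSError(f"port {port} already bound in {user}'s namespace")
    BINDINGS[key] = service

bind("alice", 80, "alice-blog")
bind("bob", 80, "bob-api")        # no conflict: different namespace
assert BINDINGS[("alice", 80)] != BINDINGS[("bob", 80)]
```

The routing layer (which external IP maps to which user's namespace) is the hard part this sketch leaves out.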

(6) Kill 'root' in favor of a hierarchical administrative permission system. This may require some new stuff in the kernel or maybe cgroups could be leveraged to implement it.

(7) Kill privileged ports. This stupid restriction is utterly antiquated and serves no purpose on modern systems or networks, and it makes a ton of stuff needlessly insecure and complex by requiring 'root' to e.g. bind port 80. I wrote a rant on this years ago: http://adamierymenko.com/ports.html
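For what it's worth, modern Linux already has partial escape hatches here: the sysctl `net.ipv4.ip_unprivileged_port_start` can lower (or zero) the privileged cutoff, and the `CAP_NET_BIND_SERVICE` capability grants low-port binding without full root. Those two mechanisms are real; the Python function below is just a model of the check involved:

```python
# Model of the kernel's low-port check. The 1024 cutoff is the
# historical default; on modern Linux it is tunable via
# net.ipv4.ip_unprivileged_port_start.

def may_bind(port, has_cap_net_bind_service, unprivileged_start=1024):
    return port >= unprivileged_start or has_cap_net_bind_service

assert not may_bind(80, False)                         # the classic complaint
assert may_bind(80, True)                              # CAP_NET_BIND_SERVICE
assert may_bind(80, False, unprivileged_start=0)       # sysctl lowered to 0
```

Neither mechanism is the default, though, which is arguably the rant's point.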

(8) Implement process hibernate/restore and migration as a standard feature in the kernel (if it's not somewhere in there already... not sure). That way the 'box' can be rendered absolutely ephemeral and entire runtimes can be hot migrated between boxes.
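Linux does have most of this in userspace today via CRIU (Checkpoint/Restore In Userspace), which snapshots and restores whole process trees, though it's not a plain first-class kernel feature. As a toy model of the hibernate/restore idea (pickle stands in for a real process image; nothing here resembles CRIU's actual mechanics):

```python
import pickle

# "Hibernate" a running computation's state to bytes, then "restore"
# it and resume, possibly on another box.

class Counter:
    def __init__(self):
        self.n = 0
    def step(self):
        self.n += 1

c = Counter()
for _ in range(3):
    c.step()

image = pickle.dumps(c)       # hibernate: live state -> portable bytes
c2 = pickle.loads(image)      # restore: bytes -> live state
c2.step()                     # resume where it left off
assert c2.n == 4
```

The real problem is everything a process holds that isn't plain memory: open sockets, file descriptors, kernel state - which is exactly what CRIU spends most of its effort on.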

(9) Replace numeric UID/GIDs with GUIDs or, even better, hashes of public keys to allow some transparent tie-in to some kind of IAM infrastructure. It's possible that this could be done by having a local mapping to numeric UIDs/GIDs to avoid having to recompile the universe and refactor code that expects UID/GID to fit in 'int' or 'long'. This way a user can log into any machine anywhere and still be the same user, and all that UID/GID synchronization or LDAP or whatever trash can die.
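A sketch of that local-mapping idea (hypothetical throughout; the key material and the UID base are made up):

```python
import hashlib

# A user's global identity is a hash of their public key; each machine
# lazily assigns a small local numeric UID so existing code that
# expects UID to fit in an int keeps working.

LOCAL_UIDS = {}   # global key hash -> locally assigned numeric UID

def global_id(pubkey: bytes) -> str:
    return hashlib.sha256(pubkey).hexdigest()

def local_uid(pubkey: bytes, base=100000) -> int:
    gid = global_id(pubkey)
    if gid not in LOCAL_UIDS:
        LOCAL_UIDS[gid] = base + len(LOCAL_UIDS)
    return LOCAL_UIDS[gid]

alice = b"ssh-ed25519 AAAA...alice"
assert local_uid(alice) == local_uid(alice)   # stable on this machine
assert global_id(alice) == global_id(alice)   # same identity everywhere
```

The local numeric UID is then a throwaway cache entry, not an identity that has to be synchronized across machines.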

I think that would be a start.

Some of these concepts have shown up in various places: MacOS, iOS, Android, Mosix/OpenMosix, Beowulf, various container virtualization stacks, etc. I have never seen them in one place or integrated nicely.


> Shared libraries can be linked by name and version constraints

"Grab random libraries and never update them again" leads to the same security disaster of containers.

It's crucial to have large communities AKA Linux distributions that ensure that a set of libraries at given versions work well together and receive security backports!

3, 4, 5, 6, 8: you just described seccomp, symlinks, and tools that already exist.

The problem here is not about technology. Better packaging and modularization has been possible for decades but goes against some commercial interests.


The problem there is that we've never figured out how to make it scale. Right now everyone builds everything statically or as a container, because if you depend on anything upstream it will break and then you will be held responsible for fixing it. Communities don't scale, and the Apple/Google alternative of closed app stores is... closed... and also tends to suck.

Making shared libraries and other upstream dependencies actually scale is an open problem in practical computer science / systems engineering.

The fact that commercial interests often have a vested interest in things sucking is a problem all over the place.


5) Systemd supports this, though common services may not be configured for it by default.
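For example, a unit file dropped into `~/.config/systemd/user/` runs under the user's own systemd instance, no root involved (the unit name and binary path here are illustrative):

```ini
# ~/.config/systemd/user/myapp.service
[Unit]
Description=Per-user service managed by the user's own systemd instance

[Service]
ExecStart=%h/bin/myapp

[Install]
WantedBy=default.target
```

It can then be enabled with `systemctl --user enable --now myapp.service`, and with lingering turned on (`loginctl enable-linger`) it keeps running even when the user is logged out.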



