
For my two cents, it discourages standardization.

If you run bare-metal, and instructions to build a project say "you need to install libfoo-dev, libbar-dev, libbaz-dev", you're still sourcing it from your known supply chain, with its known lifecycles and processes. If there's a CVE in libbaz, you'll likely get the patch and news from the same mailing lists you got your kernel and Apache updates from.

Conversely, if you pull in a ready-made Docker container, it might be running an entire Alpine or Ubuntu distribution atop your preferred Debian or FreeBSD. Any process you had to keep those packages up to date and monitor vulnerabilities now has to be extended to cover additional distributions.

You said it better at first: Standardization.

Posix is the standard.

Docker is a tool on top of that layer. Absolutely nothing wrong with it!

But you need to document down toward the lower layers: which libraries are used and how they're interconnected.

Posix gives you that common ground.

I will never ask people not to supply Dockerfiles. But to me it feels the same as if a project released only an apt package and nothing else.

The manual steps need to be documented. Not for regular users but for those porting to other systems.

I do not like black boxes.


The reason I moved away from Docker for self-hosted stuff was the lack of documentation and very complicated Dockerfiles with assorted shell scripts and service configs. Sometimes it feels like reading autoconf-generated files. I much prefer to learn the OS's packaging method and build the thing myself.

Something like Harbor easily integrates to serve as both a pull-through cache and a CVE scanner. You can even block pulls by vulnerability severity or CVSS score.

You /should/ be scanning your containers just like you /should/ be scanning the rest of your platform surface.
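The gating logic described above can be sketched in a few lines. This is a hypothetical illustration, not Harbor's actual implementation: it assumes a Trivy-style JSON report (Trivy is Harbor's default scanner), and the `should_block` function, threshold, and sample data are all made up for the example.

```python
CVSS_THRESHOLD = 7.0  # hypothetical policy: block pulls at or above this score


def should_block(report, threshold=CVSS_THRESHOLD):
    """Return True if any finding in the report meets the CVSS threshold.

    `report` is assumed to follow a Trivy-like JSON layout:
    Results -> Vulnerabilities -> CVSS -> nvd -> V3Score.
    """
    for result in report.get("Results", []):
        for vuln in result.get("Vulnerabilities", []):
            score = vuln.get("CVSS", {}).get("nvd", {}).get("V3Score", 0.0)
            if score >= threshold:
                return True
    return False


# Made-up report fragment resembling a scanner's JSON output
sample_report = {
    "Results": [
        {"Vulnerabilities": [
            {"VulnerabilityID": "CVE-2023-0001",
             "CVSS": {"nvd": {"V3Score": 9.8}}},
        ]}
    ]
}

print(should_block(sample_report))  # True: a 9.8 finding trips the 7.0 gate
```

In practice you would configure this as a project policy in the registry UI rather than write it yourself; the point is just that the gate is a simple threshold over scan results.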



