
That's the thing I like about self-contained binaries (of Go or any other sort). Just

    FROM scratch
    COPY this-or-that
    LABEL prometheus.port=9100
    LABEL prometheus.path=/metrics
    EXPOSE 3001
    EXPOSE 9100
and nothing breaks.

The only fragile component is the CA bundle for SSL-related stuff, as that by its nature changes over time.
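For the CA bundle, one common workaround is a multi-stage build that copies a fresh bundle out of a distro image at build time instead of vendoring a stale copy; a rough sketch (the base image tag and binary name are placeholders, same as in the snippet above):

```dockerfile
# Stage 1: grab an up-to-date CA bundle from a distro image.
FROM alpine:3 AS certs
RUN apk add --no-cache ca-certificates

# Stage 2: the usual scratch image, plus the bundle where Go's
# crypto/x509 looks for it by default.
FROM scratch
COPY --from=certs /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY this-or-that /this-or-that
ENTRYPOINT ["/this-or-that"]
```

Rebuilding the image is then enough to refresh the bundle.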



This is just moving the complexity to your build process.


That's the beauty! The build process is exactly the same in and out of containers. In both cases the result is either a binary, or a binary plus static files (nowadays, as it's mostly ops stuff, the static files get embedded in the binary).

It's not more complex by any stretch



Why bother with a container at that point? Doesn't it introduce as many problems as it solves?


Now you are back to the “it works on my machine” issues. I've seen a couple of cases where a precompiled binary works fine on one OS but behaves differently when run on another.

In my case, the bottom-bid contracting firm that delivered the code had special logic for Windows, causing behavior that otherwise wouldn't happen on Unix-based machines.


> Now you are back to the “it works on my machine” issues. I've seen a couple of cases where a precompiled binary works fine on one OS but behaves differently when run on another.

Sure, but containers don't protect you from that - you're still exposed to differences in the host kernel. For that kind of issue you'd want a full VM rather than a container.


By that logic, we'd need to ship bare metal to avoid differences in processors.

You've gone from being exposed to OS and kernel differences to being exposed only to kernel differences. For most apps, that's acceptable. Others will need to ship VMs. Some will even need to ship bare metal.

It's a tool, not a dogma.


What "OS differences" are you exposed to if you're shipping a static binary?

> It's a tool, not a dogma.

The idea that containers are the only or right way to do service orchestration is absolutely dogma, enforced by kubernetes and friends.


> What "OS differences" are you exposed to if you're shipping a static binary?

I dunno, probably stuff like ulimits and selinux rules that you don't think about until it burns you. Not to mention whatever idiosyncratic configuration your customer might perform.

> [It's] absolutely dogma, enforced by kubernetes and friends.

The dogmatic people are wrong to be dogmatic, but in the context of this particular conversation, I would like to point out - meaning no disrespect - that it is you who is advocating the more maximalist and less pragmatic view.


> The dogmatic people are wrong to be dogmatic, but in the context of this particular conversation, I would like to point out - meaning no disrespect - that it is you who is advocating the more maximalist and less pragmatic view.

How so? The thread started with talking about having a statically linked binary (not a specific one with special requirements, but a generic one), and described putting it in a container with a config to expose some ports from it. That seems very much like a dogmatic everything-must-be-a-container position. I'm not arguing for never using containers, I'm arguing for using them where they make sense and not using them where they don't.


From my perspective, what they said (across several comments) was, "I like this workflow, it has some advantages, some of my customers want containers, and in our case it doesn't add much more complexity," which I took to be a mix between a pragmatic and aesthetic position, and in the comment I responded to you said, "it didn't solve all of the problems we can identify, so why bother?" I took that to be a maximalist position.

If I'm misreading you, then I apologize.


Sometimes the customer wants containers; most of our (ops/orchestration) stuff runs outside of containers, but containerising it (for the same reason: one static blob) is easy.

But the code I nicked the example from was actually our internal k8s/docker deployment testing/debug app, so, well, it's in containers by design.


Because I get other levels of isolation, such as network, filesystem, and syscall isolation, and I can run many instances of the same binary on a host, much like a VM but vastly lighter-weight.

No, containers are not a problem and they don't add any additional problems.
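Those extra isolation layers are mostly one-line flags on the runtime. A sketch (the image name "myapp" is hypothetical; the flags are standard `docker run` options):

```shell
# Each flag adds an isolation layer a bare binary wouldn't get for
# free: an immutable root filesystem, no network access, a cap on the
# process tree, and a cgroup memory limit.
docker run --read-only --network none --pids-limit 64 --memory 128m myapp
```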


The same reason you'd use one to begin with, primarily isolation of processes. That doesn't go away just because the binary is statically compiled. But you don't have to, of course, plenty of people don't.


Does it solve any problems? How is a statically-linked executable liable to break from changes in an OS it doesn't even use?


Badly designed apps can break in all sorts of fun ways without Docker:

- Check for processes called "sleep" and exit if such a process exists (or try to kill it).

- Reset $PATH to default value then expect to find a _very_ specific ancient version of system utilities there.

- Create files in /tmp with fixed names. Fail if they already exist. Forget to delete them on failure.

- Walk entire filesystem searching for something. Crash on broken symlinks.

- Enumerate network interfaces. Crash if more than 7 are present.

- Hardcode _both_ specific user name and associated UID.

- Put temp files all over the place, including into the application directory.

- Ignore $HOME and use the homedir from /etc/passwd, then create a lock/config file under it. And you want to run two instances of this app in parallel.


Maybe you are using kubernetes, or there are multiple things running on the box that you want to keep isolated.



