That's the beauty! The build process is exactly the same in and out of containers. In both cases the result is either a binary, or a binary plus static files (nowadays, since it's mostly ops stuff, the static files get embedded in the binary).
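Go is assumed here for illustration (the thread doesn't name the language); the point is just that the same build command produces the same static artifact whether it runs on the host or inside a throwaway builder container:

```shell
# On the dev machine: a fully static binary, assets embedded via //go:embed.
CGO_ENABLED=0 go build -o app .

# The identical command inside a builder container; golang:1.22 and the
# paths are assumptions for this sketch.
docker run --rm -v "$PWD":/src -w /src -e CGO_ENABLED=0 \
    golang:1.22 go build -o app .
```

Either way the output is one self-contained file you can ship directly or drop into an image.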
Now you are back to the “it works on my machine” issues. I've seen a couple of cases where a precompiled binary works fine on one OS but behaves differently when run on another.
In my case, the bottom-bid contracting firm that delivered the code had special logic for Windows that wouldn't trigger on Unix-based machines.
> Now you are back to the “it works on my machine” issues. I've seen a couple of cases where a precompiled binary works fine on one OS but behaves differently when run on another.
Sure, but containers don't protect you from that - you're still exposed to differences in the host kernel. For that kind of issue you'd want a full VM rather than a container.
By that logic, we'll need to ship bare metal, to avoid differences in processors.
You've gone from being exposed to OS and kernel differences to being exposed only to kernel differences. For most apps, that's acceptable. Others will need to ship VMs. Some will even need to ship bare metal.
> What "OS differences" are you exposed to if you're shipping a static binary?
I dunno, probably stuff like ulimits and selinux rules that you don't think about until it burns you. Not to mention whatever idiosyncratic configuration your customer might perform.
> [It's] absolutely dogma, enforced by kubernetes and friends.
The dogmatic people are wrong to be dogmatic, but in the context of this particular conversation, I would like to point out - meaning no disrespect - that it is you who is advocating the more maximalist and less pragmatic view.
> The dogmatic people are wrong to be dogmatic, but in the context of this particular conversation, I would like to point out - meaning no disrespect - that it is you who is advocating the more maximalist and less pragmatic view.
How so? The thread started with a statically linked binary (not a specific one with special requirements, but a generic one) being put in a container with a config to expose some of its ports. That seems very much like a dogmatic everything-must-be-a-container position. I'm not arguing for never using containers; I'm arguing for using them where they make sense and not using them where they don't.
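For what it's worth, the container-side config being debated can be as small as a few lines. A sketch, with the binary name and port invented for illustration:

```dockerfile
# Empty base image: no OS userland at all.
FROM scratch
# The statically linked binary is the only content.
COPY app /app
# Hypothetical port the binary listens on.
EXPOSE 8080
ENTRYPOINT ["/app"]
```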
From my perspective, what they said (across several comments) was, "I like this workflow, it has some advantages, some of my customers want containers, and in our case it doesn't add much more complexity," which I took to be a mix between a pragmatic and aesthetic position, and in the comment I responded to you said, "it didn't solve all of the problems we can identify, so why bother?" I took that to be a maximalist position.
Sometimes a customer wants containers; most of our (ops/orchestration) stuff runs outside of containers, but containerising it (for the same reason: one static blob) is easy.
But the code I nicked the example from was actually our internal k8s/docker deployment testing/debug app, so, well, it's in containers by design.
Because I get additional isolation at the network, filesystem, and syscall levels, and I can run many instances of the same binary on a host, much like VMs but vastly lighter-weight.
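As a sketch of what those isolation layers look like in practice (the image name, profile path, and ports are all assumptions, not from the thread):

```shell
# Own network namespace, immutable root filesystem, seccomp syscall filter.
docker run --rm --read-only \
    --security-opt seccomp=/path/to/profile.json \
    -p 8081:8080 myapp:latest

# A second instance of the same binary, isolated from the first,
# mapped to a different host port.
docker run --rm --read-only -p 8082:8080 myapp:latest
```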
No, containers are not a problem and they don't add any additional problems.
The same reason you'd use one to begin with, primarily isolation of processes. That doesn't go away just because the binary is statically compiled. But you don't have to, of course, plenty of people don't.
The only fragile component is the CA bundle for SSL-related stuff, since that by its nature changes over time.
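One common way to handle that in a from-scratch image is to copy the bundle in from a builder stage; a sketch, with the stage name and paths assumed (Go as the example toolchain):

```dockerfile
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o app .

FROM scratch
# The CA bundle is the one piece that can't be baked in forever;
# rebuilding against a refreshed builder image refreshes it.
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /src/app /app
ENTRYPOINT ["/app"]
```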