Even after the bash script era, I don’t think the configuration management landscape gets enough criticism for how bad it is. To me it never stopped feeling hacked together and unreliable.
E.g., Chef Software, especially after its acquisition, is just a dumpster fire of weird anti-patterns and seemingly incomplete, buggy implementations.
Ansible is closer to the gold standard, but I actually moved to Chef to gain a little more capability. Now I hate both of them.
When I just threw this all in the trash in my homelab and went to containerization, it was a major breath of fresh air and got me a lot of time back.
For organizations, one of the best parts about Kubernetes is that it’s so agnostic that you can drop in replacements with a level of ease that is just about unheard of in the Ops world.
If you are a small shop you can just start with something simpler and more manageable like k3s or Talos Linux and basically get all the benefits without the full-blown k8s management burden.
Would it be simpler to use plain Docker, Docker Swarm, Portainer, something like that? Yeah, but the amount of effort saved versus your ability to adapt in the future seems to favor just choosing Kubernetes as a default option.
Yup. K8s is a bit of a pain to keep up with, but Chef and even Ansible are much more painful for other reasons once you have more than a handful of nodes to manage.
It's also basically a standard API that every cloud provider is forced to implement, meaning it's really easy to onboard new compute from almost anyone. Each K8s cloud provider has its own little quirks, but it's much simpler than the massive sea of difference that each cloud's unique API for VM management was (and the tools to paper over that were generally very leaky abstractions in the pre-K8s world).
To quote an ex coworker: all configuration management systems are broken, in equal measure - just in different fashion. They are all trying to shoehorn fundamentally brittle, complex and often mutually exclusive goals behind a single facade.
If you are in the position to pick a config management system, the best you can do is to chart out your current and known upcoming use cases. Then choose the tool that sucks the least for your particular needs.
And three years down the line, pray that you made the right choice.
Yes, kube is hideously complex. Yes, it comes with an enormous selection of footguns. But what it does do well is allow decoupling host behaviour from service/container behaviour more than 98% of the time. Combined with immutable infrastructure, it is possible to isolate host configuration management to the image pre-bake stage. Leave just the absolute minimum of post-launch config to the boot/provisioning logic, and you have at least a hope of running something solid.
Distributed systems are inherently complex. And the fundamental truth is that inherent complexity can never be eliminated, only moved around.
With EKS and cloud-init these days I don't find any need to even bake AMIs anymore. Scaling/autoscaling is so easy now, with Karpenter creating/destroying nodes to fit current demand. I think if you use Kubernetes in a very dumb way, just running X copies of Y container behind an ALB with no funny business, it just works.
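The "dumb" setup described above is basically two manifests. A minimal sketch — the image name, replica count, and ports are all placeholders, and note that on EKS a plain `type: LoadBalancer` Service provisions an AWS load balancer, while an ALB specifically requires the AWS Load Balancer Controller plus an Ingress:

```yaml
# Run X copies of Y container (here: 3 copies of a hypothetical image).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/my-app:1.0   # placeholder image
          ports:
            - containerPort: 8080
---
# Expose the pods behind a cloud load balancer.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

That's the whole service definition; Karpenter then handles adding or removing nodes as the replica count or pod resource requests change.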
I have to say I hate Ansible too (and Puppet and CFEngine, which I have used previously). But it's unclear to me how containers fix the problems Ansible solves.
So instead of an ansible playbook/role that installs, say, nginx from the distro package repository, and then pushes some specific configuration, I have a dockerfile that does the same thing? Woohoo?
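Concretely, that Dockerfile might look like the following — a sketch assuming a local nginx.conf sits next to it; the distro and config file are placeholders:

```dockerfile
# Install nginx from the distro package repository,
# just like the Ansible role would.
FROM debian:bookworm-slim
RUN apt-get update \
 && apt-get install -y --no-install-recommends nginx \
 && rm -rf /var/lib/apt/lists/*

# Push in the specific configuration (hypothetical local file).
COPY nginx.conf /etc/nginx/nginx.conf

# Run in the foreground so the container stays up.
CMD ["nginx", "-g", "daemon off;"]
```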
I think the major important difference is that a dockerfile can’t really break after you get your deployment artifact, whereas configuration management can fail on your underlying nodes if they aren’t crafted perfectly and cause post-deployment failures.
Other concerns, like secrets and environment management, are things I find way more annoying with a tool like Chef.
Try doing a Chef Policyfile bootstrap that gets some secrets using its own built-in Chef Vault. You can’t do it without wild workarounds, because the node isn’t granted access to secrets until it becomes a registered node, and it doesn’t register until a chef-client run completes successfully. It’s a really dumb catch-22 design.
The solution is “just use a big-three cloud secrets manager or HashiCorp Vault,” and that’s fine, but it’s really strange that the tool can’t handle something so simple on its own.
You use Docker to create a thing on your laptop that you know is good and works, then you send the resulting image into the system, and that thing is a static blob of bits. Ansible/Puppet/Chef/CFEngine modify a live thing from one state to another. Sure, you can use qcow VM disk images and VM snapshots to achieve the same thing, but it’s a lot more cumbersome and feels slow and yuckier, and no one packaged it up into a neat little tool that got popular (which is to say, Vagrant is awesome but slow, so Docker won out).