This just seems like sensationalist nonsense spoken by someone who hasn’t done a second of Ops work.
Kubernetes is incredibly reliable compared to traditional infrastructure. It eliminates a ton of the configuration management dependency hellscape and inconsistent application deployments that traditional infrastructure entails.
Immutable containers provide a major benefit to development velocity and deployment reliability. They are far faster to pull and start than VM deployments, which end up needing either an annoying pipeline for building machine images or some complex, failure-prone deployment system.
Does Kubernetes have its downsides? Yeah, it’s complex overkill for small deployments or monolithic applications. But to be honest, there’s a lot of complexity to configuration management on traditional VMs with a lot of bad, not-so-gracefully aging tooling (cough…Chef Software)
And who is really working for a company that has a small deployment? I’d say that most medium-sized tech companies can easily justify the complexity of running a Kubernetes cluster.
Networking can be complex with Kubernetes, but it’s only as complex as your service architecture.
These days there are more solutions than ever that remove a lot of the management burden but leave you with all the benefits of having a cluster, e.g., Talos Linux.
> Kubernetes is incredibly reliable compared to traditional infrastructure.
The fuck it is.
> It eliminates a ton of the configuration management
Have you used k8s recently? Getting it secure and sane is a lot of work. Even if you buy in sensible defaults, it's a huge amount of work to get a safe, low-blast-radius deployment pipeline working reliably.
Like if you want vaguely secure secrets, that's an add-on. If you want decent, non-stupid networking, that's an add-on. Everything is split-horizon DNS.
That's before we get to state management: playing the PVC lottery is not fun, which means it's easier to just use a clustered filesystem. That's how fucked it is.
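To be concrete about the secrets point: a stock Secret object is base64, not encryption, which is exactly why anything vaguely secure (sealed-secrets, external-secrets, Vault) is an add-on. A hypothetical example, name and value made up:

```yaml
# A stock k8s Secret: the "protection" is just base64 encoding.
# Anyone who can read the object (or etcd) can decode it.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials      # hypothetical name
type: Opaque
data:
  password: aHVudGVyMg==    # base64 of "hunter2", trivially reversible
```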
> there’s a lot of complexity to configuration management on traditional VMs
Not really. You need at least Terraform to spin up your k8s cluster in the first place; it's not that much harder to extend it to provision real machines instead.
It is more expensive, unless you're bin-packing with Docker.
> cough…Chef
Chef can also fuck off. Although Facebook uses it on something like 8 million servers, somehow.
> Networking can be complex with Kubernetes
Try making it use IPv6.
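To be fair, dual-stack is GA these days, but you still have to opt in at cluster creation and your CNI plugin has to cooperate. A hedged kubeadm sketch, with placeholder CIDRs:

```yaml
# Hedged sketch: dual-stack pod and service CIDRs via kubeadm.
# CIDRs are placeholders; the CNI plugin must also support IPv6.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16,fd00:10:244::/56
  serviceSubnet: 10.96.0.0/16,fd00:10:96::/112
```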
Look, what the industry needs is a simple orchestration layer that places Docker containers according to a DAG. You can have dependencies, and, if you want, a plugin system to allow you to paint yourself into a corner.
Have some hooks so we can trigger actions based on backlog.
Leave the networking to the network, because DHCP and DNS are a solved problem.
What I'm describing is basically ECS, but without the horrid config language.
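On a single host, docker compose's depends_on already sketches that DAG idea, minus the placement part. A minimal example with hypothetical service names:

```yaml
# Minimal DAG-style startup ordering with docker compose:
# each depends_on condition is an edge that gates the next service.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example        # placeholder; use real secret handling
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 10
  migrate:
    image: myorg/migrations:latest      # hypothetical image
    depends_on:
      db:
        condition: service_healthy      # edge: db must be healthy first
  app:
    image: myorg/app:latest             # hypothetical image
    depends_on:
      migrate:
        condition: service_completed_successfully   # edge: migrations finished
```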
It was clear they didn't know what they were talking about when they claimed the main reason for Kubernetes was to save money. Kubernetes is just easy to complain about.
> Does Kubernetes have its downsides? Yeah, it’s complex overkill for small deployments or monolithic applications. But to be honest, there’s a lot of complexity to configuration management on traditional VMs with a lot of bad, not-so-gracefully aging tooling (cough…Chef Software)
I have a small application running under single-node k3s. It's slightly (but not hugely) easier to work with than the prior version that I had running under IIS.
The problem is that some Kubernetes features would, in theory, have a positive impact on development velocity; in my experience (25 years of ops and devops), however, the cost of keeping up often eats those benefits and results in a net negative.
This is often not a problem with Kubernetes itself, though, but with teams chasing after the latest shiny thing.
Also an old man from the VMS/SPARC days, I'm still doing "devops" and just deployed a realtime streaming webapp tool for our team to k8s pods in a few days. It was incredibly easy, and I get so much for free.
Automatically created for me:
- Ingress, TLS, Domain name, Deployment strategy, Dev/Prod environments through helm, Single repo configuration for source code, reproducible dev/prod build+run (Docker)...
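To make the "Ingress, TLS, Domain name" part concrete, the chart ends up rendering something like this; the hostname, issuer, and service name are placeholders, and it assumes cert-manager is installed in the cluster:

```yaml
# Roughly what a platform chart renders for a new app (hypothetical names).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: streaming-tool
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt   # assumes cert-manager
spec:
  tls:
    - hosts: [streaming-tool.dev.example.com]
      secretName: streaming-tool-tls              # cert-manager fills this in
  rules:
    - host: streaming-tool.dev.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: streaming-tool              # the app's Service
                port:
                  number: 80
```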
If a company sets this up correctly developers can create tooling incredibly fast without any tickets from a core infra team. It's all stable and very performant.
I'd never go back to the old way of deploying applications after seeing it work well.
> If a company sets this up correctly developers can create tooling incredibly fast
I find that it has its place in companies with lots of microservices. But because it makes them "easy", I think it encourages unnecessary fragmentation, and one ends up with a distributed monolith.
In my opinion, unless you actually have separate products or a large engineering team, a monolith is the way to go. And in that case you can get far with a standard CI/CD pipeline and "old school" deployments.
But of course I will never voice my opinion in my current company to avoid the "boomer" comments behind my back. I want to stay employable and am happy to waste company resources to pad my resume. If the CTO doesn't care about reducing complexity and costs, why should I?
In my example it was a simple CRUD app, no microservices. It could just as easily have been run by scp'ing the entire dev dir to a VM and ensuring a port is open. But I wouldn't get many of the things I described above, and I don't need to monitor it at all.
You had PR merge and automatic release before Kubernetes too, and it's not that hard to configure.
If one has a small project where a few seconds of downtime is acceptable, you can just set up a simple GitHub Action triggered on commit/merge. It can scp the files to the server and run "systemctl restart" automatically. I have used this approach for small side projects (even with external paying users).
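A minimal sketch of that workflow, assuming an SSH deploy key in the repo secrets; the host, user, paths, and the myapp unit name are all hypothetical:

```yaml
# Hypothetical "scp + systemctl restart" deploy on every push to main.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up SSH
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.DEPLOY_KEY }}" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan app.example.com >> ~/.ssh/known_hosts
      - name: Copy files and restart the service
        run: |
          scp -r ./dist deploy@app.example.com:/opt/myapp/
          ssh deploy@app.example.com 'sudo systemctl restart myapp'
```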
And if you need a "no downtime" release, a proper CI/CD pipeline can handle a blue/green switch. I don't think you would spend much more time setting that up than setting up Kubernetes from scratch, unless you have extensive experience with Kubernetes.
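The blue/green step itself doesn't need much either; a hedged shell sketch behind nginx, where the ports, paths, and health endpoint are all made up:

```sh
#!/bin/sh -e
# Hypothetical blue/green flip: start the new colour, health-check it,
# repoint the proxy, then retire the old colour.
NEW=green OLD=blue
systemctl start "myapp-$NEW"                      # new version on its own port
curl -fsS http://127.0.0.1:8082/healthz           # -e aborts the flip if unhealthy
ln -sfn "/etc/nginx/upstreams/$NEW.conf" /etc/nginx/upstreams/active.conf
nginx -s reload                                   # shift traffic to the new colour
systemctl stop "myapp-$OLD"
```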
You're not expecting them to set k8s up from scratch, just as you'd not expect the dev team to set up the datacentre power or networking from scratch for the server in your "scp and systemctl restart" scenario.
Typically, a k8s installation is looked after by a cross-functional Platform team, who look after not just the k8s cluster but also the gateways, service mesh, secrets management, observability and other common services, shared container images, CI/CD tooling, as well as platform security and governance.
These platform services then get consumed by the feature dev teams (of which there could be anywhere between half a dozen and multiple thousands). To deploy a new app, those dev teams need only create a repo and a helm chart, and the platform's self-service tooling will do the rest automatically. It really shouldn't take more than a few minutes for a team with some experience.
Yes, it's optimised for a very different scale of operation than a single server at a managed hosting provider. But there are plenty of situations in which that scale is required, and it's there that k8s shines.
Too open-ended a question, but in the 'old days' it would be a ticket for a new VM, then back and forth between dev and infra to set up the host, deploy the application, etc.
If you had a really good team, hours. At most companies, days to weeks. At worst, months.
With a well-managed Kubernetes setup, around 5-15 minutes. That's not a theoretical time; I have personally had thousands of devs launch that quickly on clusters I ran.