
Managing self-installed k8s is a pain in the ass right now, but not much more so than manually installing Linux without e.g. the Ubuntu installer. Would that put you off using Linux?

Speaking as a daily Linux user since 1996, there are so many parts of the system I still don't understand, but it's not necessary for me to e.g. understand exactly how the font catalog and text rendering works to write a document. The same is true here.

There are quite a few companies in this space trying to improve the self-hosted experience; just give it some time.

As for all the tools you supposedly need, the only one you actually need is kubectl, and it's more than enough to accomplish everything. New ecosystems always endure this abundance of "helpful tools" before consolidating on a few important choices.
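To illustrate the "kubectl is enough" point, here's a hedged sketch of a day-to-day workflow using only stock kubectl; the "webapp" name and file name are hypothetical examples, not anything from this thread:

```shell
# Everything below is plain kubectl -- no extra tooling required.
kubectl apply -f deployment.yaml        # create/update resources declaratively
kubectl get pods -l app=webapp          # list the pods the deployment created
kubectl logs deploy/webapp --tail=50    # read recent application logs
kubectl exec -it deploy/webapp -- sh    # open a shell inside a running container
kubectl rollout undo deploy/webapp      # roll back a bad release
```

These commands require a running cluster and are shown as a sketch only.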




You're talking about the Linux desktop, no? It's apples and oranges, because I assume we're talking about maintaining production systems here.

k8s is a mess IMO. It's a giant abstraction and you're forced to interact with the whole thing all at once. If you've done any kind of sysadmin/ops work on production systems then k8s makes you feel like you're re-learning everything all over again. That's a huge red flag to me. When a technology tries to reinvent every layer of the stack instead of gracefully integrating with existing tools then I immediately question the motives and/or competence of the creators.

I'm not sure how to express this next point without a touch of vitriol so I'm just going to say it: the way containerization is being promoted for general-purpose use is ridiculous. We're living in a world where an 8-core, 16 GB server is dirt cheap. The amount of WORK an application server can do with those resources is insane (if built well). The idea of slicing/dicing that monolithic application into 8 tiny pods with 3.33333 allotted millicores each and a bunch of networked cross-traffic -- I, I... just don't get it. Why?

The vast vast majority of business critical applications can serve all of their traffic with a single server running on a modest (by today's standards) VPS. Add a little redundancy and auto-scaling and you're good to go.

When I see small startups trying to maintain the k8s black box I just facepalm myself into unconsciousness.

/rant, sorry


I don't understand all this angst. In a well run infrastructure, Kubernetes basically just boils down to a new config file to describe some proxy rewrite rules, and how much RAM your thing needs.
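To make "a new config file" concrete, here is a hedged sketch of what such a file might look like; every name, image, and number below is an illustrative assumption, not something from this thread:

```shell
# Hypothetical minimal Deployment manifest: an image, a replica count,
# and how much RAM/CPU the thing needs. Applied with:
#   kubectl apply -f webapp.yaml
cat <<'EOF' > webapp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp            # hypothetical app name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: example/webapp:1.0   # hypothetical image
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
EOF
```

Proxy rewrite rules would live in a similar, separate Ingress manifest; the shape is the same kind of short declarative file.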

If you want to run VPS-sized pods full of untracked mutable state with their own makeshift logging / monitoring / deployment / configuration management baked in, nothing is stopping you. But by the time we're "maintaining production systems" and we discover that crummy little $5 VPS has been touched by 7 people over 2 years, nobody knows what is on it any more, the default logrotate configuration turns out to have been shredding audit logs, and nobody has a clue how to reinstall it without literally starting over, things start to look a little different.

- "I wish I had a list of all the jobs that were supposed to be on this VPS"

- "I wish I didn't have to set up random log archiving cronjobs for all these apps"

- "I wish I knew what directories I need to back up"

- "I wish I could split this one job off the VPS without having to rewrite and retest its config"

- "I wish I could run a separate deployment environment on the VPS without having to set up a whole new VPS"

- "I wish I could rotate this API key but I haven't a clue where it is stored or what is referencing it"

- "I wish I could give the intern access to deploy the web site without letting him also take a copy of the HR database"

- "I wish I could deploy this new job version and roll it back if I fuck things up"

- "I wish I could run this ancient proprietary tool that needs 32-bit libs from Debian 0.4 without installing them globally and potentially bricking the whole VPS"

- "I wish I didn't have to go install and configure a bunch of monitoring plugins every time I deploy a new app"
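Several of the wishes above map onto a stock Kubernetes primitive. A hedged sketch of a few correspondences, with all resource names ("webapp", "intern", "web") being hypothetical:

```shell
# "a list of all the jobs on this VPS" -> desired state is declared objects
kubectl get deployments,cronjobs -A

# "deploy a new version and roll it back" -> built-in rollout history
kubectl rollout undo deployment/webapp

# "rotate this API key" -> secrets are named objects that pods reference
kubectl create secret generic api-key \
  --from-literal=key=NEWVALUE --dry-run=client -o yaml | kubectl apply -f -

# "give the intern deploy access without the HR database" -> namespaced RBAC
kubectl create rolebinding intern-deploy \
  --clusterrole=edit --user=intern --namespace=web
```

Again, these require a running cluster and are shown only to ground the list above in concrete commands.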

You can set all that up by hand, of course; after all, it represents years of fruitful low-intellect busywork for idle hands. Or you can come up with your own self-assembled collection of third-party tools to do it for you, investing the research, installation, and configuration time that entails, and accepting the cost of having created an ad-hoc system only you understand. Or you could simply pick one big framework that does it all at once and try to make it work.

The nice thing with the one-big-framework approach is that you also get a common language and operational model understood by thousands of people you can hire on the open market. They understand how your configs are laid out, they know how to discover mutable state and query the health of all your existing jobs. And by the time they leave the company, you can understand all that about the work they've done.

This is only addressing Kubernetes as an operational methodology. Containerization as an architectural style is an entirely different topic.


I would never run VPS-sized pods because that negates most of the advantages k8s provides (as an infra abstraction). I would not run k8s in the first place.

> I wish...

This right here is a big part of the problem. "I wish". "What if" is another one.

In the real world you don't have all of these problems initially, and often you never do. They trickle in as you grow. And in our business an absolutely essential property of good tooling is that it grows with your needs.

This talk of "years of fruitful low-intellect busywork for idle hands" for a "self-assembled collection" of tools is the antithesis of the Unix philosophy. I like the Unix philosophy. It's worked out very well for us so far.

> a common language and operational model understood by thousands of people you can hire on the open market

We've been doing just fine with sysadmins? The community standardizes on a set of tools and people learn them. Do you really believe that pre-k8s it was some kinda combinatorial wild west without any shared knowledge? Come on. Come. On.

k8s environments aren't all that standardized either, btw. There's just as much duct tape and glue as anywhere else: CI/CD configuration, service meshes, multi-cloud provider configurations, etc. All of these interfaces bring their own quirks and pitfalls. And your k8s nodes still need to be optimized just like before. The control plane is only one part of the picture.

All I'm saying is this... an experienced sysadmin with a mature config management system and AWS/GCP/Azure will get you a perfectly maintainable infrastructure for most use cases.

There are big organizations where k8s makes sense because you'd otherwise end up building a worse homegrown version of basically the same thing. I'll give you that. But I'm seeing k8s pitched as the default infra solution for SMBs and the like. That's where the angst is coming from.


If you have so many wishes, maybe it wasn't the right tool.


If GNU/Linux was unusable without various hacked-on helper frontends, none of which was comprehensive and none of which even tried to standardize, then yes, that would be a red flag. In practice, you have things like Ubuntu putting a nice frontend on, but you also have things like the Arch folks, who explicitly refuse to provide an installer because the "manual" process is totally reasonable.

So as an outsider, my understanding is that there are plenty of different k8s distros, and a lot of helper frontends (and no clear winners among them); are there people who find vanilla/handcrafted k8s okay? Is there an "Arch K8s" distro? Or even, is there an Ubuntu of k8s (successfully hides away the details and produces a clean working system that never requires a user to leave the comfort of the GUI)? Unfortunately, I don't have the domain knowledge to tell whether any given helper scripts are going to bite me down the road, so this is a genuine question from someone who'd like to get into k8s but can't see which option is sane.


> are there people who find vanilla/handcrafted k8s okay?

Yes, we do.

> Is there an "Arch K8s" distro?

"Kubernetes the Hard Way" is pretty much Arch.

> is there an Ubuntu of k8s (successfully hides away the details and produces a clean working system that never requires a user to leave the comfort of the GUI)?

Kubespray etc.

> so this is a genuine question from someone who'd like to get into k8s but can't see which option is sane.

It completely depends on how much money your company is willing to spend on it. The larger the organization, the more it makes sense to go "the Arch way". If you are a tiny start-up with a single-digit DevOps headcount, stick to managed offerings like GKE.


Basically, most of what runs on k8s, including most of the k8s control plane, lives in a container. So for the most part, the only material concern about the host machine and OS is what kernel it's running, because that may limit what kinds of jobs you can run in the containers.
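You can see this on a kubeadm-style cluster, where the control plane itself shows up as ordinary pods; the exact pod names vary by setup, so treat this as a sketch:

```shell
# The control plane runs as pods in the kube-system namespace; typically
# kube-apiserver-*, kube-scheduler-*, kube-controller-manager-*, etcd-*,
# coredns-*, kube-proxy-*. Only the kubelet and the container runtime
# live directly on the host OS.
kubectl get pods -n kube-system

# The host kernel version -- the main constraint on what containers can run.
uname -r
```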





