This is another case of "I have never encountered and don't deeply understand the problems this tool was built to solve, thus the tool is totally unnecessary and the people who use it are part of a cult".
This is the same kind of flawed reasoning you see in the front-end world where a bunch of people complain that they do all their work in jQuery so React must be a cult.
Pasting what I wrote in another comment:
The goal isn't "ease of deployment", the goal is "infrastructure as code" so that application infrastructure can be managed in a way similar to application source code (e.g. PRs, blame, code reviews, CI, rollbacks etc). This helps ops people because it allows them to think about infrastructure as abstract resources rather than as a collection of individual machines with specific designations. With k8s, individual machines become a homogenized resource that do not need specialized provisioning depending on the application they will host.
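To make the "infrastructure as code" point concrete: with k8s, an application's runtime footprint can live in a version-controlled manifest like the hypothetical sketch below (names and image are made up), so a change to the replica count or image version goes through the same PR/review/rollback flow as any other code change.

```yaml
# Hypothetical Deployment manifest, checked into git alongside the app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                     # scaling up is a one-line, reviewable diff
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.4.2  # pinned; rollback = git revert
```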
I think the crux of the problem is that everyone encounters the problem Kubernetes solves. As the GP states, Kubernetes gives you
> "infrastructure as code" so that application infrastructure can be managed in a way similar to application source code (e.g. PRs, blame, code reviews, CI, rollbacks etc). This helps ops people because it allows them to think about infrastructure as abstract resources rather than as a collection of individual machines with specific designations
Who doesn't want that? Of course you want that.
But will the investment of time and effort pay off for your organization, and if so, how quickly? That's the hard question to answer. It depends on scale, personnel, the types of workloads involved, how easily your tools and practices can be updated, and presumably many other considerations. From my personal experience, in practice the answer to this question is so murky that the deciding factors turn out to be social: the personal risk aversion of the people making the decision, people's loyalty to the company versus their own resume, and whether leadership cultivates a hyperoptimistic growth mentality of making 10x or even 100x decisions (i.e., decisions that assume the company will be 10x or 100x bigger in a year).
The problem, then, is helping people compare the cost/benefit of Kubernetes compared to their current practices, for their own organization.
If you only have a couple of servers you probably want to think of them as individual machines rather than abstract resources. A lot of equations simplify when you set x to 1.
My point is that if "infrastructure as code" is your sole requirement, Kubernetes doesn't seem like the first choice. Adopting Kubernetes is not a small task, but it seems to be the go-to answer for a lot of the HN crowd.
Don't get me wrong: it's a great tool for some things. But IMO, for 80% of projects it's completely overkill.
+1. Infrastructure as code is exactly that: code. For AWS it is CloudFormation template code, the CloudFormation service, and some CI/CD on top, like Jenkins or Ansible. Or Terraform, for the unlucky ones. K8s is container orchestration, like AWS ECS: a totally different beast.
On AWS, CloudFormation is far more reliable and lean (fewer LOC) than TF: no corrupted-state issues, all resource properties are supported, and parallel (fast) resource creation, just for starters. And TF's "multi-cloud" sales pitch is nonsense; resources are too different between clouds.
I'm not particularly interested in limiting myself to using only the tools that large companies have deemed worthy.
There are plenty of tools out there that can get the job done at the scale that the vast majority of businesses operate in with lower operational and cognitive overhead than Kubernetes.
That’s how you get every other startup thinking Heroku is the solution and then two years later they realize they might need to invest in building out their own self-documenting, automated architecture. More than happy to upvote you as it means more work opportunities for me down the line.
Infrastructure as Code - If you're not doing infrastructure as code, how do you know who is taking what actions on your infrastructure? How do you know your tests are running in an environment that represents production? How do you know a tester or dev hasn't fixed something by hand and never committed it?
CI/CD - Do you have a quicker way to create test environments than just running kubectl create ns?
Resource Utilisation - Sharing servers to save money. Obviously you can use VMs, but do you want to do nested VMs on cloud?
I'm not sure most people should run k8s, but in a world where you can use GKE, I can't really see why not. What offers a better solution?
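To illustrate how cheap a per-PR test environment can be: it's roughly one manifest like this hypothetical one (the name would typically be generated by CI from the PR number), applied with `kubectl apply` and torn down with a single delete.

```yaml
# Hypothetical sketch: a disposable namespace per pull request.
apiVersion: v1
kind: Namespace
metadata:
  name: pr-1234           # generated by CI, e.g. from the PR number
  labels:
    purpose: pr-testing   # makes cleanup of stale environments easy to script
```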
I'm in the situation that Kubernetes does solve the problem I have... deploying application stacks across varied configurations for purposes of testing at a PR level. Working on software that is customized and delivered to multiple downstream clients is hard to test against. Yes, there are other options, but K8s is a very good fit, and the best option we have.
YMMV... I've also been in a scenario where I just did separate pushes to dokku boxes behind a load balancer. There's plenty of room for in between.
Right now, I feel kind of like I'm treading water though.
Well how else do you build webscale software? What if later you want to deploy microservices. Premature optimization seems like a good idea with infrastructure because you have room to grow.
> This helps ops people because it allows them to think about infrastructure as abstract resources rather than as a collection of individual machines with specific designations. With k8s, individual machines become a homogenized resource that do not need specialized provisioning depending on the application they will host.
This is a very true statement. I'm looking at it with a bit of a different view as well: we're currently set up very classically. Ops maintains Terraform, VMs, config management. Some developers have taken over some high-level cluster management via the configuration management. This works very well.
As long as you're only pushing 3 applications around, that is. Onboarding an application into a config management solution can easily take 2 - 4 engineering weeks on the ops side. And the work will always be on the ops side, because that's where the required expertise in the configuration management lies, which is a hefty chunk of specialized knowledge. If you're looking at a ramp-up from 3 applications to 5, 8, 20, like in our case, that's ... ugly. In our case, that's actually planned, because we've been bought due to our experience in this context. Yay.
That's a huge investment if you look at time alone: it's like 80 engineering weeks at worst. That's a year of nothing else, all of a sudden. A situation like that makes the operations team an even worse bottleneck.
And that's IMO where the orchestration solutions and containers come in. Ops should provide build chains, the orchestration system and the internal consulting to (responsibly) hand off 80% of that work to developers who know their applications.
Orchestration systems like K8s, Nomad, or Mesos make this much, much easier than classical configuration management solutions. They come with a host of other issues, especially if you have to self-host like we do, no question. Persistence is a bitch, and security the devil. Sure.
But I have an entire engineering year already available to set up the right 20% for my 20 applications, and that will easily scale to another 40 - 100 applications as well with some management and care.
That's why we as the ops team are actually pushing container orchestration and possibly self-hosted FaaS at my current place.
Hm, guess that got a bit longer. You hit a nerve somewhat.
So you think application delivery was less "as code" before Kubernetes?
From my standpoint, Kubernetes is a lot more manipulating state and a lot less code than what pretty much anything it replaces. In fact, almost nothing is "as code" until you introduce third party products such as Helm into the mix.
But that matters little, since the point of using it is not technology but standardization. It has the potential to commodify cloud infrastructure. A bit tongue in cheek perhaps and a far from accurate technical statement, but Kubernetes looks more and more like what Openstack should have been.
To be fair to the author a lot of folks I know are jumping on K8s and their use case mostly looks like that of the blog author. In those specific case you indeed are joining a cult.
K8s is meant to reduce devops work and complexity. Most businesses do not reach that level of complexity and will never need K8s.
It's a cult as much as using a source control system, a build system, and an editor (IDE) is.
E.g. you can always code, zip up files, and build directly from the command line (we can still do that, right?), but when you need to deliver that automation, and most importantly hand off to the next person what you've done (codified for real), then having a system like this becomes a must.
You can have infrastructure as code without running a second containment layer on top of the containment layer your cloud provider runs (which describes many k8s deployments).
Infrastructure as code is a much older practice, not something that Kubernetes enables. Kubernetes can be criticised for being too complex for most things that benefit from IaC.
I mean, good on the author, but this isn’t what Kubernetes is really for.
Kubernetes is basically a way to run a Java-like application server that can run things other than Java. If that sounds like an appealing prospect to you, the complexity of Kubernetes may be a good fit.
Kubernetes is complex because sometimes you need to be able to do complex things. Sometimes you operate at a scale where spending 12 hours writing a deployment script is ok, because it will save you hundreds of hours in the near term. Kubernetes expects you to write a bunch of custom integrations to tie your k8s clusters into whatever ITSM / ITIL process you use.
But complaining that running a blog on Kubernetes is too complex is like complaining that a semi is a terrible vehicle because it’s hard to park at the grocery store.
Kubernetes is the new Java Application Server for people who didn't realize that Java Application Servers were a terrible idea.
Despite a long track record of failure, people are trying to introduce the complexity of J2EE onto Kubernetes. It doesn't need to be that way. Kubernetes can be very simple, and it was up until recently. Once the Enterprise Architects got their hands on it and decided everything needs to be a plugin and nothing should work out of the box, the complexity started to creep up.
You should be able to run your small blog on Kubernetes without requiring a team of consultants to set it up or manage it. Just waving your hands and saying "well, it needs to be complex to scale" is a total lie.
This is the cycle of life. Zawinski's Law is a powerful force.
The same thing happens with ticketing systems: $old_ticketing_system is way too complicated and bloated, so let's jump to $new_ticketing_system because it's small and easy to understand. Oh, but we miss $feature_1, so let's ask for that. And $feature_2, and $feature_3. Continue until $new_ticketing_system becomes way too complicated and bloated, at which point you find $newer_ticketing_system, which is great except that it's missing $feature_3. Oh, and $feature_4. And 1 and 2, come to think of it. Oh geez, now $newer_ticketing_system is also coming apart at the seams, time to migrate to $even_newer_ticketing_system. . .
This strikes me as the exact opposite of reality. Java App Servers were specifically built for vertical scale. You just paid $80,000 to rack 30 CPUs and now you need a way to optimally utilize all of them, so you get a deployment model for sticking multiple applications in a single multi-threaded runtime. That was a pretty decent concept for 2005 and was pretty successful. The concept of packing code into archives (jar/war) has proven to be pretty durable.
Kubernetes is explicitly about managing horizontal scale where the hardware is abstracted away.
Odds are someone / some people will create some kind of simpler solution with an easy default setup within the next ~5 years. Maybe as a wrapper over Kubernetes, or maybe as something new and interoperable with it.
Maybe it'll involve a bunch of "serverless" buzzwords or some newly invented buzzwords. That's how things usually go historically. A lot of value can still be extracted if you're careful to ignore the cult-y bits. Containers, serverless/FaaS, and Kubernetes can be pretty great if you're the plane pilot dropping the cargo on the island rather than the cult living on it, and future stuff will probably be even better.
Oh, they have and are; but what's the business case for open-sourcing such a wrapper? Wouldn't you just host it on your own hardware and basically be another Heroku? This kind of software is complex enough that it would take corporate backing (either an industry group or VC backing), so it's probably not getting built unless there's a business case.
Even Google only released K8s because they thought it would push people towards GCP — which it didn’t because it was too easy for AWS to implement a similar service using Google’s open source tech. The open source freemium model for infrastructure tech is pretty much dead as a result of this kind of activity.
The wrapper can still be something self-hosted. There will always be SaaS/PaaS/IaaS abstractions out there for just about everything; even MySQL. The idea would be for someone to be able to very easily self-host something that's as simple to use and configure and interface with as AWS EKS or Google GKE.
I have no idea what business case there'd be to open source it. Maybe some open source devs with former experience at a big company will create one just for fun after they leave the company? Who knows. It'll probably happen eventually, by someone, either way.
Would you do your day job “just for fun” after you just quit?
The reality is that most of this technology — especially around infrastructure — has become so complex at scale that the tech strategy and the business strategy are the same thing. So infrastructure software has to match your business architecture which is largely dictated to a technology org. Which is why “tech ops” these days is largely just “ops” — the technological complexity is a reaction to increased business sophistication, not the other way around.
Me? No. But some people seem to like to do that. And some people do seem to be genuinely very passionate about infrastructure and such. Kubernetes can be applied to a lot of different business and technological architectures, I think, and a simpler alternative could be similarly general.
> Odds are someone / some people will create some kind of simpler solution with an easy default setup within the next ~5 years.
I know it's not the same but Docker Swarm is pretty great if you just want to deploy some container images on a single host or a cluster - this guide covers setup + traefik + swarmpit ui https://dockerswarm.rocks
Or perhaps somebody will evolve Kubernetes itself into becoming simpler?
I know that's a pipe dream, but really, why does it have to be? What would have to happen for people to actually work on making existing things simpler and better factored rather than reinventing the wheel?
My personal theory is that it's largely because that kind of work simply isn't being valued highly enough. Reinventing the wheel is a much lower friction path to take and has a higher chance of being rewarded highly. It shouldn't be like that, though.
Do we even need k8s for a personal blog? What problem does it solve for someone coming from docker or a VM?
k8s is a building block one can use to provide a simpler service, and if you want to convince anyone it needs a refactoring, maybe provide some specifics?
The problem with making things simpler is compatibility. If you make something simpler, but don't stay compatible, it might be better to just find a new name for your simple version.
Have you tried it? Is it any good? I've been looking for a simpler Kubernetes, although I don't know how much simpler it can be in practice and still do the same things.
I think you’re getting it backwards here. Kubernetes was explicitly built because the existing solutions were not robust enough to enable containerization at Google’s enterprise clients. Docker existed, and there were plenty of quick and easy ways to deploy your blog from a docker container and get it working. Those still work today.
The “running my blog” use case is a Docker use case. Kubernetes was designed from the ground up to enable transparent integration between containerized apps and ITSM platforms. I have always viewed it as more of a scaled application framework than a hosting platform.
The main problem is that smaller shops are adopting the fads of very large tech companies, but the large tech companies usually adopted those tools to deal with the kinds of scale that the smaller shops just don't have.
I’m not sure I agree. K8s’ primary benefit is that it strings together the IAAS abstraction into a single api.
It allows you to deploy multiple replicas, automatically setup a load balancer and handles maintaining the link between the LB and the backend. While also replacing any failed replicas.
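For reference, the load-balancer wiring described here is roughly this much YAML (a hedged sketch; the names are made up). The Service keeps the LB pointed at whatever healthy replicas currently carry the matching label, so replaced pods are picked up automatically.

```yaml
# Hypothetical Service fronting a set of replicas.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer    # the cloud provider provisions the actual LB
  selector:
    app: my-app         # the LB-to-backend link is maintained by label selection
  ports:
    - port: 80
      targetPort: 8080
```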
Completely valid points, and I agree to all of them. Alas, as other commenters have pointed out, the issue is devs at smaller companies deploying smaller products buying into the idea that they need k8s. I believe that it is the community's duty to educate these devs on what k8s is and when it is needed.
They are probably scared that when they need to change jobs, the next company will require "5 years of Kubernetes experience". So they convince everyone at their current company to jump into a complexity clusterfuck to see how it works "in production" and can put it on their resume. This is how the entire IT industry works today.
I am fucking appalled that writing config files is a noteworthy skill in 2019. So should you be.
K8s is writing config files just like Python is writing Python Syntax.
If you don't understand the underlying mechanism, either with Python or K8s yaml files, you're going to have a very bad time.
Somewhat ironic side note - Asking folks to write K8s config files is exposing too much complexity for some developers I work with. And I kind of get it. Properly setting up a service with changing environment variables, secrets, ingress, API Roles, AWS IAM roles, and horizontal autoscaling can get a bit nuts.
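To illustrate that complexity: even wiring one secret into one environment variable takes several layers of indirection, as in this hypothetical fragment of a container spec (and that's before ingress, IAM roles, or autoscaling enter the picture).

```yaml
# Hypothetical fragment: one env var sourced from a Secret object
# that has to be created and managed separately.
containers:
  - name: my-app
    image: my-app:1.0
    env:
      - name: DATABASE_PASSWORD
        valueFrom:
          secretKeyRef:
            name: my-app-secrets   # a separate Secret, managed elsewhere
            key: db-password
```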
Yeah, fully integrated “DevOps” at scale is a pipe dream. You will always have some segregation of dev and ops because the scope of knowledge is so different, especially today where “Ops” often means “expert in XXX cloud vendor’s product portfolio and how our operating model uses the features”.
What we call “DevOps” is really a delicate balance of giving the dev teams enough rope to hang themselves while child-proofing the gallows.
I don’t think DevOps leaders are claiming DevOps should be fully integrated so much as there should be a culture of collaboration and empathy, shared metrics and incentives, and preference for end to end automation... rather than antagonistic “throw it over the wall”, “I’m a dev and am too important to be paged” behaviour, etc., which has nothing to do with skill specialization.
Good contracts lead to good collaboration. Kubernetes provides the foundation for a solid end to end contract for managing complex systems automatically. It’s incomplete, but extendable.
You realize that you're commenting in a thread for a blog post that's 100% about configuration, right? There are articles like this popping up on the front page here every few days.
I've never said that K8s is just about writing config files. I've said that it is appalling that writing config files is still a technical "skill" that warrants articles and discussions in 2019.
What's even more ridiculous is that most people here probably don't even see any alternatives. I've had this conversation several times and it inevitably reveals the unwavering (and irrational) belief that manually entering cryptic text somewhere is the only way to make reusable configurations.
I really hate the term "Configuration as code" because it's not code. For most of us, code means something that can be stepped through. For many, that means stepped through in a debugger. Descriptive languages have almost never provided that facility and we don't have it with Kubernetes or Docker Swarm.
Yes it's great to capture your configuration in version control. But if at the end of the day I'm staring at a config file in one window and a log file in another and waiting for enlightenment to grab me, that's not scalable and it's rigid. It also pisses me off to no end.
It's essentially the Frameworks vs Libraries debate all over again. I'd much rather have something imperative.
Declarative systems create a perverse incentive to keep things the way they are because it's difficult to reason about how changes affect the system, and it's virtually impossible to explore those effects. There are no guideposts you can use to apply Local Reasoning, and so there is no pressure to organize this 'code' in a manner that supports it. So as the system matures, everyone is working off of memorization. There are too few bite-sized chunks that can be learned a bit at a time. You are locked into your current way of thinking and you've locked out anyone who can bring fresh perspective.
It doesn't take a genius to see this will end badly. Again. It just takes anyone with enough distance to have perspective.
There is a use case where it's really the best solution I've seen so far: say you need to cluster a long-running stateful service. It's written in C, so making it stateless is absolutely non-trivial, and simple load balancing won't work. Docker Swarm could work, but compared to Kubernetes StatefulSets, who is actually using Docker Swarm or other clustering technology for scaling stateful services?
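A StatefulSet is roughly what makes this case workable: each replica gets a stable network identity and its own persistent volume, which is what a stateful service that can't be made stateless tends to need. A hedged sketch, with all names invented:

```yaml
# Hypothetical StatefulSet: stable pod names (svc-0, svc-1, ...) and
# one persistent volume per replica.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: svc
spec:
  serviceName: svc          # headless Service providing stable DNS identities
  replicas: 3
  selector:
    matchLabels:
      app: svc
  template:
    metadata:
      labels:
        app: svc
    spec:
      containers:
        - name: svc
          image: svc:1.0
  volumeClaimTemplates:     # each replica keeps its own volume across reschedules
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```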
I wish we were more willing to offer on-the-job training.
But I've worked with too many people who claim to be Senior or Lead developers but can't actually explain what they do.
I've been tempted a lot lately to try to think of a software team like a sports team. Coach, assistant coach, trainers, and physical therapists all about making you think about your abilities at a different, sometimes philosophical level.
The Surgical Unit idea of Brooks has always bugged the hell out of me. I've known enough nurses to know that you don't want to put surgeons in charge of more than one life at a time and then only for a couple of hours, and letting them interact verbally with those people is a fucking disaster half the time. Not unlike some highly decorated software developers I know. They're brilliant as long as they don't actually have to help people.
The head game in software has been overlooked for far too long and to everyone's detriment. Users as well as producers.
If we had the training part right, this FOMO anxiety would be classed as a disorder.
Context is always key with these kinds of reactions. Few people feel that hex editors are unnecessary; most probably don't even know they exist. The reason you see this sort of thing with Kubernetes is that, for whatever reason, its hype over-extends its problem domain, and many people who do not need it receive the suggestion (or insistence) to use it. If someone were to tell you to use a hex editor to edit your JavaScript, you might very well reach the conclusion that hex editors are a cult, and useless. Someone might then point out that there are actual, completely justified use cases for them.

That's what I see happening here: whether it be indirect (tons and tons of blog posts and articles about moving to Kubernetes), or direct (an employee insisting that the company's infrastructure be moved to Kubernetes), or a mix (starting to see Kubernetes experience as a requirement for jobs that probably don't need it), all of a sudden you have a backlash against the perceived "Kubernetes for everything" culture (which in turn looks like a weird straw man to people who actually know what it's for).
I'm guessing that writing dyson in Nim is a tacit acknowledgement of that: if this were something geared toward production ecosystems, it would be in Go, like Kubernetes? Although there is the Helm Lua-ification, so perhaps dyson is part of a fringe of non-Go k8s auxiliaries.
Another way to implement this is with a 'static CMS', where there are still static pages except built into a situated deploy. The 'cultish' (cultic? anyway) aspect of k8s appears to be phrasing all the things in terms of k8s constructs, rather than using k8s constructs as a foundation and abstracting out.
I learned about 'rollout' from the CI portion of this post, although my initial attempts to search for a comprehensible description of it failed.
Honestly dyson is just something I wrote for myself to see how difficult it would be to write. I don't expect anyone else to use it. The tool is also a punny name, because you'd need to terraform a dyson sphere before you(r apps) can live in it.
Well, maybe it isn't _only_ what it's for, but I have thought of doing largely the same and "bundle" a bunch of assorted projects onto a single 3-node cluster, including my blog.
The only thing that's really prevented me from doing so is that I have my own micro-PaaS (https://github.com/piku) that makes it trivial to run a bunch of different apps/services on the same VPS, and the added complexity isn't really necessary.
But since I deal with k8s practically every day at customers, attrition might be compensated by not having to switch tooling.
I mean, that’s kind of my point. It’s just not designed for that use case.
Someone could build a simplified fork / derivative of Kubernetes designed for this purpose. That would be pretty rad actually, but it would cease to be Kubernetes because the complexity is the point.
> The only thing that's really prevented me from doing so is that I have my own micro-PaaS (https://github.com/piku) that makes it trivial to run a bunch of different apps/services on the same VPS, and the added complexity isn't really necessary.
How does it fare in production? I've got a tiny app with two containers (a frontend and a batch job) - it seems like a decent use case.
My website (https://taoofmac.com) is exactly that (web and batch workers) and has been running on it for almost 3 years now, but bear in mind that it does not use containers - it merely deploys relatively isolated services in virtualenvs (or equivalent).
Since I use CloudFlare (hi jgrahamc!), it's been peachy.
> Kubernetes is basically a way to run a Java-like application server that can run things other than Java. If that sounds like an appealing prospect to you, the complexity of Kubernetes may be a good fit.
I do not agree with your assessment of Kubernetes. It is not equivalent to something like WildFly or TomEE. Kubernetes runs/manages application servers (and not just Java ones), along with a whole host of other devops things, at scale... Kubernetes is great for setting up a blog, and the 11 other applications the author is trying to run.
Oh I get the author’s point, but her use case was “basically a Heroku replacement for easy deployment”. It’s just the wrong use case for Kubernetes and it is well known that deploying to Kubernetes is a bit of a nightmare.
And why is "ease of deployment" not something that someone should expect from K8s?
I've been sitting on the K8s sidelines for a bit as things iron out, and I've been deploying it on bare metal on a test bed over the last few days with the intention of using it as IaaS for some of my own apps.
That seems to be exactly what it's meant to do: keep my app running on infra, following the rules I set.
The goal isn't "ease of deployment", the goal is "infrastructure as code" so that application infrastructure can be managed in a way similar to application source code (e.g. PRs, blame, code reviews, CI, rollbacks etc). This helps ops people because it allows them to think about infrastructure as abstract resources rather than as a collection of individual machines with specific designations. With k8s, individual machines become a homogenized resource that do not need specialized provisioning depending on the application they will host.
Except Kubernetes has sucked all the oxygen up in the industry and has had a subset of adherents that trash the alternatives such as Heroku, Cloud Foundry, etc.
Anyone who suggests k8s as an alternative to Heroku is wrong. Heroku is a product that manages infrastructure so that the programmer doesn't have to, k8s is a solution for operational engineers that want a code-driven approach to managing their own infrastructure. Suggesting k8s as a replacement for Heroku is like suggesting docker as a replacement for EC2.
And yet, they both fight for the same budget. They're absolutely alternatives in that sense. Not necessarily the right tool for the job, but alas, that's not stopped the wave. And eventually, even Heroku will be based on K8s, as they see the writing on the wall.
Also note that most adherents view K8s as a replacement for (more accurately, a new API over) EC2: the fundamental unit of computing becomes the Pod, not the VM.
I'm starting to feel that whether or not you need Kubernetes is close to the old conversations about whether or not you need a dedicated DBA. Can your organization survive and recover with hourly/nightly backup restores after an incident? Is your replication so complex that you really need someone to ensure it never gets into a bad state? Worst-case scenario, can most people working on the project restore the database to a valid state if something does go wrong? I feel like these questions have similar counterparts in whether or not Kubernetes is right for an organization.
Kubernetes also makes more sense if you look at it as a common way for an organization to run applications among disparate teams with a shared operations infrastructure. It provides a model standardized enough that different things work the same way. If you are only delivering one kind of thing for the whole organization, and it all looks and quacks like a duck, maybe you should just deliver a duck instead of putting a duck hat on Kubernetes and asking it to quack.
I don't think there is anything wrong with designing an application that would transition easily into Kubernetes, but many of the proposals/PoCs I have seen in the last few years are either fresh systems that get consumed by Kubernetes complexity, or poor replacements for systems that already exist, serving mainly as resume padding for the team architecting the replacement, which gets a viking funeral as soon as they leave. Often the latter happens because the underlying architecture and goal of the system are pre-Kubernetes and don't fit the model of mostly stateless/replaceable pieces well.
There isn’t a dividing line per se; every operating model is going to have different breakpoints. In general though, I would consider Kubernetes an “enterprise technology”. If you’re a startup you’re going to be better off paying AWS/Heroku for one of their more managed services than hiring someone to build / manage a Kubernetes cluster.
A note on this - AWS (or any other host's) managed k8s will not reduce the need to understand what's going on under the hood. It's still k8s, you're just not running the daemons to make it work (which is, arguably, the easy part).
Kind of implicit in the other responses here are use cases that call for a full cluster and all the complexity that goes along with it.
But running a single node cluster is totally valid. I use one at home to run arbitrary containers (a DNS-over-TLS proxy, VPN server, radius server, network AP / switch manager, etc.). I don't use load balancers and just use host-based persistent storage, which removes the vast majority of the complexity.
I've seen a lot of people get wrapped up in the complexity of wanting a process backed by persistent storage to go down on one node and come back up on another, and that's not unreasonable, but that's a lot of stuff to figure out early on.
If I weren't using this as a single node k8s instance, I might use it to manage VMs, which might be easier to understand but much heavier and more work to maintain. With what I have now, I've got a folder of YAMLs that defines everything to run on my node, and I'm able to easily put all their persistent data in the same top-level dir for easy backup.
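As a rough sketch of what one of those YAMLs might look like (the name, image, and paths here are made up for illustration, not taken from the parent's setup), a single-node service with host-based storage can be as small as:

```yaml
# Hypothetical example: a DNS proxy pinned to host storage on a
# single-node cluster. Name, image, and paths are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dns-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dns-proxy
  template:
    metadata:
      labels:
        app: dns-proxy
    spec:
      containers:
        - name: dns-proxy
          image: example/dns-over-tls-proxy:latest
          volumeMounts:
            - name: data
              mountPath: /var/lib/dns-proxy
      volumes:
        - name: data
          hostPath:
            # Keeping every app's data under one parent directory
            # makes backup a single copy of /srv/k8s-data.
            path: /srv/k8s-data/dns-proxy
            type: DirectoryOrCreate
```

A `hostPath` volume like this ties the pod to one node, which is exactly the trade-off a single-node cluster accepts, and it sidesteps the shared-persistent-volume complexity mentioned above.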
I think the perception of k8s might change over time once people realize that it provides a lot of value even if you completely rule out the tougher stuff to do on bare metal (like load balancers and shared persistent volumes).
Yup, meant to say >= 99.9%. Of course most businesses operate more like what Cloudflare makes explicit with its "100%" SLA: architect for 3 nines and then just pay the penalty when the 3-nines architecture doesn't hit 4 nines.
I think devs often make bad decision makers because in some sense tech is often an addiction rather than a pragmatic choice.
The cycle of picking a tech, jumping ship to it, religiously evangelising it, riding the wave and then jumping ship to the next related tech is typical in my opinion.
I try hard to correct for this bias but sometimes struggle with exactly the same thing. There's just something about wanting to have a uniform "world-view" with fewer explanatory variables that never stops being motivating.
Part of the problem is the hiring process (plus attitudes seen on here).
Your resume needs to have lots of fashionable buzzwords rather than pragmatic, good-enough / keep-it-simple choices. You must keep on learning (lots of things, rather than mastering any one thing). I can write a really nice site in standard Django with some jQuery, and it will take me half the time that adding React to it would. But adding React will make me much more employable and get me a better wage.
It seems like at some point around five years ago, the three-tier architecture with its division of labor vanished overnight. I'm not saying things were perfect back then, but I've never seen any objective demonstration of why it was replaced.
I went from having to be mindful of a few configuration items that arose from deploying my WAR to different environments, to slogging through configuration hell in the Terraform and AWS world. I've been learning way more about Ops than I ever cared to know, while at the same time becoming a -10x developer in terms of shipping business value.
The real trick is to make your site load so fast people swear it's magic. I use a combination of serving things from RAM and https://instant.page to do this with a fairly boring, plain-old-HTML, rendered-on-the-server app. I even got a Progressive Web App out of it too.
Honestly, very true. After doing some brief work for a financial services company, the one thing they were consistently surprised at was how fast the application ran!
Yeah, duh. I render simple HTML templates on the server and serve them as browsers expect them, not with a thousand lines of JS on top.
The "Pages not preloaded" page notes that it excludes addresses with query strings just in case they run some action that you might not want to trigger on hover. You can override the default behavior if you know it's not an issue.
I'm reminded of a post from a few years ago where someone's website had a table of items with [delete] links and would take database actions based on GET requests to those URLs. Who cares? It looks the same to a human browsing it.
And then it got crawled by a search engine which followed all the links to see where they went.
But if you're not doing anything unusual like that, I don't see how prefetching HTML would cause any problems.
Rails with Turbolinks or Django/Laravel + pjax is good enough for most purposes. When Kubernetes first appeared it was laughable if you used it for anything less than provisioning a massive fleet of servers. Now it's something you sprinkle on your corn flakes.
Yep. We just started implementing it at my place. I had only just started and wanted to say that it seemed like overkill, but it was under way when I started and bringing that up in my first week didn't seem like a good way to start.
To be fair, it has reduced our server costs a bit (after maybe six months of developer time). I am unconvinced it will be worth the hassle.
We are in Spain, not San Francisco, so a fair bit lower than that. If the startup goes well and we need to scale, maybe it will be worth it. And it does give us the advantage of high availability.
Though one comment I saw about Kubernetes on here a few weeks back concerned an old schooler like me. The guy suggested that if something goes wrong, just kill the pod and let kubernetes bring up another. Apparently that's the way you are supposed to do things. Something seems really wrong with that approach to me. Just throw resources at the problem with very little understanding of why things went wrong.
It seems to me that the requirements for personal infrastructure and professional service-grade infrastructure have drifted so far apart that essentially, if you know one world you don't (automatically) know the other at all.
Tbh, I have no real world experience in this, so it might just be my own delusion. However, I've recently started getting into self-hosting some of the services I use. I'm using a simpler infrastructure than what OP described and while it is the right choice for me and a useful skill to have, I feel like it absolutely won't get me anything in the sysadmin/ops/etc. job space. I've actually considered adding more "enterprisey" tech to it (like Ansible or comparable stuff) just to make it more sexy for recruiters.
The cycle of picking a tech, jumping ship to it, religiously evangelising it, riding the wave and then jumping ship to the next related tech is typical in my opinion.
It is typical for devs.
Meanwhile ops have to support every half-arsed tyre-fire technology until the end of time, because a dev wanted to try it once, and now it’s in prod with users relying on it.
Kubernetes is in a sense the pushback against that “do what you want, as long as k8s is up, what you run in your pods is your problem, not ours”.
It's typical for web application devs. There is a huge ecosystem of software developers outside of web services who are much less fad-happy and much more focused on using established tools to produce useful, reliable systems.
Webdev is where the money is. It's where people with a CS degree or programming experience are most likely to find a way to put food on the table. Everything else requires more expertise and, aside from the most specialized of applications, pays less money. So as it is, webdev is the center of the universe, and RDD is table stakes for being considered a professional in the field.
DBEs as well. We're constantly getting new database back ends for apps to the point that the Ops DBAs support some 9 backend database solutions. Granted, that probably falls under WebAppDevs for the most part.
Being old doesn't mean good. In my experience, using J2EE or Spring to make a web app is grossly overcomplicated (I have heard of, but not yet used, Spring Boot). ASP.NET is fine, but anyone who is paying $$ for it is probably a dumbass.
> “do what you want, as long as k8s is up, what you run in your pods is your problem, not ours”
Until somebody cyberattacks those pods and steals all personal data of your users because the devs didn't bother to apply security patches. But hey, it's not your problem. You are not responsible for the pods. k8s is still up.
True. I own all 24 clusters from a management perspective plus own the core OS container they use. I rebuild the OS container, patch, and upgrade the clusters quarterly. I currently have to manually check to make sure they're not using some third party OS container and reject it if they do. I'm working on a PodSecurityPolicy that enforces that so I don't have to manually do it any more. They are fully aware of this because I'm part of their process, attending their scrums and adding lifecycle bits to their Jira backlog. It was initially a shock to them and pushback happened but since I "own" the environments, and could provide good reasons for it, and showed them it didn't adversely impact their workflow, they seem good with it. I can't say they aren't complaining about it among themselves though :)
But hey, it's not your problem. You are not responsible for the pods. k8s is still up.
But that has always been true. If a dev leaves a SQL injection for example in the code and it got penetrated, absolutely no one would blame the sysadmin for that.
In the case of sql injection the responsibility indeed weighs more on devs. But often it's a grey area. What about upgrading openssl lib for example, or patching Struts framework (see Equifax hack)?
My interpretation of DevOps is that it's one team with shared responsibility and not "shove your stuff in that pod and don't bother me."
I think one root cause is that the two demands Dev usually have for Ops (keep the system protected and up-to-date and keep the developed software working in a well-defined environment) are sometimes directly conflicting - and developers don't always seem to realise this can be the case.
E.g. you could imagine some extreme case in which dependency X, version N has a critical vulnerability - but at the same time, the developed software relies on exactly version N being present and will break horribly on any other version.
You'd need Dev and Ops to actively work together to solve this problem and no amount of layering or containerization would get you around that.
I worked for someone who had the mindset that whatever technology developers wanted was always good, and ops should just shut up and put up with it because devs are the ones that make the money for the business.
And I've worked at companies where the devs were expected to know their place and not question ops, because ops were seen as the serious adults in the room keeping things running, and devs were seen as easily distracted children chasing after the shiniest thing that most recently caught their attention. Made perfect sense when I was on the ops side and was super annoying when I was on the dev side :)
You jest, but a dev chucking an insecurable thing over the fence to ops is very common. I will bet that's how there are so many open MongoDBs out there.
Having worked in both DevOps (or Ops as we called it in 2002), there really was a belief that the developers were stupid, and they'd burn the whole place down if we gave them any leeway. As a developer, I've seen DevOps as a frustrating gate at times. The only things I think can fix this divide are communication and built trust. (and probably less assumed malice)
> Having worked in both DevOps (or Ops as we called it in 2002), there really was a belief that the developers were stupid, and they'd burn the whole place down if we gave them any leeway.
I've been on both sides of this divide myself, but have spent the last fifteen years or so as a developer. In my experience, developers will burn the whole place down if we're given the chance.
We're focused on writing code, and it's boring to write the same code over and over: we want to write new code, in exciting ways, and we are surprised when it fails in exciting ways.
We're focused on delivering features; our incentives are all about getting it done, not about getting it done well or supportably (our industry doesn't even have a consistent view of what's good or bad: note that C/C++ are still used in 2019). Some organisations really try hard to properly incentivise developers, but I've not seen it really work yet. DevOps is an attempt to incentivise developers by getting us to buy into ops. I've read a lot of success stories, but not seen a lot of success with my own eyes.
I do my best to be diligent, I do my best to wear my Ops hat — yet I still fall down. I don't think that it's unavoidable, but so far I've not avoided it, and I've not seen others avoid it either.
Smaller companies I've worked for don't really seem to suffer from this problem although once companies are larger and have separate teams (and, perhaps more importantly, managers who are incentivized in different ways) this problem always seem to arise.
I've seen this in a 50-person volunteer group. The devs turned up every year with a proposal to throw away and completely rewrite what they'd done the previous year. No incremental upgrades -- a complete rewrite every time.
This worked out great, given that several other business systems relied on their vanity toy, and invariably the API changed with every release.
There's a balance to be struck between 'never change anything because it's always worked' and 'new shiny every week'. In my experience it's an absolute nightmare getting people to agree where the line is, and on top of that, get management to buy-in and push-back when either side oversteps.
The company I work for went even further. Ops doesn't support anything in public cloud past the basic connectivity to the corporate network. Everything created in public cloud is the product/dev team's responsibility. Kubernetes? Not their problem.
> The cycle of picking a tech, jumping ship to it, religiously evangelising it, riding the wave and then jumping ship to the next related tech is typical in my opinion.
To balance this with a counter example from the quieter group of people not "hot for the latest tech":
I'm a "dev" and i've never had this problem, however I work for a small company, where everything I make and deploy I also have to maintain in some form or other. This gives me a strong bias towards operational simplicity and trying to essentially eliminate dev ops... New tech which is both complex and opaque in solution without clear cut advantages is basically repulsive to me, because trust and reliability without constant attention and tweaking is important.
Eliminating dev ops is doing it right! The whole entire point of devops, as it was originally formed, was that making developers bear the load of operations would encourage them to simplify and automate it.
- zero-downtime deployments (yes, you can have downtime, but every time you deploy an app?!)
- scheduling more than one thing (no company has a single product that is just a single binary, or at least nearly no company; there are some unicorns, though)
- some kind of automation (this is complex, no matter what you use)
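To make the first bullet concrete, here is a hedged sketch of how zero-downtime deploys are usually expressed in Kubernetes (the app name, image, and probe path are placeholders, not anything from this thread):

```yaml
# Illustrative only: a RollingUpdate strategy keeps old pods serving
# until new ones pass their readiness probe.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below desired capacity
      maxSurge: 1         # bring up one extra pod at a time
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.2.3
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
```

With `maxUnavailable: 0`, Kubernetes only tears down an old pod once a replacement passes its readiness probe, so capacity never dips during a rollout.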
Oddly, a lot of small companies really don't need this. If your customers are mostly businesses in a limited set of time zones, having a maintenance window outside of their business hours is probably easier.
> I think devs often make bad decision makers because in some sense tech is often an addiction rather than a pragmatic choice.
I also think this has a lot to do with how devs spend our time: with the tech itself. Whether your application is running on Kubernetes or a box in your garage matters to precisely zero customers as long as it performs well, but as developers we spend our whole day dealing with various APIs and technologies, so we develop an outsized sense of the importance of those things.
Which is why I think it helps a lot to work in domains where shipping software is not the core business, just a cost center that keeps the real business going.
One quickly learns that the business has a completely different set of priorities, and tending software like little bonsai trees is not one of them.
I think jumping on new tech and marketing yourself is a good decision for a developer, as it's a good way to increase your compensation and market value. If you're a developer stuck maintaining a Java Spring app at some unknown company, the best way to make a shift is to pick up Go or something and move to a startup. Otherwise your career will stagnate.
The best way to get promoted at many companies is to write a framework. The best way to get noticed is to write an open source framework. And so on.
Is that true, or are you inadvertently comparing the cost of living between SF and other major cities?
Hip technologies are being used in SV, and they have to pay tons of money just to keep the talent pool large and circulating.
Older technologies are used in other cities, and there the market forces aren't so crazy.
But a good Java dev can make plenty of money in SV, and a Go developer will make a competitive salary by Dallas standards but not by SV standards (and probably have a harder time finding a new job).
I am currently working in NYC (living in NJ), with a total comp that is more than 3X what I was making when I left Dallas. Based on the market there, I would still probably be making 40% of what I do now had I stayed, and the company would not have been as good.
For the record, I have been using a JVM language as my primary work language since early 2012.
It's not so simple. For example, if your skill set is in demand, you can easily trade up to a better company than if it weren't. This has been true in my own career. Also, the bar to entry would be lower. So for this reason, if you're breaking in right now, learning React is better than learning Spring.
Java will be around forever, and becoming an excellent Java developer will absolutely remain highly lucrative for a very long time. That's its reputation at the companies I've worked for over the last 10 years (all startups). Go is more of an anti-language, IMHO. Among myself and similarly minded colleagues, I would say its main attraction is its lack of features, and it seems a haven for people who are grumpy like me. I always get a chuckle when I see it framed as "hip" because it's just never felt like that to me. Elixir is hip. Rust is somehow hip. Go, just not in my experience.
Range of employment options. Possibly salary, though that's more variable. There are some jobs keeping the lights on with legacy tech long after it is done being the hot thing, but typically with any particular stack it's a shrinking number of jobs, often with shrinking average real pay, unless it hits a phase where the decline in people able to do it exceeds the decline in work.
If you are riding out the last few years of your tech-focussed career (whether that's before retirement or before moving out of hands-on tech into, e.g., management) that's maybe not so bad, but if you're planning to be in tech for a longer period, it's potentially extremely career-limiting not to adapt to current market focus.
> Range of employment options. Possibly salary, though that's more variable.
I'm not sure this is true. Most of the shops I've been in don't care about whether you know this or that language or library. You're expected to learn that as you need to. Most of what I've seen cut people from interview loops is missing fundamentals.
From a programming perspective, possibly. As an Ops Engineer, I'm having a hard time shifting jobs. Where I work now, it's heavily siloed so I can't shift into a CI/CD team because it's a different team or the Product Engineering team because they don't do Unix administration, automation, or Kubernetes (other than the deployment aspect). I focus on automation with shell scripts and Ansible plus Tower to get Infrastructure as Code going. I took on the Kubernetes role, and am the single point of failure for the 24 clusters I manage. And now management is asking what support contracts we have for Kubernetes (me, it's just me and asking questions in various places on the 'net). Add in that I'm taking courses for the CI/CD toolset and implementing them on my homelab. But I still can't get a bite on shifting jobs.
The job might not be as interesting as riding every tech wave, but on the plus side there are plenty of tech waves you save yourself from riding.
Plus one gets to rescue projects that ended up betting on the wrong waves, getting back to boring old tech.
I mean that Kubernetes is the NoSQL, CoffeeScript, BigData, Grails, SOAP... of 2019.
It is a bit unfair to Cobol, given that its latest revision is from 2014, and, verbose as it might be, it supports most of the nice features of any modern multi-paradigm language.
SOAP was/is a pretty stable technology that did exactly what it promised to do, without too many releases or breaking changes, for about 10 years.
Even today it has a good utility for the situations it is designed for...
RPC over a well known standard format, for tightly coupled endpoints, that require metadata, enforced schema, security, and perhaps transactions.
The big problem for SOAP is that it was the default for web services for a long time, when in reality a big shift happened around 2008, after which web services were most likely NOT going to fit those constraints. Just my 2 cents.
Confirmation bias (where you've spent some time on k8s or whatever, and now you just want to cash in on your time loss, objective criteria be damned)
Generational churn (where you find yourself in a field where everything has been said and done, and you just need a new buzzword on your resume to start over; this goes hand-in-hand with corporate IT longing for fresh and cheap staff and their stack in need to look sexy)
Big media (where extremely large infrastructure runs on k8s or whatever, and gets disproportional airtime, because cloud providers want to sell you lots of pods, and people not checking whether the proposed arch is a good fit)
Decision fatigue and opportunity-cost aversion are big factors too, I think.
When comparing consumer products where there are lots of choices, I find myself finding an OK option and 'falling in love' with it. When I reflect, it's basically a way of cutting through all the reviews: deciding that one is the best, so I have no reason to regret buying it or to do any more trawling through reviews and comparisons. I'll just buy this one and be done with it.
Not just devs, it's really a management problem all round.
Management (top bosses) often seem to want the latest thing, e.g. Big Data. It doesn't matter that it'll cost a fortune and you'd get better results on a single server.
And if the devs are out of control and pushing for %tech% and getting it, that's management at fault. To be a good manager you need to understand what your employees are doing. I've met too many that don't.
I've been around long enough to see a couple of iterations of this. Being able to spot when something is about to fade away and something else come into focus is a valuable skill for consultants. I suppose it's necessary for tech progress, but, man, a lot of money gets spent chasing the new thing.
I think the larger reason is that devs who don't act like that, or at least don't pretend to, are considered less capable by many. Somehow pragmatic decision-making is seen as a lack of passion.
The other side of the coin is that there are very real improvements in newer tech and companies, in my experience, are only willing to support continuing education that is directly related to the tech stack that they are using.
So a developer that doesn't want to deal with already solved problems and who wants to advance their knowledge is incentivized to push for jumping to the newest tech.
I've suspected this to be the case almost everywhere I've worked. Another reason it happens is that anyone questioning the adoption of a new tech risks looking like they don't understand it.
However going the opposite way (sticking to one reliable tech stack and refusing to change even when something better comes along) could be just as damaging to a business.
How then, do you build a culture where people are open-minded to new tech without feeling obliged to jump on every bandwagon? I don't think I've ever seen an organisation get the balance quite right.
Actually trying all promising new technologies is another full-time job, or at least takes 20 hours a week.
Most developers just don't want to be left behind, so they pick up whatever is trendy at the moment. It's completely rational, because knowing what is trendy gets you hired.
However, implementing what's trendy without carefully weighing pros and cons is what's dangerous.
I think maybe it's easier to blame an external framework in hindsight, than to take the blame for some smaller solution that you personally created in-house.
It's an interesting mix of comments on this post, where half are the typical "new tech switcher" trope and the other half are "I use it at X / it solves my problem X". The first is expected; the latter shows smoke and a bit of fire around Kubernetes. I use Kubernetes and I usually don't like new tech, but Kubernetes solves so many real/hard problems with dev and ops (enough that I'm willing to work with the problems it creates for me). In my experience, it's the real thing.
Kubernetes is our one shot at having the universal vendor-neutral cluster interface. The fact that it's time-consuming to do simple things directly against it doesn't surprise me, in the same way I'm not surprised that writing a todo app directly against the POSIX abstraction would be time-consuming. It's a great way to learn how these interfaces work, though.
Knative[0] pushes in that direction from the side of "complicated" Kubernetes. It's still far away from easy, but I expect that the solution will look like this -- a software that uses Kubernetes base to provide high-level primitives. Helpful cloud provider will give you a cluster with such thing already installed, as Google already does for Knative with the Cloud Run offering.
Microsoft allows you to publish a web application from a Visual Studio project to Azure.[1] It's very simple, but much more opinionated. It's a great trade-off for an individual developer who needs to focus on functionality. In the context of this discussion, there's an important distinction -- it's not an interface, it's just a feature. It's tightly coupled to Azure on one side and to the Microsoft dev stack on the other.
Oh yeah, as a C# developer I definitely am familiar with app services.
But many organizations would rather not directly pay the costs of app services and instead indirectly pay the costs by making their developers tool around with Kubernetes.
The simplest technique for a blog, IMO, is using a static site generator. Deploying static assets is simple, and you have your pick of generators/languages.
They're not included in the base binary, and are instead provided as plugins. You can find a list of them here [1]; they've got AWS, Azure, GCP, DigitalOcean, vSphere, Docker, OpenStack, maybe a couple others.
These posts about kubernetes are so ridiculous. Installing kubernetes on multiple servers is not difficult, and understanding the components is pretty straightforward if you've ever worked on a distributed system.
If you don't want to be in the "cult", don't use it. Meanwhile I'll be writing Service and Deployment YAMLs and avoiding all the proprietary, expensive AWS BS.
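For readers who haven't seen them, the Service and Deployment YAMLs in question are short. A minimal Service looks roughly like this (names and ports are illustrative, not canonical):

```yaml
# Illustrative Service: exposes pods labeled app=web inside the
# cluster; the matching Deployment would label its pods the same way.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80         # port the Service listens on in-cluster
      targetPort: 8080 # container port traffic is forwarded to
```

The Service finds pods by label, not by name or address, which is part of what makes manifests like this portable across providers.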
So what? It wasn't designed to make it easier for hobbyists to deploy their weekend projects, it's meant to provide ops-engineers with an infrastructure-as-code abstraction for distributed applications.
Exactly. The point of the first meme in the blog post isn't "kubernetes is so overcomplicated why would anyone deploy their blog on it", it's "your blog is nowhere near complex enough to merit running on kubernetes".
Everyone here is blaming the truck company instead of the person that bought a flatbed to transport a 2-lb box
The fundamental flaw with Kubernetes is that the UI is so bad: the abstraction is leaky and the naming is confusing. Then again, a bad UI certainly didn't stop git adoption.
Now imagine how hard it is for executives that don't know anything about technology to aid in making long-term strategic decisions that depends on this!
> Now imagine how hard it is for executives that don't know anything about technology to aid in making long-term strategic decisions that depends on this!
Executives that don't know anything about technology shouldn't be aiding in making strategic decisions that depend on this. They might make decisions on the advice of others (including, hopefully, executives who do know something about technology; CEOs may make the decisions in some cases, but with CTOs and/or CIOs in the loop, and if those know nothing about technology, that's as bad as a CFO knowing nothing about corporate finance).
I don't know what it actually means to be a CTO or CIO, but I think a lot of them have been spending the last 15 years working out strategic visions and reading articles about tech trends.
It's really hard to know this stuff unless you are down in the weeds every day.
People are just responding to the title. The author was successful in migrating! It's mostly an article about how immature GitHub Actions are now that they're just out of beta.
Edit: As noted below, GitHub Actions seem to still be in beta. The original point stands.
AFAIK GitHub Actions are still in beta. I had some time set aside today to set up some build actions for a project I'm working on, but was roadblocked by a "request access to the beta" page. I do agree with your main point re: the article, though.
step 1) keeping a service up and running is hard. we have all these issues and it seems like we are struggling to do simple things
step 2) if only there were some magic tech that could solve all these issues. and have a cool name. and we could put it on our resumes... drum roll: K8Sssssss
step 3) bro. it’s working. i don’t really understand what it’s doing but look at all the containers we are running. and the config... super configurable. we’re devops we can figure this shit out right?
step 4) what do you mean we have to update the k8s version we’re running on? we barely got this one working. ahh... the beta tools we were using got a bit more polish... makes sense....
step 5) sob silently when you realize that the work k8s has supposedly saved you now goes into maintaining the k8s cluster. reminisce about the good old days when you could just xcopy-deploy your app.
epilogue) in the age of the cloud, k8s makes zero sense to me. use the abstractions provided by your cloud and focus on writing your crappy app. you're not google or amazon. you don't have to solve the problems they do and you'll probably never have their scale. oh? you have thousands of bare-metal servers and are looking for a solution that can help manage them? you can also afford a dedicated ops TEAM to manage them? (dave jumping on the latest tech trend does not count as a team). go ahead!!!
Yesterday I caught up with an old friend from where I grew up, for the first time in 12 years. He said many good things about Kubernetes.
He's working for a Swiss bank, and managing an ops team. In his case, I think k8s makes sense.
Then there's me. Doing remote work, struggling to sort out visas, wishing I could have his kind of life and stability, wondering how it all went wrong. To get a job like his requires experience in Kubernetes, Oracle, etc. I can't get that kind of experience with side projects.
"dave jumping on the latest tech trend" is probably trying to build up his résumé, rather than actually help the company. As someone who needs to build up his own résumé, what do you suggest to gain experience in this kind of technology? I don't want to intentionally deceive the company to let me try it when I know they don't need it. But I know that I need to learn it somehow.
there is more than one type of dave. there is the “jump on it, run it in production and make it the next guy’s problem dave” and there is the “play with it to learn what it is in a safe context/env dave”. you can definitely stay up-to-date with tech without betting the farm on it.
2nd thought is that as an employer you don't want one-trick ponies. You want people who understand the fundamentals and can learn and adapt. i will take a person who has good fundamentals and is curious and constantly learning over a person who merely knows a technology, every day of the week.
You do realize that one of those cloud services you can use is k8s, right?
GKE is pretty amazing. They manage the k8s control plane for you, offer worker node scalability and you can use a decent, intent based, automatable API for declarative deployments. No need to mess around with VMs or proprietary lambda/serverless stacks.
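As a sketch of what that declarative, intent-based model looks like — the name and image below are placeholders for illustration, not taken from any particular setup:

```yaml
# deploy.yaml -- hypothetical example; name and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # declared intent: "keep 3 copies running"
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25  # any container image
          ports:
            - containerPort: 80
```

You `kubectl apply -f deploy.yaml` and the control plane converges the cluster to that state; there is no imperative "create a VM, install packages" script to maintain.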
yes. i realize, but having a 3rd party manage the control plane for you != you managing the control plane. again, this comes down to delegating the work to someone that does this for a living. You could say you're using k8s at that point, but you're definitely not operating a k8s cluster.
But is anyone arguing that using k8s => you must be running your own control plane? This seems like a straw man. You're not saying 'running your own k8s makes zero sense', you're saying 'k8s makes zero sense'.
What's the cheapest you can run a k8s cluster in the cloud? I've been looking to spin one up in AWS, but it looks remarkably expensive for running personal projects.
If you go with a provider that provides the control plane for free, which is the way Google Cloud and Digital Ocean do it (and probably many others), a single-node cluster is actually a valid cluster. It won't have redundancy / High Availability, which is Kubernetes' raison d'être, but it works well. In that case, Kubernetes is no more expensive than non-Kubernetes. In the case of Digital Ocean, $10/month.
Ignoring HA requirements, will this work fine with a single beefy node in a home network, or would I be better off running multiple VMs on that single server which then run separate Kubernetes nodes?
You can get a single-node k8s cluster running super easily with [Minikube](https://github.com/kubernetes/minikube). The more recent Docker for windows/mac actually comes with a kubernetes distro that piggybacks off the docker vm.
Consider just running Minikube on your laptop. It’s pretty realistic and won’t cost you a penny. Except maybe in electricity, it seems to consume an enormous wattage just to exist...
K3s is fantastic. Lately I've been using K3d (which is the same thing, just in a docker container - much like Kind). It's super easy to spin up a cluster, and spin it back down with nothing really to clean up.
I’m building a startup that provides hosted, shared Kubernetes clusters starting at $0/month. https://kubesail.com - I agree with everyone on this thread that using k8s for a blog is like building your own house from scratch - but the analogy breaks down when everything underneath the Kube API is managed and set up for you - at that point it just becomes a standard, open cloud API :)
I'm not very familiar with kubernetes, what do you get out of it that you don't with dokku? I really like dokku and use it on my personal server for all my half-baked personal projects.
If you just/first want to practice actual, multi-node k8s on your local Mac (or Windows), I've just completed this: https://github.com/youurayy/hyperctl
Same here, I run K8s both locally and on GKE for a project. My GKE cluster is just 2 nodes that have 2vCPU & 3.75GB RAM each. Performance is great and it has saved me an insane amount of time. I have also created an open source project that does one thing - updates your deployments :) https://keel.sh. Previously I tried several different hosting options but nothing is easier/more convenient than k8s for me.
Hi, my shameless plug (I am the creator of webhookrelay): https://webhookrelay.com/v1/guide/ingress-controller, using it both for services that are running in GKE and on minikube locally. It's cheaper than allocating a LB IP for backing services that don't get much traffic like Grafana and similar things.
One possibility (especially for "home Kubernetes" case) is not exposing the services to the outside world at all and using ZeroTier to access them https://www.zerotier.com/
It's an L2 mesh VPN, and I believe you can even use MetalLB with it with some minor trickery.
You can, of course, set up WireGuard or OpenVPN for yourself, too, but from my experience zt is the simplest for accessing the boxes behind NAT as you don't even need to set up any servers with real IPs.
DNAT. You map one/more ports from your router exposed on internet to ip:port of the local app.
However, the http/https ports are often already used on routers to serve the admin web GUI. It’s technically possible to circumvent this with some ad-hoc firewall rules, but it depends on whether the router admin UI lets you do that.
Exactly, they aren’t exposed outside. That’s why you can “potentially” add rules to route requests from the outside to an internal host:port, even 80/443. On the LAN you would still be able to connect to the router admin.
Assuming you are running on physical hardware how are you managing storage? I've tried Rook but it seems somewhat buggy and overkill for my requirements (rsync would do).
I'm using K8S (with Rancher's k3s) at home too! My main reason is portability. When I need to unplug one of the Raspberries or move all services to somewhere else, I only need to change the storage layer.
So you're running bare metal k8s at home? What do you use for storage? That's my biggest question in how to move from minikube at home to a true cluster.
To simplify or encapsulate the complex, and to save resources, was common sense in the past. The opposite is the rule now, and it's like an Orwellian and Kafkaesque nightmare. Today, working in software development is getting paid to do meaningless sh!t and knowing it's wrong.
Kubernetes solves several problems. The big question is whether YOU HAVE those problems. If no, then K8s is indeed questionable/not needed.
I have a single question for the author of the article:
Have you ever worked for a company where you must manage/deploy to N servers where N > 100 and applications are written in M programming languages where M > 3 ?
F.D.: I work for Codefresh, a CI/CD solution with built-in Kubernetes support.
I use Docker Swarm for my personal projects. It works, and when it doesn't, I can hit it with a hammer and knock all my sites offline for a few minutes and no one will notice. Then it comes back and I can forget about it for awhile again.
I'm looking at switching to Kubernetes. It will be more effort to set up and maintain, it will cost more in terms of infrastructure, and I have to write more automation to handle side effects. The benefit would be that I then know how to do this when I inevitably have to set up Kubernetes for my day job (I'm a consultant).
Automating the installation and configuration of apps on a raw VM would be far easier and make more sense for personal projects. I don't do this because I already know how to do this.
I once set up an Elasticsearch cluster in Kubernetes.
My conclusion was that it was totally redundant because in the end I made a few super-nodes that each run one super-pod (which is basically just one node from the elasticsearch cluster)
It was confusing to think about the nodes of the cluster: is it a Kubernetes node, or an Elasticsearch node?
after I was done, I felt terrible, but it was already working so I had to shrug it off.
It is also kind of funny to think that this is all running in a virtual machine which provides one Kubernetes node, which runs a Docker container, which runs one Elasticsearch node on the Java Virtual Machine.
Kubernetes is for when you need to run thousands of containers on a large cluster and you want to be able to do it both locally (on-prem) and in the cloud. It's for cases where you already have operations staff who can manage it on-prem for you. It's for cases where you have a dozen teams and you want any developer from any team to be able to spin up a fresh instance of software from any other team for dev, testing, etc., all on one pool of shared hardware. It's not for your blog, and it's not for your startup of 10 people where you can just use purely managed services on AWS or Google Cloud.
(That said, there's obviously nothing wrong with using a personal blog as a playground to learn new technologies.)
Thank you for posting your experience. I am starting on the same path. I am at the point where I got the deployment working and a pod setup. Now I need to be able to access my service from the Internet using a DNS name and SSL certs. Before reading your post I didn't realize how many supporting characters would be needed to do this. It feels daunting and am grateful you posted your scripts and configurations.
I'm running AWS k8s. Did you look into AWS k8s, and if so any reason why you didn't pick it?
edit: 2nd question. How is service discovery handled in your setup?
k8s on AWS either requires managing the master(s)/etcd yourself or paying Amazon to do it via EKS. Amazon charges $150/cluster/month for the masters/etcd in EKS. As I understand it, Google and Digital Ocean charge $0/month for the master(s)/etcd.
So if $150/cluster/month is an amount that matters to you then you shouldn't use EKS.
I'm using DigitalOcean for O(money) optimization reasons. I also wanted as "vanilla" a Kubernetes as possible while having it be managed. DigitalOcean seems the least bad pick for this.
What do you mean? That term is so overloaded it could mean anything.
I got my answer on the service discovery. It's basically done for you by the spec.selector portion of the Service definition, where you tell it which Pods to include in the service.
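For anyone following along, here's a toy sketch in plain Python (illustrative only, not real Kubernetes code) of the matching rule a Service's spec.selector applies — the pod names and labels are made up:

```python
# Toy model of label-based service discovery: a Service's spec.selector
# matches Pods whose labels contain every key/value pair in the selector.
# (The real cluster does this continuously via the Endpoints controller.)
def select_endpoints(pods, selector):
    """Return names of pods whose labels satisfy the selector."""
    return [
        pod["name"]
        for pod in pods
        if all(pod["labels"].get(k) == v for k, v in selector.items())
    ]

pods = [
    {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "web-2", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "db-1",  "labels": {"app": "db"}},
]

# A Service with selector {app: web} picks up both web pods, and keeps
# tracking matching pods as they come and go.
print(select_endpoints(pods, {"app": "web"}))  # ['web-1', 'web-2']
```

The nice property is that the Service doesn't name pods at all — any pod that shows up with matching labels automatically becomes an endpoint.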
I'd recommend GitLab Auto-DevOps, which provides a Heroku-like experience on Kubernetes (although I'm sure there are others in this space). That way you don't have to learn Kubernetes in one big bang. You'll still probably need/want to learn Kubernetes eventually so you know what GitLab is doing for you and some of the tradeoffs involved, but it's not a roadblock to getting stuff done, and it starts you off with a fairly sane configuration.
I'd suggest reading the book "Kubernetes: Up and Running"; it's short enough and covers the basics of k8s. It's also important to get hands-on with a k8s cluster, so Google Kubernetes Engine would help a lot with spinning up a cluster.
I had to get up to speed on k8s at work this year and from that experience I would recommend to use minikube at first. Only go to GKE or such once you outgrow minikube. Doing minikube first also makes for a nice way to develop an application. Minikube for local development and testing, then deploy to GKE for QA & prod.
I did so in the beginning of 2018, and after reading around and talking to some cloud people, I got the impression that the best idea is going right for serverless unless you have a real need for the stuff that k8s gives you over it.
The other suggestions are great; it boils down to: run Kubernetes and use it. I like minikube for my local system, but it won't help you learn how to run Kubernetes itself.
I use Kubernetes extensively and the greatest benefit to me is that it became an implicit standard API interface for any Container type workload.
This means I can launch my clusters on AWS/GCP, on prem or any other cloud provider and use the same high-level objects to deploy my distributed application.
I will agree that a lot of people go overkill with Kubernetes and you absolutely don't need Kubernetes to deploy a simple web server. As usual use the right tool for the job.
Calling it a cult after growing up in one is a bit insulting. But, yes, I don't run much on my personal K8s clusters. But, I've lost count of how many problems Kubernetes has solved for folks I've worked with. It might not solve the simplicity use cases but, that's not the intended design of the tool to begin with.
I was trying to get airflow (https://airflow.apache.org/) up and running in the cloud. It has a few moving pieces that I didn't want to worry about. So I used a helm chart to install it - https://github.com/helm/charts/blob/master/stable/airflow
Maybe this is overkill but it was reasonably quick to get up and running, scaling workers will be trivial, and the Airflow Kubernetes Operator opens up a lot of options for tasks. The vanilla install was pretty quick. Understanding the configurations took a bit more time.
I would like to hear other thoughts about simpler ways to deploy this that don't require Kubernetes.
There aren't simpler ways to deploy cluster software. Kubernetes might be overkill for a blog, but it's not overkill for a lot of cluster software. And with Helm charts it's really easy to run it.
I gave up at Dyson running Terraform running Kubernetes. He's right - dev tooling has become a cult. Give me PHP+LAMP on a VPS any day over this madness.
Dyson, Terraform, Helm. I don't quite understand why to add so many additional tools for simple deployments? Just use kubectl?
And if you really have a use case for using all those tools then it doesn't seem like your deployment is that simple anyway so is it that surprising that the configuration seems complex?
So the author was previously using dokku. Dokku was cool, and was basically a docker-based Heroku clone. The keyword here is Heroku, because dokku abstracts away a lot of the operations work that is needed to actually run a service, at a small scale, with reasonably good results. Dokku was cool, but it didn't really scale beyond one node (unless you have multiple machines running dokku, of course).
So with dokku gone, the author still wants the features of Heroku, and still doesn't want to perform all the operations work. So the author goes to Kubernetes and realises that there's operations work to do.
Guess what? Somebody still has to do the Operations part. If you don't want to pay for some managed hosting (like, as per the previous keyword, Heroku) you'll have to do it yourself. Have fun.
Now, kubernetes is complex and the learning curve is steep. Kubernetes is designed to scale up to hundreds of nodes. Needless to say, complex tasks require complex tools.
Complexity has to be paid somehow. Either you pay in money (Heroku) or you pay with your own time and efforts.
Once again, there's nothing new under the sun: there's no free lunch.
I know the author has other goals (a desire to learn/practice Kubernetes, and to keep hosting their blog the same way as the other services they manage), but I'd like to point out here that if your end goal is a blog which auto-deploys on each push to master, Netlify has a fantastic free tier. I use Gatsby/Netlify, and operations/deployment is a 100% non-issue, and the site is very fast.
I'm aware of Netlify, my blog is not powered by a static site generator though. It's a Go app that flings out HTML, JSON-feed and RSS/Atom: https://github.com/Xe/site
My team maintains a platform that runs in datacenters across four continents, a private cloud. 1000s of VMs.
We use VMware and have its API hooked up to Ansible as well as Consul and monitoring.
We then make Ansible playbooks available for devs to create services which takes around 5 minutes to run. Since it has consul and monitoring hooked in we automatically create entries in DNS and can auto scale.
We have services running in docker (without kubernetes); consul is ultimately responsible, based on service metadata, for hooking containers up to the ingress traffic through the Fabio load balancer.
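For readers unfamiliar with the Consul/Fabio combination: Fabio routes based on `urlprefix-` tags in Consul service registrations. A sketch of roughly what such a registration looks like — the service name, port, and path here are invented for illustration:

```json
{
  "service": {
    "name": "billing-api",
    "port": 8080,
    "tags": ["urlprefix-/billing"],
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}
```

Register this with the local Consul agent and Fabio, which watches the Consul catalog, starts routing /billing to healthy instances without any load-balancer reconfiguration.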
I would like to understand what we are missing by not using kubernetes, every time I think about it and talk with developers I feel it isn't worth it.
My team would have to maintain an extra layer of complexity that can get us called at night for no clear business gain.
Sorry what do you mean by that? Consul, Ansible, Prometheus and other parts of the stack we maintain all have their upstreams.
If you mean running a managed Kubernetes in the cloud, I would love to do that, but for our workflows we have to run on prem for most of the infrastructure. It's a lot of data and compute. A bit like saying Google should use AWS for their internal services.
I mean assembling the stack yourself, basically, especially since you're adding features by creating particular arrangements out of the ensemble. There are always more edge cases. It's easier to let someone else handle them.
You'll note that Google don't manufacture their own RAM, don't operate hydroelectric plants, buy CPUs from Intel and so on. That they could plausibly do any of these things themselves satisfactorily, possibly even better than their suppliers can, doesn't make it the right choice. There is an opportunity cost for doing so.
Right, and I would love to leverage Kubernetes instead of our own plumbing of the OSS solutions. I just feel that operating it on prem requires a lot of overhead that simpler tools arranged together do not have; also, when they fail it isn't catastrophic for the most part (except for Consul).
This is quite a bit like comparing moving your stuff into an apartment vs. building a house to move your stuff into. If you don't need the flexibility and stability of owning the structure then it's best to rent just what you do need. If you do need a house, then it's worthwhile to compare the complexity and costs of different methods of construction.
Remember when Linux was hard to install and run as a desktop (multiple screens, ACPI, sound, winmodems)? We've all had that colleague that gets frustrated, installs Windows after 10 minutes, and spends the rest of his life saying that Linux is for fanatics, geeks, etc.
Well, this post is literally the same.
the difference between K8S and something like Swarm - which is far simpler (and nicer for 80% of the deployment use cases out there) - is the way that the community was built up.
I really wish Docker had done a better job with engaging the community.
Kubernetes is seriously not that complicated, and after using deployutil on Google Cloud and CloudFormation on AWS: it is a great abstraction and a massive improvement over dealing with existing cloud deployment solutions.
The Heroku/App Engine model seems so obviously superior for 99% of apps that it's puzzling how much more press all these full-time-job platforms like k8s, AWS serverless, etc get.
There are layers of architecture you use, and then layers of abstraction you add all by yourself:
dokku over dyson over terraform over kubernetes over DigitalOcean