Hacker News

For everyone asking about a "microvm orchestrator" - the implication here is that the public cloud (where the vast majority of people reading this will be deploying to) is the orchestrator.

Public clouds like AWS and GCP do all the networking, volume management, and everything else for you - for free. That's arguably one of the reasons for using them in the first place. There is no need to put something on top. That's kinda the whole point here.



People abandoned Packer+Terraform style workflows in favor of containers. Why? Why change back?


I think there are a few thoughts here.

a) I don't necessarily agree with that statement - witness HashiCorp's recent raise:

https://www.sdxcentral.com/articles/news/hashicorp-scores-175m-funding-round-5b-valuation/2020/03/

b) The vast majority of container users run their workloads on public clouds like AWS and GCP, which run on top of virtual machines to begin with.

I'm not stating you need to use terraform (I definitely don't) - what I'm saying is that you can use the underlying orchestration provided by your cloud of choice, be it the networking or the volumes. There's no need to replicate that on top, as container workloads do. When people speak of kubernetes/container complexity, this is what they are complaining about.

If you look at a lot of the configuration mgmt software out there - chef/puppet/ansible/salt/terraform/etc. - it all involves configuring a linux system, whereas with unikernels it's literally just one application. Instead of having to plumb your network together or ensure that your workloads are 'stateless' vs 'stateful', you just use the underlying cloud facilities. That's the magic. It actively removes complexity from the provisioning.


Doesn't that just make it an Operating System?


I think this thought is extremely on point.

A lot of older unix abstractions broke down once we all started running tens, hundreds, or thousands of VMs on the public cloud. Users are relegated to IAM. Most databases can't even fit on a single server. Many webapps are load balanced to begin with.

The classic monolithic 'server' named Mars or Jupiter outgrew itself because we were all consuming so much software. So part of what we are asking here is: why do we have two layers of linux? We have the underlying one for the hypervisor, but all these legacy abstractions are still present in the guest.


Because hardware is easier to safely and completely virtualise. It's a relatively small surface compared to the entire OS API.



