
This reminds me of how IBM ran everything in virtual machines on their 360 architecture. And the way you typically assign memory to virtual machines today reminds me of how, in MacOS < 10, you had to assign memory to each program individually before launching it.



Sounds like interesting prior work for the problem at hand (granted, Docker is not a VM, but I guess they needed to solve the same kinds of problems regarding IPC). Do you know of any literature about IBM's results?


Not really. Maybe start here if you are curious: https://www.ibm.com/support/knowledgecenter/en/SSAV7B_633/co...

Look for service machines.


Thanks!

So it seems they prefer to run groups of applications as services rather than totally isolating each program. This makes sense: Docker bundles could be the next level of meta-packages. You could say "I want to write some react-native code" and have the JVM, react-native, and the Android SDK pulled at once, ready to use.
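
For example (a rough sketch only; the image names below are illustrative assumptions, not an existing bundle), such a meta-package could be little more than a list of images pulled together with the Docker SDK for Python:

    # Sketch of a "meta package": pull a react-native toolchain in one go.
    # The image names are placeholders; a real bundle would pin exact versions.
    import docker

    client = docker.from_env()

    REACT_NATIVE_BUNDLE = [
        "eclipse-temurin:17-jdk",   # JVM for the Android build tooling
        "node:20-slim",             # node/npm for react-native itself
        "example/android-sdk:34",   # hypothetical Android SDK image
    ]

    for image in REACT_NATIVE_BUNDLE:
        name, tag = image.split(":")
        print(f"pulling {name}:{tag} ...")
        client.images.pull(name, tag=tag)
    print("bundle ready")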

Regarding IPC, they spend a good part of that page describing z/VM networking features, so I guess it is indeed something that needs solving. The interesting part is that Docker networking already allows TCP networking, and mounting volumes could help share sockets or regular files, somewhat like the "single system image" feature they mention.
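
To make that concrete, here is a rough sketch (again with the Docker SDK for Python; the alpine/socat images and the /ipc socket path are assumptions for illustration) of two containers doing IPC over a unix socket shared through a common volume, with no TCP involved:

    # Sketch: share a unix socket between two containers via a named volume.
    import time
    import docker

    client = docker.from_env()

    # Named volume that both containers mount at /ipc
    client.volumes.create(name="ipc-demo")
    mounts = {"ipc-demo": {"bind": "/ipc", "mode": "rw"}}

    # "Server": listens on a unix socket under /ipc and echoes back what it reads
    server = client.containers.run(
        "alpine",
        ["sh", "-c",
         "apk add --no-cache socat && "
         "socat UNIX-LISTEN:/ipc/app.sock,fork EXEC:cat"],
        volumes=mounts,
        detach=True,
    )

    time.sleep(10)  # crude; a real setup would poll until /ipc/app.sock exists

    # "Client": talks to the server purely over the shared socket
    reply = client.containers.run(
        "alpine",
        ["sh", "-c",
         "apk add --no-cache socat >/dev/null && "
         "echo hello | socat - UNIX-CONNECT:/ipc/app.sock"],
        volumes=mounts,
        remove=True,
    )
    print(reply)  # b'hello\n'

    server.remove(force=True)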



