Hacker News

Hrrm OK... when was that? 2009 or earlier I suppose. I was using it in 2010 with few hiccups.



We started building a large-scale datacenter automation system in 2003, and by late 2005 it was deployed on most machines. But it became apparent that achieving the high density of job packing we wanted was going to be impossible by relying on post-hoc user-space enforcement of resource allocation (killing jobs that used too much memory, nicing jobs that used too much CPU, etc.). Sensitive services like websearch were insisting on their own dedicated machines, or even entire dedicated clusters, due to the performance penalty of sharing a machine with a careless memory/CPU hog. We clearly needed some kind of kernel support, but back then it didn't really exist: there were several competing proposals for a resource-control system like cgroups, but none of them made much progress.

One that did get in was cpusets, and on the suggestion of akpm (who had recently joined Google) we started experimenting with using cpusets for very crude CPU and memory control. Assigning dedicated CPUs to a job was pretty easy via cpusets. Memory was trickier - by using a feature originally intended for testing NUMA on non-NUMA systems, we broke memory up into many "fake" NUMA nodes, and dynamically assigned them to jobs on the machine based on their memory demands and importance. This started making it into production in late 2006 (I think), around the same time that we were working on evolving cpusets into cgroups to support new resource controls.
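(For readers unfamiliar with the pre-cgroups interface: the kind of assignment described above looks roughly like the sketch below. Paths, CPU/node numbers, and the job name are illustrative; it assumes the legacy cpuset filesystem and a kernel booted with fake NUMA enabled, e.g. `numa=fake=...` on x86.)

```shell
# Mount the legacy cpuset filesystem (the pre-cgroups interface).
mount -t cpuset cpuset /dev/cpuset

# Create a cpuset for one job and pin it to two dedicated CPUs.
mkdir /dev/cpuset/job1
echo 0-1 > /dev/cpuset/job1/cpus

# With fake NUMA, each "node" is a fixed-size slice of RAM, so
# granting nodes 0-3 caps the job's memory at roughly four slices.
# The node set can be rewritten at runtime as demands change.
echo 0-3 > /dev/cpuset/job1/mems

# Move an existing process into the cpuset.
echo $JOB_PID > /dev/cpuset/job1/tasks
```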


Interesting history, makes sense. (I thought your name was familiar: you are the author of the cgroups.txt kernel documentation! Do you still get to work on this stuff much? What is your take on the apparent popularization of container-based virt? What are the kernel features you would like to see in the area that do not yet exist?)

Was there a reason you guys didn't open source this many years ago?


I left the cluster management group over three years ago, so I've not had much chance to work on / think about containers since then.

This code grew symbiotically with Google's kernel patches (big chunks of which were open-sourced into cgroups) and the user-space stack (which was tightly coupled with Google's cluster requirements). So open-sourcing it wouldn't necessarily have been useful for anyone. It looks like someone's done a lot of work to make this more generically-applicable before releasing it.





