If you love automation, why is your development environment manual? (sidekicksrc.com)
137 points by timruffles on May 8, 2013 | 113 comments



> If you love automation, why is your development environment manual?

Because I usually do it once every 5 years :-)

My setup: Linux, Kubuntu LTS. The entire system is backed up with rsync, so I have daily snapshots for the last 6 months. There is also a weekly backup to the cloud and an external HDD.

I have a primary 250GB SSD which usually sits in my workstation. When I travel I physically move the SSD to my laptop (takes 10 minutes).

If something goes wrong and my primary SSD dies, I just boot from a live CD, copy 200GB, set up the bootloader and reboot. No need to 'reinstall' the system from scratch.

Every six months I take one afternoon and update all my tools (IDE, OS, apps...). I don't use automated updates except for critical stuff such as the browser.


It's a lot different when you're working with certain teams of mixed technical aptitudes. An automatically assembled VM image I can distribute means much less time supporting our less technical members of the dev team (front-end, designers, biz dev guys).

If it's just you, absolutely, don't bother. But it gets much more difficult to support a variety of platforms and configurations as your team size grows. This is where vagrant+chef/puppet really shine.
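
For anyone who hasn't tried it, the skeleton is tiny. Here's a minimal sketch of such a Vagrantfile, assuming Vagrant 1.1+ and Chef Solo - the box and recipe names are placeholders, not anything from the article:

    # Vagrantfile -- one shareable definition of the whole dev VM.
    Vagrant.configure("2") do |config|
      config.vm.box = "precise64"          # assumed base box

      # Chef Solo provisioning: every 'vagrant up' converges to the same stack.
      config.vm.provision :chef_solo do |chef|
        chef.cookbooks_path = "cookbooks"
        chef.add_recipe "apt"
        chef.add_recipe "myapp"            # hypothetical application cookbook
      end
    end

Check that into the repo and "vagrant up" becomes the entire onboarding story for the less technical folks.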


> An automatically assembled VM image I can distribute means much less time supporting our less technical members of the dev team

Absolutely. At my previous job we had a 'standard' VM with all the necessary tools, such as Visual Studio 97. I just don't see that it brings much benefit, though, if all you need is a different version of Ruby.


I've found this too, as an individual. When running Windows I used to have to reinstall everything about once every 2 years. Took about half a day each time. I've reinstalled everything once in 4 years on Mac OS, from Time Machine, so I just left it running overnight. I don't know how long it actually took but I got up the next day to find my computer just as it was. Reinstalling things is annoying, but just doesn't take up enough of my time to be a problem worth fixing.

Then again, maybe if I had the ability to quickly set up a new development-ready VM, I'd come to find all sorts of unexpected uses for it? I've certainly found duplicating Parallels VMs handy a couple of times. But I have to say that I'm still skeptical that the time spent learning Chef and trying everything out would be worth it for me.

If I were looking after 5 (10, 20...) programmers then my back of the envelope calculations would look a little different.


I hear what you're saying, but I don't think the time spent learning Chef (or Puppet) is ever wasted. It's sort of like mastering a unix shell or getting good at automated testing: initially there is this steep learning curve where you are definitely less productive, but over time it pays huge dividends.

Have you ever had to install multiple versions of some package on the same machine? Have you ever had to help a colleague debug a weird issue that was particular to their setup? Have you ever had to work on the same project from multiple platforms and some component didn't work on one of them? Have you ever had a bug only appear in production because of a platform subtlety?


> I don't think the time spent learning Chef (or Puppet) is ever wasted.

Exactly!

It might not look like it right now, but skill with these tools is moving from cutting-edge to an essential job/contracting skill in the near future.

I don't know much about the Puppet community, but I do know that within the Chef community the learning curve is flattening. The big thing of 2012 was getting the VM+Chef+Berkshelf setup very close to the point where you can get onboarded onto an existing open-source application without even knowing you are using Chef under the covers. How deep you want to dive in from that point is up to you.


Ditto. Backing up your dev environment is less time-consuming and more reliable than scripting it.


I don't know - I script production provisioning anyway. Dev provisioning is free: I just change the hostname to point at the VM.


Have you ever tested restoring from your daily snapshots? :-) Seems like a good setup, but to be rigorous about it you should probably also schedule a full restore from backup every once in a while to ensure that your backups are actually working correctly.



I think an important point in this post was more about not trying to set up a dev environment in your base Mac install (or similar). Use virtual machines.


In that case I just find it better to wipe MacOS and replace it with Linux.


It's not always a good solution for your entire team. I run Linux on my desktop, but have found that the touchpad drivers (even the experimental ones) just really suck on Linux+Macbook Pro to the point where it's either external mouse (not always possible) or decreased productivity.

I have a feeling this is going to devolve into a "well, just use vim and a tiling WM or X other thing", but the point is that just replacing Mac OS with Linux isn't always going to be good for your whole team.


When you're targeting Linux, it's easier in the long run to just use Linux. The base system isn't the same between the two because Mac OS X uses BSD versions of important things like sed, grep, etc. There are subtle differences that most Mac OS X users I've met don't know about, and they can lead to weird problems like "why doesn't X work; it works fine on my mac!!".

If everyone's on Linux, the dev and production systems are more similar, so fewer scripting issues arise. I personally use Arch, other coworkers use Mint or Ubuntu, and only one developer uses Mac OS X (others switched because of the above problems; he couldn't switch because of EFI problems). Even devs new to Linux (from Windows) love Mint, and all is well and good in the world. Servers run Debian or Ubuntu, so I test my changes in a VM since Arch has noticeably newer software in the repos - but that's only because I can't stand Debian as a desktop system (package manager too slow).

Also, the Mac OS X user uses his Mac like a Linux box. He only ever touches his trackpad to use non-keyboard friendly apps (uses Vimium for Chromium, tmux/vim for terminal/editing, etc). Our devs not familiar with the keyboard generally use Sublime on Linux, which is a fantastic editor as GUI editors go.

The trackpad drivers aren't that bad, but may require some configuration in the xorg conf (I got multitouch working, but disabled it because I find it annoying).

We don't standardize on a single dev platform because each developer is different. The developer understands that using Linux is going to be easier in the long-run, but he/she must understand that his/her choice of OS must not negatively impact productivity. We've had a few developers try starting on Windows, but we'd refuse to support any dev environment besides Ubuntu Linux, and when they ran into problems, they switched to Mint/Ubuntu.

I've never seen a developer be more productive on Mac OS X than Linux. In fact, those gestures usually end up slowing them down on things that should be easy with the keyboard (and all that switching between trackpad and keyboard is not productive).


There are advantages to using VMs even if you've installed Linux as the base OS on Mac hardware.

The VMs provide isolation for testing and developing the server components. (I'm assuming you are developing server code). Through synced folders, you can still use your favorite $EDITOR on the host machine -- vim, emacs, sublime text, whatever. Onboarding is faster, and you can manage multiple node setups.

Further, you have the additional advantage of being able to replicate staging or production systems on your local box. This reduces time spent round-tripping to a staging server.

This lets you standardize the server code without requiring all developers to standardize on editors/ide/etc.

Personally, I am more productive on OSX than on Linux. But that's because my actual dev environment is tmux+vim. I let OSX and Mac hardware handle the stuff that it does much better than Linux: being able to talk to anyone, anywhere, with anything. (Networking) Being able to reliably suspend and wake.


> I've never seen a developer be more productive on Mac OS X than Linux.

Completely anecdotal and not at all useful in the context of this discussion. I've seen some of the top minds in the Python community using Mac OS with extraordinary productivity, as well as some of my fellow developers at Pathwright. I myself run Linux, but think it's foolish to suggest that I'm automatically more productive on it just by virtue of it being Linux.

I feel like you're missing part of the point, though. Even between different Linux distros (or versions of Linux distros) there exist differences in the packages. Some of these differences can lead to unexpected breakages. If you've got a smaller team, one or two guys are going to end up fixing everyone's disparate environments as things break unexpectedly (or you need to bump a package version or install something new).

The Vagrant (or VM) setup in general lets everyone use their preferred OS/distro, while having the same. exact. environment to run the product you're working on. It also cuts down on support time, and your dev environment will not be just "kind of close" across your team, it'll be "really close".


As someone who uses both OSX and Linux, I am more productive on Mac. Seriously though, get over your grudges - use what makes you happiest.


If productivity is key, get a ThinkPad, install Linux on it (they're very Linux-friendly), and learn to use the TrackPoint exclusively. Watching people use MacBooks, even with all those fancy multitouch gestures, is like watching a slow-motion movie. And they are always amazed how quickly I can do stuff on my ThinkPad.


TrackPoint is amazing; I hesitate to switch away from ThinkPads because of it.


How is the battery life?


Why do you need battery life for development? Most of the time you'd probably be in the office anyway.

Even so, ThinkPads have pretty good battery life, and on some models the CD drive can be replaced with an extra battery. I've heard of some that can go nearly two working days before recharging.


I don't work at the office most of the time. I know ThinkPads have good battery life generally, but I know there have been some battery-life issues with Linux in the past.


Thinkpads have great battery life built in, or you can get extended batteries, or slice batteries. You can even stick a battery in the drive bay.

You can also easily turn off ports you aren't using, at least in linux.


I run ArchLinux on a ThinkPad X120e. I get 3-4 hours of battery life out of a 57Wh battery.


The problem with VMs is each developer often customizes them quite a bit. It does make adding someone new faster, but unless you have rapid turnover I don't think it's worth the overhead.


That's why you manage the VMs with a configuration convergence tool such as Chef or Puppet. At that point, the VM doesn't matter. I tend to treat mine as throwaways, and have Chef rebuild the specific VM for the specific project. This lets me experiment with things and then reset it back to a known, good state.


I think the general idea of running a VM with all your dev setup in it is great, but I've often found the small niggles (the speed of switching between browser and VM terminal, the fact that you need a fairly good computer to run even quite a modest server with a GUI, and that without a GUI you're limited to 256 colours) a bit more trouble than the benefits are worth.

My solution is to set up my perfect dev environment in Linux (I use Fedora) on a partitioned USB pen drive. Then I either boot from it (if I'm on a crappy computer) or boot a VM directly from the USB if the computer is more competent. It's also quite nice if there's an emergency and you're away from your computer -- you can just rush into an internet cafe, plead/bribe them to let you boot from your USB, then have everything, SSH keys included, ready for urgent repairs!


I think the point here is that it took you time to set up that dev environment.

It shouldn't matter where your environment is (a VM, a laptop, etc), you should be able to recreate it easily and reliably. This is what the automation tools are for. I prefer Salt Stack and have a salt config for my work laptop. If something happens to the laptop or I get a new one, I just need to run Salt, wander off for an hour, and when I get back my entire environment is set up. I also use it with Vagrant so that when a new dev joins the team they can run one command and have a copy of the project set up in a Vagrant box, databases and all, ready to go.

You could set up a similar thing so that if you lose that USB key you can have your perfect setup reinstalled with minimal fuss.


Would you be able to share at least some of your current salt config? This is one idea I've been thinking a lot about lately, but I never managed to get beyond the obvious package names, which are trivially installable with one command on a good Linux distro.


Very good suggestion. At the moment I keep an LVM snapshot, but that is a bit cumbersome. I hadn't heard of Salt - thanks! Will look into it.


"such as speed of switching quickly between browser and vm terminal, the fact you need a fairly good computer to run even quite a modest server with a gui and if you don't use a gui, you're limited to 256 colours"

This implies you are running a terminal emulator in X on the VM and doing your coding there?

I use VMs for development all the time, and a headless VM with SSH access is perfect. It's almost identical to doing development locally.


The options as I see it are either to do everything over SSH to a headless VM as you suggest, which definitely has its advantages, or to work inside a full-screened VM/booted environment. Perhaps not the best justification, but I prefer the latter because I can include my gvim setup and use the mouse for resizing columns, browsing code and using NERDTree. I miss that when running vim over SSH.


You can edit and browse on your host machine. Just adjust the VM's network settings and use a shared host-guest folder for the code (what exactly this ends up being varies based on the virtualization).

You really can make it a near-exact match for a local development environment. Bridge your ports to the expected ones on the host machine, share the code folders, and SSH in if you need to punt a dev server when you make changes (or set your editor/IDE up to do that for you). The browser can still point at localhost:8000 or whatever you expect.
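
To make that concrete, the relevant Vagrantfile lines are just these two (the port number and paths here are assumptions):

    # Forward the guest's dev server port and share the project checkout, so the
    # host browser and host editor behave as if everything were local.
    config.vm.network :forwarded_port, guest: 8000, host: 8000
    config.vm.synced_folder ".", "/home/vagrant/app"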


Bring your configs with you - http://dotfiles.github.io/

Vim should be able to do everything that GVim can do, if you add a few lines to your configs. I'm not sure what specifically though.

I bring my entire development setup with me when using a server, and preparing it is as simple as cloning Homeshick (or Homesick if you have Ruby), cloning your dotfiles repository, and installing any applications that you want to use.


My primary development environment is a VMware virtual machine with Ubuntu LTS getting 2GB out of my MacBook Air's 4GB of RAM. I usually run a database, a web server, a couple of browsers and PyCharm (Java) in the virtual machine and another browser with many tabs in the host OS.

What I wanted to point out is that everything works perfectly and I can instantly switch between applications and in/out of the VM just fine.

Backing up the VM is of course as simple as copying ~40GB to an external drive or the desktop at home (gigabit ethernet ftw) and the nice thing is that I can have multiple versions of it on various machines.


Why are you switching between a browser and your VM terminal? I run my VM headless -- all the files I need to edit are in shared directory.


You don't run any commands on your VM? Such as compiling/restarting an application?


You can run those through your console on the host machine like normal, with the added SSH piece. Or you could set your editor/IDE up to do that after changes (if required).


Well, I suspect SSH is going to be faster than using the VM terminal in any case. But I'd set it up to watch the directory and compile/restart automatically, if you can.
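
One hedged way to wire that up is the Ruby guard gem with guard-shell; the watch pattern and restart command below are assumptions to adapt to your stack:

    # Guardfile -- rerun a command whenever a watched file changes.
    # Requires the 'guard' and 'guard-shell' gems.
    guard :shell do
      watch(%r{^app/.+\.rb$}) { `touch tmp/restart.txt` }
    end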


I don't want dev to be exactly like stage and prod.

Yep, you read that right. I deploy to Debian servers but develop on Linux Mint or OS X. The key is that the environments are very similar but not identical. The payoff comes when you break yourself of the habit of relying on accidents of deployment instead of building a general-case solution that works in multiple environments.

You scan /proc to look at running processes? That's great, until you're on a machine without /proc. Better to spend an hour learning how your dev platform (Python, Java, whatever) abstracts that away for you. Trying to send an email by shelling out to /usr/bin/sendmail? Oops! That's broken in lots of places; better learn how your dev platform handles it!
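
For the email example, the platform-level route in Ruby is the stdlib Net::SMTP. A sketch, with made-up addresses and a local SMTP server assumed:

    # No /usr/bin/sendmail dependency: works wherever an SMTP server is reachable.
    require 'net/smtp'

    message = "Subject: deploy finished\n\nNo OS-specific mail plumbing in the app code.\n"

    Net::SMTP.start('localhost', 25) do |smtp|
      smtp.send_message(message, 'app@example.com', 'ops@example.com')
    end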

The big win comes when you upgrade your stage and prod environments to a newer distro - or a different one altogether - and your stuff keeps working because you've relentlessly whittled away all the OS-specific dependencies.


You describe a pay-off when upgrading stage and prod. But if dev, stage, and prod are identical, then you simply upgrade dev first, find the things that broke, do the same for stage, and then it's no big deal when you upgrade prod.


It seems odd to have an environment that you are 100% in control of, and then relish the times when it does unexpected things.


This strikes me as an example of an unnecessary/premature optimization. Unless you know that your product will need to deploy on an unknown variety of server environments, there's no advantage in taking the extra time to 100% abstract away the server environment.

And really, how trustworthy is that abstraction? You've basically outsourced responsibility for your server environment to your language community. What if they create a bug that is exposed on Debian, but not on OS X?


Yeah, for certain things your abstractions can either add a good bit of overhead or increase the amount of code you need to maintain. No real point unless your product really needs to run in a mixed environment in prod.


You don't need to give up your GUI editor to do this. Vagrant automatically sets up a shared folder in the virtual environment under /vagrant that maps to the folder where your Vagrantfile is located on the host machine.

I do all my development now using Vagrant running Ubuntu VMs but I still do all my editing with the same Windows editor I've used for years running on the host machine.


Or, you know, ssh -X. Oh, but wait, they're trying to get rid of that in Wayland, "because nobody uses it"...

I suppose there's always RDP or VNC, if you can stomach their limitations.


That has never been accurate. Wayland has had planned X support for a long time; it just won't be in the core.


There are at least two other ways to develop with a GUI.

One is the tried and true Xwindow forwarding. Usually conveniently handled by ssh now, but in the olden days (I'm talking early/mid 90s) we used alternatives.

The other is VNC in. This has been extensively discussed by the "I use my ipad as my development machine" crowd. I've had pretty good results with VNC over the ... decades.


This is fine for some scenarios but I've had a series of problems which drove me away from this kind of setup.

Default CarrierWave (Rails gem) settings try to clean up the tmp files after file uploads. This is a good thing normally, but as far as Ruby knows I am on a Linux box, so it tries to `unlink`, which fails catastrophically because the mounted drive was NTFS.

Then there are NPM modules. It is kind of funny because Node.js and NPM are actually pretty good on Windows for development purposes anyway, but if I want to run it from my VM it picks up my OS as Linux and tries file system operations like symlinks. I couldn't even install Express.

These are just a few of the things I ran into.


The symlinks problem can be mostly resolved by adding this magic incantation to your Vagrantfile:

    # to allow symlinks to be created
    config.vm.customize ["setextradata", :id, "VBoxInternal2/SharedFoldersEnableSymlinksCreate/vagrant", "1"]

That said, I have had times where npm via a PuTTY window refuses to install modules. However, I also have npm installed on my Windows box, so I can just drop down into a cmd window when that happens and run the npm command there.


You can use Samba shares (win<->*nix) or NFS (*nix<->*nix) if you don't have Vagrant, or whatever it is :)

That being said - don't! As others said, the X protocol is network transparent; it lets you display windows from a server anywhere. That means you can open a GUI tool on the VM and see its window(s) on your desktop, with your window manager decorations, basically indistinguishable from an app on your main system. It's possible on Windows thanks to Xming (http://sourceforge.net/projects/xming/), which is a lightweight X server for Windows.


This works nicely when you're mainly just using a text editor to edit files in a scripting language and doing any automation stuff from the terminal.

If you find yourself using a big IDE it can sometimes be a pain. Most IDEs will let you do remote debugging etc., but will still build on the local machine and use the local JVM to enable some features.


It's worth noting that JetBrains IDEs seem to be getting support for remote debugging in VMs, and specifically Vagrant support. It's already in PyCharm[1], and hopefully it'll be available in their other IDEs soonish.

<insert standard disclaimer about how I don't work for/with Jetbrains, just a happy customer, etc>

[1]: http://blog.jetbrains.com/pycharm/2013/03/how-pycharm-helps-...


PyCharm user here. I wasn't aware of this. Thanks for the info!


Works in RubyMine and PhpStorm as well. It's a little bit fiddly setting this up in RubyMine; you have to let the IDE install the debugging gems IIRC.


I also prefer to edit on my local machine, but I still prefer terminal vim; I'd rather not have the environment import my settings, because I work in a team and don't want to fight with them over the one true config.


If your team is checking in to the central repository such personal things as config files for development environments, you've got bigger problems. Sure, occasional snippets (such as a .dir-locals.el for style), but whole configs? WTF?


The environment is shared for consistency. If I'm SSHing into the environment to do my work, then I need the environment updated with my config, and that means either sharing configs on the vagrant user or setting the environment up to provision settings for every user, which sounds unmaintainable. It's easier to just work locally through the shared directory.


You can easily do something like this for one-off edits: "vim scp://remoteuser@server.tld//path/to/document" - or use netrw for a more robust way to use your local vim to edit remotely. http://www.drchip.org/astronaut/vim/index.html#NETRW


Or you can just edit within the shared directory between your local machine and the environment.


How do multiple monitor setups work with vagrant? (I've had trouble with big IDEs and X transport before, with things just slow enough to irritate me.)


Vagrant VMs are headless. So when you 'vagrant up' a box you won't see a GUI window or anything you'd expect to see when launching VMs with VirtualBox.
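
If you actually want the console window, the VirtualBox provider exposes a flag for it (Vagrant 1.1+ provider syntax):

    # In the Vagrantfile: show the VirtualBox GUI instead of running headless.
    config.vm.provider :virtualbox do |vb|
      vb.gui = true
    end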


The author missed a golden opportunity to discuss battery life. Your laptop will run longer if you SSH/VNC/whatever into a virtualized development machine that isn't hosted on the laptop itself.

Also, I can SSH/VNC/Xwindow/whatever into a server machine with performance stats far beyond any currently imaginable laptop. It's like owning a laptop from 2023 today in 2013.


The latency is a pain. I like being able to work from anywhere by tethering my iPhone 5's LTE connection.

It's easier and more reliable to have everything working locally (with local dev environments using Vagrant/VirtualBox) and use the internet connection for syncing git changes or doing research.


There's the Mosh project (http://mosh.mit.edu) which seems fit for your use case. I don't have personal experience with it but it looks promising.


Wow, looks very cool. Never heard of it...

http://mosh.mit.edu/#techinfo

How Mosh works: Remote-shell protocols traditionally work by conveying a byte-stream from the server to the client, to be interpreted by the client's terminal. (This includes TELNET, RLOGIN, and SSH.) Mosh works differently and at a different layer. With Mosh, the server and client both maintain a snapshot of the current screen state. The problem becomes one of state-synchronization: getting the client to the most recent server-side screen as efficiently as possible.

This is accomplished using a new protocol called the State Synchronization Protocol, for which Mosh is the first application. SSP runs over UDP, synchronizing the state of any object from one host to another. Datagrams are encrypted and authenticated using AES-128 in OCB mode. While SSP takes care of the networking protocol, it is the implementation of the object being synchronized that defines the ultimate semantics of the protocol.


Well you can use vagrant with AWS as well :-)


I fully agree with this. 'Before', the dev team used a mix of, well, I'd rather not say. It was just not very pretty, but eventually everyone got their work done. A while back I built a vagrant+puppet configuration for everyone and the team has since transitioned to using that.

It's nice to have at least semi-parity with the production environment. This is possible because we can utilize most (not all, due to having no central Puppet server) of the same Puppet modules that we use (or that I built) in the VM.

Essentially what you gain is not having to worry about whether a developer will break their machine/VM and go 'Whoops, can you fixor it?'. Additionally, you no longer have to worry about things like PHP/Node/Ruby version mismatches between dev and production. We've gone from sometimes having issues with certain code not running the same as in development, to just throwing it up on the staging environment through deployment tools and it just runs!


obligatory xkcd: http://xkcd.com/1205/


"xkcd #1205 says I don't need it."


We do this using Vagrant and Puppet and actually just blogged about it yesterday: http://blog.serverdensity.com/many-projects-with-vagrant-and...


I've heard of Salt Stack[1], though I've never used it. From what I can tell, it can be used to do the same thing.

[1] http://docs.saltstack.com/


The Salt Stack team also has a plugin built specifically for provisioning environments in Vagrant called Salty Vagrant:

https://github.com/saltstack/salty-vagrant

(I haven't used it, but I have used -- and love using -- Salt Stack.)
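
If I'm reading the plugin's README right, wiring it in looks roughly like this in the Vagrantfile - treat it as a sketch, with the minion config path as an assumption:

    # Provision the VM with Salt instead of Chef/Puppet (salty-vagrant plugin).
    config.vm.provision :salt do |salt|
      salt.minion_config = "salt/minion"   # assumed location of your minion config
      salt.run_highstate = true            # apply the full state tree on provision
    end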


It's funny that this article claims to combat yak shaving while it actually promotes it.


I take your point, but I would say that the article really encourages shaving the yak one time, instead of using scissors to beat it back occasionally.


There is something to be said for running your software on multiple environments. There is also something to be said for stubbing out pieces of your stack for testing and development purposes. There is often a lot to be learned. That said, I've used the VM approach on projects with high turnover. It is usually not the same as production deployment, so it does take effort, but depending on the team it might be worth the effort.


I tried this on a couple of recent projects and it failed miserably on both, for completely different reasons. On one, the architecture of the project dictated really, really fast network connections, and the overhead from the VM choked the project. That's probably an unfair example because it was actually a toy project never meant to run in a serious environment (hence the requirement for a huge-bandwidth, virtually zero-latency network connection). The other failure, however, is more serious. Our company (and at least one project) dictates a Windows-based development environment, and the abysmal performance of NTFS combined with a massive project code base meant it was taking 2+ minutes just to run a status check on the version-controlled code.

I think this might work better given a Linux host environment, or a smaller code base, but at least in these two situations it was a failure.


Note that the recent release of VMware support for Vagrant may at least partially mitigate the problems in your second scenario.


I prefer having a VM built with Vagrant (and in our case puppet) which runs the same configs as our servers while having my development tools (editors etc) installed on the local machine. That way all my apps run at native speed, and the server replicates (almost exactly) the production environment.


Issue: what about VPN? If you're connecting to your corporate network with OpenVPN, and you are restricted to one connection (a common-sense security policy), then you need to get your infrastructure team to issue a separate key for each VM you run that needs to tunnel onto the corporate network.

I realize that this isn't an issue for 99.9% of startups, but in the non-startup corporate world, where there are security implications to keeping everything in a publicly accessible code repo (healthcare, government, education in some states), you've got another complication.

The only thing I've been able to work up so far is sharing a folder on my desktop through to the VM and running commits from my desktop.


I would say that you probably shouldn't be doing VPN to your development boxes in general; I'd see this as a security risk. I generally don't let anything 'production' directly touch anything which might not have the same security standards as the production gear.


Using NAT-based networking for the VMs should neatly solve the VPN issue.


Vagrant + Chef is great!

Vagrant + Chef + Berkshelf is better!

You can define the cookbook dependencies in the Berksfile and version it along with the Vagrantfile. Berkshelf is like Bundler for your cookbooks. It assembles cookbooks from a variety of sources and loads them into your Vagrant-managed VM. Check it out at http://berkshelf.com
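
A Berksfile is just a few lines of Ruby; a sketch with assumed cookbook names:

    # Berksfile -- declare cookbook sources and dependencies, then 'berks install'.
    site :opscode                          # the community cookbook site

    cookbook "apt"
    cookbook "nginx", "~> 1.4.0"           # pessimistic version matching, like Bundler
    cookbook "myapp", path: "./site-cookbooks/myapp"   # hypothetical local cookbook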

To help get things going on that end, I've been working on Kamino (https://github.com/hosh/kamino). It's intended to generate a Vagrantfile+Berksfile for use in your project. Right now, it only supports a basic Rails stack.

Pull requests welcomed :-)


The first reason is that people who build tools mostly don't think the tools will or should be used as components of other tools for more 'automation'. The result is that it's tough to build on such tools.

The second, and more specific, reason is the location of one of my most intense love/hate dilemmas: Xerox PARC, which pushed the first 'graphical user interface' (GUI) in place of what came before, usually typing text into a command line. Command lines are mostly easy to automate. GUIs are mostly a total PAIN to automate.

I'm wanting to automate, willing to automate, waiting to automate.


Cloud development like www.nitrous.io (previously action.io) is the future. I've played around with it and am quite impressed. Basically no setup for standard platforms like Rails and Node - choose your dev environment, install your dependencies if they're not already there, and get coding. It's a Linux environment, so your dev is the same as or close to production. And the IDE is quite nice. Caveat: yes, you have to have a connection. But for those few precious unconnected moments, I don't want to be coding anyways.


Any comments on what people use to manage the chef cookbooks (and why)?

Librarian and Berkshelf seem to be the two main contenders for making updating cookbooks similar to updating gems with Bundler.


I've tried both Librarian and Berkshelf to manage cookbooks for my Vagrant VMs for use with Vagrant's chef-solo provisioner. They both have nearly identical core features. Both can grab cookbooks from the Chef community site, a local path, a generic Git repository (which includes getting a single cookbook from a subdir) or GitHub. Their syntax is nearly identical. The only difference that I can think of is that Berkshelf has pessimistic version matching support (a la Bundler), while I'm not sure that librarian-chef does.
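
For comparison, a librarian-chef Cheffile with the same assumed cookbooks - note how close the DSL is:

    # Cheffile -- librarian-chef's manifest.
    site "http://community.opscode.com/api/v1"

    cookbook "apt"
    cookbook "nginx"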

Where they differ is in where they put your cookbooks, and what additional features they offer. Librarian-chef puts your cookbooks in a "cookbooks" folder in the project root, while Berkshelf keeps a global cache of all the versions of each cookbook you've installed outside of the project. Both provide support for uploading your cookbooks to a Chef server, librarian-chef through plain knife and Berkshelf through its own upload command.

Berkshelf also features scaffolding support for Vagrantfiles and new cookbooks, and generally has extra features specifically for rapidly iterating on cookbooks themselves. It also provides a vagrant plugin to manage bundling/uploading cookbooks to a Vagrant VM, since they need to be copied out of the central repo and made available.

Overall, I've found that I like Berkshelf better. It feels a little more idiomatic, and I like having my cookbook versions shared between projects rather than duplicated everywhere. If you have any specific questions about either, I'd be happy to answer them.


I've never tried Berkshelf, didn't even know about it. librarian-chef works great for our needs; I doubt it's worth investigating alternatives.

I will note that the librarian-chef also has a vagrant plugin: vagrant-librarian-chef.


Berkshelf is worth investigating simply because there's a lot of momentum behind it, even though it came out after Librarian. By "momentum", I mean that since the session introducing it en masse at the Opscode Community Summit in 2012, it has been adopted by the community.

No hard numbers, but the main thing is that Berkshelf could not really have happened without enough standardized community cookbooks. By putting the cookbooks into a global directory, it works more like Ruby gems and Bundler: instead of focusing on your customized (and probably divergent) cookbooks loaded into your project directory, you focus only on the few cookbooks that are specific to your needs. It comes out of a bigger theme of the 2012 Summit, that of creating "library cookbooks" that get assembled by your "application cookbook". If it's a library cookbook, you don't necessarily need it in your project directory any more than you need to import a Ruby gem into your Rails directory during development.

Test-kitchen is also worth looking into, though as far as I know, that still uses Librarian under the covers. Berkshelf support for test-kitchen is pending (http://tickets.opscode.com/browse/KITCHEN-9).

In summary: Berkshelf is part of the overall trend in modularizing these cookbooks and developing tighter engineering discipline.


I guess I'm still confused -- that's pretty much exactly why and how I use librarian-chef. After all, it's explicitly modelled on Bundler.


I used Berkshelf and it works nicely! The only confusion comes when actually deploying via Chef solo and I forget to ensure the cookbooks are actually present.

Haven't tried Librarian - only because I saw Berkshelf first and it integrated nicely with Vagrant.


Last time I looked, Berkshelf didn't even support chef-solo, or at least knife-solo, since it stuffs the cookbooks in your home dir.

I'd be interested to hear how people were setting up their local chef environments.


Berkshelf works nicely with Vagrant when you are using the chef-solo provisioner.

If you're looking to use knife solo for setting up production environments, perhaps one thing to look at is Vagrant 1.1+ support for EC2 provisioning (assuming you're using AWS).

You can also have Berkshelf install the declared cookbooks into a directory with `berks install -c $cookbook_dir` and then use knife-solo. Seems awkward, but I'd rather have dependencies managed in some way than resolve them by hand.


I've often thought that it would be nice to have a script that rebuilt the software installation on a developer's workstation from scratch, completely automatically, using only configuration files and scripts checked into version control somewhere. Such a process could run every night while the developer sleeps. This would ensure that it's dead easy to bring a new developer fully up to speed on a team.


I know a couple guys who do that. Their Steam app is also managed by Chef.

We got our dev setup partially automated like that. Even the partial setup was better than what I heard it used to take -- a week to set up a laptop with the tools and the whole platform. When I was onboarded, it took about a day, more or less hands-off.


If you use an IDE you can do something similar, but use the VM client directly. I do this all the time, and VirtualBox's seamless mode means that you can mix windows from client and host on the same screen (except Ubuntu Unity?).

It's particularly good when different clients have different OSs. And you can even do hardware development - I have tested USB drivers in a VM client that talk to hardware connected to the host.

The only drawback is initial startup time (particularly pulling the latest updates after install) and archiving the VMs (they're large, so they fill up a laptop SSD). I export to OVAs on my backup system and then wipe the VM. Another worry is that VirtualBox has been flaky recently (http://www.acooke.org/cute/VirtualBox1.html http://www.acooke.org/cute/UbuntuonVi0.html) - but OVAs can be imported into other hosts...


For an OS X tool, see http://boxen.github.com/


Well, but the article was actually speaking out against Boxen a bit. Essentially, you're still shoehorning all of the dev stuff onto your local machine with Boxen. It's much better than installing it all manually, but still not a separate dev environment.


Boxen is rad. We've started using it for new machines at Shopify and it's been really helpful.


How about the best of both worlds? A common VM image with CLI tools and dev server inside the guest OS, with VirtualBox folder sharing to permit a GUI IDE and the git repo to live on the host OS?


This is what we do now - it gets to be a bit of a pain later. I'm looking forward to switching to Vagrant in the future (on the back burner for now, as I have other things on my plate). Vagrant allows you to easily nuke and re-create your entire environment, and change things easily for the next go-around, while if you're using a VM image you'd need to re-image every time you make changes that you want to stay persistent.


How is this different from Boxen (http://boxen.github.com/)? You have exactly copied the central idea around it and made a blog post.


The two ideas are very different, but complementary.

Boxen is about getting your MacOS environment set up with tooling.

The author of the original blog post is talking about how to automate the creation of a production-like environment in which to do your development.

The creation of that environment requires a few tools installed on your physical Mac, which could be managed via Boxen. For example, you might use Boxen to automate the installation/upgrade of Vagrant or your IDE of choice.


I have used Vagrant in the past, and I had to stop using it. Shared folders were just too slow. 2 seconds to refresh a page was too much when you're trying to be productive.


Huh, that's interesting. How old is your hardware?


Points towards generative programming?


Has anybody tried Vagabond (https://github.com/chrisroberts/vagabond) to do the same thing without a VM?


No no no, I don't like automation; I like automating things :)


Couldn't this be shown without a blasphemy?


Where is the blasphemy? The terrible restoration of the fresco?


Shown in the context of this article.


I re-read the article and still don't know what you're talking about.


Monoculture used to be a dirty word.



