Show HN: Devbox – Easy, predictable shells and containers (github.com/jetpack-io)
515 points by robrich on Aug 25, 2022 | 185 comments
Devbox is a command-line tool that lets you easily create isolated shells and containers. You start by defining the list of packages required by your development environment, and devbox uses that definition to create an isolated environment just for your application.

In practice, Devbox works similarly to a package manager like yarn – except the packages it manages are at the operating-system level (the sort of thing you would normally install with brew or apt-get).
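
In practice the flow looks like this:

    $ devbox add python2     # records the package in devbox.json
    $ devbox shell           # enters an isolated shell with it available

with devbox.json ending up roughly as follows (a minimal sketch; the real file may carry more fields):

    {
      "packages": ["python2"]
    }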

See it in action: https://youtu.be/WMBaXQZmDoA




Looks really good. A layer over Nix that makes it tolerable sounds like a godsend. I’ve had a TODO item for “somehow use Nix for our dev env” at Notion since 2019, but I keep kicking the can down the road because it’s better to tolerate a 500 line setup script than learn all the Nix stuff. Of course we could use Docker, but then on Mac our dev process would be 3x slower.

Congrats to Daniel and the team! Excited to see what’s next after this.


Nix absolutely needed a wrapper like this to be used 'in the real world.'

When you think about bringing it to production (eg getting dev teams to migrate to it), Nix goes from a genuinely interesting idea to an "oh, that's cute" experimental toy because no one is going to spend hours learning Nix's weird DSL. It's simply not approachable in its base form.

I spent hours converting my devboxes to NixOS and managing my dev environments with home-manager and I still don't have a clue how any of it works. Errors are opaque and annoying to debug. Dev environments constantly break and change in ways that belie the "reproducible" nature of Nix. If someone actively interested in Nix can't easily grasp it, how does anyone expect it to catch on in the real world?


My theory is that we need better learning resources (that’s why I’m working on https://mimoo.github.io/nixbyexample/), better tooling (debugging or starting a project from scratch is hard), and better integration with rust/js/etc. projects based on convention rather than configuration: I shouldn’t have to point to more than a Cargo.toml or package.json, and nix should do the rest.


This looks great. As someone who started learning Nix in the last few weeks this is definitely needed and will prove invaluable to future learners. I'm still completely lost on a lot of things but if I work something out that isn't covered on your site I'll try and put together a PR for it.


That’d be great!


The problem is boundless complexity. Learning resources won’t help anyone to learn and manage boundless complexity.


Are you really claiming Nix has boundless complexity?


> Nix goes from a genuinely interesting idea to an "oh, that's cute" experimental toy because no one is going to spend hours learning Nix's weird DSL. It's simply not approachable in its base form.

Here's an opposite opinion:

> My hot take is that Nix actually has great syntax

> In particular, the way Nix handles record syntax is exemplary and more languages should copy what Nix does in this regard

And even more related to this discussion:

> A lot of the times, when people say they hate "Nix's syntax", what they more likely mean is that they hate the domain-specific languages of the Nixpkgs overlay system and/or the NixOS module system

https://mobile.twitter.com/GabriellaG439/status/156300116656...


When there are issues with debugging breakages of such environments, will a wrapper be sufficient? Does devbox have extra handling for such scenarios?


The opacity of Nix errors is largely (entirely?) constrained to the module systems (notably NixOS, Nix-Darwin, and Home Manager). (This is unfortunate, because these are actually some of the best parts of the Nix ecosystem.)

The issue is essentially that configuration options all get merged into a global namespace, but there are no facilities to track where they came from. So when configuration mismatches of certain kinds occur, you get an error in some library code that's trying to merge or coerce two incompatible values, and nothing pointing you to the two places where the conflicting values are originally set.

(This kind of error, the most common mostly-useless error message, is typically easily debugged by searching for the relevant options in your configuration and in the source code of that collection of modules. But that is still backwards and a chore, and deserves a real solution.)
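
As a hypothetical illustration (these option names don't exist; exact error text varies by Nixpkgs version), a conflict like this tends to surface as a failure inside lib/modules.nix rather than as a message pointing at both definition sites:

    # module-a.nix
    { services.myapp.port = 8080; }

    # module-b.nix, imported into the same configuration
    { services.myapp.port = 9090; }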

Anyway, the package definitions and builds in Nixpkgs don't use any such module system in any way. So this tool is not wrapping the functionality that is associated with opaque error messages. :)


I think you're confusing Nix, nixpkgs and NixOS.

(Also Nix and Nix-flakes are two different things, like Javascript and React.)


I think this attempt at clarification is just proving xvector’s point.


It's not an attempt at clarification. It's best if you start with the problem you're trying to solve, then work backwards towards a solution.

"Nix" isn't really a thing, no more than "Linux" is. It's a collection of tools and languages and frameworks people use to solve various very different problems.


See, this is why I would like to use a wrapper.


>Of course we could use Docker, but then on Mac our dev process would be 3x slower.

Doesn't devbox depend on Docker, though? I figure any performance losses from Docker would happen with this too.


The dependency on Docker only exists for when you want to turn your shell into a container – but it's not otherwise used when you're just running a shell locally.

When writing javascript there's often a desire to have "isomorphic" or "universal" applications: write the code once and run it in either the client or the server.

Devbox is taking a similar approach to the development environment: declare it once, run it locally as a shell, and when you're ready, turn it into a container without having to re-declare it. It's only the latter functionality that has a Docker dependency.


It actually doesn't have to be, since BuildKit can be run as a standalone daemon, and you can run the result via containerd, which is a dependency of Docker. Might write a PR over the weekend(s).


Hmmm, I'll be keeping an eye on this one then!


Only for the build step (e.g. making a docker image). The dev step runs entirely on your machine.

I got that from the README btw, no guarantees :)
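
Roughly, if I'm reading the README right:

    $ devbox shell    # dev step: runs locally via Nix, no Docker needed
    $ devbox build    # build step: shells out to Docker to produce an image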


If you like the concept you might also be interested in bob[1]. Same idea for build isolation crafted into a build system/task runner. (I'm the author).

[1] https://github.com/benchkram/bob


I just set up a docker dev environment on Mac and I would not call it slow anymore. If you were referring to volume mounts, they are 10x faster now. Definitely fast enough for our work.


> they are 10x faster now. Definitely fast enough for our work.

They are definitely faster and great improvements.

But when I can use Linux through qemu and compile my company's Haskell application in 45s rather than 3m30s...

The choice is obvious to use Linux.


Depends how much memory your container needs to build…


> they are 10x faster now

What changed?


Virtiofs


> Of course we could use Docker, but then on Mac our dev process would be 3x slower

Out of interest, what in your dev process makes using containers so impactful?


Docker on Mac and Windows runs in a VM, not natively.


3x slower is not regular VM overhead range.

Using the dodgy file share volume mounting, maybe for particular file access patterns.


They probably want their source code on their "real" filesystem which makes Docker painfully slow.


> Of course we could use Docker, but then on Mac our dev process would be 3x slower.

Not in my experience.

If you set up volumes for node_modules or any folders where dependencies are stored you get the same performance on Mac as anywhere else.

For rust, which I use, I'm able to get better performance on Mac under docker than using rust tools natively. See https://purton.tech/blog/faster-rust-incremental-builds/
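
The trick, roughly (a docker-compose sketch; service and volume names are illustrative):

    services:
      app:
        image: node:18
        volumes:
          - .:/app                            # source code, shared from the host
          - node_modules:/app/node_modules    # deps stay on the VM's fast disk
    volumes:
      node_modules: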


Having a deterministic environment that actually runs on your laptop is so much better than working inside a docker container. Inside a container the file system is very slow, and you lose all of your other tools and shell aliases.


This is a cool project and yay nix, but:

> Inside a container the file system is very slow

Because you are not using "a container".

You are using a container that happens to be running in a linux VM on your OS X laptop. It's not the container's fault, it's the entire virtual machine that you are running because you're trying to run technology built on top of one operating system while running a completely different one.


This. Put another way: it's slow only because you're on macOS and not on Linux.


That's why Linux needs more love.

I find it sad that many people use Docker, but the majority of them run Linux in a VM inside an expensive proprietary platform to do so, sometimes without even realizing it.

Meanwhile, the Linux desktop ecosystem is deteriorating and should be used more and receive more financial support.


Totally fair.


Overlay copy-on-write filesystems are slow. I think most container runtimes use them.


compared to full-VMs-pretending-to-be-containers everything is fast.

it seems like everybody here is using Macs for development to the point that if you don't say you aren't, you are assumed to be on a mac. Windows with WSL2 is actually pleasant to use, I can recommend trying it out. while WSL2 is a VM technically, the level of integration makes it basically native (if you use the VM disk for your workspace, which you should.)


> if you use the VM disk for your workspace, which you should.

Yep, if you try to build something that is stored on /mnt/c you'll have the same horrible performance.


It's not the COW filesystem that is slow, it's bind mounting the host filesystem into the VM. This ends up using something like 9p or sshfs.


What I really want is a tool like docker compose that builds on top of a tool like this (instead of docker)


Assuming you want the service management aspect of docker compose, look at user-mode process supervisors like supervisord. You can install it in your nix environment and configure all the dependent services it should manage. It's a little more crufty vs. the simplicity of docker compose, but it also has a lot more power and flexibility. There are tons of other options to explore too, like systemd user services or skarnet's s6 suite. See the sketch below.
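
For example, a minimal supervisord config managing a couple of dev services might look like this (program names and commands are illustrative):

    ; supervisord.conf sketch
    [supervisord]
    nodaemon=true

    [program:postgres]
    command=postgres -D ./data/pg
    autorestart=true

    [program:redis]
    command=redis-server --port 6379
    autorestart=true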


What I really want is to be able to do things like spin up an isolated postgres instance to run my unit tests against, with zero config on my part except specifying a postgres version in my "docker-compose.yml" equivalent.


+1


File systems inside the container aren't slow unless you're running on Windows or some other virtualization platform that makes it slow.


So, that’s what this is?


I haven't run devbox, but the README says that it requires docker to be installed. This makes me believe it isn't running the programs on my system (macos), but rather in the Linux VM.


It will use Nix to build a shell on your local machine directly, so you can run the program on your system directly (not in a VM).

It also has some early functionality where you can turn that shell into a Docker container, so you can run the same shell + program in other environments or in the cloud.


so an abstraction on top of docker?


It uses the Docker CLI to build container images out of the Nix shells it creates, if you ask it to export one to a Docker image for you. Otherwise it just uses Nix locally, no virtualization.

I wonder if it takes this approach because there's some issue with using Nixpkgs' dockerTools on macOS— those tools let you create Docker/OCI images without even having Docker installed.
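
For reference, a dockerTools build looks something like this (a minimal sketch; see the Nixpkgs manual for the full attribute set):

    # image.nix
    { pkgs ? import <nixpkgs> {} }:
    pkgs.dockerTools.buildLayeredImage {
      name = "devshell";
      contents = [ pkgs.nodejs pkgs.jq ];
      config.Cmd = [ "${pkgs.nodejs}/bin/node" ];
    }

Running nix-build on that produces a tarball you can `docker load`, which is why Docker itself isn't needed at build time.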


https://nix.dev/tutorials/building-and-running-docker-images... asserts that you'd need a remote builder or cross-compiling. So it seems possible (unless maybe there are still enough cross-compile issues?)


Can cross builds hit the main binary caches or is that awaiting Hydra support for content-addressed derivations? Are there big caches for cross-compiled packages?


I'm guessing awaiting based on how much rebuilding I had to do yesterday.


docker is not virtualization either


1. Yes it is— there are many kinds of virtualization other than machine virtualization. Containers are a form of operating system virtualization.

2. Docker on non-Linux also requires machine virtualization; the `docker` CLI connects to a `dockerd` daemon running on a virtual machine.

The question I posed earlier is motivated by understanding this and knowing that on macOS, the binaries that devbox runs on your local machine are completely different (of a different architecture, even) than the binaries that devbox will pack into a Docker image. Since it uses Nix to build them, it either needs to run Nix in a virtual Linux guest (which it does using the guest that Docker for Mac sets up) or have its Mac-bound Nix build Linux binaries.

Using `docker buildx` obviates the need to set up devbox's own Linux VM to target as a remote builder, or, alternatively, to cross-compile the required Nix packages. But it comes at the cost of adding a dependency on Docker even though devbox doesn't use the Docker runtime for its shells.


No, it just creates images compatible with docker.


This is shelling out to nix-shell (https://github.com/jetpack-io/devbox/blob/97c19c370287e203bb...) which means that it will only support bash AFAIK. I've seen a lot of discussions around using nix-shell for dev environments, but read somewhere that it's not really made for this purpose originally, rather just for building packages, bash being only one of the limitations.

I tried to experiment myself with nix-shell, but I think it doesn't provide separation on the machine on which you run, it's not a chroot nor a docker container. If you are interested in some level of separation to make sure that a dev environment is "safe" to run on your machine without making side effects to things external to the project, then I'm not sure nix-shell would be able to help, but I would be happy to learn there is an option to do otherwise.


I have been avoiding nix for a while since I had a bad time with it several years ago, however recently used it for a small Haskell/Latex environment for literate programming and found it worked really well. I’m probably going to invest time into learning it now, as the online docs seem to have gotten better.


I have been using nix myself quite a lot, but mostly in already isolated environments, both a vm and GitHub codespaces.


It sounds like the improved docs have made a difference already, and that's great to hear. Could you name anything else that you think made your more recent experience so much better than your prior one?


External blog posts have helped, it’s easier to Google ‘nix-shell for <foo>’ and copy working code into your local environment. Honestly I couldn’t write hello world in Nix, but I can scrape together bits that work now and hopefully (according to the Nix ethos) will always work.


This. One of the reasons I use docker dev environments is to keep all my sensitive stuff separate from dangerous bugs and malicious external packages/modules in my projects.


Nix can build Docker containers, VMs, ISO images, etc.

I build ISOs from nix all the time to run special compute nodes. The machines boot from the ISO, so every boot they get the same, sane environment.


Yeah I know, my whole comment was about nix-shell and about this project posted here, not generally on nix.


nix-shell would happily run inside of any container, vm, etc.


I run nix in a vm all the time. The point of this project seems to be to avoid this approach and use local dev natively, which would be a godsend. I am pointing out that there are a bunch of things that are not really supported or ideal with the chosen nix-shell based approach.


Don't let perfect be the enemy of good comes to mind :)


Even with old `nix-shell`, you could always use `nix-shell ... --command fish` or whatever it is to launch a different shell. Historically other `nix-shell` wrappers, like `direnv`'s Nix integration, have also supported non-bash shells well.

> [I] read somewhere that [nix-shell]'s not really made for this purpose originally, rather for just building packages, bash being only one of the limitations.

Yeah, nix-shell was originally made for debugging Nix builds. The first capability it gained was setting up the build environment of a given package, which equips you with the same compiler, linker configuration, etc., as the package would get if you ran nix-build, or when the package is built on CI/CD or whatever. It even loads the bash functions that the build system invokes so that you can experiment with manually adding steps before and after them and things like that.

But it's gained other capabilities since then, like `nix-shell -p`, whose purpose is a more general try-before-you-buy CLI and magic shebangs. It also has two descendants, `nix shell` which is just about letting you set up software, and `nix develop` which is more oriented toward setting up whole development environments and all the env vars associated with it. Anyway I think that's mostly trivia; it doesn't pose any problems for devbox afaict.
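
In command form (the last two need the experimental flakes feature enabled):

    $ nix-shell '<nixpkgs>' -A hello   # debug a package's build environment
    $ nix-shell -p hello jq            # ad-hoc try-before-you-buy shell
    $ nix shell nixpkgs#hello          # flakes-era: just get the software
    $ nix develop                      # flakes-era: full dev environment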

> I tried to experiment myself with nix-shell, but I think it doesn't provide separation on the machine on which you run, it's not a chroot nor a docker container

That's true, and that's really the beauty of it: you can set up complex toolchains and use them as if they were simply part of your normal system, but without worrying that they unwittingly depend on parts of your base system that may be unique to you. Likewise, they don't require any permanent changes to your normal, global environment at all. If you've used Python before, you can think of nix-shell like a generalized venv.
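
The venv comparison, in shell form:

    $ python -m venv .venv && . .venv/bin/activate   # scoped Python packages
    $ nix-shell -p nodejs postgresql                 # same idea, for any package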

> If you are interested in some level of separation to make sure that a dev environment is "safe" to run on your machine without making side effects to things external to the project

Nix can provide sandboxing for builds, for proper packages. So if you want to make sure your environment is complete, adding to your Nix shell development environment to make a complete package may help you.

But the purpose of shells like this isn't to protect you from running `rm -rf /`, if that's what you're after. It doesn't protect you from dogecoin miners in your `npm install` hooks, if you're just using Nix to provide `nodejs` and then running `npm install` as usual.

What something like this does do is allow you to use all that software without installing it. So if you open up a new session, none of that stuff will be loaded.

Nix can generate container images and VMs for you, though, and that is also one of the things `devbox` can do, if your isolation concerns have more to do with security (disallowing access to your system) than 'purity' (disallowing dependency on your system, not installing things to your system).

I hope that makes sense :)


This all makes sense to me, thanks for the comment. I am aware of the --command option, but I didn't manage to do everything I wanted with it; honestly, though, that was a while ago. I was discouraged by people telling me that this wasn't the "right way" because tons of things in nix-shell assume bash, but honestly I don't know the details and I should try again.

> But the purpose of shells like this isn't to protect you from running `rm -rf /`, if that's what you're after. It doesn't protect you from dogecoin miners in your `npm install` hooks, if you're just using Nix to provide `nodejs` and then running `npm install` as usual.

This is absolutely fair. I was mostly saying what I wish I could have: isolation (as in can't write outside of the current directory) together with the ease of getting packages without installing them that nix-shell provides, without the overhead of docker or a vm. I don't think it's impossible to build, although I appreciate that it may be out of scope for this particular project.


Looks really nice! I'm wondering though whether this is luring me into a dangerous complexity cliff.

It seems easy to start using, but will I run into some issue a few months down the line, like a package that's not available through Nix, or some Nix problem, and then I'm back to dealing with Nix's complexity, except I got there while staying clueless about Nix.


There are also some rough edges that you need to take care of.

For instance, my nix installation's `locale` was not `UTF-8` (after the first `devbox shell` run), but all my tools and code need the locale to be UTF-8. When you quickly check `locale` after `devbox shell`, it keeps returning your host shell's locale instead of the nix one, so you can easily be fooled about which layer you're really running in. The `which` command also loses its purpose.

Then you need an extra step to fix the locale: https://nixos.wiki/wiki/Locales
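
The fix from that page boils down to pointing glibc at a locale archive, something like this (a sketch; the exact attribute path can differ per platform):

    export LOCALE_ARCHIVE="$(nix-build --no-out-link '<nixpkgs>' -A glibcLocales)/lib/locale/locale-archive"
    export LANG=en_US.UTF-8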

Then you find other similar issues that need to be taken care of, like conflicts with `rbenv`, `poetry` and so on.

It's an awesome idea, but with `docker`, `podman` or fedora's `toolbox` I have more certainty about what's actually running.

With `docker` you can `docker inspect` or something like that. `devbox shell` tries to merge the nix layer with my own shell in an unpredictable way.

Anyway, nice project indeed.


    A consistent shell for everyone on the team

    Declare the list of tools needed by your project via a
    devbox.json file and run devbox shell. Everyone working
    on the project gets a shell environment with the exact
    same version of those tools.
Why not just use a Dockerfile?


Dockerfiles are repeatable but they are not reproducible - they don't give you the same version of everything every time you run them.
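
A sketch of the difference (version numbers are illustrative):

    # Variant 1: repeatable, not reproducible; resolves to whatever
    # the registry and apt archive hold at build time
    FROM node:18
    RUN apt-get update && apt-get install -y jq

    # Variant 2: pinning tags and package versions helps, but the apt
    # archive itself still moves underneath you over time
    FROM node:18.12.1-bullseye
    RUN apt-get update && apt-get install -y jq=1.6-2.1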


You're right, but it depends on workflow. IMO they should be pulling the docker image itself and not building the container from the Dockerfile.


It depends on other external circumstances, but other benefits of nix would be:

- they could pull less data (a nix configuration is smaller than a docker binary blob)

- nix is entirely deterministic (docker depends on whenever that image was built)

- nix implicitly shares its components across these dev shells (if there are many of them), and also with your main system

Yes, Dockerfiles can also be chained, but if the later layers are rebuilt, the earlier ones had better have new enough versions installed (i.e. it's a can of worms and the entire chain needs to be rebuilt anyway).


One can absolutely lock versions in their Dockerfile. I can see that the design principle of Devbox is to pin the versions. At the end of the day, we all need to consider versioning (i.e., the image version) of the versions (i.e., the package versions) anyway.


Reproducible Docker is fighting an uphill battle; Nix and its ecosystem try to be reproducible by default.

Docker's ecosystem aims more for repeatable, not reproducible.

> At the end of the day we all need to consider versioning (i.e., the image version) the versions (i.e., package versions) anyway.

The granularity of pinned versions, and the feasibility of having a culture where everything is pinned versus needing to know a crazy number of things that need pinning, make a big difference.


This is where version pinning comes in. Most base images use specific versions of packages.


Smaller docker images for one:

https://grahamc.com/blog/nix-and-layered-docker-images

Nix gives reproducibility; docker gives repeatability but not reproducibility.

Also see the video "Use flake.nix, not Dockerfile": https://www.youtube.com/watch?v=0uixRE8xlbY

you can swim upstream and make your own docker image reproducible, but that doesn't change an ecosystem of images that aren't


Thanks.

> Devbox was originally developed by jetpack.io and is internally powered by nix.

If this wasn't mentioned, I'd have easily mistaken devbox to be an escaped implementation of Brazil VersionSets and Apollo VersionFilters (used internally at Amazon).

If I may, in the near-term does jetpack.io plan to continue to use and sponsor devbox's development? And, in the long-term, does jetpack plan to monetize this project, or donate it to a foundation, or some such?


Yes, we use it internally for all of our development and plan to continue actively maintaining it. If there's enough interest from the community we would love to make it a Cloud Native Computing Foundation project.


Small bit of feedback: as a Nix fan and user I love to see a more usable veneer over nix tooling, but I feel the repo should give a bit more credit to Nix and explain how Devbox works.


Nix shells are a killer feature, so it seems smart to me to create a wrapper around them to simplify the experience. On the other hand, it's another layer of abstraction that you need to learn and debug when stuff breaks.

Will definitely keep an eye on this.


I use nix/devshell/direnv for pretty much everything nowadays, any tooling that makes nix easier for all users is absolutely a good thing. Nix going "mainstream" would benefit everyone.


Extremely, violently unpopular opinion: a team of people developing on their laptops is an anti-pattern.

I know what you're gonna say. You'll say, "It's 2022. We developers don't have internet connections; even when we do, our internet is off most of the time, and we can barely transfer 1.25 megabytes per second. How could we possibly use an IDE and filesystem monitor to copy a 10kB source file when we write to it, to a remote container + open shell with our app running? The technology is simply too complicated for us. We will never see such wonders in our lifetime."

And I would say to you: take courage. If you believe in yourself, and maybe with some luck, it might be possible for you to do development remotely, like every server admin has been doing over telnet and ssh for over 30 years.

Maybe in another 30 years, we will have gained the technological insight to be able to figure out how to have a team full of people to work on a single server at the same time, all connected to a network over an internet connection, with some crazy interface that keeps their shell session alive even if their internet disconnects. A man can dream...


If that workflow works for you, that's awesome. As a lifelong Linux user who works primarily in the terminal and uses a terminal based editor, it would work for me too. Dismissing people who work differently as being cowards is a stance I hope you'll reconsider.

Dictating to people how their workflow should look - _that_ is the antipattern. Developers should choose their OS, their editor, and virtually everything about their workflow. Forcing them to work remotely robs them of the opportunity - you can work graphically over SSH, and you can use SSH from Windows (if that is your preferred OS), but it's a hack that isn't always going to work quite right and is going to create unnecessary friction for people with different workflows than you or I. That's to say nothing of auxiliary tools, like debuggers, that people may want to bring to bear. Don't get me started on getting debuggers to work over SSH, yes it can be done, yes I have done it, no I don't ever want to do it again.

The infrastructure should bend to conform to the developer's needs, and not the other way around. Generally technology should conform to the needs of humans, and humans should not be asked to contort themselves for the benefits of technology.

For instance, at my last company, I was asked to use a Mac laptop so that the team could standardize on Mac. I complied to be a team player, and this turned out to be a mistake. It robbed me of so much efficiency in the first few months, and it never worked as well for me as Debian running on my personal laptop, which cost a _third_ of the price of that Mac. I had so many frustrating issues and things that just never worked properly. But for other people, it was a great choice; I have no interest in bashing the Mac - my point is that people should be allowed to choose and customize their own tools.


> Dictating to people how their workflow should look - _that_ is the antipattern. Developers should choose their OS, their editor, and virtually everything about their workflow.

I'm not convinced it is. Back in the day, each developer had their workstation (not laptop) configured just so, with their own pile of `doit.sh` scripts to tickle the system in just the right way, to get things to compile and render and send it off to production. But we're no longer there. Developer velocity is a thing companies take seriously, so developer workflow is actually important to them, and standardizing one true editor as the supported editor means that all the developers get improvements whenever the internal tooling team does a release. And I'm saying this as a vim person who's tried to move over to VSCode (and have failed so far).

Of course, smaller companies don't have an internal tooling or developer productivity team, and thus allowing people to choose their own tools is optimal. But I've also seen the big picture efficiency loss that results from every developer having a bespoke configuration that's intelligible only to themselves, and the inability to unilaterally improve people's tooling and integrations with various systems they interact with. Once the team gets debugging over SSH working or whatever, they can just deploy that, in a working state, to everyone.

There's a world where Slack is the one true communication method and everyone is on Macs. Unfortunately for me, I'm set in my ways and would rather use Ubuntu on a Thinkpad with vim, but that doesn't mean I haven't seen that other world.


Have you tried neovim? You can have pretty close to feature parity with VSCode due to its LSP support; for most languages, you'll be using the exact same Intellisense engine (LSP server) as VSCode. (Python is the exception I'm aware of, but python-lsp-server is serviceable even if Pylance, the proprietary VSCode one, is better). And when the internal tooling team ships a new version of the LSP server, you'll be able to use it too!

When it comes to everyone building things the same way - that's what containers and CI/CD is for. If we didn't have those tools, then I may be more inclined to agree, but we do and they work. Do they work perfectly? No. But they ought to surpass this fairly low bar. If they didn't then I don't see how we'd have confidence in a centralized solution either.

I literally cannot use an editor other than neovim without hurting myself due to an RSI problem. Believe me, I have tried them all. If I was told that I must adopt VSCode, I would literally have to quit for my health. That being said, VSCode is an excellent tool, and some people I know with RSI issues swear by it. More power to them.

Developer velocity is definitely not going to be helped by handicapping your developers and forcing them to use tools that don't quite fit their workflow. I'm reminded of an anecdote about fighter pilots, it may be apocryphal, I don't know. But it goes like this; the air force was designing the seat for their fighter planes. So they took measurements of a statistically significant number of fighter pilots (all or most of whom were men), averaged them, and designed a seat to accommodate that figure. But pilots complained the seats were uncomfortable, and as the demographics changed and more women became fighter pilots, they were very poorly served by the chairs. Eventually they realized that they had designed the seat for a body that no one had. So when they redesigned it, they designed a seat which was customizable, and allowed each pilot to get a seat which was comfortable for them.

Anointing a "one true workflow" is a similar mistake. It will chafe at your developers and you'll lose velocity due to the chafing. Over time requirements will change and the workflow will cease to meet anyone's needs, and if your developers remain productive it will be because they are going behind your back to use what actually works for them. Your developers will leave you because they're tired of being patronized, and you'll lose velocity and institutional knowledge onboarding new people.

The internal tooling team can't be all things to all developers, that's true. Let them ship what they ship, and let other people figure out how to adapt it into their workflows. If someone is performing well and getting stuff done, do you care if they're using the internal tools? Or if they've wrapped the internal tool in some macros or something to better suit them? If someone is using the anointed workflow but can't work for more than an hour at once because it's too mouse heavy and it hurts their wrist, is anyone in the situation happy?


I would like to put in a vote of confidence for emacs and evil-mode, here. it's vim with all the emacs goodies.


> Maybe in another 30 years, we will have gained the technological insight to be able to figure out how to have a team full of people to work on a single server at the same time

Single repository, yes. Single development environment, no.

Development environments should be isolated, so developers don't step on each other's toes...

Also, centralised cloud development machines are, by definition, a single point of failure, with small gains in consistency of development experience...

> all connected to a network over an internet connection, with some crazy interface that keeps their shell session alive even if their internet disconnects.

You know what's better than this? Not relying on the internet connection at all!...

Decentralized development on local machines is simply a better experience and relies less on a giant cloud infrastructure


And tools like the one that spawned this HN discussion are going to help homogenise things and make it easier to go local.


> Maybe in another 30 years, we will have gained the technological insight to be able to figure out how to have a team full of people to work on a single server at the same time

Why is this desirable?


Lightweight terminals with days of battery life while a juggernaut handles the real work.


But we already have notebooks with battery life for two work days and very good performance, like the M2 MacBook Air. And I highly doubt that it's the CPU that is the limiting factor for battery life and not the screen.


not to mention, horsepower is hardly ever the choke point these days. I would say that the greatest limiting factor here is the lack of screen space, because now your dev must context-switch to read documentation, etc.


The only advantage I can think of in this scenario is monitoring of employees.

As for disadvantages:

- no work offline

- no freedom for different OS, IDE, tools

- Single Point of Failure (that server goes down, connection drops and nobody can do any work).

IMO the cons are much worse than the pros. I am curious what other advantages you would see in this approach.

EDIT: grammar, format


That is how I and everybody else were developing software at my university around 1990. A few HP servers, character terminals or X11 servers. Then everybody got a Windows desktop at home powerful enough to work as on those X servers and it was game over.

I never saw a company developing on a shared server. Do you work for one doing that?

In that environment I expect developers to need separate environments, so that a mistake doesn't stop the whole team. Let's say: docker containers running on the server instead of pulling an image locally. I don't see much of a gain.

Personally I could use my emacs to edit files on the server, my terminal to ssh into the server, and my browser to get pages from there. For people using IDEs, those IDEs should either work in a different way or be used over a remote connection (RDP, VNC, X11). I remember Citrix thin terminals but I don't remember developers using them. They were for end users.


I've worked at a company where we used a shared server, relatively recently. Isolated environments were different /home directories and agreed port ranges. Actions that interfered with other folk were mistakes that happened once, because we were (for the most part) not a bunch of raging arseholes and were capable of talking to each other like adults.

It was a lovely environment to work in, in part because sharing our work was a matter of "yeah, I've stood that up on port 6001, can you take a look?" Or "take a look at /home/foo/whatever.py, I think the bug's in there but I can't spot it".

The other part was that it was an absolute beast of a machine for the time. RAM for days, and more cores than hot dinners. And, critically, a very close match to our production machines. That matters more than you'd think, for a large set of problems.


This sounds fascinating. What was the config of that beastly dev machine, if you can tell?


Why is it a good thing to give up remote development for a centralized solution?


I dunno, man. Some "dev ops" jerk at work is always complaining about how he has to spend all his time answering support requests from developers with errors in their local environment. It only takes like a couple hours of my week, so what? And then when our deploys fail because the server didn't have the same version of python as our local environment, he tries to blame the fact that the local and remote environments are different!

Can you believe the nerve of that guy? Keeps trying to convince us that a single centrally managed system is easier or more reliable (or whatever) than 30 randomly configured ones on multiple operating systems! I think he's nuts, personally. I have half a mind to tell my manager that he's trying to disrupt our ideal workflows.

And really, what's the problem with testing my app in an environment that isn't the same as production? Yeah, sure, I might save some time not having to maintain a local environment that everyone on my team can replicate. And, sure, developing remotely would let me change laptops without spending a week to set everything up again; but it's only a week, I have a ton of those left. And yeah, maybe the cloud network has a bunch of services that I need proxies and VPNs and other things to test from my laptop. And granted, doing development on a server with 40 CPU cores and the same network as the database and webserver is faster than my laptop.

...But if all that was better, I'd be doing it already. If it's on my laptop, I know how it works. What's better than what you already know?


That's a really snarky way of saying, "We haven't figured out how to use Docker yet."


That sounds like a problem that can be solved with a CI/CD system and Docker Compose. Perhaps there is more going on here institutionally that stops this problem from being solved.

It sounds like something really frustrating is going on though, and I hope it works out.


Hey I'd like to expand on this a little bit. I hope I'm not presuming too much but as someone who recently had to leave their job due to burnout, I feel like I'm seeing signs of burnout here, and wanted to bring them to your attention.

You sound really frustrated because you're being asked to do things that you feel aren't or shouldn't be part of your job, and perhaps it's taking time from your other responsibilities and creating a larger workload for you. It seems like maybe you've come to resent your coworkers and their individual preferences for the complexity it creates and the stress it causes you. It seems like what you want to do is erase that complexity, take away their agency, and centralize on a solution that works well for you.

There's a couple of things to consider. For one, what you're asking is for your coworkers to have a degraded work experience so that your work experience can be better, which is totally understandable but not reasonable.

Another one is that this is a pretty well understood problem with known solutions - why is it you can't get them implemented? Is there something going on in the politics or culture of your workplace that is stopping you? Is there some kind of elephant in the room, and would things get better if you called it out?

The last one is, are you getting burned out? Were you always this frustrated? Should you take some time off? Should you start looking for other jobs?

I can really feel the stress and frustration in your tone, and that sucks. I hope things get better for you.


This is nonsense. We support a dozen developers running fully baked local development environments along with cloud-based dev, staging, and production environments. Multiple heterogeneous environments are normal.

Granted, we don’t run any node, python or Ruby, because yeah I don’t want to spend all of my time debugging monkey patching bugs.


I’m sat in a plane right now and am able to code with a high latency, low bandwidth internet link without issue. If I were using something from this dystopian vision of yours I’d not be able to do that. It doesn’t matter how much “courage” may be involved, frankly.


> some crazy interface that keeps their shell session alive even if their internet disconnects.

This is a solved problem, see eg mosh. Not so crazy.

But, I don't want my development environment running on the same box as other people's environments, it just doesn't make sense: people stepping on each other's toes, the whole box breaking... I really don't want that.

Give me an environment that's possible to build from scratch with a push of a button, and I'll be happy. Anyway in a larger project there needs to be a person/team responsible for keeping the dev env up - either they can keep the shared one up with whatever black magic, or they can make sure the button to create the environment will work for people wherever and whenever they want.


Remote development will be popular? Yes.

But developing on a monolithic machine may not be. The development environment should be clean and isolated, and products like gitpod and coder are promising.

Besides this, maybe you can have a look at https://github.com/tensorchord/envd and https://github.com/okteto/okteto


I agree with the need for a standardized development setup (be it local or in the cloud) but violently disagree with eliminating development setup diversity.

Curious engineers with oddball configurations greatly contribute to the overall health of a codebase. Forcing these folks to use a standardized configuration is a missed opportunity at best, and disgruntling at worst.


It is expensive though. Can get some nice local hardware for those cloud costs and then there is the latency.

A cool option is give everyone the same laptop and set them up from a disk image. Keep data you want to keep on another partition. Reimage every so often with the latest required tooling. New starters will be thankful.


I used to work on remote servers, when the laptops were not powerful enough to sustain a full dev environment and at the same time renting dedicated servers wouldn't break a bank.

Today, getting a server more performant than my laptop (M1 MB PRO) is not economically viable.


I've certainly had internet outages where I've continued to work on my laptop.

I can't even imagine how expensive it would be to run like this. Sharing an environment is a _terrible_ idea, you need isolated environments.


Working in a remote forest is actually pretty pleasant. Offline matters.


How do you then work from a train for example?


agree: I compiled a list of supporting facts and dissenting opinions here https://dx.tips/the-end-of-localhost

this is of course extremely hated on HN, which understandably loves independence and self reliance.


Speaking as a person who does have a strong gut reaction against cloud development, I think a lot of the justifications like 'what if I'm on a train' do mask a real desire to say it just feels bad. Public reason.

Is it logically consistent with other things I don't own? Like Chrome, within which all my apps run?

No. But is there something that feels right about having all my code running on the CPU on my lap? Unaccountably, yes.


good for being intellectually honest about those justifications :)

still, your feelings matter. imo, totally welcome to them.

some people also like keeping gold in their safes, cash under their mattress, running on their own power and food off grid. i will fight for their right to do that.

but also i observe that most people have a demonstrated tendency towards centralization for convenience, and the technology is coming along for mass availability of this tech.


there are more reasons that this take is bunkum than that article seems to be aware of.

- security. the most common vector of attack is through dev boxes. centralizing creates a single point of failure.

- vendor capture. right now, it's cheap to "do a dev". if you put everything in the cloud, the FAANG that be can start charging rent.

- vscode or (insert cloud IDE here) might work great for some, but my local emacs is better. fight me.

- the longer the distance between your devs and the metal, the dumber they get. you learn best by working with visibility into what your code is doing. many of the efficiencies mentioned will lead to educational drift, which is bad.

- connection speed. he gets real hand-wavy about "developing on a plane" but even developing on a satellite connection makes latency-dependent tasks super slow. try running Ansible from your Starlink and get back to me.


fwiw I know some FAANG companies (Google, Meta) have already transitioned to remote development machines that do exactly this.


It's unpopular because it's a terrible idea. Just because it's easy ssh'ing into the box, doesn't mean you should. Twenty years ago we didn't have the tools to do anything better. And while I miss some of the simplicity of those times, I don't miss when someone ssh'ed into the server and fucked something up and left us in the lurch. If you really think this is the best approach today I really hope you aren't working on any critical systems.


It's for dev systems not production.


After trying to find out how this works, I am none the wiser.

How does this guarantee consistency across Docker, Kubernetes and the local environment?

How are base images selected? Where do the packages come from? Are the packages all Alpine/x86_64 or from Nix directly? Who builds them? Who signs them? Who deploys them?

Following the demo makes this seem unlikely to be made with reproducible builds in mind, because it shows go's version as darwin on the host system, which probably isn't what will be used in Docker!?


Love the idea and we are all thinking Nix needs the Docker treatment.

I believe, though, that something has to start from scratch rather than rely on Nix.

Nix ushered in the idea but needs a new face. Building on top of it may be too problematic long term.


My first thought was: this is exactly what I'm looking for. Ah, it needs the Nix package manager installed on my Mac; I'll pass for now.


I agree (although I use Linux, not macOS). I tried to use nixpkgs in an existing Slackware installation and... it worked but has a bunch of warts, especially GUI applications don't respect my desktop settings


Um... Interesting: same name, same concept? Down to the use of the little box emoji in the CLI.

https://github.com/devboxup/devbox


@dloreto @robrich A little aside from the announcement, but since it seems like you both work on this I wanted to surface something that came up down in a subthread:

I'm curious if you attempted to support macOS by doing this with Nix's dockerTools and cross-compiling (there may be better sources, but it's at least hinted at in https://nix.dev/tutorials/building-and-running-docker-images...)? If so, I'm wondering where that failed or bogged down?

---

Background: I build a tool (https://github.com/abathur/resholve) for ~packaging Bash/Shell (i.e., for demanding all dependencies be present). The tool's technically agnostic, but I built it specifically to fix Shell packaging in Nix.

I think it could benefit a lot of other Shell projects, since one of Shell's big tribulations is dealing with heterogeneous environments, but most Shell projects wouldn't see much reason to endure the pain of adopting Nix if they still had to support the heterogeneous environments.

Much like you're doing here, I've been hoping to figure out how to build a Nix-based packaging flow that can generate deployable standalone bundles or containers. It'd be a heavy way to bundle Shell, but I imagine some projects would take the tradeoff for predictability and reduced support load. But since it would need to take place within a Nix build, I'd need to cross-compile for it to work on macOS. Hoping you know if it's a dead-end or not :)


Haven’t tried that yet. We started with using Docker directly because we were very familiar with it, and thought users might want to “preview” a dockerfile before asking devbox to build a container (we don’t have that yet, but can easily add it)

That said, I do want to experiment with building the container directly through nix and seeing if there’s advantages to doing that. I just haven’t had the time yet.


Makes sense. I'll keep an eye out. Tilting at other windmills for now, myself. Congrats on the great start :)


Shell environments: why didn't I think of this earlier? I'll definitely enjoy this as soon as I've figured out the best way to integrate it with my development processes.

Usually, I use VS Code development containers, where system programs are installed via Docker. However, it's quite tedious to manage installation and versioning of these programs.


This looks really cool! Can you explain the differences between this and `asdf` with its .tool-versions file?

I can currently add a .tool-versions file in my project folder and run `asdf install` to get everything on the same versions for all our devs.

    elixir 1.13.4
    erlang 24.3.4
    nodejs 16.4.2


nix is in some ways a much more refined version of asdf. asdf is basically a wrapper around curling install scripts to bash. It can do anything--it might download a precompiled version of a tool, or it might build the tool entirely from scratch on your machine. It's up to whoever wrote the asdf package, and the quality of the packages there varies _a lot_.

nix goes the next step and has packages defined as an install script plus all their dependencies (all the way down to the basic C library, compilers, etc.). It caches everything based on a hash and provides a public repository where it will just download prebuilt versions of things instead of compiling them from scratch every time. It can do this because it has extremely strict hermeticity and reproducibility guarantees for all of its packages--asdf has none of this and you'll almost certainly just be compiling tools over and over or pulling down pre-built versions that will probably work (as long as you carefully read the package readme and installed all its dependencies).

Don't get me wrong, asdf is nice and great for simple things. If it works for you keep using it. If you start to run into trouble with the quality of its packages or you start writing your own packages, you might want to look at a more comprehensive system like nix.
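
For comparison, a rough nix equivalent of a .tool-versions file like the one above (attribute names vary by nixpkgs version, and truly reproducing pinned versions would also require pinning the nixpkgs revision they come from):

    $ nix-shell -p elixir erlang nodejs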


`asdf` shells out to a bunch of different language-specific package managers. `devbox` only shells out to Nix, which manages system dependencies (including native libraries) and language-ecosystem dependencies in a uniform way. `asdf` doesn't manage deps like the native libraries your nodejs packages use via C bindings for number crunching, or CLI utilities that aren't part of your languages' library ecosystems.

Also, devbox can dump Docker containers for you.


We use something similar at our company, although maybe with not such a nice UX: https://github.com/T-Systems-MMS/docker-management-container

This installs certain tools like helm, terraform or kubectl in the specified version as well as python dependencies or ansible roles and collections in a container to use on your laptop or in cicd-pipelines.

Another nice addition for this is https://github.com/upciti/wakemeops, that provides apt-repository for many tools from the Cloud landscape.


Is it possible to pin a version of the dependencies? Is there any support for nix flakes?


I'm having a hard time seeing the advantage of this vs using the same docker image with all the right dependencies etc across a team. What am I missing? This way or that, the configuration needs to be specified manually.



Wondering how well it integrates with IDEs. Anyone tried it with IntelliJ?


For plain Nix the universal workaround is to run your editor inside of the nix-shell.

A better option is a direnv plugin that modifies environment variables inside of your editor to match your nix shell.
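
With direnv's built-in Nix support that's a one-line .envrc in the project root:

    # .envrc: direnv loads the nix-shell environment into any process
    # (your editor included) that enters this directory
    use nix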


So essentially this seems like another implementation of what toolbox and distrobox are implementing.

https://github.com/containers/toolbox/

https://github.com/89luca89/distrobox

With the downside of not having the option to select a Linux distro and being locked to the nixpkgs repositories?


No, it uses native Nix directly on your machine - Linux, Mac, or Windows. Its integration with Docker is that it can _build_ Docker images from the Nix packages in your "devbox".


I'm assuming for windows it only works in WSL2? cause last time I checked nix wasn't compatible with native windows development, sadly


Native windows support is, or at least was, being worked on.

Wsl/nix should work.


Yeah I assume so too, but I'm not sure.


It's the same abstraction layer though. The implementation for all three is bound to be different.


Devbox is awesome. We use https://github.com/tensorchord/envd to develop AI/ML applications.

And it will be great if users can build and deliver devbox images with CI/CD systems like GitHub Actions.


I think we’ve all been wanting something along those lines for a while. I hope this is it.

Docker is only lightweight in Linux.


I paid the ~cost of learning Nix mainly to get out of needing VMs for daily dev work on macOS. I won't sugar-coat those costs, but I'm glad I did.


I keep thinking of rolling a simpler solution with chroot + something something, but if someone is able to hide the complexity of Nix well enough, sign me up. Those abstractions tend to be leaky though.


Very good initiative. I remember the first time I tried `nix-shell` I thought how nice it is to have this, and too bad its UX (IMHO, like the rest of the nix ecosystem at that time) would hinder its adoption.

Great to see it being repurposed and presented with a much nicer UX this time.


I wonder if this will do for nix what docker did for containers. The interesting part of the timeline is that eventually docker became less relevant, but it was still required to achieve that initial success.


I want to love this, and it looks amazing, but it doesn't work for me:

    05:01:13 $ devbox add python2
    Error: Package python2 not found.
Really bad first-run experience when the demo command doesn't work, sadly.


This error happens because you didn't spawn a new shell after installing nix and devbox. I think the README could make it more obvious, as I stumbled upon this as well.


I did, I was using the wrong Nix package, turned out. In the end I ended up curl | bashing it, and it worked. I really like this project, just needs a bit of love in the error messages.


It's likely you have not installed the prerequisites: namely nix. Without nix, it can't find the package.

That said, we'll improve the error message so that when nix is not installed it tells you you should install it.


As far as I can tell it is:

    05:10:27 $ nix --version
    nix (Nix) 2.6.0
I installed it with `apt install nix-bin` on Ubuntu 22.04.

EDIT: I installed Nix with the old-fashioned "pipe shit to bash" method and it seems to work now. When I say "work", I mean it in the sense of "doesn't fail", because it's been stuck at `devbox shell` for a good five minutes now with no indication.

Not to be too negative, I think this is a great idea and is going to be fantastic, but I guess, like any pre-release software, there are some teething problems.


The package you want on Debian is called nix-systemd-setup or something like that, not nix-bin. (nix-bin just gives you the Nix binary, and I guess if you created /nix yourself you could use it for a single-user install.) The expectation is that you want a multiuser install, which requires a running daemon and the creation of some system users, and the nix setup package handles that. I think the package descriptions do outline this, and FTR you almost never want just a `-bin` package on Debian-based distros. The convention is used to allow minimal installation of just the binaries from packages when they're likely to be used by other parts of the system rather than end users.

The Nix package for Debian has some other deviations in how it's plugged into the system and in the initial setup. The default channel (source of packages) doesn't get set up for you, the PATH ordering is different, and NIX_PATH and NIX_PROFILES_PATH (and maybe PATH?) aren't kept in sudo's `env_keep` configuration, so interactions with `sudo` are different. Anyway, the failure to find any packages is probably due to the lack of enabled channels, or to the setup not being completed (i.e., needing that systemd setup package).

(All of this stuff is up to the maintainers of the Debian package and Debian policy. It's fine, but it violates the assumptions of some third-party Nix tooling.)
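Concretely, something like this should get a working multi-user setup on Debian/Ubuntu (package and group names from memory – check `apt search nix` on your release):

    sudo apt install nix-setup-systemd      # pulls in nix-bin plus the daemon and system users
    sudo usermod -aG nix-users "$USER"      # IIRC Debian gates the daemon socket behind this group
    # log out and back in, then enable a channel:
    nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs
    nix-channel --update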


It would be great if devbox displayed the recommended command to install Nix on first run. First-time users of devbox shouldn't be expected to know the details of how Debian packages Nix.


That's a smart workaround. Maybe something like:

> For the best experience, we recommend installing Nix via the official installer, i.e., via this one-liner:

> > curl | bash, blah blah blah

> (Existing Nix users who have or prefer an alternative setup can still use devbox! See this page about compatibility: [link to a 1/2 page reference doc])
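(For reference, the actual official one-liner from nixos.org, i.e., the multi-user install, is:

    sh <(curl -L https://nixos.org/nix/install) --daemon

Drop the --daemon flag for a single-user install on Linux.)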

(Happily, the experimental features now experiencing widespread uptake in the community will make some of those differences I outlined less relevant.)


The initial shell is slow: Nix downloads and installs a bunch of stuff the first time it runs. We're planning to add a progress bar to make that clear.

Sorry for the hiccups – it's our first pre-release version, but feedback like yours helps us harden it. It should definitely be considered beta software right now; we'll have it hardened before announcing a GA/1.0 version.


No problem! I suspected as much about the download; a progress bar would be excellent, though. Overall, this is a fantastic idea, and I definitely see myself switching to it basically immediately.


This looks amazing; I will definitely be trying it out. A big annoyance for us lately has been macOS x86 vs. ARM development and building. I'm interested to see if this makes life easier on that front.


Nice. I’ve been waiting for an abstraction on top of nix.


Reminds me a bit of Dock [0].

[0] https://github.com/0rion3/dock


This seems like a way simpler workflow than maintaining dockerfiles. I'm interested to try this out.


Microsoft recently released a similarly named product called Dev Box:

https://docs.microsoft.com/en-us/azure/dev-box/overview-what...

I am not sure if they would send a cease-and-desist, but you might want to consider a new name for this tool.


Or Microsoft could use a different name, since this devbox predates MS's August 15 announcement. Devbox's first commit was on August 4.


If life were fair, then Microsoft would choose a different name.

However, Microsoft has a team of lawyers and has been known to go after other tech products to protect its trademarks and copyrights.

Devbox is an awesome tool. If it becomes popular enough, I would not be surprised if it attracts unwanted legal attention. (Sadly, it has happened to other tools and products.)


There are dozens of other projects called devbox (or some variation of the name) on GitHub.


Huh, this seems like a combination of asdf and Docker. Looks nice, will definitely try it. Congrats!


what is this sorcery?


A nice porcelain for ad-hoc uses of good ol' `nix-shell`, it looks like: https://github.com/jetpack-io/devbox/blob/main/nix/nix.go#L3...

The container export functionality is based on BuildKit via the plain `docker buildx` CLI: https://github.com/jetpack-io/devbox/blob/main/docker/docker...

and it uses CUE to validate its configuration, which is JSON.

All in all, it actually looks extremely simple. I guess the basic idea is to give you access to a subset of the power of Nix and all the goodies in Nixpkgs without exposing you to the Nix language or the Nix CLI.

Longstanding Nix users will probably not be super excited about this (though they might as well try it! it does look very nice to use). However, for folks who are put off by Nix's reputation for difficulty but might be tempted to enjoy freely drawing upon the 80,000+ software packages in Nixpkgs, this might be a way to have your cake and eat it, too.
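To give a flavor: as far as I can tell from the repo, a devbox.json is currently about this minimal (package names just illustrative):

    $ cat devbox.json
    {
      "packages": [
        "python2",
        "go_1_19"
      ]
    }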


A clarification: the config file is currently exposed as a plain JSON file. CUE is used internally to validate the schema after the JSON is parsed, but it's not currently exposed to users.


Ah, thanks! I've updated my comment to reflect that.

What you describe is also a sensible use of CUE. I suppose if one of your goals is 'avoid asking users to use a language they don't already know', it might be for the best. :)

But personally I wouldn't mind seeing a bit of CUE usage to work with devboxes, hehe.

For continuity, here is my aside on CUE from that comment, apropos of nothing but my lack of reading comprehension :D

----

CUE is, like Nix, a simple, configuration-oriented DSL. But unlike Nix, it's really just extended JSON (a JSON file can be thought of as a CUE file where all of the values are concrete), it has a different kind of type system where values are types, and the language is decidedly not Turing-complete. The type system is pretty neat: the basic idea is that you can put constraints in the place of values, and write data structures that contain mixes of each. Then later you can apply those constraints to a configuration/specification, and CUE will tell you whether they're compatible. It's cool because you can write your specification and your data in the same format.

While CUE is not a full programming language, it does have a small stdlib and basic 'comprehensions' (like list comprehensions in Python) for generating data structures from other data structures. This gives at least a little flexibility, and some hope of concision when defining repetitive data structures.

Having used Nix a lot and CUE a little, I'm not sure which set of tradeoffs in terms of power and simplicity is the right one, but I do think that CUE's choices are interesting and reasonable, and its type system is clever and easy to work with.
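A tiny illustrative taste (file and names made up):

    $ cat example.cue
    #Service: {
        name:     string              // a constraint sitting where a value would go
        replicas: int & >=1 & <=5     // types and predicates unify with values
    }
    svc: #Service & {name: "demo", replicas: 3}

    $ cue vet example.cue    # passes; set replicas to 9 and it reports an out-of-bound value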


It seems to use Nix (https://nixos.org). In Nix you can use nix-shell (https://ghedam.at/15978/an-introduction-to-nix-shell) to create a shell with the packages you want. For instance, to create a new shell with Python 2.7:

    $ nix-shell -p python27


Yes, internally it's using nix... but it makes nix soooo much easier to use. With nix, I've always struggled with the nix language and writing nix expressions.

If you are comfortable with nix itself, then it allows you to do a lot more than devbox does, but at the cost of extra complexity.

Depends on what level of abstraction you're looking for.
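For comparison, the hand-rolled equivalent of a two-package environment is a shell.nix along these lines (and this is the easy case):

    $ cat shell.nix
    { pkgs ? import <nixpkgs> {} }:
    pkgs.mkShell {
      buildInputs = [ pkgs.python2 pkgs.go_1_19 ];
    }

    $ nix-shell    # drops you into a shell with both packages on PATH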


And happily, since this is a front-end rather than an incompatible fork or something, you don't have to choose!

If you wanted you could totally use this most of the time and then write custom Nix expressions only if you felt like that would serve you better. For many use cases, this might be sufficient.


Not surprised. When I read the description I was thinking 'sounds like a job for Nix..'


Imagine a group of Java developers who use Eclipse. Could this be helpful to them?


Looks really useful! How do you go about integrating IDEs (like vscode) with the nix shell?


I am happy to see Nix going more mainstream. Earlier this year we saw Nixpacks and now this.


How does it compare to nix?


Can you run a Docker container (or many) inside Devbox?


This is so much better than Vagrant.


Great! This is really useful.


Weird emphasis on "laptop" but maybe I'm not hip.

