Pretty simple to set up and use, but I'm not sure if it's better than learning `docker` and `docker-compose`. Also, the default `python <file>.py` doesn't remove the container after the file is done running and the image is tagged as `cage/<name of project>` instead of allowing the user to tag it with their own name. Both of these are easy to fix with `docker` commands, but if this is meant to be a library that helps you avoid `docker` then it's not doing its job.
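For what it's worth, the plain-`docker` workaround is only a couple of commands (the image name here is just a placeholder):

```
# Build the image with a tag of your own choosing instead of cage/<project>
docker build -t yourname/yourproject .

# --rm removes the container automatically once the script finishes
docker run --rm yourname/yourproject python yourscript.py
```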
I was going to say something similar. With venv moved into the stdlib in Python 3, and ongoing development and improvements to the `docker-compose` workflow, I'm not sure where a tool like this fits.
* venv is lighter and more or less standardized
* `docker-compose` is more general and language agnostic
So something like `cage` sits somewhere in between... it's neat, no doubt, but I'm not sure where I would use it personally.
Thanks for the feedback! I'll make sure to add the container removal and naming as new features.
You can indeed run all the commands easily just with Docker.
Cage aims to port all the functionalities from virtualenv to use Docker. After this is achieved I can start working on extending those functionalities.
Slightly off-topic, but does anyone develop Django/Python on a Windows machine and want to offer any general advice or tips?
I am using virtualenv, but wondering what else I should be doing to make it easier. Previously I used cloud9 (cloud IDE) for rails development. In both cases, I am pretty much a hobbyist, but don't mind spending some money to make things easier, since I'd rather spend my time on the fun part than on developer ops.
I just switched from Mac to Windows 10, and I work on a lot of Django projects.
I use PyCharm (love jetbrains) + Vagrant and it's working great so far. I'd highly recommend spending the time learning Vagrant - it can be incredibly frustrating at times (on both mac and windows), but once you get it working right it's really rewarding. Being able to re-instantiate a VM when something gets really botched is a great feeling. I don't have to worry about my host machine getting messed up, so I'm free to tinker around in the VM and that speeds up my learning.
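Concretely, the throw-it-away-and-start-over flow is just:

```
# Discard the current VM and rebuild it from the Vagrantfile
vagrant destroy -f
vagrant up
```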
The one downside I've encountered so far is how many different "terminal" style apps I need to run to get everything working. Currently I need powershell (as admin) for spinning up the VM, putty to SSH into it, and git bash for source control. I can probably simplify them all into powershell but for now it works.
You can use ssh from git bash, so there's no need for putty. You might also want to look into Console2 or ConEmu, both of which let you open multiple tabs for multiple shells.
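For example, `vagrant ssh-config` tells you exactly what to hand to plain ssh; the values below are just the usual VirtualBox defaults, so check your own output:

```
# Ask Vagrant how to reach the VM
vagrant ssh-config

# Then connect from git bash with whatever it reports, typically something like:
ssh -p 2222 -i .vagrant/machines/default/virtualbox/private_key vagrant@127.0.0.1
```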
Windows is fine for Python development. It's just an executable, and you can even use cmd.exe. Some will tell you to use cygwin or something, some for good reasons and others just because they love to hate. You can check out Windows 10 bash as well.
If you find Windows some sort of hindrance, you can just use an Ubuntu VM, or reformat entirely to Ubuntu. I've used only Ubuntu for over two years now and barely miss Windows; not that there's much wrong with it, especially Windows 10 with bash now.
At my previous company we used Vagrant for most projects, and it provided a nice abstraction layer that made things work reasonably well across all platforms (macOS, Windows, and Linux). Vagrant on Windows definitely had its challenges, but projects themselves ran well once those were sorted out.
I've heard the company has since moved to Docker, in search of those same benefits but with smaller overhead (specifically in terms of time spent managing the abstraction layer), and I gather the Windows folks are happier with it than they ever were with Vagrant.
My own (macOS-only) experience with both has been mixed, but certainly not worse than just running things locally via virtualenv. Vagrant introduces a full VM into the stack and Docker seems to have a lot of stability issues (at least on macOS 10.12 Sierra), so my next plan is to try combining the two, and running Docker on a Fedora VM. I'm hoping that any Docker nightmares will then at least be confined to the guest system, leaving my main machine mostly out of it, and using Docker to set up the actual project stuff may mean less Linux administration of the VM itself.
Personally I try to work with VMs as little as makes sense; they usually just make everything a bit harder and more convoluted ("even" with Vagrant, which so far has been a constant maintenance burden in every project where I witnessed its application, especially so if it's not only for Linux).
My experiences with docker range from disastrous to annoying; I try to just not use it. It's not installed on any of my systems. If I have to use it, I do that in a VM.
(I developed for some eight years on Windows, mostly C/C++, but the last two years a lot of Python. At the time I wasn't aware of PyCharm, so it was Sublime Text, which "is also an editor", but obviously just an editor; for development PyCharm is infinitely superior.)
PyCharm mostly brings IDE features to the table: context-dependent auto completion, far better navigation, static analysis ("that thing doesn't seem to have this method"), refactoring (for Python mostly moving and renaming; the refactoring engine of IDEA for Java is probably the most advanced on this planet... it's ridiculously good).
Also, being built on the IDEA platform, PyCharm has superb support for language injections; e.g. CSS in HTML in a template language is not a problem and is handled correctly (highlighting, completion, annotation, etc.).
Another advantage is that the IDEA IDEs are all very similar in general usage, so if one also works with another IDEA IDE (for me that's mainly Java and CLion), no time is lost learning a whole different set of shortcuts or anything like that.
I couldn't replace a shell with an IDE, but they do have tasks and things like that, which might be good enough.
Regarding databases: when you're using the ORM, the choice matters far less for development; I almost always just use SQLite. Some projects do require advanced features of postgres and would not work with SQLite. In those cases I just start a postgres instance ad-hoc.
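One easy way to do that ad-hoc start, assuming Docker is available (just a sketch, not necessarily how the parent does it):

```
# Throwaway postgres; the container (and its data) goes away when it stops
docker run --rm -e POSTGRES_PASSWORD=dev -p 5432:5432 postgres:9.6
```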
Debugger, remote REPL, virtualenv and git integration and management, database interface (also for redis/mongo etc), unit test configurations and integrated runners, auto-linting, deployment tools and of course intelligent autocomplete... in the last EAP apparently they are introducing semantic coloring, which is going to be very fun.
All good points... just to add, PyCharm has a type hinting system (which informed the 3.5 Type Hints design), which knows the expected types for stdlib functions, and can be taught the types of user-defined function arguments. It'll then warn you if it thinks you are passing the wrong type of argument to a function.
Obviously not infallible, but it catches time-wasters not infrequently.
Yes, but Docker for Mac uses 100% CPU per core when watching files. I think it's reasonable to assume that using a better virtual machine (possibly including VMware and Parallels, maybe VirtualBox) means file system events are less buggy.
I recommend running an Ubuntu VM, with PyCharm as your IDE. There are so many little things that are more painful in Windows than in Linux.
For example, getting psycopg2 running under Cygwin was an odyssey, whereas it's trivial to install on Linux. And there are plenty of Python packages that assume you have gcc and unix headers installed, which makes chasing dependencies painful.
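On Ubuntu, for instance, it boils down to something like this (exact header package names may vary by release):

```
# Postgres client headers plus Python headers, then the pip install just works
sudo apt-get install libpq-dev python3-dev
pip install psycopg2
```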
Just out of curiosity, do you use something like this: http://www.psychocats.net/ubuntu/virtualbox ? Any other recommendations on how to install and start working with an Ubuntu VM?
When I was using Windows, I preferred VMWare Workstation [1]; the number of times that VirtualBox entered the "Guru Meditation" state and cost me a half-day of debugging was just too high to justify.
If the price tag is too high, or you're just doing personal projects, I'd recommend VMWare Player [2] -- it's a free version with slightly fewer features. Last time I used it I think it didn't have snapshots, which can be useful in some cases, but aren't a deal-breaker.
Having said that, a few colleagues used VirtualBox and found it to be OK; it seemed to stabilize a bit a year or two after I had problems with it (in 2015).
Missed your second question there -- I've always just installed Ubuntu from scratch on the VM guest. I've done that enough times that it doesn't feel like a lot of work to me.
VirtualBox does have a nice feature that automates the installer, as well.
I've never bothered with the whole Vagrant thing for personal dev machines; given the low cost of spinning up a new machine, it's not worth learning a new toolchain for me. But if I were spending more than a few hours every couple of years building new VMs, it would be worth it.
Hi! I develop early stages of Django apps on Windows (it's probably bad practice, but I usually wait to test postgres until I'm on a Linux box, or a Mac if that's what I'm developing on; we're oddly platform agnostic where I work). I have found Wing IDE to be incredibly productive (I've just started playing with PyCharm). I love Wing because of its debug options: you can debug templates as well as running processes. You can try Wing Professional for free to see if you like it. It's saved me a lot of time!!!
Although I'm not sure I would use this, it's the kind of stuff I've been waiting for: nice clean containers used as part of an active dev cycle rather than just the production push.
The ultimate dream of the end user running a desktop composed entirely of containers still seems a bit far off, though...
About a year ago I switched our dev environments from Vagrant to docker-compose environments with a "helper" shell script to run the most common docker-compose operations you'd do in a day. It starts up so much faster than Vagrant, it's easy to add new services, and it's easy to add and remove test data. Here's an example of the kind of setup I have:
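Roughly along these lines, heavily simplified (service and script names here are only placeholders):

```
#!/usr/bin/env bash
# dev.sh - thin wrapper around the docker-compose commands we run all day
set -e

case "$1" in
  up)    docker-compose up -d ;;                 # start the whole stack in the background
  down)  docker-compose down ;;                  # stop and remove the containers
  shell) docker-compose exec web bash ;;         # open a shell in the app container
  test)  docker-compose run --rm web pytest ;;   # run the tests in a throwaway container
  logs)  docker-compose logs -f ;;               # tail everything
  *)     echo "usage: $0 {up|down|shell|test|logs}" ;;
esac
```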
Docker is less of a pseudo-VM and more like a linker. No matter what the environment is like on the host, you can be sure that it's identical to yours within the container.
So apart from having separate disk+network namespace, how is this different from a virtualenv? I feel like configurable sockets and paths should provide everything this can, but with less overhead.
Less system resource overhead, more human overhead. VirtualEnvs can bleed over if they're not configured properly or if you forget to deactivate. They're harder to fully encapsulate and carry around (have to initiate with a special flag or there's a bunch of manual find+replace when changing paths). They sometimes end up with system dependencies (though `--no-site-packages` has been the default for a long time now).
This would be a shortcut that bypasses the need for venvs entirely. I'm not sure I would use it, but that's the difference I see.
I'd strongly suggest you contact the maintainer of the python3 package and request a backport. 3.5.1 is in testing, so it should be possible, and perhaps even easy to backport.
Tbh it might be as easy as recompiling it; Debian makes it very easy to recompile stuff, and dpkg-buildpackage is trivial to run. Do it once on a throwaway image and keep the resulting package(s) forever, until new official ones land. This sort of approach can be a risk in production environments, but for development images it's fine and dandy.
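From memory, the recompile dance looks roughly like this (assuming a deb-src line pointing at testing is already in sources.list):

```
# Pull in the build dependencies and the source package for python3.5
sudo apt-get build-dep python3.5
apt-get source python3.5

# Rebuild it; -us -uc skips signing, -b builds the binary packages only
cd python3.5-*/
dpkg-buildpackage -us -uc -b

# Install the resulting .debs
sudo dpkg -i ../python3.5*.deb
```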
It's par-for-the-course to "brew install" random stuff on a dev box. You lose a lot when you ditch distro-packaged Python and start relying on varying third-party schemes to drop in something that may-or-may-not be getting security updates and may-or-may-not have been compiled against the same libraries as the ones on your production system.
Debian stable is a very, very stable distribution; as such, packages can be extremely outdated. He could compile it or get it another way, but that can be annoying.
But seriously, if there's demand it'll get there. I've been using py3.5 on FreeBSD, OpenBSD, and Arch Linux for months. If enough software is written using the new hotness like async, there will be pressure and motivation to get it packaged into stable.
I hope.
For people on RHEL, the IUS repository[0] guys do a great job.
Python 3.5 will land in stable when Stretch becomes the new stable some time later this year, I'd be surprised if anyone went to work backporting with a new stable around the corner.
Not if you want to get security updates and run a binary that was compiled against the libraries on your system. If you start compiling Python yourself you need to start following mailing lists to keep up on security updates and running build servers to spit out your artifact, lest you put the entire gcc toolchain into your production environment.
What about things installed via pip that are not part of the package manager? Isn't that the same problem as with pyenv? You could use Debian's tooling or fpm to turn pip packages into dpkgs, but you still have the same maintenance problem (which tbh isn't that big of a deal if you're developing; it's not a production env).
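fpm does make the pip->dpkg step painless, something like this (the package name is only an example):

```
# Turn a PyPI package into a .deb the system package manager knows about
fpm -s python -t deb requests
```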
Also, I don't personally see what's wrong with having a compiler on production systems. Granted, remove it if you don't need it and want to reduce bloat, but treating it as a security hole is misguided imho. If the actor can't run gcc, they could still upload their pre-compiled version; otherwise, how would they get the source on there to compile in the first place? It just seems like a pointless measure.
Could someone please explain why this might be useful? It seems like something a crypto developer might like, but if I'm developing something innocuous, why would it matter whether it runs inside a container?