So now that we know what to do, the big question is: who's going to spend the next 5-10 years of their life on that project?
But this is my point: We are already solving all of those problems, and doing almost all of the work I suggested.
All of the main package managers recognise versions and dependencies in some form. Of course the model might not be perfect, but within the scope of each set of packages, it is demonstrably useful, because many of us are using it every day.
All of the people contributing packages to centralised package repositories for use with npm and gem and pip and friends are already using version control, and they are already adding files to their projects to specify the dependencies for the package manager used to install their project. In many cases they do this for multiple package managers, so the project can be installed in multiple different ways, which is effectively just duplicated effort for no real benefit.
All major operating systems already come with some form of package management, though to me this is the biggest weak point at the moment. There are varying degrees of openness to third parties, and there is essentially no common ground across platforms except where a few related *nix distributions can use the same package format.
All major operating systems also support virtualisation to varying degrees, though again there is plenty of scope for improvement. I’ve suggested before that it would be in the interests of those building operating systems to make this kind of isolation routine for other reasons as well. However, even if full virtual machine level isolation is too heavyweight for convenient use today, usually it suffices to install the contents of packages locally within a given location in the file system and to set up any environment accordingly, and again numerous package managers already do these things in their own ways.
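To put that local-install point in concrete terms, here is a minimal Python sketch of the sort of thing I mean. The directory layout (a per-project `.deps` tree) and the function names are invented purely for illustration, not taken from any real tool, but the idea is exactly what npm’s node_modules or Python’s virtualenvs already do: unpack packages into a project-local location and point the environment at it.

```python
import os
import tarfile
from pathlib import Path

def install_locally(archive_path: str, project_dir: str, name: str, version: str) -> Path:
    """Unpack a package archive into a project-local directory,
    rather than into a single global location."""
    dest = Path(project_dir) / ".deps" / name / version
    dest.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive_path) as archive:
        archive.extractall(dest)
    return dest

def local_environment(project_dir: str) -> dict:
    """Build an environment for tools run inside this project, so that
    locally installed packages take precedence over anything global."""
    env = dict(os.environ)
    deps_root = Path(project_dir) / ".deps"
    local_bins = [str(p / "bin") for p in sorted(deps_root.glob("*/*")) if (p / "bin").is_dir()]
    env["PATH"] = os.pathsep.join(local_bins + [env.get("PATH", "")])
    return env
```

The “isolation” here is nothing more than a directory per project and an environment that points at it, which is well within what current tools already do in their own incompatible ways.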
There is no need for multi-year ISO standardisation processes, and there is no need to have everything in the universe work the same way. We’re talking about tools that walk a simple graph structure, download some files, and put them somewhere on a disk, a process I could have done manually for the project I described before in about 10 minutes. A simple, consolidated version of the best tools we have today would already be sufficient to solve many real world problems, and it would provide a much better foundation for solving any harder problems later, and it would be in the interests of just about everyone to move to such a consolidated, standardised model.
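To make the “walk a simple graph structure, download some files” claim concrete, here is a rough sketch in Python. The manifest format, the index URL and the idea that the index resolves a constraint to a concrete version are all invented for illustration; real tools add caching, integrity checks and proper conflict resolution, which this sketch deliberately ignores. But the core loop really is about this simple.

```python
import json
import urllib.request
from pathlib import Path

INDEX_URL = "https://example.org/index"   # hypothetical package index

def fetch_metadata(name: str, constraint: str) -> dict:
    """Ask the index which concrete version satisfies the constraint.
    (Here the index is assumed to do the version matching for us.)"""
    url = f"{INDEX_URL}/{name}?constraint={constraint}"
    with urllib.request.urlopen(url) as response:
        # e.g. {"version": "1.2.3", "url": "...", "dependencies": {"other": ">=2.0"}}
        return json.load(response)

def install(manifest_path: str, target_dir: str) -> None:
    """Walk the dependency graph breadth-first, downloading each package
    archive and placing it under target_dir."""
    manifest = json.loads(Path(manifest_path).read_text())
    queue = list(manifest["dependencies"].items())   # [(name, constraint), ...]
    seen = set()
    while queue:
        name, constraint = queue.pop(0)
        if name in seen:
            continue
        seen.add(name)
        meta = fetch_metadata(name, constraint)
        archive = Path(target_dir) / f"{name}-{meta['version']}.tar.gz"
        archive.parent.mkdir(parents=True, exist_ok=True)
        urllib.request.urlretrieve(meta["url"], str(archive))
        queue.extend(meta.get("dependencies", {}).items())
```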
The problem you cited before is supposed to be easy, but in practice, software development that uses 3rd-party software is destined to run into conflicts. You found portability issues, platform-dependent issues, environment/configuration issues, and multi-layer software dependency issues.
These all happen regularly when OS maintainers have to package software for release. They spend thousands of hours to resolve [by hand] each one in order to support the various use-cases of their end users. If you are imagining some automated process just magically makes all your software come together to build you a custom development environment, you are mistaken. It's all put together by humans, and only for the use cases that have been necessary so far.
So yes, all these things exist. In small, bespoke, use-case-specific solutions. What you're asking for - universal software management standardization - can't practically be achieved in more than one use case. This is why we are all constantly stuck in dependency hell, until a bug is filed, and the system is once again massaged into a working state by a human. Frustrating, sure. But it works most of the time.
I think it’s a stretch to call a tool like npm, which currently offers 90,000+ packages, a “small, bespoke, use-case-specific” solution. I’m also fairly sure most people publishing their code via npm’s index aren’t spending “thousands of hours” resolving conflicts with other packages by hand; certainly no-one is manually checking over 4 billion pairwise combinations of those packages to make sure they don’t conflict.
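(For anyone checking the arithmetic, the number of unordered pairs among 90,000 packages is 90,000 × 89,999 / 2, which comes out just over 4 billion:)

```python
import math
print(math.comb(90_000, 2))   # 4049955000 pairwise combinations
```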
And yet npm remains a useful tool, and mostly it does what it should do: download a bunch of files and stick them somewhere on my disk. The same could be said for gem, pip, Bower, and no doubt many other similar tools. They just all do it a bit differently, which leads to a huge amount of duplicated effort for both the writers/maintainers and the users of these packages.
I’m not arguing for magic or for orders of magnitude more work to be done. I’m just arguing for the work that is mostly being done already to be co-ordinated and consolidated through standardisation. To some extent I’m also arguing for operating systems that include robust tools to navigate the modern software landscape as standard. That is mainly because installing things with tools like apt has an unfortunate way of assuming there should be one global copy of everything, which is frequently not the case for either development libraries or end user software on modern systems, and because if the OS doesn’t provide good universal package management tools then someone else will immediately invent new tools to fill the gaps, and then we are back to having overlapping tools and redundancy again.
Again, nothing you use works without it being designed specifically to work that way. You can't use Visual C++ to build software that was designed for Linux without writing portable abstractions and host targets for both platforms, and it definitely won't work on two different architectures without being designed for the endianness and memory width of each. It's bespoke because it's designed for each use case. It simply will not work on anything it wasn't designed for.
And no, it isn't code publishers that spend thousands of hours resolving broken and incompatible builds, it's release maintainers. Go look at bug lists for CentOS. Look at the test trees for CPAN. It is literally mind numbing how much shit breaks, but it makes total sense when you realize it's all 3rd party software which largely is not designed with each other in mind. Somebody is cleaning it all up to make it work for you, but it sure as shit ain't the software authors.
Once you develop enough things or maintain enough things you'll see how endlessly complex and difficult it all is. But suffice to say that the system we have now is simpler than the alternative you are proposing.
You can't use Visual C++ to build software that was designed for Linux...
Sure you can. Projects of all scales do this all the time. Have you never heard C described as being portable assembly language?
Unless you are writing low-level, performance-sensitive code for something like an operating system or device driver, usually details like endianness matter only to the extent that they specify external protocols and file formats. I would argue that this sort of detail is normally best encoded/decoded explicitly at the outer layers of an application anyway.
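For what it’s worth, that “encode/decode explicitly at the outer layers” approach is only a few lines in most languages. Here is a Python sketch of what I mean; the wire format itself is invented purely for illustration. The point is that the byte order is a property of the external format, stated explicitly at the boundary, and nothing inside the program depends on how integers happen to be laid out in memory on the current machine.

```python
import struct

# Hypothetical wire format: a 4-byte big-endian length prefix followed by
# that many bytes of payload. ">I" pins the byte order regardless of the
# endianness of the CPU we happen to be running on.
def encode_message(payload: bytes) -> bytes:
    return struct.pack(">I", len(payload)) + payload

def decode_message(data: bytes) -> bytes:
    (length,) = struct.unpack(">I", data[:4])
    return data[4:4 + length]
```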
Obviously if you rely on primitive types like int or long in C or C++ having a specific size or endianness, or if you assume that they will be equivalent to some specific external format, you’re probably going to have problems porting your code (and any package containing it) across some platforms.
However, that issue does not contradict what I proposed. It’s perfectly viable — indeed, it’s inevitable — to have packages that are only available on some platforms, or packages which depend on different things across platforms. That’s fine, as long as your packaging system doesn’t assume by default that the same thing works everywhere.
And no, it isn't code publishers that spend thousands of hours resolving broken and incompatible builds, it's release maintainers.
Who is the “release maintainer” who made those jQuery libraries I mentioned in my extended example above play nicely together?
Again, this issue does not contradict what I proposed anyway. In my ideal world, if packages are incompatible or don’t have sufficient dependencies available on a certain platform, you just don’t list them as available for that platform in whatever package index they belong to. Once again, this is no harder than what a bunch of different package management tools do (or fail to do) right now.
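As a sketch of what I mean by “just don’t list them as available for that platform”, again in Python with an invented index-entry format and made-up package names: each entry declares which platforms it supports and can declare different dependencies per platform, and the resolver simply filters on the platform it is running on.

```python
import sys

# Hypothetical index entries: each release lists the platforms it supports
# and may declare different dependencies per platform.
INDEX = {
    "fast-io": {
        "version": "2.1.0",
        "platforms": ["linux", "darwin"],          # simply not offered on win32
        "dependencies": {"linux": ["epoll-shim"], "darwin": ["kqueue-shim"]},
    },
}

def available(name: str, platform: str = sys.platform) -> bool:
    entry = INDEX.get(name)
    return entry is not None and platform in entry["platforms"]

def dependencies(name: str, platform: str = sys.platform) -> list:
    return INDEX[name]["dependencies"].get(platform, [])
```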