My experience is that the older gen languages you mention had to invent package management, made lots of understandable mistakes and now are in a backwards compat hellscape.
Rust and Go built their packaging story with the benefit of lessons learned from those other systems, and in my experience the difference is night and day.
Go package management is a mess of hacks. It doesn't even have a package repository, instead relying on source control systems to do the actual work, with special hacks for each source control system to define the artifacts that can be downloaded (e.g. Git tags in a magic format, or Perforce commit metadata). It requires you to physically move all code to a new folder in your version control if you want to increase the major version number. It requires users of your code to update all of their import statements throughout their code whenever you move your hosting. It relies on DNS for package identity. It takes arcane magic to support multiple Go modules in the same repo.
I can go on, but it's a terrible hodge-podge of systems. It works nicely for simple cases (consuming libraries off Github), but it's awful when you go into details. And it's not even used by its creators - since Google has a monorepo and they actually use their internal universal build tool to just compile everything from source.
The flip side of this is that it never has to worry about naming collisions or namespacing: Your public package name must be a URL you control.
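As a minimal sketch of what that looks like in practice (the example.com paths here are hypothetical), the go.mod declares the URL-shaped identity:

```
module example.com/you/yourlib
```

and every consumer imports exactly that path:

```go
import "example.com/you/yourlib"
```

The name under which others import you is, by construction, a place you control.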
Additionally, there is no requirement for a centralized package facility to be run. The Golang project currently runs pkg.go.dev, but it has only been around for the last few years; and if they decided to get rid of it, it wouldn't significantly impact the development ecosystem.
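To illustrate that decentralization concretely: these environment knobs exist in the toolchain today, and setting them makes builds bypass Google's proxy and checksum database entirely, fetching straight from the hosting VCS:

```
# fetch modules directly from their VCS hosts instead of proxy.golang.org
GOPROXY=direct
# don't consult the central checksum database either
GOSUMDB=off
```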
Finally, the current system makes "typo-squatting attacks" harder to do. Consider the popular golang package github.com/mattn/go-sqlite3. The only way to "typosquat" the package is to typosquat somewhere up the dependency tree; e.g., by creating github.com/matn/go-sqlite3 or something. You can't typosquat github.com/mattn/sqlite3, or github.com/mattn/go-sqlite, because you don't own those namespaces; whereas with non-DNS-based package systems, the package would be called `go-sqlite3`, and `sqlite3` or `go-sqlite` would be much easier to typosquat.
All those things I find really valuable; and honestly it's something I wish the Rust ecosystem had picked up.
> It requires users of your code to update all of their import statements throughout their code whenever you move your hosting.
This is a necessary cost of the item above. It can be somewhat annoying, but I believe this can be done with a one-line change to go.mod. I'd much rather occasionally deal with this than give up URL-based package identity.
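Presumably the one-line change meant here is a replace directive; a minimal sketch with hypothetical module paths:

```
// in the consuming module's go.mod
require github.com/olduser/lib v1.4.0

// rewrite the old path to the new hosting without touching import statements
replace github.com/olduser/lib => github.com/newuser/lib v1.4.0
```

One caveat: replace directives only apply in the main module's go.mod, so a library in the middle of a dependency tree can't use this trick on behalf of its consumers.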
> It requires you to physically move all code to a new folder in your version control if you want to increase the major version number.
And the benefit of this is that legacy code will continue to compile into the future. I do tend to find this annoying, but it was an explicit trade-off that was decided back when they were developing their packaging system.
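For anyone who hasn't seen it, the recommended "major version subdirectory" layout looks roughly like this (module name hypothetical):

```
mylib/
    go.mod        # module example.com/mylib - v0/v1 code keeps compiling
    mylib.go
    v2/
        go.mod    # module example.com/mylib/v2 - breaking changes go here
        mylib.go
```

Old importers keep using example.com/mylib untouched, while new code opts in with `import "example.com/mylib/v2"`.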
Packaging is a hard problem, with lots of trade-offs; I think Go has done a pretty good job.
One way in which Go and Rust have it easier than Python or Node is that the former only have to deal with developers; the latter have to deal with both developers and users, whose requirements are often at odds with one another.
People lose DNS names by accident all the time. It's also easy to typosquat many DNS domains, and even Github projects occasionally.
Non-DNS-based packages don't have to be named "go-sqlite". You can easily require some namespacing, and even use DNS as a base for that, but having an abstraction over it that recognizes the specific needs of package management is better. For example, Maven packages are called things like org.apache.commons, and registering a new package requires control of the equivalent DNS domain. However, if you later lose control of that domain, the new owners don't simply get to replace the package in Maven just because they sniped your domain.
Go's choice to require full paths for imports in each file is also not a direct implication of the previous item - they could have allowed the go.mod file to specify the path and a name, and then allowed source files to import based on that name. Instead, this comes from Go tooling that existed before module support, when the tooling would scour all files in your project to find dependencies.
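To make that concrete, the alternative would have looked something like this - emphatically hypothetical syntax, not anything go.mod actually accepts:

```
// HYPOTHETICAL go.mod syntax, sketching the alternative described above
require github.com/mattn/go-sqlite3 v1.14.0 as sqlite3
```

Source files could then simply `import "sqlite3"`, with the URL living in exactly one place.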
Moving code to a v2 dir does not specifically help (or hinder) with backwards compatibility. Old code can always simply keep using the old versions of the package anyway. It is also a very unpopular decision, with very few packages actually adopting v2 versions precisely because of this requirement, even when making major breaking changes. Even the team maintaining the Go protobuf bindings decided not to use v2 when they overhauled their code (opting instead to create a new v1 at a new location with minor versions starting at 1.20...).
Sure, packaging is hard, but the Go team has chosen to go against the flow of essentially all other package managers, and instead of learning from their mistakes, they seem to have decided to make original mistakes all their own.
Note that Maven Central only imposes this check once, when creating a new artifact - which is, I believe, much better than Go, which effectively imposes it on every go mod download.
The advantage being, if you later lose access to the DNS domain that you used to publish an artifact to Maven Central, the new owner doesn't automatically get to compromise your artifact for all (new) users.
Some of your criticism is reasonable, and I’m no fan of Go’s module system as a standalone artifact, but much of your criticism is unfounded.
> It requires you to physically move all code to a new folder in your version control if you want to increase the major version number.
This is untrue.
> It requires users of your code to update all of their import statements throughout their code whenever you move your hosting.
This is only true if not using a vanity URL, though that is sadly often the case.
> It takes arcane magic to support multiple Go modules in the same repo.
I don’t know what you’re calling arcane magic here, but we maintain repos at work with 6-7 Go modules in them without it being an issue whatsoever, and no “arcane magic” required, so I’m going to go ahead and say this is untrue too.
> This is untrue. [needing to move code to a new folder to increase major version]
I was indeed wrong, this is not necessary, though it was the strongly recommended approach in the original modules proposal.
Still, I believe my point stands in slightly amended form: changing the major version is a big hassle, it requires touching all of your project files, and the official recommendation is bizarre compared to other packaging systems.
> This is only true if not using a vanity URL, though that is sadly often the case.
Yes, I am aware that you can buy a custom domain and point it at your repo to release under a better name, but it is almost never done (I think the only dependencies the project I work on has that do this are Google's Go protobuf bindings and several k8s.io projects).
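For reference, the vanity-URL mechanism is an HTML meta tag served from the custom domain; roughly this (domain and paths hypothetical):

```
<meta name="go-import" content="example.com/mylib git https://github.com/someuser/mylib">
```

The go tool fetches https://example.com/mylib?go-get=1, reads that tag, and clones from the real location while keeping example.com/mylib as the import path.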
> I don’t know what you’re calling arcane magic here, but we maintain repos at work with 6-7 Go modules in them without it being an issue whatsoever, and no “arcane magic” required, so I’m going to go ahead and say this is untrue too.
If you want to maintain multiple Go modules in the same git repo, and you want others to be able to download them, you need to tag commits for each module you are releasing. These tags must be formatted to match the directory path of the specific module's go.mod file (which also "conveniently" becomes part of the module name), except for the aforementioned vX directories if you chose to follow the Go team's recommendations for major versions (see the sketch below). Then, for local development, you also need each go.mod to contain a replace directive for each other module inside the same repo (or maybe directives in a go.work file, or who knows what else). Overall, this ends up creating lots and lots of useless tags, and is the definition of what I'd call arcane.
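A sketch of the tag convention with a hypothetical two-module repo:

```
repo/
    moda/go.mod    # module example.com/repo/moda
    modb/go.mod    # module example.com/repo/modb

# releasing each module means a tag prefixed with its directory path:
git tag moda/v1.2.0
git tag modb/v0.3.1
```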
You'll not find the tag format in any "getting started" doc, and you'll not get any help from go mod itself if you get any of this wrong - just some "not found" style errors.
Edit: oh, and I should note: I have no idea how this is supposed to be done if your repo is not in Git, even though go modules are supposed to support other VCSs too.
Not the one you were replying to, but I have an idea of the arcane magic he might be referring to. Back when I started learning Go I wanted to make a few example projects to test how well the language works, but I didn't want to push anything anywhere. When I tried to break my project into modules, it turned out I needed to give them fake URLs or something and then tell the tooling to redirect the fake URL to a folder instead. This[1] and this[2] give a good example of the stuff I was running into at the time. I also realize that I should probably have been using sub-packages and not sub-modules, but coming from other languages I didn't realize they were supposed to be different. The thing that screwed with me most, though, was that there were half-solutions for sub-modules to work, so I kept using that terminology when searching Google.
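If it helps anyone hitting the same wall, the trick being described is a directory replace; a minimal sketch with made-up names:

```
// go.mod of the top-level experiment
module example.com/playground

require example.com/playground/mathutil v0.0.0

// never touches the network: the "fake" URL resolves to a local folder,
// which needs its own go.mod declaring module example.com/playground/mathutil
replace example.com/playground/mathutil => ./mathutil
```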
It's the recommended way, but you're right, it's not the only one. Still, updating your major version number is much harder than in any other packaging system I've seen (since however you do it, it requires you to update every single file in your repo to point to the new module). It also significantly complicates the relationship between git tags, go.mod file location, and location within the Git repo.
Idk about Go, but Rust’s cargo seems nice, clean yet powerful.
That was my impression some time ago.
But last week I attempted to compile a couple of (not very big) tools with cargo, and it ended up downloading hundreds of dependencies and gigabytes of packages.
As someone who contributed somewhat extensively to the node_modules problem early on, Cargo is definitely better than the JS ecosystem in this regard.
Further, another major difference is that you don't need those dependencies after you've built. You can blow them away. Doing that with node is not as straightforward, and in many cases, not possible.
Imo, it is the best-designed dependency system I know of. One of the nice things is that Go uses a shared module cache, so there is only one copy on your computer when multiple projects use the same dependency@version.
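You can see the shared cache directly; the paths below are the defaults, so purely illustrative:

```
go env GOMODCACHE    # typically ~/go/pkg/mod
# every project that needs example.com/lib@v1.2.0 reuses the same
# extracted copy under ~/go/pkg/mod/example.com/lib@v1.2.0
```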
I really like a lot of the distribution- and safety-related decisions Go modules made. Domain names and the proxy+sumdb are wonderfully clear, flexible, and scalable, and I think we'll see copies of it in many future languages.
The rest of the stuff around modules, like crippled constraints, zero control over contents (which they have changed!), and completely non-existent "x is available, upgrade" or release tooling: constant, unnecessary pain, and it'll be inflicting serious damage on the ecosystem for many years to come.
Do you mean ranges on dependency versions? The way it currently works, the version you set is the minimum, and the algorithm (minimal version selection) finds the highest minimum across all deps and uses that.
If ranges were introduced, you'd end up with an NP-hard problem and need a SAT solver for your deps again.
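A toy sketch of that selection rule in Go - the real algorithm walks the whole module graph, but the core step is just picking the highest declared minimum:

```go
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// selectVersion returns the highest of the minimum versions that
// different dependents declared for the same module.
func selectVersion(minimums []string) string {
	best := minimums[0]
	for _, v := range minimums[1:] {
		if semver.Compare(v, best) > 0 {
			best = v
		}
	}
	return best
}

func main() {
	// three dependents each state a minimum requirement for one module
	fmt.Println(selectVersion([]string{"v1.2.0", "v1.4.1", "v1.3.0"})) // v1.4.1
}
```

No ranges, no search, no backtracking - which is exactly why no SAT solver is needed.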
> Release tooling
What are you looking for here? Libraries only need to push a git tag; binaries do require a bit of work, but Goreleaser fills that gap pretty nicely. It would seem hard to standardize where binaries should be pushed.
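For libraries, the whole release process really is just a semver-formatted tag (version illustrative):

```
git tag v1.5.0
git push origin v1.5.0
```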
> completely non-existent "x is available, upgrade"
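The closest built-in approximation I'm aware of is asking the go tool itself, which does know how to report available updates:

```
# list every module in the build along with any newer version available
go list -m -u all
```

It's bare-bones compared to what some other ecosystems ship, but it's not quite nothing.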
I'm continually confused why SAT solving is seen as a bad thing. It automatically solves common real-world problems in less than a second even in absurdly extreme cases, and a couple milliseconds normally - why would you avoid it?
Go's algorithm is much simpler and does not need a lockfile while still giving deterministic results; ranged deps without a lockfile cannot do that. There is a benefit to two people running the same command and getting the same dependency versions. Most projects do not start with a lockfile, so it is quite easy to end up with different versions when running getting-started commands.
Another example: if I install two ranged dependencies in both orders, will I get the same final deps@version list?
I think this is not a problem with the package manager per se, but with the extremes of coding culture. On one side of the spectrum is “NIH”: reinvent everything yourself. On the other side is: let’s pull left-pad from a package, because packages are good, we need MORE PACKAGES.
The best solution always lies somewhere in the middle. But finding this “middle” (and adhering to this approach) is the hard part.
I think I'm on the NIH side of things. Not because I like inventing things (well, partially), but because of the security problem [0].
I don't see it being talked about here, either. Our current system of pulling in packages made by random people on the internet is going to burn us. We assume that everyone who creates a package is an honest, reliable, developer who will not inject malicious code into their package. This assumption is similar to the assumptions we made around SMTP, HTTP, DNS, and every other internet protocol. Turns out we were wrong and surprise! you can't trust people on the internet.
I'm not sure if we can solve this with package managers. But package managers are part of the culture that has created this problem, and are probably a reasonable starting point to try and address it.
Having been in Rust full time for the last 2 or 3 years: it is quite a pain to set up a release process for a big Rust workspace.
Version incrementing, packaging wasms, dancing around code generation – all doable, but not standardized.
There's release-please to automate all that, but it's not an easy task to set it up in all of your repos.
Besides, if in addition to Rust projects you have projects in other languages like JavaScript, then you have to do it all twice and struggle with understanding the package management systems of every language you use.
A single swiss-army-knife package manager would be amazing.
First they weren't supported, so the community created various ways of dealing with it, including a kind of Google-blessed implementation; then Google decided to create their own official way; then there was the transition period, which I guess not everyone has finished. Having SCM URLs as imports is just bad, and GOPROXY effectively DoSes some SCM repos outside of Github.
Go and Rust don't have packages. They have tooling to pull library code from git and build it in-place to be linked into a local project. That's not (really) packaging.
Cargo does not “pull library code from git” unless you expressly ask for it to.
And given that published packages have to depend on other packages, and cannot depend on a git repository, that feature is mostly useful for testing bug fixes, private repos for leaf packages, stuff like that.
I thought Rust doesn't have binary libraries? If you declare a Rust dependency, cargo pulls its source and builds it together with your own code? I assume that's what the GP meant, anyway.
Source code is stored in an S3 bucket owned by crates.io, and is not fetched from any form of source control, including git.
That the entire code of all dependencies is hosted in one place is an important differentiator of crates.io vs what Go does, which is why this is relevant in this context. Crates.io is centralized, with all of the advantages and disadvantages that that brings, and Go is decentralized, with all of the advantages and disadvantages that brings.