One of the things that has always been a bit off-putting about the Go community and leadership is that when I came across things I perceived as flaws in the tooling or language, I often felt told off and made to feel that I was the one who was wrong, for a variety of reasons. Examples include GOPATH, the package infrastructure, error handling, and the lack of generics. Rob Pike disagrees, so I must be wrong.
It's kind of satisfying to see that the "Go way" wasn't the best way after all and that some of these things are actively being addressed. I hope it makes the community a bit more welcoming as well.
I think this is an example of rounding things off to one bit: either a feature is good or bad. Worse, people will round a language off to one bit: either it's great or terrible.
Sometimes this is just due to defensiveness: someone uses a flaw to say the whole language is terrible and that gets a response from people who have learned to live with the flaw.
Generally, if you go back to the original statements you'll find something more nuanced: generic types have a lot of complicated effects on language design and they didn't like their own early proposals, so they started out without them. Yes, error checking can be repetitive, but many of the attempts to fix that in other languages were worse.
In the meantime, the "Go way" is about following common conventions for working within the language's limitations. This isn't a bad thing!
But it's tricky to be welcoming while getting across that the best way to deal with limitations is just to accept them, for now. On the Internet, everyone wants to be Steve Jobs and demand changes.
Lots of people have made the same complaint/observation, though, and I don’t think they’re all just “rounding it off to one bit” as you describe.
I think they’re onto something, though as the GP notes, hopefully the situation is improving.
I’d flip your response around and suggest that in several cases, the correct fix is obvious and the core Go team’s insistence on extreme caution and “nuance” is misguided. GOPATH is a great example -- it’s just so obviously wrongheaded that I really don’t understand why it stuck around so long.
Another one that comes to mind, although this is quite old now, is the way early versions of Go would insist that the very last statement of a function be a “return”; you couldn’t end with “if foo { return x } else { return y }”, for example. It took a lot of pushing to persuade the core team to add some basic control-flow analysis, against insistence that it was some kind of nuanced feature with unpredictable side-effects, rather than a trivial thing that every other real compiler has to do.
> In the meantime, the "Go way" is about following common conventions for working within the language's limitations.
Except the problem is that "the Go way" changes in a way that breaks your previously-"working within the language's limitations".
An example of this that comes to mind is how the Go 1.5 vendor/ change was done. The community had rallied around several tools that would do vendoring and then would modify GOPATH to include vendor/ and would then symlink the current project into vendor/. All this required was a vendor/src. The Go 1.5 vendor/ implementation was almost identical but they removed the src directory -- which meant that pre-1.5 vendoring was now broken by Go (you can't really have two copies of your vendor tree in a repo and symlinks wouldn't work either).
There are a few other examples of this, but this one sticks out as it was the first feeling I got that the Go development team doesn't really care how the community has decided to work around a language defect. It literally would have just taken them one additional directory to not break every vendor/ project.
(Also, there has been absolutely no discussion with distributions -- as far as I'm aware -- on how packaging Go binaries should be done so we're all forced to come up with our own ideas. The Go modules stuff has considerations -- like builds requiring an internet connection -- that distributions would've had input on, but we didn't get asked about it.)
Contemptuous scorn seems to be the officially-sanctioned strategy for responding to reasoned critique or even questions about "the go way" within the community.
My favorite example here is this arrogant dismissal of the idea that a one line expression, rather than a 5 line mutating statement, might be preferred for conditionally setting a value:
if expr {
    n = trueVal
} else {
    n = falseVal
}
> The if-else form, although longer, is unquestionably clearer. A language needs only one conditional control flow construct. [0]
What I find particularly off-putting is that this attitude of superiority is sometimes married with objective incorrectness. If you're going to claim something is black and white, it should actually be black and white.
&&, ||, switch, select, and for all do conditional execution in Go.
So the rule is not "A language needs only one conditional control flow construct." The rule is really, "We didn't think a conditional operator was worth it." That's a fine rule, but it's better to be honest about it than to pretend the language was designed around some pure principle that doesn't actually exist.
Yeah, I agree about it being off-putting. There's very little science in programming language ergonomics, so it all comes down to taste, right?
However, I wonder how much of this is a difference in substance and how much is about writing style? There is an older, authoritative style that has an implicit "in our opinion, of course" and sounds pretty grating these days.
I remember being taught in school to remove "I think" and qualifiers expressing uncertainty, under the theory that in an opinion piece, that's understood.
Especially these days when hyperbole is common, I prefer writers that have a humble writing style, but also try not to get hung up on stylistic differences.
The authors of Go have stated they themselves sometimes miss the ternary form, but have seen it abused too much (deeply nested ternary).
This isn't an insult to programmers; it is an opinion that clarity of code is more important than conciseness. Thus all if and for statements also require {} braces.
"While the ternary is often clearer, we chose to sacrifice expressiveness and brevity for the sake of preventing abuses, which we found were all too common."
But that is not the claim being made in the FAQ.
> Thus all if and for statements also require {} braces.
Indeed this rule seems to spring from the same philosophy. It is most certainly not a preference for clarity, though. It is a preference for consistency.
The philosophy is: "We're giving up expressiveness and brevity because in our experience most people can't be trusted to not shoot themselves in the foot."
This choice would be much more palatable if they were honest about it. But instead, they take the road of insisting that the verbose consistency is actually clearer, which it isn't, at least in many people's opinions.
> It is most certainly not a preference for clarity, though. It is a preference for consistency.
FTR, with "most certainly" you're committing the same fault you're accusing the Go team of here. You might not think it matters for clarity. I, at least, disagree.
> in our experience most people can't be trusted to not shoot themselves in the foot.
I don't understand the difference between this suggested phrasing and talking about clarity. I'd argue you tend to shoot yourself in the foot if and only if you aren't clear about what you're doing. Clarity and lack of footguns seem directly correlated.
> FTR, with "most certainly" you're committing the same fault you're accusing the Go team of here.
Fair enough. It's too late to edit.
> I don't understand the difference between this suggested phrasing and talking about clarity. I'd argue you tend to shoot yourself in the foot if and only if you aren't clear about what you're doing.
This is partially true but it's not that simple imo.
In my OP, the ternary version is clearer. But if you allow it, then you also open the door to nested ternaries, and foot shooting.
Allowing single line if statements (as, eg, ruby does) also allows for clearer code, at the expense of consistency.
> Allowing single line if statements (as, eg, ruby does) also allows for clearer code, at the expense of consistency.
This (or rather allowing conditionals without braces) was what I was referring to. They are a foot-gun, because of the lack of clarity. It's a common source of bugs in C/C++ code: a developer thinks they've added a line of code to a conditional block or loop, but hasn't, because of the lack of braces. Requiring braces unconditionally makes it always unambiguous and obvious which block a given statement belongs to.
Obviously YMMV - this is, as most discussions in programming, a matter of opinion. Which was my point :)
I'm aware of the reasoning. As you say, in C people would carelessly edit:
if (condition)
    doSomething

to

if (condition)
    doSomething
    doSomething2
So fair enough, you protect against that. You cannot therefore conclude that:
if (condition) {
    doSomething
}
is clearer or that 1 line if statements aren't clear. First of all, you've purchased your insurance at the cost of brevity. Which is quite a high price, especially when you have to use them constantly to write:
if err != nil {
    return err
}
and when it forces all of your 1 line conditional expressions to be 4 - 6 lines, per my OP.
And there are solutions that allow you to buy your insurance without such a high price. You can invert the position of the if, as ruby does. You could make a rule that when you don't have braces you have to write your statement on the same line. The point is this isn't the only way out. And clarity is not the same as preventing one specific kind of error, which occurs only in a context which could be changed as well.
Oh man don't tell me these things. I haven't used Go yet in production, but the more I learn about the ways it diverges from functional programming, the more I doubt I'll ever use it.
I was thinking I could write my own if(condition, true_case, false_case) method similar to the way it's done in spreadsheets, but Go also doesn't have macros or inline if/else expressions, so there is simply no way to emulate ternary logic in a performant manner :-(
Oh let's be very clear: Go is actively, explicitly anti-FP, anti-expressiveness, anti-brevity. Those are simply not its values. And any attempts to write Go in FP style will be considered unidiomatic Go.
And the missing ternary will be the least of your problems. Do you enjoy `map` and `filter`? Sorry, the "go way" is to use a for loop every time. Reimplementing map on every type where you need it is considered best practice in Go. Have a problem with that? Expect a snarky comment about your priorities.
This can all be pretty frustrating when you've experienced the joys of more expressive languages.
Go has good concurrency primitives and is fast. That's what it's good at. Those are the reasons to use it. Temper your expectations accordingly.
Quite a few languages do if expressions, and IMO they’re great. Simple syntax, readable and powerful. It’s crazy to me that any new language wouldn’t have them.
The argument specified in the docs is "a language needs only one conditional flow construct" (in a language, of course, that has half a dozen conditional flow constructs ...)
The more palatable argument, from the people who try to cover for this, seems to be that when people nest if expressions, it becomes a mess. I'm glad these people exist.
(Or, if you prefer, look at the compiler's source itself. It doesn't take a compiler expert to realize it's not very clean code. It even violates Go style, and the C version before it was just horrendous.)
The TLDR is that Go's compiler is a "fisher-price my first compiler" and really shows that the authors simply barely knew how to get a compiler working in the first place. That they were not ready to go into a rational discussion on programming language and type theory ... is not something anyone should be surprised about.
That they avoid the arguments by dismissing the people making them in such a condescending way is ... well, there's not really anything good to say about that.
Ken Thompson has been writing compilers for over 50 years now. If you believe that Ken "barely knew how to get a compiler working in the first place", we have nothing to talk about, as you obviously made up your mind already.
I don't know. While no doubt some individual conversations have played out this way (law of averages and all that), I think this is a bad way to characterize the overall conversation. From my viewpoint, the conversation is largely people coming from other languages telling the Go community its way is wrong while the people on the Go team argue that there are tradeoffs that they'd like to explore. For example, pretty much every Go-related conversation over at /r/programming seems to have someone arguing that it's literally impossible to build software in a language that lacks generics (or exceptions) while people from the Go community argue that generics are probably worth the tradeoff, and then only if the maintainers can figure out a way to implement generics _well_ (i.e., not like C++ or Java).
>I don't know. While no doubt some individual conversations have played out this way (law of averages and all that), I think this is a bad way to characterize the overall conversation.
Most everything I've seen from the Go camp, including the new re-discovery of various wheels with the Go 2.0 proposals, felt like the parent describes to me.
I don't really want this thread to further devolve into a tit-for-tat. I'll just say that Rust's community has its bad eggs like any other, but that it does do exceptionally well at sympathizing with criticism. I'm mostly in "the Go camp", but Go's community can learn from Rust's here. On the other hand, for whatever reason, Rust is rarely approached with the same hostility that Go's community is--at least I don't see nearly the same percent of threads in /r/rust in the form, "I've been using Rust for 2 hours now and I haven't been able to figure out lifetimes and lifetimes are different than my language so lifetimes are stupid and no one can ship code with Rust!"
See the comments on the Rust new-website thread on reddit and elsewhere. Most of the Rust community seemed mightily pissed off by all the negative comments about their newly designed website. Besides, Rust enthusiasts have a reputation for trolling random people on the internet who have unflattering things to say about Rust.
Or try opening a new thread on the Swift forum on issues where decisions have already been taken. They will quickly put you in your place and close the thread.
I have yet to see a successful language where whiners are embraced with great enthusiasm.
>Besides Rust enthusiasts have reputation of trolling random people on internet who have unflattering things to say about Rust.
I don't speak about Rust enthusiasts or even the Rust community at large, there are people like that in all languages (though indeed some communities might be better than others, but with large enough numbers of users of a language you get all kinds of people in a similar enough distribution).
I speak of the core team. Go's one has a vibe I don't get from Rust's.
> I think this is a bad way to characterize the overall conversation.
> pretty much every Go-related conversation over at /r/programming seems to have someone arguing that it's literally impossible to build software in a language that lacks generics
Literally literally? Isn't your reply a "bad way to characterize the overall conversation" as well?
Yes; I've challenged this as hyperbole and people double down.
> Isn't your reply a "bad way to characterize the overall conversation" as well?
I don't think so. My reply characterizes the overall conversation as a difference of opinion about whether or not Go's design choices are matters of tradeoffs, and I believe this to be accurate. I chose my example because it was exceptionally stark, but the point is that if such stark comments are made so frequently, surely there is a long tail of less-stark comments. Anyway, I don't want to belabor the point because we have no data and all we can do is conjecture and stir up bad feelings and likely a flame war, so let's not do that.
The fact that you call Rob Pike "Erik Pike" really reads like the complaint of someone who doesn't do their homework and asks questions that have already been answered in many places.
Generics is a major tell, too, since the Go team's position on generics was always "generics are interesting but we don't know how to make them work well with Go's existing type system". See for example, this blog post from 2009: https://research.swtch.com/generic which references the FAQ answer to the generics problem here: https://golang.org/doc/faq#generics: "Generics are convenient but they come at a cost in complexity in the type system and run-time. We haven't yet found a design that gives value proportionate to the complexity, although we continue to think about it."
So I can see why people who have repeatedly answered the same question for years, only to have their answer misrepresented for years, would be annoyed when people who haven't read any prior discussion bring up a point that has already been talked about ad nauseam.
Sorry about mis-remembering his name, but my point was a bit more nuanced. The frustration I had was not that certain features were missing, but I was met with a response that suggested that I was wrong for even wanting these features.
It was a comment on my personal experiences of interacting with the Go community, not the language and its limitations. My hope wasn't that Go implements my every pet peeve, my hope is that the community has gained some humility since I last dipped my toes in it.
Pike seems especially aggressive when interacting with people he disagrees with and this was for me indicative of the culture. It's been a few years though, so maybe by now I'm wrong!
> Pike seems especially aggressive when interacting with people he disagrees with and this was for me indicative of the culture. It's been a few years though, so maybe by now I'm wrong!
FWIW, as much as I agree with Rob Pike on most things, I totally agree that he's abrasive and his way of phrasing things is often unhelpful (even though I think people also tend to read contempt into his brevity that isn't there). If it helps, I think he has realized that in the meantime and is largely staying out of public discussions for that reason.
The community has gotten better. Early discussions were frequently marred by hype, and it was often impossible to cut through the noise. However, talks given by the authors usually impressed me by being fairly even-handed and open about the trade-offs that were taken.
No, while he does reference that (just quoting it directly, not “riffing”), it's not just a joke; it's a reinforcement of his earlier dismissal, in the thread, of syntax highlighting as a “juvenile” practice parallel to the colored rods used to teach arithmetic to young children, which grown-ups should not need or use.
The kinds of parametric polymorphism implementations mentioned here do not represent the full set of options. Charitably this represents an ignorance on the part of the Go developers, but really it has felt like an apathy towards the field of programming language design and research. Intensional polymorphism (basically passing the type of a parametric argument as a parameter, and either using a specialized implementation of the function or one that operates over a boxed value) has been known about for years and is used in languages like Haskell: https://www.cs.cmu.edu/~rwh/papers/intensional/popl95.pdf
That paper is mentioned in the comments of my first link. I would be surprised if the Go team doesn't know about it. In that thread, as well as here, both you and the other poster haven't actually stated how the paper's strategy would work for Go, and haven't actually stated how the strategy affects compile times, binary sizes, and execution time.
edit: addendum: but also, in every thread about Go, there are some Haskell people showing up to talk about how the Go team is too stupid to understand the Haskell type system. This happens regardless of what the opening topic is, which in this case, is dependency management. No matter what the topic is, if it's a thread about Go, you will find Haskell programmers there, condescendingly talking about how Haskell's type system is so much better than Go's type system. It is very tiring.
There are "I've used Go for a week and here are my strong opinions on it!" posted to reddit (and elsewhere) every week.
Frankly, it's just tiring; especially as a lot of the same points and misconceptions are repeated. Some points are valid, but it's repetitive at best. It would be like discussing Python's significant whitespace with inexperienced Python programmers every week. Sure, it's quirky and arguably not a good idea, but the discussion has been done a few times already.
Turns out that filtering stuff that uses "golang" instead of "Go" is actually a pretty good heuristic for determining if an article is worth reading. Is it perfect? Of course not; it's a heuristic.
Reading your article, this is exactly the sort of "been there, discussed that" kind of example. Your article isn't bad – I think it's mostly on-point – but it's also pretty much a repeat of what many others have said/discussed.
> I started using Go in 2008, but I get your point.
Yeah, it's an imperfect heuristic. I didn't intend it as a remark about you or your article in particular (I had added a sentence about that in my previous comment, but it must have gotten lost in the editing).
> it would have been better if they'd just said nothing.
Yes, I agree. "If you can't say it nice, then it's probably best to not say anything at all". I thought it would just be helpful to explain some of the frustrations that are (probably) behind the comment.
Such putdowns are a regular occurrence on golang-nuts (ironically, the Go team has named their own mailing list "golang-nuts" despite that "not being the name of the language").
"golang" is the official disambiguation, so it's factually incorrect to call people out for using that term. I don't follow that mailing list particularly closely, but I've not seen anything like this in the threads I have perused. To the extent that these things happen, the community should address this behavior.
> "golang" is the official disambiguation, so it's factually incorrect to call people out for using that term.
I don't think "factually incorrect" is warranted. It's factually correct that the language is called Go. And while "golang" is a useful alternative for searchability and where "go" is taken, it is still a valid criticism to request using "Go" in natural language and prose. As for "official disambiguation", I'd say the most official thing said on the topic is this FAQ entry, which is quite clear on the matter: https://tip.golang.org/doc/faq#go_or_golang
(not saying Brad's comment isn't abrasive or rude. But it's not "factually incorrect")
The language maintainers regularly and consistently use "golang" as a disambiguation:
* golang.org
* github.com/golang
* Twitter @golang and #golang
* golang-nuts
In this context, "golang" seems pretty "official", which makes the rebuke itself "factually incorrect", or at least makes it apply equally to the maintainers.
That said, it was a single wayward comment and it is (in my experience) out of character. I want to acknowledge that it was rude and validate the person who was wrongly rebuked; I explicitly don't want to pick on anyone nor tempt the Internet to pile on.
What I've come to learn from using Go on a number of projects and reading the long mailing-list threads behind the decisions and development of language features is that Go is a language intended for many (i.e. average) programmers to be successful writing systems(-like) software. Officially, it was meant for systems software, but it has since expanded beyond that scope. Given that it's designed and developed by very bright authors for everyday programmers, it's actually surprising how much thought has gone into seemingly obvious language features. They want users of the language to be successful more than they want their users' egos to be happy about language features.
Given that, they do also 'talk down' in their official documentation. I had to look back in time to understand the memory model for the sync/atomic operations, which are documented:
"These functions require great care to be used correctly. Except for special, low-level applications, synchronization is better done with channels or the facilities of the sync package. Share memory by communicating; don't communicate by sharing memory."
The final word on what 'great care' means is that Go doesn't actually have a memory model for this part of the standard library. The discussion thread around it has a different tone than the official documentation. They agree on how they want implementations to behave, but don't feel that it needs to be officially defined or documented. I would have been happier if they at least referenced the discussion thread rather than merely labeling it 'great care', meaning 'not for you'. The other thing you find is that the performant parts of the standard library often do not follow "Share memory by communicating; don't communicate by sharing memory".
One of the betrayals when working with I/O was finding that standard library routines can return a value AND an error, and that sometimes a non-nil error is not an error (EOF is not the only one of these).
I agree completely; that's precisely the impression I got. Maybe it was mostly from discussions on HN and it doesn't represent the community at large. Whatever the reason, I'm glad to see focus on fixing the GOPATH abomination and the plan to finally add generics in Go 2.
My biggest complaints are when something goes wrong. It will frequently silently fail or act like it succeeded. Diagnosing these errors has been a major time sink since I started using it.
Next have been oddities with vendoring workflows. Given previous decisions it seemed like the golang team was driving towards vendored solutions but with go mod they seem to have backed off of that position. The workflows with go mod are clunkier & less well documented. Unfortunately I’m (and my teams) are highly invested in vendored libraries.
We’ve also had trouble with it not playing nice with dependencies that have not taken up modules (and some that are unlikely to).
That’s not even to mention my problem with their design or the hamfisted way they have gone about it, which is something I just have to get past.
I think a primary problem with the modules workflow is that you can't switch to it until all Go projects you work on switch to it. Plus, all of us have got our own GOPATH home-directory workarounds so there's no real point in switching anymore.
That's not true at all. A Go-modules-enabled project can import packages from a non-modules-enabled project that doesn't have a go.mod in it. This was always the case with Glide and Dep, too.
That's not what I'm talking about. I'm talking about being able to build outside of GOPATH. This requires the top-level project to use go.mod and many don't (like Docker for instance).
But that's not the same thing. You don't have to switch all the projects you work on over to Go modules at the same time, only the projects you want to upgrade. It's entirely possible to migrate incrementally.
I never really understood the hate here. It seems like people didn't hate GOPATH specifically so much as they didn't like not having a package manager.
GOPATH behaves very, very differently from PYTHONPATH and CLASSPATH; it is not the same thing at all.
Basically before go 1.11, it was impossible to just do
git clone ... somedir
cd somedir
# build code and run something
You had to instead make sure "somedir" ended up somewhere within a particular deep directory structure. That is what people mean when they say "GOPATH".
> Basically before go 1.11, it was impossible to just do
That's not true. It was always perfectly possible to do that, the code you build just couldn't import any libraries not installed in GOPATH. AIUI that is exactly the same as in other languages.
Our git repo for a backend service contains dozens of packages, and many small packages are recommended in Go, and they do need to import one another...
I guess it is not all that different from Python, with Python you need to add the source path to PYTHONPATH / sys.path (and often do so at runtime in the entrypoint script). With go you need the "src/github.com/dagss" prefix to the path. Or at least, according to conventions. You are right that it is more similar on a technical level than what I thought.
I.e., if your code itself is split into packages, they won't work (as they are imported by their full path, not relatively), and anything in your vendor dir is also ignored when used outside of a GOPATH entry.
I have tried several times to fix this problem in the past 5 or 6 years, once with symlinks (which would break quite often -- it's unsurprising that Plan 9 developers wouldn't care too much for handling symlinks correctly :P).
The current way I handle it is that $HOME/.local is my GOPATH, and $HOME/src links to $HOME/.local/src. So you can just have $HOME/src/github.com/bar/foo -- which is less ideal than just $HOME/src/foo, but it's something at least.
I keep the source code for my projects in ~/src and I always set my GOPATH to my home dir. So the special 'go' in the path can be avoided. It will just be ~/src/github.com/...
But, yes, I rolled my eyes when learning about GOPATH for the first time.
I think the "Go way" was more like: Google has a different in-house solution for this, so let's just leave it out and wait for something to bubble up out of the community.
Go solves Google's main problem: something which new engineers can use to write practical solutions without having to think too much. It's not beautiful or expressive or technically interesting; most "craftsman" devs won't stick with it too long. But that's not the majority of devs Google is hiring.
Sit down, bang out code and don't concern yourself too much with craftsmanship; it will be rewritten in a couple of years. Just worry about performance.
And then proceed to shit on whatever did bubble up from the community, at least as far as package management goes. I’m glad we’re finally getting the core team to focus on modules but as an outsider that whole situation was handled poorly and doesn’t give me a lot of faith in solutions bubbling up from the community.
I was very active in the community around the 1.0 release, and am probably guilty of this, so belated apologies. The problem, in my eyes, was that this was a new language whose simplicity, born of experience, really did solve a lot of problems. And we all know design by committee is a disaster. The list was under a -constant- barrage of "you should do X like language Y." The general response from the community was along the lines of "if you like language Y, you should use that; stop polluting the mailing list." Essentially, the constant stream of requests, many ridiculous, quickly got old, and I feel a few good suggestions probably got thrown out with the weekly trash, so to speak.
I really want to get into Go, I like that it's opinionated, I like that it's compiled, I like that it's garbage-collected, I like that coroutines are built-in.
My primary use case is scientific computing, both data processing and interactive visualization.
I know Julia is an option as well.
For reasons I don't want to bother getting into I dislike Python.
It's not well-suited to scientific computing at all.
I use and like Go for writing network servers and ETL processes. In a scientific context though, the type system is awkward in the extreme, there is essentially no library of modules, and the interactive visualization story is nonexistent.
Python, R, Julia, even C++ would be better options IMO.
(I'll clarify again: I like Go! I just think it's not well suited for this context)
I'd suggest "now" is actually a terrible time to try to make Go do scientific computing. With generics all-but-guaranteed to be incoming, but also not here yet, a lot of what you would do is going to be superseded in the foreseeable future, on the same sort of time scale as your library's expected completion date.
I'd say that even once Go has generics, that will simply take it from being an awful scientific programming language to a mediocre one. I don't really understand why there are a few people who seem to think it's a good idea to try to do their scientific computing in Go when so many better options already exist.
Post-generics, though, if you really insist, it will probably be the case that Go can be upgraded from "mediocre" to "tolerable" with a lot of library work. (Part of the "mediocre" is lacking libraries. That aspect can be fixed.) But it'll be hard to start that work without running generic code. Even if you assume the current documentation is the final specs, you still won't be able to guess the performance implications of anything you'd be blindly writing, and performance is very important for this sort of code.
It's frustrating how much résumé-driven development I'm seeing on problems the industry already solved. I'd rather work on harder problems using more powerful tools.
I was looking at this very recently. It's probably impossible - not just difficult - to write a Numpy equivalent for Go as a library. I didn't feel I had what it took to extend the compiler to make that work, but some day someone will.
I'm not certain that Go is a great fit for this particular area. Others have mentioned generics, but I'd also worry that the lack of operator overloading is a weakness. There is a lot of Fortran out there, so operator overloading is not required, but I'd worry about the ergonomics of Go, especially given the strength of the other options.
The best course of action for Go on this front, would be to expose operator overloading of numeric operators, combined with a community expectation that this isn't to be used for anything but numeric operations.
What would be the disadvantage of using a syntactic preprocessor to accomplish the same thing? We have an API for parsing golang. What if there was a way of marking certain files to be parsed and re-written with function calls? It could be done with a file suffix, and this could be made fairly convenient.
> The best course of action for Go on this front, would be to expose operator overloading of numeric operators, combined with a community expectation that this isn't to be used for anything but numeric operations.
I don't think this solves the issues. For example, I have yet to think of a way to overload `+` in a consistent and performant manner. E.g. think of `a += b` vs. `a = a + b`. Either you make them separate operators (in which case there's no guarantee they do the same thing - see for example Python, where they commonly differ) or you are overallocating in the common case.
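For what it's worth, the convention Go numeric libraries (gonum, for instance) settle on is an explicit destination receiver instead of overloaded operators, which makes the allocation question the caller's decision. A minimal sketch (the `Vec` type and method names here are made up for illustration):

```go
package main

import "fmt"

// Vec is a toy numeric vector type, for illustration only.
type Vec []float64

// Add stores the element-wise sum of a and b into the receiver.
// The caller controls allocation explicitly - this is the moral
// equivalent of `a += b`, with no hidden allocation.
func (v Vec) Add(a, b Vec) {
	for i := range v {
		v[i] = a[i] + b[i]
	}
}

// Sum allocates a fresh result - the equivalent of `a = a + b`.
func Sum(a, b Vec) Vec {
	out := make(Vec, len(a))
	out.Add(a, b)
	return out
}

func main() {
	a := Vec{1, 2}
	b := Vec{3, 4}
	a.Add(a, b) // in place: a is now {4, 6}
	fmt.Println(a)
	fmt.Println(Sum(a, b)) // fresh allocation: {7, 10}
}
```

The upside is that in-place and allocating forms can never silently disagree the way separately-overloaded `+=` and `+` can; the downside is exactly the ergonomic cost the thread is complaining about.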
> Either you make them separate operators (in which case there's no guarantee they do the same thing - see for example Python, where they commonly differ)
Geez. That's obnoxious. Why do we programmers do this to ourselves?
> or you are overallocating in the common case.
I see. I was thinking along the lines of being able to use complex numbers with arithmetic operators, not along the lines of doing high performance crunching of floats.
Agreed. For data stuff, if you want type safety you really need a language with support for recursive types. More languages lack this feature than have it.
If you have a very well-defined problem and aren't doing much exploratory programming, Go works reasonably well for demanding math stuff. This describes a lot of cryptography work, and Go is a top-tier language for that. But for exploratory scientific and math programming, Go is pointlessly painful. Julia is a better option, and I would do Python Sage before I did Go, despite not loving Python.
I used go in machine learning contexts extensively while writing graphpipe[1]. Go is a fantastic language for servers and distributed communication. Unfortunately, the lack of generics and dependence on interfaces and reflection makes writing things that deal with multidimensional arrays pretty terrible. See, for example, the janky conversion code in graphpipe-go[2] to convert a multidimensional slice into contiguous row-major arrays. Also, libraries that try to create a numpy equivalent end up with uncomfortable interfaces due to the inability to overload operators. I agree with some of the sibling comments that go 2 will help but probably won't make things particularly pleasurable. Rust would be a more interesting route but definitely doesn't have the adoption of go.
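To make the pain point concrete: without generics, that kind of conversion has to be written once per element type and per rank. A minimal sketch of just the two-dimensional float32 case (the function name is mine, not from graphpipe):

```go
package main

import "fmt"

// flattenRowMajor copies a [][]float32 into a single contiguous
// row-major []float32, the memory layout most tensor libraries
// expect. It assumes a rectangular input (all rows equal length).
func flattenRowMajor(rows [][]float32) []float32 {
	if len(rows) == 0 {
		return nil
	}
	out := make([]float32, 0, len(rows)*len(rows[0]))
	for _, r := range rows {
		out = append(out, r...)
	}
	return out
}

func main() {
	m := [][]float32{{1, 2, 3}, {4, 5, 6}}
	fmt.Println(flattenRowMajor(m)) // [1 2 3 4 5 6]
}
```

Now repeat for float64, int32, int64, ... and for three, four, five dimensions, and you see why the real code ends up janky: each combination is a separate hand-written (or code-generated) function.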
It's worth using. I do science things and use Go. It doesn't require much more mental overhead to write than a scripting language (possibly less once you're into it), it's pretty fast without arcane knowledge, it's easy to maintain, binaries are easy to deploy, and it has a super fast startup time unlike Julia, if you care about that. Overall, I'd highly recommend it.
I use it to push files around and possibly parse and move the data those files contain. But once it gets to strictly numerical things, the purpose built tools are the only real choice. Scientific libraries for Go are nearly non-existent, or just copies of fully formed libraries from somewhere else.
Python has a huge ecosystem for this and leveraging it is a good idea. In one of the projects I'm working on, I train and test tensorflow models in Python but load them in Go for inference - these types of hybrid approaches can work great for scientific computation.
Go's main use case is for writing concurrent servers (mostly web servers) or daemons and command line applications that need to be really fast. Go is a generic programming language, but Python is "more generic", by which I mean that you can go much further in a lot more domains (one of them being scientific computing and data analysis) than with Go. If you want to use Go for data analysis I suspect that it would feel like "a lot more work" to do the same thing than it would take in Python, mostly because it's a lower-level language and the ecosystem is not as huge as Python's yet.
TL;DR: Go is great b/c it brings great s/w engineering practices and a s/w engineering-friendly environment to scientists.
Admittedly, generics will change how packages are written.
So some code churn will take place when/if they land, but the Go community learned the lessons from Python2/3 and Perl5/6. Expect a better migration path.
Lastly, I guess the 2 remaining weak points of Go are:
- runtime performance sub-par wrt C++ or Rust
- GUIs (which may or may not fall into "interactive visualization")
That said, the Go community worked on a Go kernel for Jupyter:
Gonum is neat, but to the previously-made point about Go's type system making stuff more painful than it needs to be in this application: gonum's linear algebra is defined over float64 and int, which is problematic if you need arbitrary precision.
These are things that are great for product development and devops and not in fact all that valuable in scientific computing, which is a reason why so much of it gets done in Python.
> These are things that are great for product development and devops and not in fact all that valuable in scientific computing
I disagree. Again, this may very well be science-domain dependent, but in High Energy Physics (where, finally, Python is recognized as a mainstream language, BTW) many -- if not all -- of the pain points that slow down undergrads, PhDs, post-docs and researchers at large, are these Go features.
yes, the time from "idea" to "little script that shows a bunch of plots" on a subset of the overall data is definitely shorter in Python (exploratory analysis is really really great in Python).
but, at least for LHC analyses, python doesn't cut it when it comes to TB of data to sift through, distribution over the grid/cloud, ... you die by a thousand cuts.
and that's when you are alone on one little script.
LHC analyses can see well over 10-20 people assembling a bunch of modules and scripts.
You really long for a language more robust to (automated) refactoring than python, pretty quickly.
In my opinion, the big rub for using Go in scientific computing is the lack of a REPL. The nature of Go essentially requires it to compile. On the plus side, compilation is very fast so iterating small code changes is practical. But it's still not nearly as nice as typing commands into the prompt and seeing what happens.
Beyond that, Go is easy and performant. It's great for parallelizing workflows via concurrency and compiling tools for distribution. So if you know what you want to do and need to scale up, it could be a great choice.
Go works with Jupyter Notebook pretty well. It's a little bit complicated to set up, but it is the same REPL that you use for Python: https://walkman.cloud/s/dtoSfw753YSiMAs
Does anyone know what the story is for binaries generated by things your project depends on? npm versions them and lets you invoke them from the project with "npx"; I wish something like this existed for go. (protoc-gen-go is the main thing I want; if your global version gets out of sync with the version in go.mod, the generated protobuf doesn't compile.)
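(For what it's worth, one community convention for this - not an official go feature - is a build-tag-excluded `tools.go` file that blank-imports the tool, so its version gets recorded in go.mod like any other dependency. A sketch, using protoc-gen-go's import path:)

```go
// +build tools

// This file is never compiled into the project (the "tools" build
// tag is never set); it exists only so that go.mod records and pins
// the versions of the tools this module builds.
package tools

import (
	_ "github.com/golang/protobuf/protoc-gen-go"
)
```

Then running `go install github.com/golang/protobuf/protoc-gen-go` from inside the module should build the version listed in go.mod rather than whatever happens to be installed globally.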
I am glad to see that the go team is working on cleaning up the tooling situation. I used gocode, then started using modules, found that the primary version was unmaintained, and had to switch to a different fork. I believe I also needed a different version of goimports for a while. Having all this tooling unified into the langserver maintained alongside go sounds wonderful. I hope other languages do the same thing.
This is exactly why I personally think efforts for language-specific package management in general are misguided. This is a solved problem in general-purpose package-managers (apt, dnf, pacman,…), for all their flaws. I don't understand why we need to re-invent that over and over again, including all the duplication of effort to re-package everything over and over again…
Because a language-specific package manager works on all OSes supported by the language, while OS-specific packages work in a single distribution, or not at all on OSes that don't offer such support.
So instead of creating M packages * N OSes, we do it just once.
> So instead of creating M packages * N OSes, we do it just once.
No, creating M packages x N OSes is exactly what we do. In fact, we create M+N package-managers x N OSes - and then also create MxN packages.
If you can write a language-specific package manager for all OSes, you can also write a non-language-specific package manager for all OSes, so I think the "it doesn't work in all distributions" argument is just a symptom of my complaint. Instead of working towards a cooperative packaging solution where the effort of packaging can be re-used, we continue to create more and more special snowflakes and fragmentation.
Good luck creating a package format that works across iOS, Android, Red-Hat, SuSE, Debian, Ubuntu, ..., IBM i, IBM z, Aix, HP-UX, Solaris, Windows, Zephyr, Yoctos, RTOS, Integrity, mbed, MicroEJ, BSD variants, Unisys ClearPath, VxWorks, QNX, macOS, Tizen, Jolla, ChromeOS, Fuchsia and several others that I am unaware of or was too lazy to keep adding entries for.
My C++, Java and .NET packages work everywhere there is a toolchain available.
> Good luck creating a package format that works across iOS, Android, Red-Hat, SuSE, Debian, Ubuntu, ..., IBM i, IBM z, Aix, HP-UX, Solaris, Windows, Zephyr, Yoctos, RTOS, Integrity, mbed, MicroEJ, BSD variants, Unisys ClearPath, VxWorks, QNX, macOS, Tizen, Jolla, ChromeOS, Fuchsia and several others that I am unaware of or was too lazy to keep adding entries for.
Can you explain why that would be a problem? It's certainly not a technical one, none of these are special when it comes to versioning or dependency management of software. I can see that there's a social/political problem - which is exactly what I'm talking about.
It surely is a technical one above any political willingness.
A package format that supports all OS system paths, installation processes, differences between build time/dev time/deployment time, language compilation toolchains, compiler flags, ways to address hardware resources, OS-specific deployment processes, ... is bound to the lowest common denominator for any chance of success.
Thus forcing everyone that needs something beyond that lowest common denominator to implement their own workarounds, thus we are again back to language package managers.
> is bound to the lowest common denominator for any chance of success.
ISTM the "lowest common denominator" is a superset of everything current language-specific package-managers support and a subset of anything a current language-agnostic package-manager supports (pretty much definitionally). So ISTM that this is a net win - easier to build than APT/DNF/Pacman… and yet more useful than npm/stack/pip/…
In particular, I don't know any currently existing language-specific package-manager that supports what I'd call the GCD of package-management (e.g. none that I can think of supports actual OS-specific installation of packages) so that clearly isn't even a requirement.
> Thus forcing everyone that needs something beyond that lowest common denominator to implement their own workarounds, thus we are again back to language package managers.
FWIW, a) part of the political and social problem is to talk more honestly about what "needing" really means and b) no, that's not at all "back to language package managers". You can have a layered design, e.g. splitting the "building" and the "installation" part, thus letting languages implement their workarounds in their dedicated building layer and letting OSes implement their workarounds in their installation layers. You just need to actually sit down and talk about the interface needed between the two (and the sets of layers actually needed, which will be >2). Which no one seems really willing to do.
On a vaguely related note, is there any decent documentation for writing a lang server. I've seen the official documentation by Microsoft and while it does seem a detailed reference, it's not great for someone who doesn't even know where to start in writing one.
Maybe it's just me but I hate this option. Are we really, really trying to optimize the downloading of source code? All 5 megabytes of it? Why would you do that?
I regularly find myself in a position that this system would make impossible: I need a few custom changes in publicly available libraries. An extra method. A bugfix. What have you. This makes that impossible using the default method.
What we want is consistent builds. That means a build that happens, with the same source, with the same ... every time, always, on everybody's workstation.
Go modules still allow for changes to local copies. You are not required to use a proxy to use modules. Also, you can always fork a public project and use a replace statement in a module to redirect an import path to your customized module. Check out this related tool: https://github.com/rogpeppe/gohack
And another configuration file that you (and all tools) need to know about pointing at yet another global directory (global for all your projects). That override file, of course, does have to be in your project folder and is yet another thing to take into account.
If I want to share things, I am aware of links in filesystems, thank you very much Go authors. "go.mod" overrides ... I wish golang did not try to solve yet another non-problem badly.
How will that work with modules though? As in, how do you vendor a single library (that you want to make a fix to) while still building with module support? Right now, "module mode" is mutually exclusive with "vendor mode" and I don't think there are any plans to change that.
> Right now, "module mode" is mutually exclusive with "vendor mode" and I don't think there are any plans to change that.
This is not true. If you pass -mod=vendor, or place it in $GOFLAGS, go will build something outside of GOPATH, in module mode, using the vendor directory exclusively. I wish it was the default, but only because it encourages people to not use vendor directories (which is a bad habit), not because it can't be worked around easily.
> in module mode, using the vendor directory exclusively
But that's not what I'm asking. I want "go build" (with some flags) to use my fork (vendored if needed) for one library, and pick up all other dependencies from the module cache as usual. How do I do that?
Because this is already possible in a vendor-only world; I just edit the code in vendor.
You check out the dependency you want to change and add a replace directive in go.mod. Either a temporary one to a file system path or a permanent one to a forked repository.
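Concretely, the two forms of `replace` look like this in go.mod (module paths and versions here are invented for illustration):

```
module example.com/myservice

require github.com/some/dep v1.2.3

// Temporary, while developing a fix: point the import path
// at a local checkout on the filesystem.
replace github.com/some/dep => ../some-dep-fork

// Or permanent: point it at a pushed fork with a version.
// replace github.com/some/dep => github.com/you/some-dep-fork v1.2.4
```

Everything else still resolves from the module cache as usual; only the replaced import path is redirected.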
I have to violently agree with this (consistent builds).
All the infrastructure with index, notary, mirror, etc. is nice, but at the end of the day you have to fetch some external code, verify it, lock it down in some vendor directory, and check it into your own backed up repo. Everything else is fluff.
If you're not doing this you're dancing with the devil and one day when you need to tweak some module or fix some bug, and the remote source is long gone, you will rue the day you depended on anything except your own source storage. And it will be 3 AM on Saturday, in some dismal server room, with no outside Internet connection, your customers screaming at you, and your phone battery dying, because Murphy's law.
For the purposes of software builds just pretend the rest of the Internet does not exist. Vendor everything, all the time, everywhere, and trust nothing.
Yeah. I feel like the modules work for the most part, but as soon as I want to develop cross project I have issues. Do I use the replace directive in my go.mod file? That's just gonna lead to merge issues whenever someone bumps a version number. Do I build using the vendor method that sounds like it's going to be ripped out at some point? How does my dev repo even get linked into that?
This whole gopath-less thing is a bit confusing to me. I wish the 'how to use packages' stuff was clearer.
What do you mean by "decentralized"? Do you mean someone can add a third-party repository and download packages from there? (Then it's been the case for other environments too)
I think decentralized is meant as "there's no central repository" here.
There are two different meanings of 'central' in play here, 'center' as in a town square, and 'center' as in the centralized component of a system.
It does not imply that there is a central repository when using 'central' to mean 'a unique, difficult to replicate, or otherwise somehow special component in a system' as in 'centralization'.
Central in the case of 'Maven Central' means something closer to 'default' or 'town square' or such.
Saying that 'maven central' implies maven isn't decentralized would be the same as saying "Most go projects are on github, so go can't be hosted in a decentralized way because github is 'golang central'".
> Saying that 'maven central' implies maven isn't decentralized would be the same as saying "Most go projects are on github, so go can't be hosted in a decentralized way because github is 'golang central'".
It's not, though. Because github isn't the default. You have to explicitly specify that you want a project from github every time you use it.
The difference might be minor from a technological standpoint, but it means a lot in terms of how centralized things become culturally.
> One of the most important parts of the original design for go get was that it was decentralized
That's one of the things I like in go most. Having decentralized packages via domains and URLs and then index them (godoc.org works very well), it mirrors the design of the Web. In contrast npm, Rust and others that are tightly coupled to one site feels like Google AMP - centralizing and hosting everything in one place.
Cargo (in the nightly channel, but eventually will become stable) and npm can both use alternative package indexes and vendored dependencies, so they are not tightly coupled. Having a standardized package index as a database is what allows efficient dependency solving, which Go hasn't considered providing up until recently.
> that anyone should be able to publish their code on any server, in contrast to central registries such as Perl’s CPAN, Java’s Maven, or Node’s NPM. Placing domain names at the start of the go get import space reused an existing decentralized system and avoided needing to solve anew the problems of deciding who can use which names. It also allowed companies to import code on private servers alongside code from public servers.
lol statements like this are infuriating. Go is no more decentralized than those languages, unless you consider git hosting something people in standard practice do on their own (survey some of the top Go dependencies and see if that holds true). They simply lack a package index, but the cited language package managers both have those and allow you to host your own.
It seems like there's a mistake in the diagram. The "notary → mirror" arrow should be replaced with a "notary → go command" one, because the go command shouldn't trust the mirror when it comes to cryptographic hashes.
I think the mirror can pass through the public signature provided by the notary. That cannot be spoofed if you have a trust chain for the notary to ensure the mirror has not tampered with the module.
A centralized module index will be nice; I tend to end up searching github.com, which leaves out all the other sites or locations that could have a module that solves my problem.
Note that there's a difference between "central registry" and "central index". In particular, I don't see any downsides from what they are changing - godoc.org is already, for all intents and purposes, a central index, so I don't see how anything changes in that regard.
So the mirror will serve both the package and signed hashes, what if the package has been indexed in the central index but not yet signed by the notary?
It is amazing that it took them this long to realize they were entirely wrong about how dependencies should be distributed. Since it was obvious from day 1 to most people from other ecosystems maybe they shouldn't have so much NIH. We'll likely get exceptions and generics soon as well. What a waste of time.
It's also important to get a number of other things worked out and discuss possibilities. The go process in terms of using a baseline directory with git sources seems to be a very pragmatic approach to start from. I started playing with node before npm was in the box. There were some competing ideas and eventually one won out.
I think that having a system in place with the language may be a better option than a company with its own motivations and needs separated from the language/runtime/platform.
As to generics, I think you may well find generics in the future. I feel that most of the resistance was in order to better support core language features. I can't think of any languages (I'm no expert) that started with generics support, so I'd be surprised if this wasn't a go 2.x goal.
For exceptions, I think that the go solution works. Similar to the callback interfaces in node, it puts errors in your face, which isn't a bad thing. I mainly contrast this with node, as it's another language/platform that has grown a LOT but also relatively recent.
You can compare the progress of node, go and others to say python2/3, C#, Java and others. I think go's progress has been great by comparison, and pragmatic choices have won out.
> It is amazing that it took them this long to realize they were entirely wrong about how dependencies should be distributed.
It didn't take "this long"; the Go team openly admitted the need for a better approach at the very first GopherCon back in 2014, nearly five years ago. (That was not the first time, either).
In between, the community developed many solutions while the official solution was lacking. So it's not like people were twiddling their thumbs for 5 years. Also, check how long Java took to get an official module solution: work on that started in 2008 and the solution was delivered in 2017.
Hah. They don't have one now. The de facto solution is Maven. Java itself doesn't actually have a module system like the one discussed for Go. It is merely an access control system rather than a dependency system.
To be clear, I give the Java people shit about it all the time and was on the original Java module system JSR that was supposed to deliver it. In the end, the Sun people yanked the dependency part and shipped a module system that is really only useful to the JDK.
The way dependencies are distributed (ie. downloaded) won't change a lot, and as they say the distributed model is very important. Maybe you mean how dependencies are installed by the go command?
(As a side note, I don't think there are any plans for exceptions -- for good)