I'm glad that escape analysis now allows for more natural if/then control flow.
But I'm still unhappy that using named return values requires you to put a superfluous "return;" at the end of such a function.
The whole reason I use named return values is to cut down on the boilerplate code that carries the return value around -- so why not cut out the final boilerplate 'return'? It doesn't solve any problem or add any information!
Otherwise, there are some nice improvements here. Other than the things mentioned so far, I'm glad to see reflection filling out -- the day a Go REPL will be possible is approaching.
The final return does add information. It is common to name return variables to make the meaning clear to prospective users, without using them in the body. Just to pick the first one I found, here is the definition for regexp.Match:
// Match checks whether a textual regular expression
// matches a byte slice. More complicated queries need
// to use Compile and the full Regexp interface.
func Match(pattern string, b []byte) (matched bool, error error) {
    re, err := Compile(pattern)
    if err != nil {
        return false, err
    }
    return re.Match(b), nil
}
If the final line is missing, the compiler rejects the function. You might well have meant 'return', which could be inserted implicitly, but you might also have been interrupted while writing the function and meant to write a 'return something'. The compiler (really the language) requires you to be clear to avoid inferring an incorrect completion.
One nit: the return rules are separate from escape analysis, which is about keeping things on the stack instead of allocating them on the heap.
1. Or maybe you were interrupted right after writing "return" :-).
This one weird simple trick could work (and give you white teeth): implicit returns only get inserted if all the named return variables are assigned to at least once in the body of the function. Perhaps that breaks your pure syntactic rule, though.
> Other than the things mentioned so far, I'm glad to see reflection filling out -- the day a Go REPL will be possible is approaching.
Very close indeed! Struct, array and function types still can't be constructed at run time, but as of Go 1.1, function values can be, as can slice, map and channel types. I exploit some of this in my `ty` package. [1,2]
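To give a flavor, here's a minimal sketch of run-time function construction with `reflect.MakeFunc`, new in 1.1 (the names are just for illustration, not taken from `ty`):

    package main

    import (
        "fmt"
        "reflect"
    )

    func main() {
        // Build a function value at run time for an existing function type.
        var add func(int, int) int
        impl := func(args []reflect.Value) []reflect.Value {
            sum := args[0].Int() + args[1].Int()
            return []reflect.Value{reflect.ValueOf(int(sum))}
        }
        reflect.ValueOf(&add).Elem().Set(reflect.MakeFunc(reflect.TypeOf(add), impl))
        fmt.Println(add(2, 3)) // 5
    }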
I tried to convince rsc at some point of the enormous value of REPLs for rapid development... imagine connecting to an embedded REPL on a server to do live debugging. I don't think I quite convinced him to drop everything and do it himself, though.
Do you have plans to experiment with a transpiler to smooth away all the remaining type noise?
Maybe a first step would be to explore how one could add pluggable 'dialect translators' to the go tool so that anyone could write simple extensions for the language while preserving the existing toolchain.
> Do you have plans to experiment with a transpiler to smooth away all the remaining type noise?
I don't have any particular plans; the `ty` package was a night of hacking plus several days of polishing/writing. :-)
One of the major bummers about moving into the reflection world is performance. My blog article talks about it a little bit, but it's also worth looking more closely at the things I didn't talk about in the benchmarks. (For instance, it seems that function calls in the reflection world pay a very steep price.)
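A minimal benchmark sketch of that gap (hypothetical names; put it in a _test.go file and run `go test -bench .`):

    package bench

    import (
        "reflect"
        "testing"
    )

    func double(x int) int { return x * 2 }

    func BenchmarkDirect(b *testing.B) {
        for i := 0; i < b.N; i++ {
            double(i)
        }
    }

    func BenchmarkReflectCall(b *testing.B) {
        // The same call routed through reflect.Value.Call.
        fn := reflect.ValueOf(double)
        for i := 0; i < b.N; i++ {
            fn.Call([]reflect.Value{reflect.ValueOf(i)})
        }
    }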
Re transpiler: do you mean a {Language}-to-Go source translation? If so, it seems like you'd want to avoid `reflect` completely in that case. But maybe I am misunderstanding.
> I tried to convince rsc at some point about the enormous value of REPLs for rapid development ... I don't think I quite convinced him to drop everything and do it himself, though.
A REPL would be very nice, but a REPL using `reflect` would definitely be a lot of work. You'd need to make extensive use of the sub-packages in `go` to convert the `Read` portion of the `REPL` into appropriate reflection types. You could do it now, but you wouldn't be able to define new functions, structs or interfaces. And `reflect` cannot spawn goroutines either, which is a bummer.
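The Read step, at least, is already well covered; here's a small sketch using `go/parser`, one of those sub-packages, to turn a line of hypothetical REPL input into an AST:

    package main

    import (
        "fmt"
        "go/parser"
    )

    func main() {
        // Parse a single expression typed at an imagined REPL prompt.
        expr, err := parser.ParseExpr(`strings.Repeat("ab", 3)`)
        if err != nil {
            fmt.Println("parse error:", err)
            return
        }
        // An evaluator would now walk this AST and build reflect values.
        fmt.Printf("%T\n", expr) // *ast.CallExpr
    }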
Just a wild guess, not sure about the true reasons: it adds the information that control flow can reach the end of the function and that the function actually returns something there. All of that assumes the writer doesn't write syntactically superfluous returns.
Also, I slightly disagree with you that named return values are only for short code. That may be true in functions that return at many points; for other functions they remove boilerplate that would otherwise be needed. If you write f, err := os.Open(...) inside an if block, that err is a different variable than your return value err. So you would need to add var f *os.File above the if in order to write f, err = os.Open(...).
Having said that, IMHO the greatest pro of named return values is readily understandable code, in particular for libraries.
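A minimal sketch of the two versions (cond and path are placeholders; assumes import "os"):

    // Without named returns: declare up front so plain assignment (=)
    // inside the if block reaches the outer variables.
    func open1(path string, cond bool) (*os.File, error) {
        var f *os.File
        var err error
        if cond {
            f, err = os.Open(path)
        }
        return f, err
    }

    // With named returns, the declarations come with the signature.
    func open2(path string, cond bool) (f *os.File, err error) {
        if cond {
            f, err = os.Open(path)
        }
        return
    }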
1. I'm not sure I understand. If a function shouldn't hit the end, put a panic there. If you want to emphasize that it does, put an explicit "return;" in. But requiring you to put in a "return;" yields no information at all. Dead code isn't an error.
2. You're right, they're not only for short code. That however is where the forced "return" is most glaring.
That's less of an issue for Go since, aside from interface{}, fundamental types can't be nil. Plus, the instance we're talking about here involves named returns, so the variables are already initialised.
From a personal perspective, I find having to include "return" a pain, as there are times when it's completely unnecessary, e.g. in switch statements or if/else chains where each branch has its own return. While it's an easy fix for if/else (just drop the else / default case), it makes the code look a little less pretty in my opinion.
I'll paraphrase something Andrew Gerrand said about this at a golang meetup last night:
Because race conditions are so hard to detect, the race detector is obviously prone to false negatives. Just because the tester doesn't find any race conditions doesn't necessarily mean that there aren't any. But the race detector never finds false positives. If it finds a race condition, that condition is very real.
You could just run your app with the race detector on all the time, but there is a performance cost to using the race detector.
One way to get around this in cloud/clustered environments is to deploy your app on a few machines with the race detector on and the rest with the race detector off. That way you're running your app with a production load and you're more likely to find race conditions, but you'll mitigate the performance costs associated with the race detector.
It makes sense to me that it can't possibly detect all race conditions but I had never really thought about the ability to detect any race conditions programmatically.
Running the detector on just a few nodes sounds like a great way to offset the performance penalty a bit. The docs on the race detector say that "memory usage may increase by 5-10x and execution time by 2-20x" which could be quite significant.
I also wonder about the effectiveness of randomly fuzzing your app with the race detector on as a form of testing.
It will always be inexact. All dynamic analyses of this type are, because they only observe what the program does as it executes and do not enforce any restrictions.
If you want a more guaranteed form of race freedom, you need to constrain what programs can do, either with dynamic restrictions on mutable state sharing (like Erlang or the parallel extensions to JavaScript do) or with a type system strong enough to reason about sharing of data (like Haskell or Rust).
We used this during a load test and it was able to find insanely small concurrency issues like integer increments; it's quite amazing. If you're doing Go, use it.
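For a flavor of what it catches, here's a minimal sketch of that kind of increment race (build or run with the -race flag, e.g. go run -race race.go, to see the report):

    package main

    import "sync"

    func main() {
        var wg sync.WaitGroup
        counter := 0
        for i := 0; i < 2; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                counter++ // unsynchronized read-modify-write: a data race
            }()
        }
        wg.Wait()
        println(counter)
    }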
Honest question: aren't the concurrency primitives in Go largely intended to make concurrent memory accesses like this an anti-pattern? If so, why are so many races cropping up that a detector tool is necessary/very useful?
> Honest question: aren't the concurrency primitives in Go largely intended to make concurrent memory accesses like this an anti-pattern
Intended, yes. But that doesn't mean we also develop "cleanly&properly" at all times, in the real world. It's a great "backup" for when formerly-prototyping-now-production code is getting slightly out of hand over time.
If we're talking about wishlist items, I'd like a tool, any tool, for detecting memory leaks. I understand they can't use Valgrind itself[1], but as it stands, detecting where leaks exist can be very difficult.
Import `net/http/pprof` (for its side effects) in your web server and then visit /debug/pprof/goroutine (or even just /debug/pprof). That listing shows all the active goroutine stacks, but it groups bunches with the same stack into a single entry with a count. Scanning the list, it is usually easy to see leaks (hey, why do I have 5000 goroutines with that stack) and also why they are stuck (because you have the whole stack).
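A minimal sketch of wiring that up:

    package main

    import (
        "net/http"
        _ "net/http/pprof" // registers the /debug/pprof/ handlers
    )

    func main() {
        // Visit http://localhost:6060/debug/pprof/goroutine to see
        // active goroutines grouped by identical stacks.
        http.ListenAndServe("localhost:6060", nil)
    }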
Outside of using cgo or working on the Go runtime (written in C), how are you leaking memory in a garbage-collected language? Your code either has a reference to something or it doesn't. If your code has a reference to something, how is a tool going to know you don't mean to have that reference?
Outside of the "dangling reference" issue above "leaking" in a GCd language is non-trivial. Here is a related Java discussion: http://stackoverflow.com/questions/6470651/creating-a-memory... . Also note Go doesn't have ClassLoader or class static fields.
You can leak memory by leaking goroutines.
If a goroutine is waiting on a channel that nobody else has access to, it lives forever, as does the memory it references.
So, you can pretty easily leak memory without messing with unsafe things.
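A minimal sketch of such a leak:

    // leak starts a goroutine that blocks forever: once leak returns,
    // nothing else holds ch, so no send can ever unblock the receive.
    func leak() {
        ch := make(chan int)
        go func() {
            <-ch // blocks forever; the goroutine and all it references leak
        }()
    }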
I somewhat disagree. Even if you don't use channels as iterators/generators (which many folks do), it's not hard to end up with a goroutine blocked on a channel that'll never be closed/written to, and this situation (like memory leaks) can result from changes elsewhere in the program or branches not normally taken.
A goroutine count doesn't seem like it'd be useful for diagnosing this in a non-trivial program. The runtime could probably detect if there are goroutines blocked on channels that no other goroutine has access to, and that'd be quite helpful for debugging, but as of now it doesn't. Even if it did, it couldn't catch all goroutine leaks.
> A goroutine count doesn't seem like it'd be useful for diagnosing this in a non-trivial program.
Sure it is: if you are leaking goroutines you will see an ever-increasing count, and if the count doesn't return to the proper baseline even when your app is idle, then you know you have a problem.
If you start a goroutine you should have a plan for terminating it. If you don't have a natural life cycle, like that of handling a request, then you need to use channels (defer/close are your friends), waitgroups, condition vars, etc. I work on some fairly large Go applications and this hasn't been a pain point.
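One common termination plan, sketched with a quit channel and defer/close (the names are illustrative):

    func worker(jobs <-chan int, quit <-chan struct{}) {
        for {
            select {
            case j := <-jobs:
                _ = j // process the job
            case <-quit:
                return // clean exit: no leaked goroutine
            }
        }
    }

    func run() {
        quit := make(chan struct{})
        defer close(quit) // releases the worker even on early return
        jobs := make(chan int)
        go worker(jobs, quit)
        jobs <- 1
    }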
The reason I don't like this way is it can be harder to refactor. You don't want "return 1" happening if cond. For example, if you refactor to something like this:
func hello() int {
    var result int
    if cond {
        result = 0
    }
    result = 1 // bug: unconditionally overwrites the 0 set above
    // New code with result
    return result
}
Now, in this case, you don't want result to be 1 if cond, so you have to add the else branch. If you start with if-else, this is less likely to bite you in the future.
This particular bug just bit me in a bad way in production: I had what you have, had to make a quick production fix, did a refactor just like this, and missed adding the else.
In languages with a ternary operator I would do this (I agree with Go's decision to leave it out, but in a case this simple I'd use it if it were there):
int hello() {
    return cond ? 0 : 1;
}
In Go, I'd do this, but then I seem to like named return values more than most Go programmers...
func hello() (res int) {
    if !cond {
        res = 1
    }
    return
}
Of course, coming from C/C++, it would have to be an extremely special case for me to have logic where "true" mapped to 0 and "false" mapped to 1, because that just seems wacky.
EDIT: Sorry, let me explain (I'm not an asshole, really!). I disagree with using named returns for things outside of signaling error/ok states (as explained by Andrew). I feel that our signatures should be written concisely for users of our API, not for our convenience.
Yeah, as do other Go programmers I know of, which is why I said "I seem to like named return values more than most Go programmers".
I respect Andrew Gerrand and Brad Fitzpatrick quite a lot, but I still often use named returns, even on small functions. Doing so usually makes the actual function code more concise and more readable for me, and I don't think the negative impact on the docs is significant. IMO auto-generated go-docs have far worse problems than the 'noise' from named returns; they suffer a lot more from core language decisions like the flexible interface system. And to be clear, I think the interface system in Go is brilliant and I love using it, but I also think it makes auto-generated go-docs hard to digest (and use as quick references) in a way that auto-generated OOP language docs (javadoc, doxygen for C++, etc.) aren't.
Back when I was in college doing Java, Eclipse would throw up an error for unnecessary "else" statements. Ever since then I can't help but write it your way as well.
I've been taught by some pretty experienced engineers that, in terms of readability, multiple return statements are a bad idea: instead, you should conditionally set a return variable and return it once at the end of the function. But I'm not sold... what is HN's thought on this matter?
I understand that, but you can have a macro wrapping a goto statement to a predefined label which will do that for you (and potentially set some errors). It's debatable whether this really gives you anything, but I kind of like this style, since you can replace the whole if statement with a single line, something like check_memory(pointer);. The gotos become particularly useful if you want some cleanup done at the end of the function even if something during the function fails.
You were misled by engineers parroting the ideology they were taught in school. Else statements are far less readable than straight-line control flow with early returns.
I think a blanket ban on multiple return locations is silly, as they can often be used to simplify code. There may be times when setting a return value is preferable, and I think you should use your judgement there.
The mentality of using a single return statement at the end of a function comes from languages that require memory management. In these languages if you return early you would cause a leak by not releasing your resources at the bottom of the function before returning.
With go's defer mechanic (defer f.Close(), defer l.Unlock()), and the way they handle errors, multiple returns are basically the way code comes out naturally. I think it's more readable than juggling a bunch more variables and returning at the end, others may disagree.
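A sketch of how defer keeps early returns leak-free (assumes imports io and os; the function is illustrative):

    func copyFile(dst, src string) error {
        in, err := os.Open(src)
        if err != nil {
            return err // nothing to clean up yet
        }
        defer in.Close() // runs on every return path below

        out, err := os.Create(dst)
        if err != nil {
            return err // in is still closed, via the defer
        }
        defer out.Close()

        _, err = io.Copy(out, in)
        return err
    }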
Logically, a simple if-else return tends to follow three basic forms:
if x
    return a
return b

if x
    return a
else
    return b

if x
    r = a
else
    r = b
return r
Out of these I find the first is the most prone to maintenance errors. It's easy at a glance to see the final return, insert something in front of it, and miss that it needs to happen on another path. At least in the other two cases the indentation makes it clear that it's a conditional return path and you look for others.
I don't have a problem with a "throw" instead of "return a" in any of the forms because that's expected to be an aborted path anyway. In the case of two returns, maybe it is, maybe it isn't.
It's a small thing but when you read hundreds of thousands of lines of code, every little thing that makes it easier is worthwhile.
I prefer to avoid multiple return statements and follow the single-return-at-the-end rule.
However, I happily make exceptions for:
a) Simple shortcut checks at the top of the function. These tend not to increase the complexity of the control flow and can really simplify it.
void free(void * p)
{
    if (!p) return;
    ... rest of function
}
b) Cases where it's just plain unnatural to do it any other way. This can occur with state machines and complex loops.
When I do this, I make sure to put a comment way out on the right.
Before folks jump on me for having nested switch statements or "complex loops" in the first place, let me point out that when I write this type of code it's usually because I'm processing a data format defined by somebody else.
In a question about this on programmers.se, a slightly different history of the "Single Entry, Single Return" mantra is presented: http://programmers.stackexchange.com/a/118793/4025 Essentially, it is argued that the practice that is warned about is to return to different places from the same function, not from different places within the function.
On a separate note, my take is that multiple returns are necessary to write readable understandable code quite often. Guard statements (either handling normal simple boundary cases, or throwing exceptions) at the beginning simplify logic and gives a clean reading of the code.
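A minimal sketch of such a guard in Go (hypothetical function; assumes import "errors"):

    // The boundary case exits early; the main logic stays flat.
    func divide(a, b float64) (float64, error) {
        if b == 0 {
            return 0, errors.New("division by zero") // guard: fail fast
        }
        return a / b, nil
    }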
I agree with the other commenters, but I'll say that I understand the original intent of the rule was to avoid confusing logic, such as:
if (x):
    do(thing1)
    y := do(thing2)
    if (y):
        do(thing3)
        return 0
    else:
        return -1
else:
    do(thing4)
    return 0
___
As you can see, such logic could quickly become hard to test and reason about. Does a single return help all that much? Not in and of itself, but it does tend to make writing such code a bit more painful, leading to better designs. However, guard clauses are a superior design in general.
I still avoid multiple returns in my main logic when side effects are involved, at least when I can.
I used to follow a bunch of best practices like this, that I now often find to be of too little benefit. If the function is small, multiple return statements won't significantly affect readability and it's simpler to code.
You have a group of parameters that naturally go together. ...when you have a bunch of methods that call each other, all of which have a clump of parameters that need this refactoring. In this case you don't want to apply Introduce Parameter Object because it would lead to lots of new objects
There are times when this transformation yields simpler code, there are times when it makes things more complex, and there are a lot of cases in between where it's a judgment call.
No, gccgo has been developed as a first-class compiler; the intent has always been to separate Go-the-language from Go-the-implementation, to prevent a single implementation from becoming the de facto standard over the language specification - a problem which we've seen happen in many other languages.
You can essentially use gccgo as a drop-in replacement for gc; it's only an extra command-line flag to specify the compiler (e.g. `go build -compiler gccgo`).
gccgo is actually likely to be faster than gc for most computationally bound code, since it piggybacks on the optimizations gcc has accumulated over the last couple of decades. However, gc may be preferred if you're relying heavily on goroutines (i.e., hundreds or thousands of them), since gc is better optimized for that.
"It's worth mentioning the function representation change, since it means closures no longer require runtime code generation (which among other things allows the heap to be marked non-executable on supported systems)."
TL;DR: functions are now represented as (function pointer, pointer to a context structure), whereas before they were represented as a single function pointer.
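A small sketch of why the context pointer is needed: the two function values below share the same code but must carry different captured state.

    package main

    import "fmt"

    func counter() func() int {
        n := 0
        return func() int { // value = (code pointer, pointer to captured n)
            n++
            return n
        }
    }

    func main() {
        a, b := counter(), counter()
        fmt.Println(a(), a(), b()) // 1 2 1: same code, separate contexts
    }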
Right now it's hard to do numerical analysis on large problems using Go. To fix that, I'd be really excited to see bindings for MPI or something like it. In particular, Go doesn't have a concept of an Allreduce or an AllToAll communication. I know that one can do that by calling the C MPI bindings from Go, but it would be cool if there were a natural way to do it using Go's parallelism and concurrency patterns.
It would also be nice to have a first-class array object - from what I understand, current support for multidimensional arrays is very C-like in that you're really dealing with pointer arrays.
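A sketch of what that looks like today: a "matrix" is a slice of row slices, each separately allocated, rather than one dense block of memory.

    package main

    import "fmt"

    func main() {
        rows, cols := 3, 4
        m := make([][]float64, rows)
        for i := range m {
            m[i] = make([]float64, cols) // each row is its own allocation
        }
        m[1][2] = 42
        fmt.Println(m)
    }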
Why not write your own, better, MPI with Go? That's exactly the kind of thing it is designed for.
I agree about the numerics. I'm tempted to write a very bare-bones implementation of Mathematica's kernel in Go, but I'll probably wait until people have done more numerics work in Go.
I haven't made the jump to any of the 1.1 betas or this RC, but the reported 30-40% performance bumps look nice. I'm really still waiting for an easy way to get gccgo working on OS X, though.
And I'm glad that they're bumping up the heap size to system-dependent values on 64bit systems. The fixed heap size of 16 GB was a really unfortunate constraint (even if it could be changed with a little hacking around).
Gccgo doesn't work on OS X (or any other non-ELF platform), and it's useless on anything but Linux anyway, since segmented stacks are supported only with the gold linker, which hasn't been ported to non-Linux platforms.
You can use Go with Google AppEngine [1], though it's experimental. AppEngine also supports Python and Java. It's a nice environment to experiment with Go, since you'll get quite a bit of free bandwidth / storage [2] to start out with:
> "Not only is creating an App Engine application easy, it's free! You can create an account and publish an application that people can use right away at no charge, and with no obligation. An application on a free account can use up to 1 GB of storage and up to 5 million page views a month. When you are ready for more, you can enable billing, set a maximum daily budget, and allocate your budget for each resource according to your needs."
Sure. Go doesn't yet have any mature web frameworks like django for python but quite a bit of what is needed to build web apps is already part of Golang. (Packages net/http, html/template etc.)
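A minimal sketch of a web app using only those standard packages:

    package main

    import (
        "html/template"
        "net/http"
    )

    var page = template.Must(template.New("page").Parse("<h1>Hello, {{.}}</h1>"))

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // html/template escapes the query parameter automatically.
            page.Execute(w, r.URL.Query().Get("name"))
        })
        http.ListenAndServe(":8080", nil)
    }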
If I remember correctly, Google switched their download service to Go, and there was a post here not long ago claiming they went from a lot of servers to merely two by switching to Golang.
Is Go really that efficient? I heard similar claims from somewhere else, and this has been a motivation for me to learn and then build something in Go.
And what might be the reason for that? Speed? Parallel processing?
As someone who writes Go code professionally, I find Go code to be really easy and quick to write, with fewer bugs/LOC than any other language I've used. Both compilation and execution speed are really fast, so you have both quick edit-compile-test turnarounds similar to a scripting language while having the execution speed of native code. Not quite as fast as C and C++, but getting there.
Writing concurrent code is incredibly easy with the primitives Go provides: goroutines (think of them as green threads multiplexed onto one or a few OS threads) and channels (similar to pipes). No more faffing around with the details of thread creation and teardown, and no unreadable mess of callbacks (as you'd have in a system built on event loops). So, if you've written concurrent software (like network servers) before, check it out; you will enjoy it.
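A minimal sketch of that style (the hosts are hypothetical):

    package main

    import "fmt"

    func main() {
        hosts := []string{"a", "b", "c"}
        results := make(chan string)
        for _, h := range hosts {
            go func(h string) { // one goroutine per task; no thread management
                results <- "checked " + h
            }(h)
        }
        for i := 0; i < len(hosts); i++ {
            fmt.Println(<-results)
        }
    }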
> Both compilation and execution speed are really fast, so you have both quick edit-compile-test turnarounds similar to a scripting language while having the execution speed of native code.
It always makes me smile when young C and C++ developers rediscover the compilation times we old-timers already enjoyed with Modula-2 and Extended Pascal compilers in the mid-80's.
I learned programming with Turbo Pascal, so I already knew that compilers could be fast and produce efficient code. ;-) If only innovation in Pascal hadn't stopped, we might be using it more than we currently do.
The industry just decided to look into another direction and now with buffer exploits everywhere, it is rediscovering that you can have strong typing with compilers that produce native code.
Now were we talking about Pascal or everything that vaguely looks like a Wirth language? Delphi, alright, Ada, okay, but the others? Meh. None of them had anything fundamentally innovative or were just too obscure from the beginning.
> Now were we talking about Pascal or everything that vaguely looks like a Wirth language? Delphi, alright, Ada, okay, but the others?
I explicitly mentioned Modula-2 in my previous post, and most languages on the list were actually designed by Wirth or with his input.
> None of them had anything fundamentally innovative or were just too obscure from the beginning.
Well, I consider systems programming languages with GC pretty innovative, given the way Native Oberon and Bluebottle were used at ETH Zurich. Even if the languages are pretty basic when compared with Ada or Delphi.
"...its successors Modula and Oberon are much more mature and refined designs than Pascal. They form a family, and each descendant profited from experiences with its ancestors." Niklaus Wirth
Modula and Oberon do not vaguely look like Wirth languages, they are Wirth languages. And unlike Pascal, which was designed mainly for educational purposes, Modula and Oberon were designed for real-world usage.
z/OS is coded in a mix of Modula-2, PL/I and Assembly. Newer parts of the system are nowadays written in C++.
The problem with any systems programming language is that it needs to be pushed on developers by an OS vendor, otherwise very few will use it as such.
While I don't have any first-hand experience with Go's performance, it is not that hard to believe it will be fast enough: it is a compiled language, so there is no interpreter/JIT overhead as with Ruby/NodeJS. Plus, they try to stay close to C where it makes sense (it has pointers but no pointer arithmetic, for example).
The synthetic language benchmarks also put it very close to gcc -O2 C performance. About the only things that still need improvement (so I've heard) are the GC, the goroutine/thread scheduler, and crypto performance. They are already better in the 1.1 release, but crypto is still not close to OpenSSL performance.