Hacker News
Tor in a safer language: Network team update from Amsterdam (torproject.org)
343 points by _vvdf on April 1, 2017 | 244 comments



As a mere average user of computer languages, every time I play around with Go I start wondering how a language like this became so popular.

It feels like it was invented in a universe where Haskell, OCaml, Erlang, Smalltalk, Lisp and so many more languages and research in languages never happened.


> As a mere average user of computer languages, every time I play around with Go I start wondering how a language like this became so popular.

You can pick it up in a weekend. A lower entry bar means more people will try it out.

> It feels like it was invented in a universe where Haskell, OCaml, Erlang, Smalltalk, Lisp and so many more languages and research in languages never happened.

It was developed in a large enterprise context, not in an academic context. I think it shows.


So was Erlang. The language, the runtime, and the standard library were all purpose-built together to solve a real world problem... to build fault-tolerant, distributed network applications.

Which makes Go's rapid popularity as a language for solving the same problem even more peculiar. Especially since Go's surrounding tracing, debugging, and online code swapping facilities are so much worse.

I'm sure some component of that is Ericsson's complete lack of interest in evangelism and Erlang Solutions' apparent lack of capability to effectively evangelize, but it still seems like Go isn't so much filling a void as it is being better at attracting an ecosystem that delights in reinventing a particular set of wheels.

Java was very much the same way for a very long time.


> So was Erlang. The language, the runtime, and the standard library were all purpose-built together to solve a real world problem... to build fault-tolerant, distributed network applications.

I'm probably outing myself as the lord regent of all impostor plebs, but Erlang is not IMO an approachable language, regardless of its origins. Put differently: it looks completely nuts. I'm sure there's a method to the madness, but comparing it to Golang feels misguided (at least, from the perspective of awful programmers like myself).


Erlang is a very simple language. It doesn't look like C, but it has a very basic syntax and a narrow set of core constructs.

Elixir by comparison is significantly more complicated, but the fact that it looks superficially like Ruby is apparently very attractive.

I suspect I could sit down with you for an hour and remove all the weirdest-seeming bits which feel alien, such that the rest of it would just feel like any other programming language, and you'd be able to immediately start dissecting and understanding other people's code.

Part of the problem with Erlang (again, an evangelism problem) is that there's not much in the way of attractive and well-structured on-boarding documentation. It's mostly really simple. It's just that the Erlang docs seem to go to great lengths to obfuscate that fact. In large part because they don't do anything to intentionally build "background schema" with the reader by drawing comparisons to similar things they're probably already familiar with in other ecosystems.


I don't think looking like Ruby is at all what makes people like Elixir. Maybe it has attracted a bunch of people to look at it because of that, but I don't think that's why anyone stays (because anyone who cares about that quickly realizes that it isn't actually very much like Ruby at all). I agree that it's a more complex language (though I wouldn't say "significantly", but that's very subjective), but it doesn't really "feel" that way to me because its design choices seem more consistent. But on top of that, it has really good tooling, which I always found to be a major pain point in getting started with Erlang.


Like you, I am also down with OTP (yeah u kno me) and that's why I have to come to you with a harsh message of love: You've been bamboozled.

    {ok, u_no_me, {down_with, init, [State, Mod, OtherMod, Pid, SomeImportantRefIforget]}, {permanent, brutal_kill, 10000, 9999, 100000, 5, worker}}
Is not a genius language construct of a faraway Alien race. It's not elite. It's not even Swedish. It's just limited and unclear. (there are even more unclear examples in the annals of Erlang, but everyone is familiar with consulting and reconsulting the child specification documentation)

There are just so many things that make dealing with OTP nicer in Elixir. This is to say nothing of the meta-patterns that Elixir has driven home: Agents, Tasks, the actual supervisor/worker interfaces themselves. You could go on here.

I haven't even gotten to macros -- Have you ever had to make your own behavior, say for an acceptor/worker pool of some variety (presumably non-TCP, otherwise just ranch that out)? Making your own behavior (what Armstrong himself called "really advanced") is the easy part here. You've got to create a system where you pass around the TRUE module that holds the correct `handle_call/cast/info` implementation back and forth through the true module and the OTP state parameter.

This whole pattern (which comes with very real runtime costs) just doesn't even exist in Elixir. You just create a macro. A macro that sets all of this up at build time. No complex supervisor hierarchies where you need to constantly inspect each and every message, no custom behaviors, etc. You just have a mostly sane way of sharing software logic to begin with.

You can work up really clever solutions with parse_transforms, even standard "-define", but today Elixir has a totally brilliant solution to this problem and many more classes of problems.

This isn't to say there aren't blind spots (beyond "wrap proc_lib/inets/other hated library"). There are minor issues here and there. Process registry naming can get confusing, especially operating between Erlang+Elixir supervision trees. Low-level details aren't nearly as well-publicized in Elixir either. There are even big issues, like the Elixir community leaning more "I'm playing around with Elixir because Phoenix is Webscale" than the grizzled realtime adtech/gambling/finance gurus, so if you want help with OTP platform/VM details, you're going to have a harder time.

I first tried Elixir at v0.08, when the `&` function-capture syntax had just been standardized. I suspect that, like me, you tried it then, when it wasn't a fully-featured alternative yet. Things have changed, and Jose Valim has a vision for the future of Erlang that I think you'd be foolish not to pay attention to.


I do pay attention to it. I actually host an Elixir meetup out of my own pocket... going on 2 years now. I'm ~$3000 into a labor of love helping people learn and be exposed to Elixir's various features and help them make sense of its Erlang underpinnings.

Having used both Erlang and Elixir "in anger" there are a whole bunch of things that bug me about Erlang. There are a whole bunch of things that bug me about Elixir too. I'm not sure what any of that has to do with what I said. Elixir is a more complicated and larger language than Erlang.

It has structs (which are weirdly implemented as maps instead of records). It has type-classes/traits by way of "protocols" (a whole different layer of metaprogramming that could be replaced with behaviours plus data-structure definition conventions, or dialyzer type-specs). It has the pipe operator (which strangely elides the first argument of a function, such that map/2 becomes map/1 as written... why no placeholder sigil?). It has Agents (which are confusing to have in the standard library, as they're a specific implementation of a gen_server acting as a KV store, and really, really prone to data races). It has a fair bit of metaprogramming to be aware of, in terms of macros and hooks that trigger at code loading/import time (which forces one to be much more aware of the inner details of the 3rd-party libraries being used). And it eventually devolves into requiring knowledge of Erlang when attempting to do anything related to debugging, tracing, or distribution.

Elixir is a great language, and it has a lot of neat features, and I love teaching it to people and being increasingly critical of Erlang based on advancements in usability and community engagement/feedback that I see happening around Elixir. But I don't really need to be proselytized to about it.


Golang is a simpler Java, and that is its target. It even has the same perf profile as Java. Erlang does have a worse perf profile.


Golang's performance profile is very different from Java's. Its speed is closer to C++'s, its garbage collection is optimized for latency rather than throughput, and it has very low startup time, comparable with typical native code.


You can actually get a performance profile very much like Go's out of Java...

* Inasmuch as you can make a blanket statement like this about a language, Go's speed is very much comparable to Java's.

* Java has a huge number of garbage collectors available. You're just talking about the default, but there are extremely low-latency GCs.

* Java has a bunch of ahead-of-time compilers; thanks to Android this might be the most common deployment of Java.

Even green threads were tried (and sensibly abandoned) in Java before.

I think people may not be aware of all the options available in the Java ecosystem, but other than sized types, which are theoretically coming to Java 10, there isn't much performance-wise that Go does and Java doesn't.


Just because a language is compiled doesn't mean it will approach C++ speeds. The GC itself is a hard limit on perf. What I've seen in practice:

1x time: C

1.5x time: C++ (with smart pointers)

~2x time: Objective-C (almost everything is a heap smart pointer)

3x time: Java, Golang (optimized GC languages)


Yes, Erlang does, strictly for execution throughput, but less so for concurrency handling. That is also a solvable problem. It just so happens to be a problem neither Ericsson nor ESL have been interested in solving.


Remember, Erlang was first and foremost developed as a programming language (and runtime) to run Ericsson's phone switches.

Session establishment for PSTN can be relatively expensive (much in the way of TLS handshakes), so concurrency and shared-nothing memory model together allowed for real-time streaming to keep on working no matter what else happened. The three main features of a PSTN switch are, after all:

1. Reliable call switching

2. Reliable real-time throughput

3. Reliable billing and accounting data generation

We don't think much of throughput these days, when any home office switch has gigabit ports and a 40Gb+ backplane. As far as I know, maximising raw throughput was not a primary consideration with Erlang. Reliable real-time streaming is much more about guaranteed latency - and incidentally, optimising between latency and throughput tends to be all about tradeoffs.


[I apologize for the absurdly snarky and ranty post, but I'm going to do it anyway:]

> You can pick it up in a weekend. A lower entry bar means more people will try it out.

... and then you get the JavaScript situation where everyone thinks they're an actual programmer at $TEAM_LEAD level of sophistication when it comes to modeling, design, architecture, testing and implementation... but they're really not. It's not their fault per se, it's just that they do not have the necessary experience to realize the areas that they're lacking in. (This is a well-known cognitive bias: More than 50% of people think that they're an above-average driver. There are a precious few who realize that they're not -- they are the exception.)

Programming is hard[1] and if there was an instant-humbling device, I'd buy it in spades.

[1] Most people focus on the "oh, that's Undefined Behavior" bits of it, but it's not really about that. It's about recognizing the types of mistakes that you make and working to avoid them or creating a system that does, whether that's by testing or proof or whatever. Obviously it still has to be _practical_, but AGAIN... it's about tradeoffs. If you don't care about correctness, I'm pretty sure I can whip up a "solution" to any problem that's fast as hell... and not correct. Summa summarum: I think we need to be thinking a lot more in terms of tradeoffs and not so much in absolutes.


What you described is basically true for any job, and it's more about the culture surrounding the language and particular use case rather than the entry level.

I've seen a lot of people in academia writing C++, mostly horrible code. I've seen electrical engineers writing awful assembly code. Because the incentive is mostly to get one task done today. None of these languages have a lower entry bar.

JavaScript is pretty much bound to a singular use case (web development) where there is a lot of incentive to get quick money, which attracts all kinds of people. I don't see how the status quo you're referring to would be much different if the browser scripting language were Haskell or brainfuck.


> What you described is basically true for any job

I agree, but there are a lot of jobs where it's actually effectively impossible to distinguish between good vs. bad practitioners of said jobs.

I just have this feeling that it should be possible in programming. (Because it's quantifiable... or at least quantizable into "works" or "doesn't work" along any number of axes.)

> I've seen a lot of people in academia writing C++, mostly horrible code. I've seen electrical engineers writing awful assembly code. Because the incentive is mostly to get one task done today. None of these languages have a lower entry bar.

Same here, only with FORTRAN and C.

(This hints that this may be a meta-problem.)

> JavaScript is pretty much bound to a singular use case (web development) where there is a lot of incentive to get quick money, which attracts all kinds of people. I don't see the status quo you're referring would be much different if the browser scripting language was Haskell or brainfuck.

Well, there is node.js, but really, the problem with JS is... JS. It's just a horrific language semantics-wise (syntax: meh). It was invented and implemented in ~7 days(?) and it really shows. (No blame towards Brendan Eich, he did the best that he could within the deadline and even got a little bit of higher-order programming in there.) It's been improving, but just imagine the burden of improving a language that's already been deployed to 1B+ computers. Not an enviable task.

Hopefully WebAssembly will (in time) address these problems and give us a real way to program the front end in $WHATEVER_LANGUAGE_YOU_WANT.


IIUC Pike spent most of his career in research, right?


A non-commercial university research group and a commercial corporate research group can be focused on solving very different problems.

I think Pike and the other designers skew more towards corporate research (Bell Labs). And surely the development of Go, as well as other Google research projects, is intended to win in the marketplace.


It's probably useful to look at the goals that the designers of Go had when they designed the language. All other discussions about what Go has or doesn't have seem irrelevant if it wasn't one of the goals. Pike et al. weren't interested in solving "your" programming language problems, metaphorically speaking. They wanted to solve problems they observed at Google and just happened to open source the resulting language.

From https://talks.golang.org/2012/splash.article

> The Go programming language was conceived in late 2007 as an answer to some of the problems we were seeing developing software infrastructure at Google


IIRC Pike worked on concurrent programming models long before entering Google.


Yes. Pike and Thompson were at Bell Labs before Google.


To be honest, you can probably also pick up Lisp in a weekend, experienced programmer or not. The syntax is also simpler.


But all those parentheses!

I know it sounds like a lame reason to dislike a language, but I've always found staring at Lisp to be so much more difficult and distracting than C-style syntax.


    printf("Hello world");
Lisp mode, move left parenthesis, remove semicolon

    (printf "Hello world")
Second round

    if (var == 2) {
      do_something1 ();
    } else {
      do_something2 ();
    }
Lisp mode, move left parenthesis, remove semicolon and curly brackets

    (if (= var 2)
        (do_something1)
        (do_something2))
In a big source file, I am not sure the number of parentheses is larger than the parentheses, brackets, curly braces, and semicolons counted together.


Did you try coding in it for more than a week? I have found that, using a properly indenting editor (Emacs, DrRacket), the parens just fade away.

And then you discover paredit and you start wishing every other language would let you treat your source code like that.


Yes. I was required to use Lisp for intro comp sci in college, followed by all the AI courses in Lisp, then a few jobs.

It's just never been a go to language for me.


Sounds fair. Most of the time, when people say "I don't like the parens", the closest they came to actually using a Lisp was JavaScript.


From experience of pairing with other people, there are those happy to leave indentation and formatting to their linter/IDE and those who are anal enough to do it all manually.

The latter group tend to find Lisp not that painful, because Lisp parens/s-exps are as explicit as you can get, and indentation makes parens almost invisible.


There was an attempt to deal with that:

https://en.wikipedia.org/wiki/Dylan_(programming_language)

Also, Julia started out as syntactic and/or semantic sugar on top of femtolisp: source got converted to s-expressions in a first pass, with LISP's power doing the rest. Here's Stefan Karpinski on that:

"So ultimately the reasons for Femtolisp are:

1. Scheme is excellent for writing parsers since trees (aka S-expressions) are its forte.

2. Femtolisp is a small, simple, highly embeddable and remarkably fast Scheme.

3. We control it (and by "we" I mean Jeff) and can fix any bugs we encounter."

Yep, No. 1 is those godawful parentheses and s-expressions making the job easier. ;)


Use Emacs and take advantage of its electric indent.


Parens are not syntax to me. Syntax is that Ada thought ' meant string indexing.


What parentheses?


When you talk about Go I think you should start with C.

One of my professors used to tell us that C was built by people who wanted to use it and didn't care about academic style.

In many ways Go is just the next step of C. C did not have object orientation, and even passing functions around was kinda hard. While C++ tried to bring object orientation to C (total failure), Go decided to keep the core values of C and instead improved the rough features (e.g. easier binding of functions to structures, faster build times).

By making it easier to pass around functions Go enables functional programming styles, but at its core, it is still just an improved C. The only revolution within Go (as a language) is the concurrency and channel concept and I think that was taken from some functional language (not sure).
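To make the "improved C" point concrete, here is a minimal sketch (my own illustration, not from the comment above) of the two improvements mentioned: binding functions to structures via methods, and passing functions around as ordinary values.

```go
package main

import "fmt"

// Point demonstrates Go's easier binding of functions to structures:
// a method is declared outside the struct body, with an explicit receiver.
type Point struct{ X, Y int }

func (p Point) Scale(f int) Point { return Point{p.X * f, p.Y * f} }

// apply shows functions as first-class values, something C only
// approximates with function pointers.
func apply(p Point, f func(Point) Point) Point { return f(p) }

func main() {
	p := Point{1, 2}
	q := apply(p, func(p Point) Point { return p.Scale(3) })
	fmt.Println(q) // prints {3 6}
}
```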

I really like Go, because it just feels right. It might not be as clean as Smalltalk or Lisp, but it has data structures and functions, teaches you how important interfaces are and lets you build highly concurrent applications with ease. In addition, it brings a nice set of tools which integrate well into a shell driven workflow.

After all, the whole thing should not surprise anybody, as Ken Thompson[1] was part of the team which invented Go.

[1] https://en.wikipedia.org/wiki/Ken_Thompson


I wish people understood this better.

I'm not a Go programmer, but I have a lot of respect for it.

If you're wondering "why Go"? Think of it as a modern version of C, at a slightly higher level, developed by the same people for slightly higher-level tasks. They made C for low-level stuff, and then picked up with Go for higher-level stuff. It's like C+.

C has been very successful in part because it's so simple in certain respects (although not in others) and I think Go will be successful for many of the same reasons. Go does what it does very well.

I think Rust is actually a great choice for something like Tor, but I wouldn't use Rust for some of the things I'd use Go for.


> It's like C+

That's perfect.


Go doesn't have faster build times than C. Rather, it has a de facto single implementation that's fast because it doesn't do the same level of optimization that modern C compilers do.

The one improvement over C related to build time in Go is that you don't need include files. Granted, that's a big one. But modern C compilers make header parsing extremely fast.


Go's concurrency model is based on CSP [0] and the previous languages Rob Pike worked on that were also heavily influenced by CSP.

[0] https://en.wikipedia.org/wiki/Communicating_sequential_proce...
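A minimal sketch of what that CSP heritage looks like in practice (my own illustration, with made-up values): two goroutines communicating over an unbuffered channel, where each send must rendezvous with a receive.

```go
package main

import "fmt"

func main() {
	ch := make(chan int) // unbuffered: send and receive rendezvous

	// Producer goroutine sends three squares, then closes the channel.
	go func() {
		for i := 1; i <= 3; i++ {
			ch <- i * i
		}
		close(ch)
	}()

	// Consumer receives until the channel is closed.
	sum := 0
	for v := range ch {
		sum += v
	}
	fmt.Println(sum) // prints 14 (1 + 4 + 9)
}
```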


> I really like Go, because it just feels right.

I think we've seen a few posts like that on HN and now I wonder, what that means exactly. What else falls under the "feels right" umbrella for you?


I am not sure if I can put it into words, but here are some comparisons:

- In Scheme/Lisp, you write the function name before the opening parenthesis. In languages related to C, you write the function name before it. For me, the second one feels better. In general, I quite like the C syntax.

- In C++ you easily provoke very long compiler error messages. Go compiler messages are much shorter and much more to the point. I like the ones where I do not have to search for the real error within the error messages themselves. (I heard Rust compiler error messages are even better.)

- To run a Go program on a computer you just need the binary, done. To compile a program you need the compiler, which comes with a few CLI tools, done. For Java, you have to decide if you need the JRE or JDK, and agree to some license before being allowed to download it from their website. In addition, you have to place the JRE on every computer which should run your program. I like the simple way.

- When I design a program there are a few parts. One is the entity-relationship model. Sometimes I think about it as something that has to be saved to an SQL database, sometimes as an object-oriented inheritance hierarchy. There are reasons for both, but it is even simpler to just think about it as a struct or JSON. This fits pretty well for designing JS and Go apps.

- When I write a bash script I know what a slow language feels like. I know there are faster languages than Go (e.g. Fortran), but I am also aware that much of the performance is in the hands of the developer (e.g. memory management). When I write Go programs I feel like my efforts to make the program efficient are worth my time, and the tools support me while doing so.

- Last but not least, Go supports the functional programming style when needed. I like that. Sometimes I miss object orientation a little, but that mostly happens when I forget what interfaces are for. And while I truly respect Alan Kay, I think object orientation has been overused/misused enough that it's OK with me that Go didn't build it into the language.

So maybe I just like 'simple'. I am sure that for every example I listed you can find a language which does even better than Go, but I think it becomes harder when you consider all the examples together. Nonetheless, the list is far from complete; I just tried to write down, off the top of my head, what I like about Go.


> In Scheme/Lisp, you write the function name before the opening parenthesis

after

> the second one feels better

Not really. Parentheses make code manipulation easy. Thus it actually FEELS better, though it may not LOOK better.


I hope you realize that you are arguing against an explicitly subjective statement.

Nevertheless, please elaborate. What do YOU think is the difference here between feeling and looking? Do you mean that your favorite editor (or the majority of editors) better supports outside parentheses?


Even original Vi from AT&T Unix (never mind Vim) better supports outside parentheses:

If the cursor is here

   (foo bar)
           ^
I can cut the whole thing using just d%: combine d)eletion with the % cursor movement (jump to opposite parenthesis). Then you can paste it elsewhere with p.

It's a POSIX-standard feature: see here: http://pubs.opengroup.org/onlinepubs/9699919799/utilities/vi...

That's just a crap editor for sysadmin tasks I wouldn't use for development.

If the syntax is:

   foo(bar)
          ^
then we jump only over the argument list, not the whole expression.

Another thing is the damned comma disease in f(x, y) languages. Say we are in Vi:

   (foo abc def ghi)
           ^
We want to swap the last two parameters. Easy: type deep. Done! d)elete to e)nd of word, go to e)nd of word, p)aste.

Now try it with

   foo(abc, def, ghi)
           ^
Annoying!

Imagine if your operating system shell forced you to use commas between command line arguments. Nobody would use such an idiotic thing. Why do we put up with languages that do that?

    $ ls, -l, *.foo  # just kill me now

Move second argument to third position: "parens outside" together with "no commas between arguments" makes it a breeze:

    (foo (a b c) (d e f) (g h i) (j k))
                ^
Instead of deep we just do d%%p. Done.

    (foo (a b c) (g h i) (j k))
                ^ d%

    (foo (a b c) (g h i) (j k))
                       ^ d%%


    (foo (a b c) (g h i) (d e f) (j k))
                               ^ d%%p


Thank you very much for your explanation, I see your point.

I think I will still prefer the C style for parentheses (as I would care more for the LOOK than for the FEEL ;-), but regarding the commas, I agree that skipping them would make life a bit easier. I mean, besides Lisp and bash/shell, other languages have done so too (e.g. Smalltalk), and spaces aren't allowed inside parameter names anyway.


> I hope you realize that you are arguing against an explicitly subjective statement.

Yes.

> Nevertheless, please elaborate. What do YOU think is the difference here between feeling and looking? Do you mean that your favorite editor (or the majority of editors) better supports outside parenthesis?

Lisp has a two-level syntax. The first level is the syntax of s-expressions. On top of s-expressions we have the actual Lisp syntax.

S-expressions have a few features:

  * it's a data syntax for lists/trees, numbers, characters, symbols, strings, ...
  * delimiters surround the data, so it is always clear where the expression begins and where it ends
  * whitespace is used to delimit the elements
  * s-expressions are not sensitive to lines and whitespace
  * s-expressions can be automatically formatted by simple rules, according to different widths
  * the tree structure is explicit, not implicit. It is visible, based on the s-expression nesting.
For an s-expression editor it makes little difference whether it is editing a data list like ((berlin germany) (rome italy) (paris france)) or code like (defun collide (object wall) ...).

The first level of editor support you get for editing s-expressions.

Thus editing on this level FEELS like you manipulate data: create, transpose, delete, list, de-list, flatten, copy, indent, format, ...

That every list has explicit delimiters makes clear where the expression begins, where it ends and what its contents are. The parentheses also serve as 'handles' for the thing. If you use some more advanced Lisp system, the s-expression creates a region and moving the cursor into this region enables context sensitive commands. This is possible in other systems, too. But here the relationship between the s-expression and the region is visually clear: each expression has explicit delimiters, front and end.

So, the first level of Lisp editing is data manipulation. That's a big difference to editing many other languages, where your program is not also a simple data-structure. There you are always on a language level, maybe on a primitive token-scanner level. You can reconstruct the tree structure, but it is not visible, explicit and delimiting like in Lisp. If you refactor a program, you work on the programming level - in Lisp you can work on a plain data level, too. This makes code and data interchangeable and when you work with a Lisp listener (running a read eval print loop), you will work with code as data and the listener helps you: you get support on the language level & the s-expression level on the editor side. But at the same time you can cross the border into the programming language: you can let Lisp manipulate your program. Thus programming becomes a mix of manipulating text and data. The s-expression syntax helps to make that simple - because of the features above.

A typical example would be writing a macro (Lisp code which transforms code) based on some existing expressions. You would take the expressions, convert them into data, create the transformation code, define the macro. Then you would test the code generator. Thus suddenly from writing code, you switch to writing code-writing-code and the input and output is no longer data, but code as data. Thus while programming you will interact with the code generator. This can be done in many languages, but in Lisp it FEELS different, because you work on s-expressions - easily delimited hierarchical pieces of code as data, which can be transformed by your editor and your underlying Lisp system.

After a while, editing conventional code FEELS less direct. It feels like you manipulate the code with instruments, while a good Lisp system feels direct. That is: direct manipulation of code and data.

A Lisp programmer will learn this code-manipulation side and then is willing to give up some looks for that. Originally Lisp had a more traditional surface syntax, but it turned out to be more practical to use the s-expression based code representation not only internally, but also externally, on the display or textual side.


Is there any good starting point to use lisp in the same way as I would use go? I mean I write web servers and command line tools in go. As an editor I use vim.

Some time ago I used DrScheme (now Racket) to write Scheme but I never found it to fit easily into my workflow/use-cases.

So I would like to use the language with the following workflow (100% terminal):

- write code with vim: vim main.lisp

- simply compile the code to a binary: lisp build main.lisp

- execute the compiled program: ./main

Any ideas?


I wouldn't propose to do it in Lisp. Batch programming is done better in languages like Go, especially if you are already feeling comfortable with a batch workflow. You can write web servers and command line tools in Lisp, but it would be a huge investment to learn that and it is not clear if it pays back for you.

Essentially for anything slightly complex one would use an interactive programming style and 'only' deliver the application in a batch style.

Racket is more oriented towards batch programming, compared to popular Common Lisp development environments.


Lisp is terrific for batch programming (too).

You can edit a Lisp program strictly in files, and run a build step to produce a clean image which is tested, using a REPL just as a debugger, to "go in" and find out what is wrong.


If he uses a REPL, it is no longer batch programming. The plain edit/compile/run cycle is best left to other languages.


Ok, so the definition of batch programming means that debugging is only allowed by print statements (or other side effects from which we infer what happened in an actual run of the program and work backwards). For instance, using a breakpoint debugger on C isn't batch.

Still, Lisp is good for that. I've debugged Lisp programs with print statements and it was at least as good an experience as debugging programs in other languages using print statements.

For instance, if we compare to C, C has no trace, and generally no easy way to wrap any function with a wrapper that takes the arguments. Ecosystems built around C, like the Linux kernel, have developed things like that: Linux has a function tracing thing in it (more than one, I think).


Go is designed for average programmers working in a huge organization. That's why its dullness is a virtue.


Seems weird that, given the intense interview process Google programmers go through, they'd feel a need to dumb down the language of choice.


No matter how smart you are (or think you are), or how much state you can store in your head at once, if you remove some of the mental overhead of a language, then you have more time to think about other things. It's not about people needing to "dumb down" anything, it's about having a bit less mental overhead, meaning you can get a bit more done a bit easier.


No matter how smart you are (or think you are), or how much state you can store in your head at once, if you remove some of the mental overhead of a language, then you have more time to think about other things.

So you use a language that makes specifying ownership (and immutability hard)? I always feel Go adds a lot of mental overhead.

E.g., suppose you have a method that returns, say, a float32 slice. 1. If I return a slice of an array/slice that is a struct member, the caller could modify elements through the slice, breaking struct invariants. 2. Returning a copy of the slice is safe, but adds a lot of overhead. What you'd actually want is to return an immutable slice, but Go does not provide any facility to do so (apart from wrapping the slice, but the lack of generics and operator overloading makes this tedious).

I guess a lot of Go code will just assume that returned pointers/slices/maps will not be used in a way that breaks invariants. But you usually end up reading the source code of 3rd party packages to see what is safe, whereas in other languages you could just read the method signature.

tl;dr: I think ownership and preserving invariants usually give the most mental overhead and Go does zero in that department.


I agree that mutability constraints are a major weakness of Go. Interestingly, they did make string immutable and added []byte as a mutable alternative. It's unfortunate that string and []byte are so similar and yet it's impossible to treat a []byte as a string without copying (with the exception of looping over runes). This leads to massive code duplication and/or lack of functionality (byteconv where are you??). Just look at the strings, strconv and bytes packages. This whole area of the language is messy.

The funny thing about Go is that it makes up for its long list of weaknesses with essentially one single strength. You can actually read other people's code without much introduction to the concepts used in that codebase, because the number of possible meanings of any particular expression is much smaller than in other languages.

There is so much talk about Go being for dumb, second rate, corporate developers, because that's what Pike essentially said at one point (perhaps without thinking first).

But in fact, it's not the developers who are dumb. It's the process by which large corporations employ and dispose of developers. They are thrown into some project and expected to "hit the ground running". There's no time for explanation. So what they do is read code to acquaint themselves with the codebase and hopefully become productive before they move on to the next job. And that is the one task where Go really shines. Reading arbitrary pieces of code.

Of course powerful abstraction features eventually make reading code easier as well, but only after having learned the abstractions created for that particular problem and codebase and only if those abstractions are very carefully crafted.

Powerful language features help writers of code long before they help readers. And that, I believe, is essentially the dirty secret that Go exploits.

We all want to be brilliant writers of code when in fact we are often readers poking helplessly at half understood code to make something happen. We even forget our own abstractions once we haven't looked at them for a couple of months or even weeks.

The problem with Go is that it not only acknowledges this state of affairs, it also enshrines it.


There's no time for explanation. So what they do is read code to acquaint themselves with the codebase and hopefully become productive before they move on to the next job. And that is the one task where Go really shines. Reading arbitrary pieces of code.

Definitely. This really shines in the standard library, it consists of extremely readable code and is a good way to get up to speed on canonical Go.

We all want to be brilliant writers of code when in fact we are often readers poking helplessly at half understood code to make something happen. We even forget our own abstractions once we haven't looked at them for a couple of months or even weeks.

Definitely, but what are we comparing to? I would agree that e.g. C++ and Haskell have this property. Unless you understand the language and commonly-used abstractions, template-heavy C++ code is difficult to read. However, there are many languages that have more powerful type systems than Go, but where code is still easy to read (ML, Object Pascal, Oberon, Ada, etc.).


I'm comparing to other widely used languages like C++, C#, Swift, Scala, Java or Python. I don't think any of them allows as few possible meanings of any given language expression as Go does. And I don't think any of them requires as little non-local information to find all code that gets called by any particular expression (perhaps with the important exception of Go's structural interfaces).

I wonder whether it is simply a theoretical tautology that the more abstraction features you have in a language, the more different possible meanings any particular syntactical expression can have, and the more effort it requires to figure out its true meaning, assuming you're not familiar with the codebase.

Or is that a false dichotomy? I am unfortunately not familiar with Ada or Oberon and Pascal is but a faint memory.


I have yet to meet anyone inside Google who actually likes the language. At best they call it a decent replacement for C or C++, which is damning with faint praise. Lots of people are neutral, of course.


I like it. My brain's already pretty full with trying to get a ML pipeline optimized and launchable, and if I need to write something to read a CSV file, make some RPC calls to a service for each row, and dump the results to a file, go just works.

Sure, I could use something like Haskell, but then I'd have to worry about whether I'm accumulating a giant stack of thunks that'll blow up. go just works, and less of it will bit-rot than the comparable python script, thanks to at least some types.

This is why the SREs seem to be fans, e.g. https://talks.golang.org/2013/go-sreops.slide#1


trying to get a ML pipeline optimized

I wrote some machine learning tools in Go and the experience is quite bad. The lack of operator overloading and parametric polymorphism make most ML code ugly. It also does not help that Go's compiler backend does not optimize very strongly and calling out to C comes with a relatively large overhead.

Sure, I could use something like Haskell, but then I'd have to worry about whether I'm accumulating a giant stack of thunks that'll blow up.

There are many languages between Go and Haskell that are productive and provide a sufficiently strong type system.


Yeah, the ML parts aren't in Go. I'm working in TensorFlow, so it's C++ scripted by Python. But there's a lot of incidental stuff that needs to be done, most of which I don't want to spend cycles on thinking hard about, and Go does a good job there, in that narrow niche where I want something that runs on one machine but find python too slow.

At a previous gig, I had done a lot of F#, and I think that's close to my personal sweet spot, but I'd have to use it frequently to keep it in my head. Go is small enough that I can load it into cache when I need it.


But there's a lot of incidental stuff that needs to be done, most of which I don't want to spend cycles on thinking hard about, and Go does a good job there, in that narrow niche where I want something that runs on one machine but find python too slow.

Definitely! I have continued to use Go for small utility every now and then as well. The standard library is extremely well-suited for that kind of work.

(Now using Rust more in that role as well, but mostly to get continued practice ;).)


I think in areas where there's a lot of math, the absence of generic collections might prove to be a frustration relative to C++ or Java (both allowed languages at Google).


The idea is that it makes software engineering across an organization much simpler if your language is minimal and eschews magic or complex constructs


So, Java?


Kind of, but with more `strings.HasSuffix(...)` and less `new StringComparatorFactory(new StringSubsetComparison(), StringComparisonPositions.END_POSITION, ...)`


Give it time; Go is still pretty new and requires reinventing all those wheels and flexibility allowances. It's already visible in some areas though - e.g. look at the hoops you have to jump through to allow decent unit testing of controller-style code. Anything of interest has to be exported and injected, or you simply can't do it. Even Java's significantly better here with its runtime manipulation.


In Java it is string.endsWith(...), not much different.


Ok bad example but you get the idea.


No, Java has become too advanced for the typical Go developer.

Too many features.


It is for me. I could not figure out advanced tools like Maven or Gradle. Maybe it was beneath the Java experts at Sun/Oracle to develop a simple CLI tool that could compile a Java project with one command.


You mean just using javac -sourcepath is too hard?


'gradle build'

Unless the project is very badly configured, that should be all you need to compile it. Now, writing those .gradle files...


Yea, an advanced language like Java should have advanced build tools and build files. Go is simplistic, so `go build|install|run` just works without IDEs, build files, or external build tools.


Well, yes, if you include the build tool in with your language implementation you do not need external build tools.

Compiling most Java projects is usually as simple as 'brew install maven; mvn package'. Maven does require a build file, but it can also do more than the go utility, such as deploying artifacts in a repository, build distributable tarballs, RPMs, debs, etc.

Maven, like go, largely relies on convention over configuration as well. If you generate a POM file from the standard archetype, you basically drop your files in the right source directory and it will build.

(I am not a fan of Java, but I think a lot of the criticism here is lazy. The first time you use the 'go' tool you have to learn its usage as well: how to separate library code from programs, what are the conventions for package structure, how to avoid API breakage for downstream users, etc.)


> Now, writing those .gradle files...

Gradle build files have confusing syntax. Just knowing which lines have an equals sign and which don't is a bit of work.


It was intended to be a simple language. Each of the three authors had to agree on every feature (lowest common denominator). One made C, one worked on Java, and one did an innovative language for Plan 9. Two of those languages ignored as much of the kind of work you described as possible. Something like Go is a natural result in terms of feature choices for the language itself.


Sometimes I think the syntactical difference of functional languages alone is the reason they're unpopular.

Iterative languages seem to match more closely how people speak/think in verbal language.


It may match more with the way we speak but not really the way we think, imho. When you ask someone to do something you usually just give him a general description of what the end result would be and then add more details if necessary. You don't tell him exactly what to do step by step from the start to the end, because you don't really care most of the time. Markup and declarative style are closer to the way we think.


It's a pity it's hard to be precise in verbal communication without resorting to talking like a functional programming language.


I agree completely. I recently went to an Elm workshop and while I found it very interesting, the Haskell syntax just didn't agree with me.

I want more enforced clarity. I compared it at the time to writing English without any punctuation: you can do it, but it makes comprehension much more difficult.


Don't frame the question as language X vs language Y. Think about the ecosystems instead. Golang provides a batteries-included API that allows developers to rely less on third-party libraries with shaky SLAs, a promise of version stability, and great tooling that has increased consistency of coding styles across projects. It's a simple language with a low barrier to entry, and the Google brand encouraged early adopters to take risks and build products that luckily became very popular (Docker, Kubernetes, etc). It looks a lot like the languages that are already popular in industry, without providing any real groundbreaking concepts. Building software is an economic value proposition, and it turns out that there are many ways in which a language's ecosystem can provide good value for the investment beyond the pros/cons of the language itself, and often these advantages are easier to measure from a managerial standpoint.


From the perspective of a user, you make some sense. From the perspective of a language designer however, great ecosystems are no excuse for crappy languages.

The Go language doesn't have to be crappy to provide batteries included API, version stability, and great tooling. No generics and no sum types? Those are no longer groundbreaking, they're the bare minimum.

Old languages like C at least had the excuse of being created a long time ago, when we possibly didn't know better (and computers were much slower). Go's creators however don't have that excuse. They screwed up, plain and simple.


If you're used to dynamic languages, it gives you type safety and performance for little effort.

I love Python but always wished for a simple, type-safe language; Go gives me that. It's not worse than Python IMHO.


Without support for meta-classes, annotations, generators, iterators and list comprehensions it surely is worse.


Unless you're one of the people (like me) who considers its lack of many of those things a feature. More features doesn't necessarily make something "better" and less features doesn't necessarily make it "worse" (whatever your definition of "better" or "worse").


No, I am not. I am no longer programming in the mid-90's, and even in those days Turbo Pascal had more features than Go.

I only advocate Go as a replacement for those that would use C for user space applications, or possibly some kind of low level stuff.

If the Python and Ruby ecosystems had blessed compilers, instead of just CPython and MRI, I doubt people would be flocking to Go.

We already see this happening in the Ruby world, just let Crystal become a bit more mature.


Goroutines and channels can substitute for generators and iterators, with the possible downside of being rather more expressive machinery than the job calls for.


Colleagues I've spoken with who use both still say that Python is "20 times" as productive as Go. For some applications this multiplier likely goes down considerably, but Python holds an edge in a lot of areas.


it was also invented in a universe where haskell, ocaml, erlang, smalltalk, lisp and so many more languages never achieved mainstream adoption. go will never be all things to all people, but it's clearly popular in the niche it's targeting.


The success of Go was in the out-of-the-box libraries, and by this I mean you didn't have to "pip install", "npm install", "nuget install" (no idea what the command is for nuget, always used the GUI, but you get the idea)... For example, when I was trying to learn Go for the first time, I saw how easy it is to dive into web development. I can just import a package that came with Go itself, and I would be good. I think the way it pulls in packages is part of its success as well. It's not a traditional package manager (yet), but it worked. You point it at a repository on the web and it pulls it; that made sharing and reusing code really easy. I never had issues just running Go code either, if it was a codebase that was mostly Go.

Btw I don't consider myself a Go developer, at work I use C# and Python. I would love to learn Rust, used to be the other way around, but I've lost hope in Go and have taken a second look at Rust, I love the direction Rust is going overall, but I understand why Go got so popular so quickly, it came at the right time with the right amount of working parts.


Wow, have you seen JavaScript?


Yes, but I think I know how JS became popular: syntax similar to Java/C++, and it was pretty much the only language available in the browser.


I'll agree with your last part, "pretty much the only language available on the browser."


Go is a general-purpose industry language. The ones you mention are niche or academic research languages.


There are companies built on Haskell and Erlang.

Just because a language is founded on good programming theory doesn't make it unviable in the industry.

Ideally, the reverse would be true.


Erlang, sure. Maybe a few. But Haskell? I doubt there are many. I bet Go's use in industry is easily 100 times that of Haskell's already and it is much younger.


Galois, Facebook and Microsoft are some examples, and all ML derived languages tend to be used for data modelling in the financial sector.

https://wiki.haskell.org/Haskell_in_industry

https://ocaml.org/learn/companies.html


It seems wrong to call Facebook and Microsoft "built on" Haskell, even if they are using it effectively for some important things.

Galois, of course, is very much built on Haskell. And there are certainly other examples.


Still their money is certainly landing in some pockets relevant to those communities.

A company doesn't need to be built on a single language.


For sure. My complaint was only wording choice (not yours) without clarification (which would have been yours).


Erm Facebook is built on PHP, Microsoft on C++ (or C#) and I've never heard of Galois.

I mean the fact that there is even an exhaustive "Haskell in industry" page at all shows you how rare it is. There isn't a "C++ in industry" page! I did actually find a Go one here:

https://github.com/golang/go/wiki/GoUsers

But it's both hilariously long and also obviously not exhaustive.


While you disdain Haskell's use in industry, there are researchers much more relevant to the IT industry than any of us who happen to be on the payroll of Facebook and Microsoft.

Maybe you should spend some time learning how FB and Microsoft use Haskell, you might be surprised with what you find.

As a clue, try to find out how C# and VB got LINQ, maybe you will find the right MSR paper by a certain Erik Meijer.


If you mean https://wiki.haskell.org/Haskell_in_industry, it is not exhaustive. The first three Haskell startups to come to mind (AlphaSheets, LeapYear, Takt) are all missing - as is the (very limited) use at Uber.


I just put some names randomly. You can add C++, Python, Ruby, and pretty much any other language after C.


When I read "safer language", Rust automatically came to mind. Not sure if Rust will ever be as popular as golang, but I certainly see a future with it popping up everywhere mission-critical / super-safe software is required.


I am afraid Rust will become much more popular than Go is atm. I think the C++ crowd will embrace it as soon as it will be mature. I am sure it is better than C++, but I still like the concepts and syntax of Go much better.

But as C++ programmers obviously never care about readability and simplicity, I am pretty sure they will take Rust. After all, Rust is a decent language and as users, we will all benefit from the migration.


I've... never had a problem reading Rust code. It's generally well-typed, and makes good use of "automatic" error handling constructs to ensure errors flow upwards without visually polluting the success case.

Whereas with Go code, I have to filter out all the error handling (which often takes up 2/3rds of lines of code, even when it's just "if there's an error, return the error" which it almost always is), wade through the mass of functions that take interface{} and cast it to something internally, etc etc.

I've tried programming in Go and I find it horrifying - I either get to ignore errors or spend ~80% of my code doing this, repeatedly, a few times per function:

    foo, err := bar()
    if err != nil {
        return nil, err
    }


My problems with the readability of Rust are related to the syntax and not so much to the code structure. For example, Smalltalk has the most readable syntax I know, while I find its code structure average.

I know that error handling in Go can be tedious, but to some extent, it is also the programmer's job to utilize the features of the language to write clean code: https://blog.golang.org/errors-are-values

Regarding your type problem, I find very few cases where I need an empty interface (mainly container types). Most of the time I use real interfaces, and most code I have seen used non-empty interfaces too.


I tend to find that Rust's syntax warts are primarily in function headers (primarily due to lifetimes), and less so in the bodies.

> Regarding your type problem I find very few cases where I need an empty interface (mainly container types).

I've run into the issue with quite a few "generic" libraries, since Go doesn't have a generics system. For example, https://github.com/manyminds/api2go requires a few of them in common cases.

My alternative is writing the same code over and over, and hoping that I've not missed something in one instance of it.


It's slightly more tolerable if you do this:

    foo, err := bar()
    if err != nil { return nil, err }
But then the Go community yells at you for daring to not use `go fmt` style.


You are not handling the error, you are just returning it.

More correctly, you would decorate the error:

    foo, err := bar()
    if err != nil {
        return nil, fmt.Errorf("failed to do bar: %s", err)
    }


Sure, in some cases you want to wrap the error type - Rust handles this by essentially having you create a custom error type which is an enum (which can be done via macros like quick-error and error-chain), and implementing From<BarError> on it which'd create your custom error type from the BarError. This trait is used by both the try!() macro and the ? operator (which do much the same thing, the latter being the updated syntax):

    let foo = bar()?;
If you need to do something custom in converting bar()'s error into your error type - which is rare - you can use .map_err() like this:

    let foo = bar().map_err(|e| MyError::BarError(e, extraContext))?;


> ... but I still like the concepts and syntax of Go much better

The concept of not having features because someone on your team might use them? You subtly say that Rust is a worse language because it doesn't seem like it was designed in the 80's, when clearly as a language (together with the compiler), it's objectively better than Go.


I did not say that Rust is a worse language. I said I like Go better. I do not know exactly what it is, but I find rust code harder to read (e.g. I do not like the :: operator, which other languages have too).

So this is my personal preference and should not offend any Rust fans. Besides the syntax, which reminds me too much of C++, I think Rust is a very good language. I do not know if it is "objectively better than Go".

Btw. restrictions are not always bad.


C and C++ are used in generally very conservative systems programming contexts. Safety is always considered with other items for tradeoff. Adoption just for reasons of safety isn't a given.

As a result, the "maturity" required for adoption differs in different parts of the C and C++ community. If Rust becomes more popular than either of those languages it is unlikely to be on a time scale shorter than a decade or two. That's not to say it can't happen quicker, especially given that communication and adoption can be much quicker today than it was even 15-20 years ago, but it's unlikely.


I agree that it will take a few years, but from the current point in time I think it will have a large market share in about 10 years.


Rust is almost always going to win for any projects that take a 'gradually replace' approach, simply because it can interact so easily with C code in both directions (i.e. C calling Rust and Rust calling C) in a way that other languages can't, or can't without significant efforts/hassle.

The Rust team showed great foresight for bringing about those changes before the initial 1.0 release and it will pay off in spades.

It's a key step in removing barriers to entry:

https://www.joelonsoftware.com/2000/06/03/strategy-letter-ii...


Since bitexploder asked, I'll add what I wrote on this in other forums. If it's about secrets or anonymity, make sure you always use a safe language that supports careful control of, and reasoning about, both memory and CPU time. The reason is that this enables covert channel analysis for vulnerabilities that leak secrets through storage and timing. It's why I wanted Freenet to ditch Java, aside from the obvious reasons. It's also why GC'd languages such as Go are better not used. Although memory management where the programmer controls the timing and which is simple to analyze might be acceptable; reference counting comes to mind.

The other thing you want is proven, successful use in high-assurance systems. That is, systems that either didn't fail or provably couldn't in certain ways. These are almost all written in a subset of C or in Ada/SPARK. The advantage of using those is you can combine them with a vast array of proprietary or open-source tooling to catch about any error you can think of in the implementation. There's also formal specification and protocol analysis tooling that, combined with expert review, can catch the rest. Rust, although a good choice for increased safety/security, doesn't have such tooling yet. That means you will get less correctness overall and over time vs MISRA-C or Ada/SPARK unless a similar ecosystem in industry and CompSci emerges for Rust. That's why I recommend against it for high-assurance security for now.

It does seem good for medium-assurance security, where you want to knock out low-hanging fruit in systems code. It will avoid serious errors in C while providing additional benefits with its type system and other features. Ada 2012 + SPARK 2014 are the standard for safe systems, since they systematically eliminate all kinds of errors with a consistent design and tooling with decades of field success. I haven't seen a direct comparison with Rust on each protection to see whether it matches them already or not. The main advantages Rust has over them are its borrow checker for temporal safety, a more usable method for safe concurrency, and (best for last) a highly active community to provide libraries or help. Go has similar benefits if its GC works with your use case, plus a lower learning curve, but possibly lower efficiency. Due to ecosystem benefits, these are the main two I'm recommending for medium assurance if Ada/SPARK are too much to learn.


You can use some analysis tools on Rust code, because it generates C-ABI-compatible objects; for instance, I'd expect that https://github.com/agl/ctgrind would work on Rust.

Looking at the MISRA-C guidelines (or more specifically, a pirated copy - are these available legitimately to the public?), it seems like about half of them aren't problems in Rust, because it's warning you about stupid things in C that can't be fixed for legacy reasons, and half of them are things that could be caught with a linter / static analysis tool like https://github.com/Manishearth/rust-clippy . Do you think building something that implements as many of the MISRA-C checks as possible is useful progress towards the high-assurance Rust goal?


I said MISRA-C combined with tooling. Many of the best tools, using very diverse methods, start from a safer subset of C (e.g. MISRA-C) or need C itself. Such tools would have to be rewritten for use with Rust. The difficulty of that varies per tool, but it's not happening for most right now.


I get the side channel concerns, but most developers make a mess of languages that let you control these things. A few, very few, people I trust to write in a language like C build tools I trust (Dan B, etc). I also think it is very hard to convince folks about side channel concerns.

We have to live in the world we have, too (e.g. Signal on Android is better than not having it, etc.). When I document side channel issues in customer code, there is almost always some way-higher-priority issue they have to fix first, etc. But your advice is good for someone endeavoring to do a "correct" approach from the ground up.

Thanks for the detailed thoughts though. This is the sort of thinking anyone who thinks "I will write a secure chat program" has to do ages before they start writing code.


It's unlikely that the cost of whole-program side channel resistance is worth it for an application like Tor. Side-channel-resistant cryptography is almost certainly sufficient for most reasonable threat/cost models.


You can limit it to the protocol layer. High-assurance security in military systems just used fixed-speed, fixed-timing transmission with covert channel analysis of setup, crypto operations, and error handling. Any problems at all violating key invariants cause fail-safe. Tor could do something similar.

Still need a language that allows such analysis at least for those components being analyzed.


G'day Nick!

Java is getting an AOT compiler in July (Graal, http://openjdk.java.net/jeps/295) that will let you AOT compile parts, or all, of your program, including the JVM modules themselves. This would seem to leave the GC as the main source of side channel vulnerabilities. The GC itself will become more pluggable as well, with a pure Java implementation. What requirements would you put on a GC for side channel safety? (Ignoring the obvious just allocate a large heap and never GC anything, eventually just restarting the process when it runs out of heap). What if we had a fully concurrent, pauseless GC (e.g. Azul), would that change things?

What other issues would remain in your opinion?


A concurrent GC running on a single-processor machine is still going to pause. There are still other ways to get into situations where an attacker can cause the GC to kick in such that you can extract information -- find a few CPU-heavy functions that nudge the GC to kick in when and where you want it. Harder, but even a background GC can wiggle into the foreground.


Even a statically compiled program run on a single core machine will pause because of the OS scheduler. Assuming we have a multicore cpu then one core can be dedicated to the GC.


Good, you're thinking outside the box. That pause might not be a problem, though. The pauses that are a problem are those where the malicious app can control the timing, with an external observer seeing those manipulations. The OS scheduler is usually independent of secret processing: it's just doing its own thing, causing pauses that don't tell you about the secret itself. Theoretically, there could be an esoteric attack in some situation where an OS scheduler would accidentally leak something or be activated to leak something. Nothing is popping into mind right now, though.


Good to see you, Ian! Yeah, there was a recent post on Lobsters with a pile of progress on tech you sent me a while back. Them getting an AOT can help so long as you can get a good, mental idea of what the assembly is going to be doing for a given piece of Java. Otherwise, you'll be doing a lot of rewriting that might make one reconsider Java in the first place. Pluggable GC's are a good thing, as is the ability to turn it off for unsafe native. You told me before they could do the latter, probably were for some of the platform.

Before answering the other question, it helps to understand what covert channels are fundamentally. First, know there's always two parts: Sender w/ access to secrets but inability to do I/O outside the system; Recipient w/ no access to secrets but ability to send its info out. These might be in separate processes, partitions/VM's, or even on a network w/ stuff happening due to protocol interactions. Idea is to find a way to communicate that wasn't intended for communication. Should help confirm the dark things I say about mainstream INFOSEC when I say I couldn't find almost any intros or blog articles on these for you in top results. I found one, though, that describes them well even if not having many examples:

https://arxiv.org/pdf/1306.2252.pdf

Now, back to the GC. The GC might kick in whenever secrets are being processed. This could mask information about them due to unpredictability or leak information about them. The simplest route to dealing with it is not allowing GC while performing any operation that processes secrets. Depending on the app, the impact on memory availability or performance can vary considerably. There's also at least three more channels to look for in even a basic app. The keys might leak in memory that's released back out of the system. The first place needing overwriting is the Java app itself for anything the GC releases. Gotta look at assembly since compilers sometimes get rid of that as a "useless" operation. Second, the OS itself might leak by swapping out the privileged Java app (Sender) for a Recipient that simply reads the registers before doing anything. Secrets might still be in them. Orange Book & separation kernels required these be overwritten on every process change. Don't know if current OS's do that. The "swap" part of the filesystem itself is a risk and should always be disabled. Finally, a Recipient in another process can receive secrets through the cache activity of the Java app. That's an old one that's hard enough to [confidently] deal with that I just advised running trusted apps on one CPU and untrusted on another CPU. That's physical isolation with them communicating over a pipeline, SMP since caches are separate, or multicore with no shared cache between cores. There's CompSci work such as partitioning caches, and potential with embedded CPU's w/ things like locking or real-time caches that might help. Who knows the real practicality, though.

So, you prevent or overwrite any storage of secrets that another process could touch. You put a brick wall between two events where a recipient observing timing could possibly learn something. Eliminating non-determinism can go a long way. Reference counting might help since you at least know when you'll deallocate, w/ similar checks happening constantly. The deallocation might even be masked. For protocols, the classic response by military systems was fixed-size, fixed-rate transmission with extra attention that error responses didn't leak anything. Seal every detail you can, not just the data payload, since any of them might be a storage channel. So, there's you a start on it.

And this was just running an app + considerations of a GC running in the background. Leads to all those problems. See why high-assurance security invested so much effort into automatically or at least reliably getting these damned things out of our systems? Just imagine how many are in UNIX API's, common protocols, and clouds. The fix is hard and expensive if it's legacy so they're definitely still leaky. :)


What are good resources for learning Ada/SPARK and MISRA-C?


Those resources change quite a bit over time since it's both a small, legacy ecosystem and a constantly-changing, active ecosystem. I'd have to update my resources when my busy, busy schedule slows down. I plan to, though, on those two tech w/ guides & whatever FOSS tools I find. Just email me at the address in my profile about it. I'll just save it in there so I can email it back to you once I'm up-to-date on the learning resources & tools available.


How has this community reached the point where the vast majority of comments on this decision are arguing about Go, which isn't even the language that they picked?


pg's early essays like http://www.paulgraham.com/avg.html and http://www.paulgraham.com/power.html drew a crowd who want to see language design move in the polar opposite direction. From that viewpoint, Tor only barely dodged a bullet.


This is either a horribly timed announcement or a joke written very seriously. I have no idea which.


Seems legit, the meeting/sprint where all that stuff was decided/announced and done was last week (22 to 27 of March), it seems to just be the ML update which was unfortunately timed and posted late on the 31st.

Or Sebastian was very dedicated to the joke, but they seem to have posted a comment confirming it's serious on /r/rust: https://www.reddit.com/r/rust/comments/62o9rx/tordev_tor_in_...

Looking at their comment history, both the Tor membership and the interest in Rust are there, and Sebastian just tagged themselves on the relevant Tor issue: https://trac.torproject.org/projects/tor/ticket/11331


Well not everyone in every culture thinks no serious work can be done on April 1st.


It's legit, Tor devs I know have been talking about it for a while.


How so? I'm honestly wondering why.


Today is April Fool's Day.


Oh. So I get it is a joke, I just don't get why it is a joke. I program daily in Go and everytime I read about buffer overflows and a dozen other preventable security holes I'm glad I'm programming in Go.

But maybe this joke is only funny to those who enjoy programming in C? :)


Doesn't seem to be a joke, the "meeting" was week-long and lasted from the 22nd to the 27th: https://trac.torproject.org/projects/tor/wiki/org/meetings/2...

It's just the ML recap mail which hit a rather poor timing.


The email was time stamped as being prior to April 1st UTC, so it's actually less an issue of the recap email being poorly timed and more an issue of the post to HN landing during a confusing time period.


I mean, the post could have been posted on April 1. We don't know that it was posted from a UTC area.


I believe Sebastian is German (or possibly Austrian), I think the CET date change was last week so we're in CEST (UTC+2) which means it was posted at 30 to midnight local, close enough that it could be just in time for April 1st local.


I am curious why they were advised not to use Go. Probably not a safety concern.

Edit: cgo != Go. Thanks for the responses. I have done a bit of Go, but just pure Go.


They were not advised against Go but against cgo. Part of what they want is incremental conversion and cgo is at the same time not-go[0], costly[1] and complex[2], and then you still need to manage the Go runtime (GC & al) from within your C system. That makes integrating the two difficult, especially when you want to replace the existing system piecemeal.

A pure-Go rewrite might be an option (in fact Tor seems pretty firmly in Go's use cases), but that's not what the Tor team is trying to do.

[0] https://dave.cheney.net/2016/01/18/cgo-is-not-go

[1] a cgo->c call is ~100 times more expensive than a go->go call, and ~400 times more expensive than a c->c or rust->c call https://www.reddit.com/r/golang/comments/3oztwi/from_python_...

[2] https://www.cockroachlabs.com/blog/the-cost-and-complexity-o...


go -> C calls have gotten way way cheaper in newer versions of Go. There's still overhead but it's not as bad as it used to be.


I love that the Rust core team is defending Go in a thread ostensibly about Rust and Tor.


It's still absolutely terrible in terms of ergonomics. You're forced to perform manual memory management, etc. I've done it a few times and I absolutely don't recommend it.


I'm not making any value judgements here; just saying that the times have decreased significantly since the links that were posted.


Finalizers help with memory management in the simple cases.


But C code may not keep a Go pointer that persists between calls (because GC). I can imagine that this is a problem for gradually converting code bases.


Sure, but that's not what I was replying to? I was only talking about Go->C FFI. If someone didn't know about finalizers, then they might be trying to insert `free` calls everywhere in their Go code, which could become quite annoying.

But yes, Go pointers in C code is bad juju.


> Sure but that's not what I was replying to? I was only talking about Go->C FFI.

Sorry, my reply was too brief. I wanted to add that the ergonomics are bad, not just because of freeing memory (for which the inconvenience can indeed be reduced with finalizers and/or Close() methods plus defer). But rules such as this one make Go->C FFI unergonomic as well. To give one example: many linear algebra libraries (e.g. Tensorflow) have their own wrappers around raw arrays to represent tensors (with their dimensionality) [1]. As a consequence of this rule, one cannot just pass a pointer to the first slice element to such functions (since a pointer to a Go object would be stored in a C struct), but has to malloc an array and copy over data from the slice to the C array.

[1] There are other issues, such as aligning slice memory to 16-byte boundaries.


Ah yes, you are absolutely right. I actually modified Rust's regex C API in part because of this problem in Go. I can't remember the details, but they were similar to your example where the only way to work around it was an unavoidable additional allocation.

(Of course, I think the change led to a better overall API. Go just helped me get there in a circuitous way.)


Go and Rust are very different languages. Rust is, by design, well-suited to Tor's use case, where they have a large C or C++ program and they need to incrementally rewrite parts of it (and maybe never all of it!) in a better language.

It turns out (or so I hear) that Google statically links everything in production, and has been using C++ as a language to implement HTTP endpoints for a long time. So Go is a better C++ for what they want out of a better C++; for the rest of us, it looks more like a compiled language along the lines of Python/Ruby/etc. with a nice deployment story. If you want that out of your better C++, Go is great. If you want to reimplement all of Tor from scratch, Go certainly seems like a reasonable choice.

But as a result of these priorities, Go basically doesn't have interoperability with the platform ABI as a goal. (For some combination of historical reasons and the lack of complicated features in C, the platform ABI on just about every platform these days is a C ABI.) Rust does; it uses a standard compiler toolchain (LLVM) instead of what's basically a custom one (Plan 9), and the standard toolchain knows how to generate calls that follow the C ABI. Rust doesn't have a runtime of its own, and it's safe to directly call into a Rust program from some arbitrary point in a C program. Rust's allocator doesn't care if you do stupid things with pointers it allocates, as long as you give them back eventually. Rust doesn't create threads on its own unless you ask. Rust functions use the normal stack. Rust on UNIX uses the platform libc. And so forth.

It's possible to call C code from Go and vice versa, just as it's possible to call C code from Python and vice versa. But Go is not best tool for this particular job.


I think Go is a better C, while C++ is the opposite.

+1 for the rest of the explanation.


Totally agree, and Rust isn't a sane choice either even though the points you make are valid.

Rust has some interesting features, but being C isn't one of them.


Can you expand on why you think Rust isn't a sane choice here?


Because again you are deviating from the industry standard completely portable lingua-franca language that is designed explicitly for precisely these types of problem spaces, and is perfectly in tune with the OS and existing standard library.

What would be the advantage, just improved memory safety guarantees for people working on the project?

If that's the case, start again from scratch, and the first thing you do is build a handle-based memory management system: everything must go through Safexxxx() versions of the standard calls, and then you explicitly enforce no other memory access patterns.

If you want to do concurrency, build simple threads with input queues that would just be like channels in Go.

You can use a completely functional actor model in C. It's all this object-oriented dangling-pointer stuff that is scary, but there really isn't any need for that.

Software pipelines with slab allocation or ring buffers virtually guarantee no leaks and are very scalable and cache friendly.

That would be my personal recommendation.


You can do all of these things in C - but at the point that you're enforcing use of your Safexxxx() functions, you've already given up on the industry standard. You can't use the normal OS libraries; you've got to route everything through your special functions.

The advantage of Rust is, honestly, that it has a community of people who are excited to do this sort of work in a language. At least 50% of the advantage of any particular language is not the benefits of the language directly, but the community around it (that derives indirectly from the benefits of the language, but also from marketing and other things). Some time ago I needed to write bindings to SCM_RIGHTS / CMSG_*, which is probably the trickiest part of the libc API (all of cmsg(3) is macros that do stupid things with casts). It was annoying, but it led to a better interface than C itself offers, and someone else found a bug in the OS X handling. I couldn't expect that if I were working on my own custom reimplementation of libc.


So basically what it comes down to is the argument for building it in Rust is there are some excited people.

And it's funny you talk about community, since as the guys on slashdot and others pointed out, the community of people who can use and work in C effectively is orders of magnitudes more than Rust. Have you any idea how many Linux Kernel developers there are alone?

http://m.slashdot.org/story/324469

Building API's specific to the use case is part of our jobs as professional developers.

Abstracting a few things isn't "rewriting libc", that is just general practice for most decent-size projects.

Anyway, the decision is made, so the whole thing is moot at this point.


> the community of people who can use and work in C effectively is orders of magnitudes more than Rust

The key word here is "effectively." For the purposes at hand, effectiveness includes memory-safety. Do you know of a single project that implements cmsg(3) in C or C++ in a memory-safe, well-typed, cross-platform way? Or a project where I can submit a pull request and expect it to be reviewed, tested across platforms, and fixed?

I do genuinely believe that the community of people who can use and work in C effectively, in the sense of effectiveness that I and the Tor Project are interested in, is orders of magnitudes smaller than Rust.

I have a very good idea of how many Linux kernel developers there are - and also how many high-severity security bugs there are. I'm a coauthor of a research paper where we wanted to talk about exploitable security bugs in Linux, so we sat down and found a local privilege escalation in hours (CVE-2009-0024).


C programs don't tend to be in tune with the OS and existing standard library if they support more than one OS. You need a heavy abstraction layer by the time you're doing anything vaguely complicated - which Tor is, by simply using an event loop. It's also not super-easy to interface with crypto libraries safely without building an abstraction layer. So you're a few hops away from being "in tune" to start.

Rust provides, among other things, a decent type system that can encode some of your invariants in a manner that's a lot clearer - and every operation goes through the equivalent of your Safexxxx() functions by default, making the actual logic of what you're trying to do further clearer. A reasonable error handling mechanism is also built into the standard library and partially into the language, rather than checking return values for -1, or is it 0, and is the actual error in errno or was that function actually meant to return a value between -1 and -100 corresponding to the error, etc etc.


Go needs a garbage collector that needs to be set up. Using Go code within C code is possible, but it creates additional hurdles.

Therefore a slow transition of rewriting parts of the code in a safer language while keeping the core in C is much less feasible with Go. With Rust you can more easily just compile some object files and link them into your application.


Here's what I'd have told them:

https://news.ycombinator.com/item?id=14013617


They were advised against cgo. In my not so recent experience, it is a huge PITA.


I never understood the idea behind cgo instead of having a proper FFI like Delphi, .NET, Eiffel and so many other languages.


That's probably because of coroutines.


The runtime could take care of that, just like it happens in other languages.



I looked into Ada last year. Getting a toolchain working sanely on a Mac seemed quite a lot of work; it's not in Homebrew, the MacPorts version has some weird bootstrap process, and the various random versions available for download had a murky mix of license and implementation issues that i don't remember in detail.

So i read documentation instead. Ada mostly seems like a pretty sensible language. Its story on memory safety, though, seems to be "don't use dynamically allocated memory", or at least, if you do, you're on your own. It doesn't have anything like Rust's safety guarantees, or those that fall out of having a garbage collector.

If it was 2007, and Ada was a bit more easily available, I'd be agitating for it. But i think its shot at the open source big time has passed.


Someone left a comment mentioning that i could download a compiler from AdaCore, but seems to have deleted it. I was going to thank them! AdaCore's website is not very helpful, but googling turned up:

http://libre.adacore.com/download/

You can download the 'GPL edition' of GNAT [1]. This is a complete toolchain for Ada 2005, with the same actual compiler as the commercial version, i think. The libraries are GPL'd, so if you distribute a binary built with it, it has to be GPL'd. So, no Ada 2012, and no way to distribute binaries willy-nilly, but certainly enough to explore the language.

[1] http://libre.adacore.com/tools/gnat-gpl-edition/faq/


Ada allows returning from a function a stack-allocated array of runtime-dependent size. This alone removes a vast number of cases where one needs to allocate memory dynamically in C/C++.

I've always been puzzled why even C++ does not provide any facilities like that out of the box.


(1) Better explanations about how to live without heap allocations or how to use them effectively in Ada. It feels like unchecked_deallocate is wrong, but then I don't really see explanations about what programmers should do instead.

(2) Compiler and runtime licenses. I always feel like the version of Ada I get is either somehow not the "best" one or has some lingering license issue with the runtime library. I don't think this is necessarily true, but it's easy to get that impression.

(3) Packaging, libraries, and documentation. Many developers expect a tool like cargo to be available and have access to a wide variety of open-source libraries. They also expect lots of tutorial documentation and blogs.


I advise you to watch this presentation.

"Memory Management with Ada 2012"

https://archive.fosdem.org/2016/schedule/event/ada_memory/


Thank you, very educational!


I mean, isn't SPARK basically meant for high assurance things (like Tor should be)?



& considering that Naval Research Laboratory gifted us with Tor it is surprising it wasn't written in Ada to begin with.


1) Two words: "begin" and "end";

2) Unix, C is so fundamental to building software, that I think any language that doesn't share syntax with it is doomed. Having a common syntax helps in learning new languages, IMO, and can also be a launching point for differing semantics...


I'll give you that it's overly verbose, even though quite a bit of it was justifiable. The theory of Ada's designers was that people read software more than they write it. So, the syntax should be designed to facilitate catching errors in maintenance mode, during extensions, or during integrations. It's done phenomenally at that per industrial case studies, despite having been invented in the 80's when lots of language decisions were still being debated.

EDIT: I keep thinking a different language that acts as a front end w/ a better syntax might be a good idea. It outputs Ada that integrates with the tooling ecosystem. Also, a seamless FFI for C libraries like Julia's.


I totally agree that code should be written to be read.

It certainly wouldn't hurt if people simply used a more literate programming style no matter what language they choose.

Far too many people think code as below is acceptable.

This is C obviously, but pretty much equal horrors around in every language.

This isn't 1994 and the compiler really doesn't care how long your variable names are, plus EatWhite() is pretty damn fast.

  const int to_pn  = base_n ^ label_n;
      const int from_p = _array[to_pn].check;
      const int base_p = _array[from_p].base ();
      const bool flag
        = _consult (base_n, base_p, _ninfo[from_n].child, _ninfo[from_p].child);
      uchar child[256];
      uchar* const first = &child[0];
      uchar* const last  = flag ? _set_child (first, base_n, _ninfo[from_n].child, label_n)
        : _set_child (first, base_p, _ninfo


> This is C obviously

No, that's C++.


You can't say that definitively based on that code. It could very well be C code. There's nothing there syntactically that a straight C compiler would barf on.


Absolutely correct.


It could be C if `_array` is an array of

  struct {
          int check;
          int (*base)(void);
  };


It is!


It is C.


> I keep thinking a different language that acts as a front end w/ a better syntax might be a good idea... Also, a seemless FFI for C libraries like Julia's.

You mean Rust I think ;)


People have had a hard time learning Rust. I also get gripes about its inconsistency in language syntax. The FFI I hear is good. It certainly doesn't output to SPARK or a C subset designed for easy, static analysis.

So, Rust ain't the Ada makeover I'm thinking about. It's also in a stability-oriented freeze of its existing design right now. So, the makeover will need to be a different language.


> People have had a hard time learning Rust.

I won't lie to you. It took me longer to produce good code in Rust than almost any other language I've recently learned. But it's worth it; it's quite simply amazing that it is capable of guaranteeing what it does, with nearly zero overhead. Also, I'd say in many ways it's simpler than C, because it has no undefined behavior; I don't need to concern myself with any of the nuances that I had to learn in C. I highly encourage you to give it two weeks, that's what it took me to become hooked.


That type system is pretty sophisticated. You could probably slap a FP front-end ala Haskell but that would hardly make it more accessible. Might as well just bite the bullet and learn the language.


Regarding #2, there's definitely a penalty a language pays for not following C-like syntax and semantics, as people consider it "harder" to learn, because people generally discount the time they've already put into learning similar languages when considering how easy a language is to learn. I suspect someone that knows Lisp but not C might find Clojure easier to learn than Rust, or even Python. The less software in use written in C, the less fundamental it is.

There's also a penalty a language pays for following a C-like syntax and semantics but diverging in specific but significant ways. In this category, I present Perl, which much of a whole generation of users decided to treat like C, and got very confused and upset when it didn't always behave like they expected (which I maintain is because they didn't actually understand the language as well as they thought). There's a penalty for being very like something else to the point that people can mostly ignore the differences, but occasionally those differences come out to bite them if they haven't actually learned what they are.


"BEGIN" and "END" aren't all that bad. Some of us got our start in software using Turbo Pascal long, long ago.


"End" didn't stop Ruby from taking off... admittedly a different crowd.


+1 for Ada. Rust is basically hipster-compatible Ada, which explains why Rust will be adopted and Ada will not.


Repeating this nonsense won't make it any more correct.


I like their approach of chipping away pieces instead of porting the whole system. It will result in better modularity and some generic libraries that can be used by other people. Seems like a win-win.


I live and love in Amsterdam and Golang seems to be the quintessential hipster language for this quintessentially hipster city.

Booking.com's soup du jour if you will.

Basically, if you aren't using JavaScript in a web shop, and you claim to be a "full-stack" "ninja", then probably you are using Go around here as a jobbing programmer.

I know that sounds terribly cynical and obviously a massive generalization but that is my personal experience.

I don't have a problem with Go per se, but I do have a problem that a lot of people seem to go to extreme lengths to defend what someone else mentioned is frankly a pretty "mundane" language, citing memory safety, but more strangely portability and performance, as its wonderful virtues.

When I meet with the zealots of the Go community around here I often have to quickly excuse myself with good grace.

Seriously, if that's what you want... go (pardon the pun) use C#, it is a saner and more expressive language, has a higher performance runtime and is way more portable with less vendor / career lock-in.

Go... I just don't get it.

Positives... Has a fairly nice package manager like npm.

Isn't made by Microsoft if that's your thing.

Oh and heaven forbid you don't have to think much... until you do because it is slow.

For a project like Tor though, which is damn slow as it is, I think it is totally the wrong choice.

Developers, please suck it up, get over it and learn C/C++ and some variation of Lisp.

Fine, go play with different languages for fun stuff. Use python for ML, try serious meta-programming in D, or jump into Haskell for kicks.

On the other hand modern C++ really can be a very safe language to work in if you can be bothered, and you are only deceiving yourself and your project going with something less sympathetic to the machine itself, especially if you are working on infrastructure level systems.

Sorry if this offends anyone, and of course it is just one opinion, but I think I'm being fairly nice as compared to what Linus might have said in comparison.

Basically stop it with the hobbyist shit. You are horrifying me and probably many others.

You aren't writing a web page here, so please treat the project seriously.

IMHO


> Developers, please suck it up, get over it and learn C/C++ and some variation of Lisp.

I've got over 20 years of writing production C/C++ under my belt and I know Lisp. So I've "sucked it up." Am I allowed to like Go now?

It never ceases to amaze me how many people are bothered about other people's taste in something so mundane. If you don't enjoy programming in Go, don't do it. I personally think it feels light and easy like a scripting language, but with more C-like performance.

I like the fact that its stdlib is very complete, so I can sit down and do pretty much any kind of small project with zero external dependencies, I like the fact that the core libraries are well-designed so the interfaces are consistent and easy to learn, I like the fact that it produces statically-linked binaries so deployment issues are minimal, I like the simplicity of the CSP approach for a lot of concurrency problems. I could also come up with a list of things I don't like about Go, but I'm not going to bother, because I've decided that on the whole, I really like it as a tool.

Btw: > Positives... Has a fairly nice package manager like npm

One of my biggest complaints about golang is that it really doesn't have nice dependency management. It's terrible and probably the one issue that has made me seriously consider walking away. :)


> If you don't enjoy programming in Go, don't do it.

Ordinarily, I would agree; I don't really care what people do when writing application code. The way it's filtered into devops tooling makes the choices of Go people a problem for me, though. If I'm going to be stuck with a language with bad error handling and inexpressive typing, I'd rather it be Python or Ruby, so at least I can leverage dynamic typing instead of bad static typing.


Well I could make the same complaints about how ruby has "filtered into devops tooling." If I never see another quasi-DSL that's really just a cute set of ill-specified ruby methods I will die happy.


You're talking about a taste thing; I'm talking about a correctness thing. The inability to create an easily-managed DSL doesn't make up for Golang being actively dangerous when it comes to error management. One of these provides structures for ensuring that you catch errors and can do the right thing, and the other demands that you if-check them upon every invocation of a method that can fail.


Sure, what you say is true but equally so or better for .Net or Java as well, plus they are generally faster and in the case of C# a better language imho. Why not them?

You are right, the standard library is a pleasure to work with in Golang and for the most part I trust it, but hey anything you can do in that library is easily reproducible in more or less any other mainstream language.

I really don't have an axe to grind with the language itself, it's a fun and productive thing to use for sure.

My main problem is with people who spend most of their time writing customer service applications thinking that the same language they are ultra successful with in that space is suddenly appropriate for writing systems software, because, ughh "safety"?

There is a heck of a lot more to safety than buffer overruns, and if you can't formalize abstractions for dealing with memory usage patterns, one might argue you have no business writing Tor in the first place.

Hah well compared to C/C++ dependency management, Golang does pretty darn well!



Go doesn't really have a package manager (and that's a problem); it pulls the sources of the dependencies from github, for example. There are multiple tools [1] to manage versioning, but Google only recently (end of January) released an official tool, still in beta.

[1] https://github.com/golang/go/wiki/PackageManagementTools


You do realize they didn't pick Go, yes?


Yes, but we got into a heated discussion about Go earlier in the thread. And it got my blood pressure up. :)


I hope Linus can develop a new language.


Literally none of this matters. None of the flaws exploited in Tor are memory corruption flaws. They are entirely architectural and design flaws. No attacker cares if they move to Rust. _NO ONE_.


> None of the flaws exploited in Tor are memory corruption flaws.

http://www.cvedetails.com/product/5516/TOR-TOR.html?vendor_i...

This list is full of memory safety issues.


> None of the flaws exploited in Tor are memory corruption flaws

_exploited_ is the operative word here.

Show me the list where people wrote exploits for the bugs you point out, or where someone abused them to de-anonymize a Tor user? There aren't any.


Concurrent thread for this safer language: https://news.ycombinator.com/item?id=14013444


I'm disappointed they did not consider COBOL given its vast superiority and flawless track record for April first rewrites.


Fri Mar 31 21:23:27 UTC 2017


I believe there is a crate they can use to prevent date bound out of bounds fun leaks. (can't find the link atm)


This is exciting not only because of the Tor project itself but because this will set an example for other projects to follow.


This also creates an opportunity to engage Mozilla in contributing.


Note that Mozilla is already working with Tor developers to get more of their patches upstreamed into Firefox to ease their maintenance burden.

https://blog.torproject.org/blog/tor-heart-firefox


If any of the developers are reading this: converting the existing C code to SaferCPlusPlus[1] (a memory-safe subset of C++) is probably a more expedient solution, if that's what they're looking for. And speaking of contributing, an automatic translation (assistance) tool is in early development, and could maybe be functional in short order with a little extra motivated talent... :)

[1] shameless plug: https://github.com/duneroadrunner/SaferCPlusPlus


I would recommend updating your .gitignore for VS2015. The .vs directory and *.VC.db should both be excluded from the repo.


Thanks. I'll take all the git/github advice I can get. :)


we all want to be safe. no one can say no to safety ...


I don't know much about Tor. But I hope I can route all of my home network traffic through it. That or route everything through VPN. I'll bet you can guess why I'm suddenly interested.


Couple of things. With Tor, you can't really control your exit node. Since some websites are really shoddy (a lot of application portals are) and don't have / support SSL, you would be transmitting your entire application profile through Tor unencrypted. Somehow, I trust my ISP more than some random Tor exit when it comes to this.

Second, many websites (sadly) do not work if you are using a VPN (like Netflix).


Netflix's VPN detection seems to work by blacklisting known IP ranges (e.g. AWS datacenters, etc.). Proxying through a machine that's not on the list works.


Founder of https://easyvpnrouter.com/ here. Ask me anything, including how to build one yourself if you want a project.


I guess the main question about this product (which is the sort of thing I'd buy -- an hour of saved effort pays for it) is how can I trust you to be less evil than Comcast?

I guess if you're open-source and a lot of people are paying attention, that would do it.


It takes much more than 1 hour to set up yourself. Because of FCC regulations, flashing new routers is much more difficult: you need a TFTP server, and it's pretty annoying. In addition, there are the edge cases of the internet going out and then bringing the VPN back up automatically when it comes back, plus being able to switch PIA accounts / countries easily; it's not a simple thing. I got into this because I wanted to build one myself, and I have a friggin master's degree in CS and it took me several weekends. So I figured a business could be formed :)


We use OpenDNS for safer kid surfing and logging. Can I get similar functionality? Your FAQ says you use "Private Internet Access’ custom DNS servers".

Also, what about the Netflix issue?


You cannot use Netflix with any non-self-hosted VPN service; they have put a lot of effort into blocking them. We use PIA's DNS for anonymity.


I'd be cool with whitelisting a couple of domains, such as netflix.com. Or Google, Hacker News, whatever. Who cares if Comcast sells the fact that I use a service everyone uses?

I really just want things that I access in Private Browsing, or using a service other than http/https, not to be recorded.

I understand that people in various countries want to use the Netflix from another country. But I'm representing the 99%.


As I said in the other comment, right now we recommend running 2 routers, and connecting to the normal router when you want to access VPN blocked services or want low-latency connections for online gaming. But we will investigate white-listing IP addresses, that would be very useful.


So do you have a workaround? Can you bypass VPN routing for Netflix and other specific domains?


Interesting idea - adding whitelisted domains to bypass the VPN. We will investigate this. Currently we recommend running 2 routers simultaneously (which do not interfere because of automatic channel switching) with the VPN chained after the normal one. This allows you to choose when to use VPN. Basically the Easy VPN Router is designed to be super easy so that non-technical people don't mess anything up. Building one yourself is tricky, especially to cover edge cases like the VPN dropping, the internet going in and out, etc.


Nice! There's definitely a need for this. It would be good to include the actual router specs on your site, like CPU frequency, core count, and RAM.


We can include that, sure. The routers are well-known brands which can be looked up in tons of places, but yeah, we should add it.


You really don't want to do that. Any exit node could be recording or tampering with your network traffic, e.g. using sslstrip or worse.


You can, but it's not really nice: Tor is relatively slow, so it only makes sense for heavily text-based sites, not images/video etc.


This, and almost every CDN started popping up CAPTCHAs, at least the last time I tried. I find it easier to set up a VPN server and use that.


This, and some sites block Tor completely if they use Cloudflare, for example: https://blog.cloudflare.com/the-trouble-with-tor/


Please forward to me any Cloudflare site on which you still see a CAPTCHA.


Curious why this is one of my most hated comments in at least a couple of days...



