Hacker News

> Unless there is a massive ecosystem that you need to take advantage of (for example javascript), in 2018 greenfield development should probably be in a statically typed language.

I find this sort of generalization off-putting. It is all about trade-offs.

Types can be very useful. There are whole categories of errors that are avoided by types.

And yet, in many cases they are a productivity drain. Time is often better spent writing test cases than chasing down type issues (more so if the language has generics or C++-style templates), so that approach came to be favored in many fields. These tests will catch type errors, and statically typed languages also need tests, so the gains are not so easily quantified.

It is not just a matter of ergonomics, it is a matter of getting out of the way of the problem domain. Most people are paid to solve actual issues, not to craft convoluted C++ templates, custom types or deep type chains.

Type inference helps, but even Haskellers will often avoid it when defining functions, so that the signatures are explicit. But Haskell has incredibly powerful constructs. I was amazed that you can actually tell the compiler you are writing a recursive data structure[1], which is something most languages won't let you do – the most they allow is defining a reference to the same type of object (or struct), which you then stitch together at runtime. Simple cases are also simple. A function that takes a list and returns an element of that list? [a] -> a, I don't care what a is, nor does the compiler. But feed it something that is not a list, and it will complain.
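The same "[a] -> a" idea can be sketched in Python's typing module (function and variable names here are my own, not from the thread); a checker such as mypy treats the type variable just like Haskell's 'a':

```python
from typing import TypeVar

A = TypeVar("A")  # plays the role of Haskell's type variable 'a'

def head(xs: list[A]) -> A:
    """Return the first element; neither we nor the checker care what A is."""
    return xs[0]

first_int = head([1, 2, 3])    # checker infers int
first_str = head(["a", "b"])   # checker infers str

# Feed it something that is not a list and the checker complains:
# head(42)  # rejected: int is not a list
```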

A language should let you easily define new types, and allow these to be used without friction by standard functions. For instance: if I have an integer, is this integer describing a temperature value? If so, functions that work with speed should refuse to operate on this data. Standard mathematical functions _should not care_ and should do whatever operations we request (without any silly type casting – I told the compiler this thing is a number, use it). The compiler's optimizer should also use this information to generate appropriate assembly instructions.
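Python's NewType gets partway there, as a sketch (Temperature and Speed are hypothetical units I made up for illustration): a checker distinguishes the two types, while at runtime they erase to plain floats, so ordinary arithmetic needs no casts:

```python
from typing import NewType

# Hypothetical distinct numeric types over the same representation.
Temperature = NewType("Temperature", float)
Speed = NewType("Speed", float)

def clamp_speed(s: Speed) -> Speed:
    """A function that works with speed, and only speed."""
    return Speed(min(s, 120.0))

t = Temperature(36.6)

# A checker like mypy flags this, because a Temperature is not a Speed:
# clamp_speed(t)  # rejected: incompatible type "Temperature"

# But standard math does not care: NewType erases to float at runtime.
warmer = t + 1.0
```

Note the limits of the sketch: unlike the ideal described above, the result of t + 1.0 is a plain float, not a Temperature, and nothing here reaches the optimizer.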

So yes, if you have an expressive type system, types can be a joy (you'll find yourself thanking the compiler often). If instead your time is spent trying to figure out why IEnumerable<Something> cannot be converted to IEnumerable<SomethingElse>, even though they are subclasses and even though the same operations work with arrays, then types are getting in the way of solving real problems[2][3].

And even with an expressive type system, there is a non-trivial cognitive cost. Sometimes you just want to write (map (lambda (x) (do-something x)) lst) and not really care what it is that you are manipulating. On the other hand, I personally do prefer using Golang to talk to AWS APIs (instead of Python or JavaScript), precisely because of their deeply nested type hierarchies.

Being able to sprinkle type annotations where they are useful, and have the compiler figure out as much as possible otherwise, is a good compromise, which is the approach the Python article talks about.
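A minimal sketch of that gradual style (names are hypothetical, and I'm assuming a checker like mypy): annotate the boundary you care about, and leave the rest to inference:

```python
def parse_port(raw: str) -> int:
    """The annotated signature is the contract; locals are inferred."""
    port = int(raw)  # checker infers 'port' is int, no annotation needed
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

# Unannotated helper code still runs; the checker infers what it can
# and leaves the rest dynamically typed.
def describe(raw):
    return f"listening on {parse_port(raw)}"
```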

I would read this as not favoring mandatory static type checking in 2018 but, instead, using a hybrid mechanism that benefits from both approaches.

[1] http://tuttlem.github.io/2013/01/04/recursive-data-structure...

[2] https://blogs.msdn.microsoft.com/rmbyers/2005/02/17/generic-...

[3] http://www.baeldung.com/java-type-erasure



>Time is often better spent writing test cases than chasing down type issues (more so if the language has generics or C++-style templates), so that approach came to be favored in many fields. These tests will catch type errors, and statically typed languages also need tests, so the gains are not so easily quantified.

Assuming an equivalent program written in a statically-checked language and a dynamically-checked language, if your program has type errors at compile time, it will also have type errors at runtime. The difference is when you know about them.

Static type checking is (informally) about statically proving that the domain of your function is actually what you think it is. It catches "obvious" (for a definition of obvious set by the power of the type system) domain errors. Types don't replace tests, but they do constrain the domain you need to test and let you know when your tests are sane.


> And yet, in many cases they are a productivity drain. Time is often better spent writing test cases than chasing down type issues (more so if the language has generics or C++-style templates), so that approach came to be favored in many fields.

From my experience, type hinting and checking provide a better ROI than automated testing. It is not either/or – you will most likely still need some automated testing, even if it is just unit tests. But at the very least use types. They provide broader benefits than just catching stupid bugs and typos: they make your code base much easier to understand and code paths much easier to follow.


With static typing, you usually have this big suite of unit tests that check a whole bunch of basic correctness out of the box, called the compiler... /s

If you actually use the type system, and build less anemic types, you can eliminate huge swathes of potential errors.
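"Less anemic" can be as simple as replacing stringly-typed states with an enum, so that invalid states become unrepresentable (a sketch with hypothetical names, not from the thread):

```python
from enum import Enum

class OrderState(Enum):
    PENDING = "pending"
    SHIPPED = "shipped"
    DELIVERED = "delivered"

def can_cancel(state: OrderState) -> bool:
    # With a raw string, a typo like "pendign" silently compares unequal;
    # with the enum, a checker rejects anything that isn't a member.
    return state is OrderState.PENDING
```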



