
Fortran has the more heavily optimized libraries (BLAS) and is also capable of optimizations that C isn't (it guarantees that pointers don't alias).


Modern C has the restrict keyword for that. There isn't any competitive advantage left for Fortran over C or C++; the only reason Fortran is still part of the modern numeric stack is that BLAS, LAPACK, QUADPACK and friends run the freaking world, and nobody is ever going to rewrite them in C without a compelling reason to do so.


Fortran's primary competitive advantage over C and C++ is that it's a conceptually simpler language that's easier for humans to use safely and effectively for numerical computing tasks.

It's not just being tied to BLAS and friends. Fortran lives on in academia, for example, because academics don't necessarily want to put a lot of effort into chasing wild pointers or grokking the standard template library.

For my part, I'm watching LFortran with interest because, when I run up against limitations on what I can do with numpy, I'd much rather turn to Fortran than C, C++, or Rust if at all possible, because Fortran would let me get the job done in less time, and the code would likely be quite a bit more readable.


I'm aware of the restrict keyword, though its usage is not typical. It enables an optimization that Fortran has by default; that was my point.

And yes, I already said that libraries like BLAS have had massive amounts of work put into them and have lasting power.


I didn't downvote you, if that's what you were hinting at. However, your phrasing was more likely to be interpreted as "C can't do that" rather than "C doesn't default to that".

It seems we agree, after all.


I believe we do.


Note that while BLAS and friends aren't getting rewritten in C, there is an effort underway to write replacements in Julia. The basic reason is that metaprogramming and better optimization frameworks are making it possible to write these at a higher level, where you essentially specify a cost model and generate an optimal method from it. The big advantage is that this works for more than just standard matrix multiplies: the same framework can give you complex matrices, integer and boolean matrices, and matrices with different algebras (e.g., max-plus).


That's a cool idea. I don't know how realistic it is to achieve performance parity, but the "generic" functionality is definitely intriguing.


The initial results are that libraries like LoopVectorization can already generate optimal micro-kernels and are competitive with MKL (for square matrix-matrix multiplication) up to around size 512. With help on the macro-kernel side from Octavian, Julia is able to outperform MKL for sizes up to 1000 or so (and is about 20% slower for bigger sizes). https://github.com/JuliaLinearAlgebra/Octavian.jl


Except "modern" C does not prevent one from misusing restrict, and doing so is undefined behavior.


Also, when Travis Oliphant was writing numpy, he liked the Fortran implementations of a lot of mathematical functions, so it was an "easy" borrow from there instead of reimplementing them.



