pkofod's comments

Software has bugs. That's the way it is. You may think that Julia (though I suppose this is mostly about the ecosystem of packages around Julia) has too many bugs. Then you can use something else. Like Python. If you move from Julia to Python, you may want to use Numpy? Pretty cool project. It currently has 1.9k issues on GitHub, and if you filter by the bug label, 599 of them. How many of those are issues like the ones in the post? I don't know. The same applies to Scipy. For example, the Gaussian hypergeometric function returns wrong results for some input values: https://github.com/scipy/scipy/issues/3479. That issue was filed in 2014. You can find similarly old issues in Julia packages. That's how these things go. Luckily, many of the issues listed in the blog post are fixed.

If you think you can pick any language-and-library combination with a semi-high bar for the number of features you want already implemented, and have it fulfill the "this has to be completely correct or I won't use it for my research" requirement, you will have a hard time.

The last part of the post seems to be about OffsetArrays.jl. Many people who have implemented libraries and who care about composability and generic input also agree that the Base AbstractArray interface is not perfect or complete, and sometimes the issue is that the interface that does exist is not followed closely enough for composability to work. A more complete, agreed-upon, and generally adhered-to interface for an "AbstractArray" would be nice, and has been, and still is being, worked on and discussed by many people in the community.
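
To make the composability point concrete, here is a minimal sketch of the failure mode (the function names are made up for illustration, and it assumes OffsetArrays.jl is installed):

    using OffsetArrays

    # Hard-codes 1-based indexing: fine for Vector, wrong for offset axes.
    naive_first(A::AbstractArray) = A[1]

    # Written against the generic interface: works for any AbstractArray.
    generic_first(A::AbstractArray) = A[firstindex(A)]

    v = OffsetArray([10, 20, 30], -1:1)  # indices run from -1 to 1
    naive_first(v)    # returns 30: index 1 happens to be the *last* element
    generic_first(v)  # returns 10, as intended

The naive version passes every test written with ordinary Arrays and only misbehaves once someone composes it with an offset-axes type, which is exactly why the interface (and following it) matters.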


Scipy (the Numpy parent project, I believe) just straight up has wrong formulas in some of its methods, because they are legacy and changing them would change the behaviour of people's existing code.

Hamming distance was one that wasn't correct, if I recall correctly, along with a few others in the same module.

I still use the package, of course, because it's great. But given the disconnect I saw, I'm still careful to document which method is actually used (and add a comment to clarify). Most of the time it isn't a huge deal.


Econ PhD here! Also IO.


Please chime in if you think anything that I said is wrong.


You don't precompile just because it's run from the command line, though. There must be something missing from your workflow description, or you're somehow misunderstanding what is going on. Since this is a v1.6 blog post: did you try a v1.6 RC?


Maybe precompile is the wrong word, and I should have called it JIT or something else. I don't know.

The point is that running a .jl script that calls my algorithm on a small test case takes 40 seconds to run. If I change the script to run the algorithm more than once on that dataset, all calls after the first complete in less than one second.

Running it with v1.6-rc1 appears to improve the running time from 40 seconds to 30. That's pretty good, but still way too slow to enable any kind of workflow that doesn't involve the REPL.

(The 30 and 40 second numbers are very consistent from run to run.)
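
To be concrete about what I mean, here is roughly the pattern, with a trivial stand-in function rather than my actual algorithm:

    # Trivial stand-in for the real algorithm.
    run_algorithm(data) = sum(abs2, data)

    data = rand(10_000)

    @time run_algorithm(data)  # first call: reported time includes compilation
    @time run_algorithm(data)  # second call: compiled code only, much faster

A fresh script pays that first-call cost on every run, which is why everything feels fine from a long-lived REPL session but not from the command line.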


What does "32GB" have to do with it? Why would a worker process segfault in this scenario?


This makes no sense without showing what you're running. Are you also using MKL in Julia? https://github.com/JuliaComputing/MKL.jl


Mine was 1½ months "late". The new due date was given 3 days after I put in the order. What are the odds? :)


What do you mean by "refuse to show willingness to fix or even acknowledge the problem"? Do you realize that two of those issues were started by core contributors?


I feel so stupid when talking and hearing about physics... but we are not in any meaningful way "at the center" of the universe are we? Or is every point in some sense at the center (a point of reference thing)? I'm asking because why would it be 46.5 "each way"?


We are at the center of the observable universe. And no, that's not special: any star/galaxy is at the center of its own observable universe.


Imagine we exist in a 2D universe, but one that happens to be the surface of a sphere. Any point you pick on that surface is "at the center of the universe."


Hmm, but it looks like the universe is flat, doesn't it?

(Or is it flat like a torus, so it doesn't need any curvature to loop back on itself?)


Cosmologists talk about "horizons" a lot, and the analogy of standing on a sphere actually works quite well. Remember that horizons only make sense on curved surfaces. You can't see beyond the point where certain features of spacetime (black holes, expansion, or sheer distance) prevent signals from reaching you, just as you can't see beyond a mountain range or the curvature of Earth itself. Of course you'll need to extrapolate the analogy to three, four, or more dimensions, but the basic idea is the same.


I'm not talking about our own light cone. I'm talking about https://en.wikipedia.org/wiki/Shape_of_the_universe

Basically, ignoring wormholes and black holes and assuming that spacetime is locally flat everywhere and it's mathematically a manifold, my question is: what's the shape of the (global) universe?

Global as opposed to observable. So we might have a hard time answering that question. How would you be able to distinguish between the (n-dimensional equivalent of) a torus vs a flat infinite space, if you can't see the repetition?

You'd even have a hard time distinguishing a hypersphere from a flat infinite space, if the hypersphere were big enough that we couldn't tell its curvature apart from no curvature.

Or the universe might be weirdly shaped, and we just happen to live in the flat part.

So I guess the question comes down to:

* assume no edges

* assume Copernicus, at least for space (we might have a special position in time)

* what's the simplest theory about the shape of the global universe that satisfies our observations?

I suspect general relativity toys around with such questions, because I know that they sometimes look at cosmological (toy) models for the whole universe, and not just what's in the light cone of one particular observer.


We are by definition almost at the center of the universe that we can observe - we can see equally far in every direction.


I know that http://ternaryintelligence.com/ use Julia (because they use a package of mine and have reached out [and allowed me to state that they do so]), but it's also quite clear from past JuliaCons that http://relational.ai/ do.


For completeness' sake, Haskell has existed for almost three decades :)


While the earliest versions of Haskell date back that far, those combinators only came out in 2007 (IIRC).


Those functions weren’t part of Haskell back then.


What do you mean? That's my point exactly. While Haskell has that now, they didn't always have it.

If a programming language is created in 2019, it can certainly incorporate insights from existing technology into its design and plans right away. However, that doesn't mean that any programming language/compiler project started today will instantly have all of those features. Things take time in each new framework.


I don't know what your point is, then.

Haskell didn't have it originally. Ok. But what do you think that says about whether it's a new idea or not? What's the connection? Why mention it in this thread, which is about whether it's a new idea?


At what point did anyone claim it was a new idea? The blog post cites prior art in the very first paragraph. Not in a footnote—in the main text. The top post here cites several other languages with a similar model. This entire thread reads like a bunch of dudes who are really dying to "well, actually" someone.


I think people were responding to

> The only systems which have supported compute-oriented composable parallelism like this are Cilk and TBB

Because it isn’t true. Haskell is an example of why it isn’t true.


It's extremely debatable that Haskell is compute-oriented. I know that Haskell people like to talk about doing numerical computing in Haskell, but in reality it does not seem to be a thing. No one has ever used Haskell to implement large scale scientific computations on supercomputers, for example.



This particular example appears to be an implementation of a 'Computer algebra system', which seems to be a type of symbolic computing rather than numerical computing.




You're really asking for a Dan Luu sequel here.

