
> based on your ability to solve the largest dense linear solve you possibly can - something almost no real application does.

Sounds right.

I was going to ask, what about large-scale optimization problems? But then I realized that those typically require only sparse linear solves.
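A minimal sketch of the point (using scipy, with a made-up 1-D Laplacian as the system): the linear systems that show up in large-scale optimization tend to be sparse and structured, so you solve them with a sparse solver and never form, let alone factor, a dense matrix.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# A tridiagonal 1-D Laplacian: a toy stand-in for the sparse,
# structured systems that Newton/KKT steps actually produce.
n = 1000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Sparse direct solve: exploits the nonzero structure throughout,
# unlike a dense LU on an n x n matrix.
x = spsolve(A, b)
residual = np.linalg.norm(A @ x - b)
```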

Second-order methods like Newton's do require solving dense Ax=b systems (the Hessian system); plain gradient descent needs only gradient evaluations. And the most visible/popular application of large-scale optimization today, neural networks, typically uses SGD, which requires no linear solves at all.
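To make the contrast concrete, here is a sketch on a made-up noiseless least-squares problem (numpy only; the problem sizes and step size are arbitrary choices for illustration): the Newton step does one dense d x d solve against the Hessian, while the SGD loop only ever computes minibatch gradients.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true                     # noiseless targets, so the optimum is w_true

def grad(w, idx=slice(None)):
    # Gradient of 0.5 * ||X w - y||^2 restricted to the rows in idx.
    Xi, yi = X[idx], y[idx]
    return Xi.T @ (Xi @ w - yi)

# Newton step: one dense Ax=b solve against the d x d Hessian.
H = X.T @ X                        # Hessian of the quadratic loss
w_newton = -np.linalg.solve(H, grad(np.zeros(d)))

# SGD: only matrix-vector products on minibatches, no linear solve anywhere.
w_sgd = np.zeros(d)
lr = 0.01
for _ in range(2000):
    idx = rng.integers(0, n, size=16)
    w_sgd -= lr * grad(w_sgd, idx) / 16
```

On a quadratic the Newton step lands on the optimum in one iteration; SGD gets there only asymptotically, but each step touches just a minibatch.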



