> based on your ability to solve the largest dense linear solve you possibly can - something almost no real application does.
Sounds right.
I was going to say what about large-scale optimization problems? But I realized that most typically only require sparse linear solves.
Second-order methods like Newton's do require the solution of dense Ax=b systems (plain gradient descent needs only gradient evaluations). But the most visible/popular application of large-scale optimization today, neural networks, typically uses SGD, which requires no dense linear solves at all.
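To make the contrast concrete, here is a minimal sketch (the toy quadratic and all names are my own, not from the thread): on a quadratic objective, a Newton step requires a dense Ax=b solve, while a gradient step needs only a matrix-vector product, which is why SGD-style training never touches a dense solver.

```python
import numpy as np

# Objective: f(x) = 0.5 x^T A x - b^T x, with a small dense SPD Hessian A.
rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)   # symmetric positive definite
b = rng.standard_normal(n)

def grad(x):
    return A @ x - b

# Newton step: a dense n-by-n linear solve (O(n^3) with a direct solver).
x_newton = np.zeros(n)
x_newton -= np.linalg.solve(A, grad(x_newton))  # the dense Ax=b solve

# Gradient descent: only matrix-vector products, no linear solve anywhere.
# (SGD additionally subsamples the gradient; still no solve.)
x_gd = np.zeros(n)
for _ in range(200):
    x_gd -= 1e-3 * grad(x_gd)

x_star = np.linalg.solve(A, b)  # reference solution for comparison
print(np.linalg.norm(x_newton - x_star))  # exact in one step on a quadratic
print(np.linalg.norm(x_gd - x_star) / np.linalg.norm(x_star))
```

On a quadratic the Newton step lands on the minimizer in one solve; gradient descent gets close with many cheap iterations, which scales far better when n is large and A need never be formed densely.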