I'm saying that techniques that are easy to parallelize will become more popular.
Of course he's right that if you can make a serial technique faster, you should do that.
I see a way to support his position: in past decades we got free speedups from silicon, then we hit a kind of "Peak Silicon". Just as with Peak Oil, it now makes sense to try techniques that weren't worth it before.
Personally, I think the first step should be to give up all the horrific layers of bloat-on-bloat; but instead we got (e.g.) faster JVMs and JIT JS compilation.
But what about the step after that? He thinks we should mine the tailings (and he's surely right there's a lot to be found there).
I think multi-core seems the only extendable solution (until computers are the size of houses again, I guess). But because it's so difficult to parallelize code, we'll instead change our approach and favour techniques that are easy to parallelize. Arguably, this is already happening with DL.
I've only skimmed the links you gave, so I don't have a deep appreciation of his position.
Formal problems often have theoretical limits, and solutions approaching them. But for "real-world problems", you can often change the problem, and even change the context of where that problem originates - perhaps eliminating that specific problem entirely [1].
So we'll change problems/contexts to ones that are easily parallelizable.
In that dispelling-myths paper, Patrick Madden notes that a less efficient but faster-because-parallel method still uses more energy. A good point, but because power consumption is superlinear in clock frequency, a parallel solution can be both faster and more energy efficient: a 2-core method at half the clock speed of a 1-core method will use less energy, assuming perfect parallelization. GPU compute shows this, e.g. bitcoin mining.
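To make that concrete, here's a back-of-envelope sketch. The cubic power model (dynamic power ∝ f³, because supply voltage scales roughly linearly with frequency) is my own first-order assumption, not anything from the paper:

```rust
// Back-of-envelope model (my assumption): dynamic power P ~ C * V^2 * f,
// and voltage scales roughly with frequency (V ~ f), so P ~ f^3 per core.
fn power(freq: f64, cores: f64) -> f64 {
    cores * freq.powi(3) // arbitrary units
}

fn main() {
    let serial = power(1.0, 1.0);   // one core at full clock
    let parallel = power(0.5, 2.0); // two cores at half clock
    // With perfect parallelization both finish the same work in the same
    // wall-clock time, so the energy ratio equals the power ratio.
    println!("{}", parallel / serial); // 0.25
}
```

Under this model the 2-core/half-clock version uses a quarter of the energy. Real hardware won't hit that exactly (static leakage, imperfect voltage scaling), but the direction holds.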
His last paragraph begins:
> Despite these challenges, there is no choice but to forge ahead with parallelism
...which doesn't sound like disagreement. He's mainly criticising breathy papers.
Finally... why is he known by the Half-Life protagonist? Does he look like him, or is there some thematic connection?
1. Maybe the entire context can be formalized and solved, so you can't "change the problem" - but even then, real-world problems tend to keep changing, because of changing customer demands, competition, technology, legislation, etc.
>Personally, I think the first step should be to give up all the horrific layers of bloat-on-bloat; but instead we got (e.g.) faster JVMs and JIT JS compilation.
I think this is already happening. The trending languages I see on HN right now are Rust, Nim, and Zig. We're going back to native, and in Rust's case also with fearless concurrency as a paradigm.
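As an illustration of what "fearless concurrency" means in practice (my own minimal sketch, not from the article): shared mutable state has to go through a synchronization type like Arc&lt;Mutex&lt;_&gt;&gt;, and leaving the lock out is a compile-time error rather than a latent data race:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `threads` threads that each bump a shared counter `per_thread`
// times. The compiler forces the Arc<Mutex<_>> wrapper on us.
fn parallel_count(threads: u64, per_thread: u64) -> u64 {
    let counter = Arc::new(Mutex::new(0u64));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    // Removing the Mutex here would be a compile error,
                    // not a silent data race.
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let n = *counter.lock().unwrap();
    n
}

fn main() {
    println!("{}", parallel_count(4, 1000)); // 4000
}
```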
Right, but how does that compare to the incumbents? The JVM and .NET CLR don't seem to be going anywhere.
In this day and age I don't think the abstraction layer is the issue. Modern bytecode VMs are already able to be within an order of magnitude of native code, and I'd imagine there's still plenty of room there for improvement (WASM comes to mind). Unless you're on constrained hardware, or developing directly against hardware (rather than using an operating system, which is itself a horrific layer of bloat-on-bloat), a bytecode VM is still perfectly reasonable even for soft-real-time applications (let alone for applications that are throughput-sensitive rather than latency-sensitive).
>Modern bytecode VMs are already able to be within an order of magnitude of native code
Looking at the slowing of hardware advances, this could be a 10-year lag in performance. I think that's quite significant.
I think bytecode VMs have their place, and for many applications that may be enough, but if you have high-performance needs it may be worth checking whether the price is one you're willing to pay.
The significance of a "10 year lag in performance" has wildly different meanings between the present day and, say, 10 years ago. 2010 (serial) performance on 2020 hardware would be hardly perceptible. 2000 performance on 2010 would be much more noticeable. 1990 performance on 2000 hardware would be catastrophic.
And yet, Java, C#, Erlang, Perl, Python, Ruby, Tcl... all of these languages and their VMs are decades old now. And yeah, the languages toward the end of that list ain't exactly speed demons, but the ones toward the front are already commonly used for high-performance applications with pretty dang good success, and even the ones toward the back of the pack are there for other reasons beyond just the VM (global interpreter locks, parsing overhead, things like that).
The VM, that is to say, ain't the issue. Hell, things like SPIR-V demonstrate that VMs are perfectly fine even for GPU computation. There's a price, sure, but eliminating that price is about as premature of an optimization as it gets for all but the most constrained of environments. There are almost certainly worse bottlenecks - and those bottlenecks are probably around I/O and memory, knowing most applications (and a bytecode VM often helps here, since the opcodes can be tuned for minimum size, often giving even RISC CPU opcodes a run for their money).
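To illustrate the density point, here's a toy stack-machine sketch (the names and encoding are mine, purely illustrative): each opcode is a single byte, versus 4 bytes for a typical fixed-width RISC instruction:

```rust
// Toy stack machine with single-byte opcodes.
const PUSH: u8 = 0; // followed by a 1-byte immediate
const ADD: u8 = 1;
const MUL: u8 = 2;

// Interpret a bytecode program and return the top of the stack.
fn run(code: &[u8]) -> i64 {
    let mut stack: Vec<i64> = Vec::new();
    let mut pc = 0;
    while pc < code.len() {
        match code[pc] {
            PUSH => {
                pc += 1; // consume the immediate byte
                stack.push(code[pc] as i64);
            }
            ADD => {
                let (b, a) = (stack.pop().unwrap(), stack.pop().unwrap());
                stack.push(a + b);
            }
            MUL => {
                let (b, a) = (stack.pop().unwrap(), stack.pop().unwrap());
                stack.push(a * b);
            }
            _ => panic!("bad opcode"),
        }
        pc += 1;
    }
    stack.pop().unwrap()
}

fn main() {
    // (2 + 3) * 4 fits in 8 bytes of bytecode.
    let code = [PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL];
    println!("{}", run(&code)); // 20
}
```

A real VM's encoding is more involved, but the instruction-cache-friendliness argument is the same.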
If it makes your life harder, yes, it's premature optimization. But Rust, Nim and Zig are quite high-level languages; they don't drag down my developer productivity.
With C/C++ I agree: depending on how long-lived your application is, you'll spend weeks or years on debugging that you wouldn't have in Java or C#.
In the top 10 you will always find bytecode/VM-based solutions not far from C/Rust implementations. Of course, these are highly optimized, but the assumption that native is always better doesn't always hold.
I've seen a few of them; the highly optimized ones usually fall back to some form of manual memory management, which then becomes way more complicated than in a language like Rust, which has all this by default.
What I find so beautiful about the native/close-to-the-metal languages is that you can have clean code that's also near optimal, without optimizing a lot. Compared to the lengths people go to to get some Python code to run fast, it's the easier way.
> which then becomes way more complicated than in a language like Rust which has all this by default.
Um, no:
1. Manual memory management doesn't have to be complex at all - and is arguably much easier to reason about than garbage collection, even if it's more error-prone / less guaranteed-to-be-safe.
2. Rust's manual memory management (i.e. the default kind of memory management in Rust) is of at least the same complexity as any other sort of manual memory management - if not more, due to the need to be explicit about lifetimes and borrowing.
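For what it's worth, here's the kind of explicitness I mean (a standard lifetime-annotation example, nothing specific to this thread): `longest` borrows both inputs, so the returned reference can't outlive either of them, and the `'a` annotation states that contract up front:

```rust
// The 'a lifetime says: the returned &str lives no longer than the
// shorter-lived of the two inputs. In C you'd carry this contract in
// your head; here it's spelled out and checked.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() { x } else { y }
}

fn main() {
    let s = String::from("parallel");
    let result = longest(s.as_str(), "serial");
    println!("{}", result); // "parallel"
    // `s` is freed automatically when it goes out of scope here -
    // no explicit free() call, but no garbage collector either.
}
```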
The memory management strategy is, regardless, orthogonal to whether there's a VM involved; there are garbage-collected native languages (Lisp, D) and non-garbage-collected VMs (WASM, or languages like Perl and Tcl if you count reference counting as "not garbage collection") galore.
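On the reference-counting point, a quick sketch using Rust's Rc (my example, though the same deterministic behaviour is what Perl and Tcl rely on): frees happen the moment the count hits zero, with no tracing collector involved:

```rust
use std::rc::Rc;

fn main() {
    // Two handles to one heap allocation, tracked by a reference count.
    let a = Rc::new(vec![1, 2, 3]);
    let b = Rc::clone(&a);
    println!("{}", Rc::strong_count(&a)); // 2
    drop(b);
    println!("{}", Rc::strong_count(&a)); // 1
    // The Vec is freed deterministically when `a` drops at end of scope -
    // no collection pauses, which is why some people don't call this "GC".
}
```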
- https://www.cs.binghamton.edu/~pmadden/pubs/dispelling-ieeed...
- https://dl.acm.org/doi/abs/10.1145/1324499.1324502
- https://community.cadence.com/cadence_blogs_8/b/ii/posts/eda...
- https://www.cs.binghamton.edu/~pmadden/