I wonder how this compares to V8 over the same code. Will V8 get this smart after running the benchmark? I'd test it myself but I'm on an iPhone right now. I wonder whether Static Hermes will ever end up at break-even with or ahead of V8 - especially for code doing a lot of FFI into WASM, where Hermes might have a better shot at bypassing the memcopy/TextDecoder step needed to interop between browser JS strings and the UTF-8 strings used by WASM-hosted languages.
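The interop cost I mean is roughly this (a sketch, assuming a module that exports its linear memory and a hypothetical `ptr`/`len` string layout):

```typescript
// Reading a UTF-8 string out of WebAssembly linear memory into a JS
// string: every call pays a TextDecoder pass that copies the bytes
// into a fresh JS string, since browser JS strings aren't UTF-8.
function readWasmString(memory: WebAssembly.Memory, ptr: number, len: number): string {
  const bytes = new Uint8Array(memory.buffer, ptr, len); // view into wasm memory, no copy yet
  return new TextDecoder("utf-8").decode(bytes);         // decode copies into a JS string
}
```

A runtime that owned both sides of the boundary could in principle skip that round-trip.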
If your runtime is a JS interpreter, on iOS you’re almost always better off just using the OS’s interpreter (JSC). Your app will be smaller, and only the OS is allowed to do JIT so it will be much faster.
Even without the JIT restriction, you wouldn’t be able to beat JSC as it’s one of the fastest runtimes out there, pretty much level with V8.
Static Hermes sounds like a different prospect, though, if it’s able to use TS annotations to compile to mostly-static native code. That could be great for genuinely high-performance TS code. (Although you wouldn’t want to immediately shackle it to React Native...)
Hermes itself is already faster than JSC, which is on par with V8. This just puts it way ahead and probably closer to Swift/Kotlin code, at least in the realm of mobile apps.
We've seen the opposite for networking-related activity, especially when hundreds of requests can fire on certain screens. We tried Hermes but didn't see any difference other than the app bundle increasing in size.
No, the Hermes interpreter is probably on par with the JSC interpreter if this comparison was made on iOS React Native apps. JSC with its JIT is usually on par with or ahead of V8; the iOS JIT restrictions really skew results, since a JIT is so critical for JS performance.
From what I've seen, Kotlin (and probably Swift too) has semantics that are far more AOT-compile-friendly, so they're probably a fair bit ahead (depending on how much you gimp your JS to mimic asm.js/Wasm, but most regular JS developers won't write code that way).
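A concrete example of that semantic gap (my own illustration, not from any benchmark): even a trivial JS function can't be lowered to straight machine code without guards, because `+` dispatches on runtime types:

```typescript
// Without annotations, `add` must handle number addition, string
// concatenation, and coercion of arbitrary objects, so an AOT
// compiler has to emit type checks or a generic runtime call.
// Kotlin/Swift pin these types down at compile time.
function add(a: any, b: any): any {
  return a + b; // the operation is chosen at runtime, per call
}
```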
The Hermes interpreter is probably faster than the others because (afaik) it interprets bytecode produced by the Hermes frontend parser (which is done ahead of time).
There's not much sign of "Static Hermes" on the web. It appears to be a JavaScript compiler from Facebook.
"Static Hermes is the next major version of Hermes, still under active development. It enables optional ahead-of-time native compilation of soundly typed JavaScript, using TypeScript or Flow type annotations."
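For illustration (my own hypothetical example, not from the Hermes docs), "soundly typed" means code like this, where every type is known ahead of time and the compiler can emit native arithmetic instead of generic JS value operations:

```typescript
// Fully annotated: both parameters and the return value are numbers,
// so an AOT compiler can lower the loop to plain native float math
// with no dynamic type checks inside.
function dot(a: number[], b: number[]): number {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    sum += a[i] * b[i];
  }
  return sum;
}
```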
I find the AI images in this presentation to be quite distracting for some reason.
Probably because it's pretty new. React Native needs to start a JavaScript runtime before it can evaluate its JavaScript code, which is what the application logic is written in, and JS engines are typically not designed to prioritize time-to-interactive metrics and disk usage.
Hermes is meant to address some of those concerns and this is probably the next step towards improving those metrics.
At first, it sounds really promising, like "wow, all the overhead is fixed". Until you get to the bottom and see:
"Currently the only built-ins that we support this for are mostly limited to the math built-ins."
So all those terrible "before" graphs mostly still apply today for most DOM interaction and browser APIs. A few more pieces still need to land before that works well. This has some pointers to the current state: https://stackoverflow.com/questions/62587845/wasm-dom-access...