Hacker News

Another thing that would be interesting to measure here is memory usage. QuickJS might be slower (as a minimal interpreter), but at least it should be much smaller than a full optimizing JS engine (running normally or compiled to wasm), both in terms of code size and runtime memory usage. That might matter in some cases.



"at least it should be much smaller than a full optimizing JS engine, both in terms of code size and runtime memory usage"

Why should QuickJS use less runtime memory?

Genuine question. I only recently learned of QuickJS, and from what I understand it is much smaller in itself, meaning you do not need as much memory for the engine itself. But that says nothing about how memory-efficient it is when executing JS code. Or are there benchmarks showing it is also more memory-efficient at executing JS?

Either way, yes, I also think measuring memory is important.


Optimizing VMs often use lots of memory for various reasons. Specific costs in modern optimizing VMs include the memory for the JITed code, and with multiple tiers you end up JITing each function more than once. You also need memory to track which functions to tier up (a call count or something more refined). Even things like polymorphic inline caches have a cost: the space for each of those PICs, plus the code around them.
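To make the tier-up bookkeeping concrete, here is a minimal sketch of the kind of per-function profiling an optimizing VM keeps around. All names and the threshold are invented for illustration; real engines use more refined heuristics, but the point is the same: every function carries extra state in memory just to decide whether to optimize it.

```python
# Hypothetical sketch of per-function call counting for tier-up decisions.
# TIER_UP_THRESHOLD and all names here are made up for illustration.

TIER_UP_THRESHOLD = 1000  # assumed hot-function cutoff

class FunctionProfile:
    """Per-function bookkeeping an optimizing VM must keep in memory."""
    def __init__(self, name):
        self.name = name
        self.call_count = 0          # extra memory per function, always
        self.optimized_code = None   # JITed machine code would live here

    def on_call(self):
        self.call_count += 1
        if self.optimized_code is None and self.call_count >= TIER_UP_THRESHOLD:
            # A real VM would invoke the optimizing compiler here.
            self.optimized_code = f"<machine code for {self.name}>"

profile = FunctionProfile("hot_loop")
for _ in range(1000):
    profile.on_call()
print(profile.optimized_code is not None)  # True: the function got tiered up
```

Note that even functions that never get hot still pay for the counter and profile object, which is part of the baseline memory cost being described.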

A simple interpreter can avoid all those. It can build a very simple and compact bytecode and just execute that.
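For contrast, a compact bytecode interpreter in the spirit of that approach can be sketched in a few lines. The opcode set and encoding here are invented for illustration, not QuickJS's actual bytecode; the point is that the only runtime state is the bytecode array and an operand stack.

```python
# Minimal sketch of a compact stack-based bytecode interpreter.
# The opcodes and layout are invented for illustration.

PUSH, ADD, MUL, HALT = range(4)  # tiny made-up opcode set

def run(bytecode):
    """Execute bytecode on an operand stack; no JIT, no profiling state."""
    stack, pc = [], 0
    while True:
        op = bytecode[pc]
        if op == PUSH:
            stack.append(bytecode[pc + 1])  # operand is inline in the stream
            pc += 2
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
            pc += 1
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
            pc += 1
        elif op == HALT:
            return stack.pop()

# Encodes (2 + 3) * 4
program = [PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT]
print(run(program))  # 20
```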

(But, of course, the speedups in optimizing JITs are usually worth it! 10x or more.)


Does the engine store compiled binaries for the next time the page is loaded?


Some (most?) do, yes. Here is a V8 article on that, for example: https://v8.dev/blog/wasm-code-caching


That seems to imply Julia compiled to wasm on Chrome wouldn't have the "time to first plot" issue.




