Hacker News
Chrome 59 with TurboFan is Sometimes Slower than 58 (prerender.cloud)
90 points by jotto on June 22, 2017 | hide | past | favorite | 39 comments



This page does not show Chrome 59 being slower. Rather, it shows a contrived benchmark running 38% slower. Interestingly, it also shows their site running 32% faster.

Given the V8 team has explicitly stated the plan to optimize for real websites rather than synthetic benchmarks [1] these results look completely appropriate and desirable. Good work V8 team!

https://blog.chromium.org/2017/04/real-world-javascript-perf...


Perhaps the title changed? It now says Chrome is sometimes slower.


Yep, sometimes, when you're running a synthetic benchmark that doesn't represent real world usage, it is slower.

My Ferrari is also sometimes slower than my VW Beetle, when driving on an uneven dirt road.


So it is slower in a benchmark that doesn't represent any real world problems, while an actual app performs much better?

Isn't that exactly what they were going for?


Yes. It was a very deliberate, conscious decision not to care about the benchmarks, marketing be damned; some of it was a belief that V8 had historically become overfit to them.


I don't think it's so much a complete abandonment of benchmarks as a de-emphasis of them. I'm sure the Chrome team is still running plenty of benchmarks, but I'd guess they're more tuned towards 'real world' usage.


I believe they now benchmark by running popular websites/webapps instead of single tests in loops.


The problem with JavaScript JITs is that benchmarks which run a tiny piece of code for tenths of a second are not reflective of general usage patterns. So the quality of the initial pass (Baseline in Firefox) is likely more important. Worse, given fairly dynamic code paths, the system can get into situations where it is cycling between optimization targets and spends a lot of time running sub-optimal code, entering/exiting the JITed code, or in the JIT/compiler itself.

Anyway, there are so many engineering trade-offs that it's like a big water balloon: squeeze it here and it gets faster in this spot, but slows down somewhere else. But what you really need is a way to let some of the water out of the balloon.
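The "cycling between optimization targets" problem above can be sketched in plain JavaScript. This is an illustrative example, not code from V8: a hot function that is fast while every object it sees has the same shape, but whose inline caches go polymorphic (and whose optimized code may be thrown away) once differently-shaped objects flow through it.

```javascript
// Sketch: a type-unstable hot function of the kind that can force a JIT
// to deoptimize and recompile. Names here are illustrative, not from V8.
function sum(items) {
  let total = 0;
  for (const item of items) {
    total += item.value; // fast only while every `item` shares one
  }                      // hidden class / shape
  return total;
}

// Same property name, different object shapes: the inline cache for
// `item.value` goes polymorphic, and optimized code may be discarded.
const shapeA = [{ value: 1 }, { value: 2 }];
const shapeB = [{ extra: true, value: 3 }, { extra: false, value: 4 }];

console.log(sum(shapeA)); // 3
console.log(sum(shapeB)); // 7
```

The results are identical either way; only the engine's ability to keep the loop in optimized code differs, which is exactly why micro-benchmarks over one shape say little about real pages.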


A bigger problem is the variability and pace of change, combined with poor low-level inspectability of the generated code. This makes it fragile and laborious to design primitives that compile to efficient code. Contrast with, e.g., the JVM.


WebAssembly is the answer to the few high-performance bottlenecks.


WebAssembly is a poor, high-effort compile target for languages like ES6, ClojureScript, Elm, etc.


WebAssembly is not designed to be a compile target for ES6, ClojureScript, etc. At least not yet. Its primary target currently is languages like C, C++… low-level compiled languages that don't have a garbage collector.

(Source: https://github.com/WebAssembly/design/blob/master/GC.md)


That's what I tried to say, wasm does not solve the problem I described.


Like nailing jello to a tree.


Chrome is faster on a real website, but slower on my Hello World React app with 1000 hello-world components.

Literally unbrowsable.


I wonder why these guys are using Chrome for this? I've had no problem pre-rendering React using a simple node process and ReactDOMServer.renderToString();


After reading their 'How It Works' page, it looks like they load the DOM into headless Chrome, let all the JavaScript events fire, wait for rendering to complete, and then serialize the DOM afterwards. Whereas by just rendering the React component you get the resulting HTML, but you don't see the effect of actually executing any referenced JavaScript, or the resulting DOM as a browser would see it.


Because rendering in a Node process is more complicated when async state is involved - using a browser eliminates that complexity. For simpler apps without any async state (AJAX or WebSockets), a Node process may suffice.


Dumb question because I have very little node experience - I've been doing a node course and it uses a lot of async stuff server side, eg for everything that hits the database (using promises). Why wouldn't you just do that for your async state when you render server side?


You could indeed do that, it's just more work. The beauty of server-side rendering a React app in a headless browser is that since the environment is the same as what the app was originally designed for (a browser), things _just work_.

For example: you load the app in your headless browser and let it load as would in any browser. The browser fires some events and XHR/WebSocket requests and you can cleanly wait for them to finish as opposed to doing something like this: https://techbrunch.gousto.co.uk/2016/10/10/isomorphic-react-... which requires you to do some extra configuration in your app to help support a Node environment.

TL;DR: ReactDOMServer.renderToString won't wait for your async requests to finish; you need them to execute and finish before you call renderToString.
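The pattern being described — resolve the async state first, then render synchronously — can be sketched without React. `fetchInitialState` and `renderApp` below are hypothetical stand-ins for an app's data layer and for a synchronous renderer like ReactDOMServer.renderToString:

```javascript
// Hypothetical stand-in for an app's async data layer (AJAX, DB, etc.).
function fetchInitialState() {
  return Promise.resolve({ user: 'alice', items: 3 });
}

// Hypothetical stand-in for a synchronous renderer such as
// ReactDOMServer.renderToString: whatever state exists *now* is rendered.
function renderApp(state) {
  return `<div>${state.user} has ${state.items} items</div>`;
}

// Server-side rendering: resolve the async state first, then render.
fetchInitialState()
  .then(state => renderApp(state))
  .then(html => console.log(html)); // <div>alice has 3 items</div>

// Calling renderApp before the promise resolves would render without
// data, which is the renderToString pitfall described in this thread.
```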


This sounds like a major kludge to me. What if the page keeps firing events? What if the page relies on cues from the browser (dimensions, cookies, JS capabilities, etc.) to know the correct requests to make?

If you care about performance you should design an architecture where it's possible to determine exactly which parts should and can be executed on the server.


Agreed. A React component that can't render something without extra async state updates is a poorly designed React component.


But aren't async state changes usually a result of user interaction, or extra network calls? I can't think of an example where an app can't render an initial frame without async state. Sure the state might change after the page loads and JS binds, but that doesn't matter for the purposes this is covering (search engine indexing, faster apparent page load)
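The point about always being able to render an initial frame synchronously can be sketched like this; `renderList` is a hypothetical component-style function, not code from any real app:

```javascript
// Hypothetical component-style render function: given possibly-missing
// async state, it still produces a sensible initial frame synchronously.
function renderList(items) {
  if (items == null) {
    // Initial frame: no async state needed, still useful for indexing
    // and apparent page load.
    return '<ul class="loading">Loading…</ul>';
  }
  return `<ul>${items.map(i => `<li>${i}</li>`).join('')}</ul>`;
}

console.log(renderList(null));       // initial frame, rendered synchronously
console.log(renderList(['a', 'b'])); // after async state arrives
```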


Here's an interesting talk about optimization and JIT compilation in V8: https://www.youtube.com/watch?v=p-iiEDtpy6I


Also this one from the last IO: https://www.youtube.com/watch?v=N1swY14jiKc


I've also noticed the chrome devtools using 100% CPU quite often, since I updated to v59. Pretty annoying since the ff devtools aren't (imho) up to par with chrome's


If you can make it eat less memory, go make it 2x slower; I'm fine with it.


Why? Memory is cheap.


This really hasn't been true on consumer devices for a while now. Sure, you can add RAM to a server for cheap, but no one really runs a browser there. In many cases now, adding more memory to a device means buying a completely new device. Unfortunately, it doesn't matter much if the component is cheap if there's no way to actually install it.


I think the parent's comment still stands if you're comparing the cost of memory relative to processing power. Most devices have more memory than most people need, but there is never enough processing power. So, given the tradeoff of having Chrome (or whatever other program) use more memory in exchange for speed, I would take it.


I guess we're going to have to agree to disagree. My experience is that many people have more processor power than they can make use of, but never have enough memory. Laptops are shipping with 7th-gen i5s (granted, mobile components), which are more than suitable for many consumer applications, but still only have 4 - 8 GB RAM. Memory configurations in laptops & desktops haven't really changed much over the past several years. Mobile devices still ship with much less than that.

But in either event, the parent was directly replying to someone whose experience suggests Chrome's memory usage is a problem. Telling him to just add more RAM is disingenuous because that's simply not possible in many (probably most) cases now. Taking him at his word that memory consumption is a problem, what's the real solution?

Frankly, I'm in the same boat. And it's a trivial search to find numerous people that find the increasing memory usage of Chrome to be problematic. For my part, it's been a long time since I've visited a site and wished it did something faster that wasn't directly attributable to a bad code pattern in use on that site. Continuing to improve performance here has been of minimal benefit while increasing memory consumption is something I feel acutely.


Due to the way RAM works, you also always have a use for more RAM. Chrome is slowing down other programs on your system by using more RAM, so it's not using more memory in exchange for speed, it's sacrificing the speed of the rest of your system to speed itself up.


Just because your disk cache is expanding to fill your available RAM (which doesn't even always happen depending on the kernel/RAM size) doesn't mean that extra cache buys you anything. At some point it's just going to be holding stuff that isn't used before it's evicted.


Disk cache? Nowadays Chrome is eating so much RAM that it's pushing other programs into swap.


Not for many people. Look at the stats for how many people are running old android versions to get an idea of what percentage of users are getting by with 1-2gb of memory.


I use an older MacBook Air w/2GB memory as my main home machine, for example.


A lot of low-end computers still ship with 2 or 4 GB RAM. Upgrading isn't cheap (look up current RAM prices).


Also some not-very-low-end computers still ship with 4GB of soldered on RAM, like the $1000 model of the new Surface Pro or the €1399 model of the Matebook X.


Not only that, but crashes correlate with too-high memory usage (I already use The Great Suspender to fold tabs up until I'm ready for them; it's still too high).

This is sometimes devastating for me, as described here:

https://news.ycombinator.com/item?id=14617175

so decreasing memory usage would greatly help here.



