The most impressive bit of this is that it keeps under 20% CPU on my Mid-2010 MBP [Chrome 19].
It's hard not to imagine another cyclical shift towards browser centric development, away from rich client apps, when you have Chrome pulling off tricks like this. Things that would've seemed impossible in 2008, the last time everything was moving to the browser.
Give Apple/ARM/Intel/Samsung a few more cycles in mobile [more RAM, if nothing else!] and you might need to reconsider the disadvantages of native apps in favor of all the advantages that centrally hosted applications offer [no Apple fee, no piracy, continuous deployment, cross-platform, etc.].
I long for a vibrant open-source project that builds quality widgets and elements for people to use inside this framework. Let's not pretend jQuery Mobile et al. are anywhere close.
CPU utilization is the wrong metric. Your CPU is mostly idle because it's feeding a tiny set of commands to the GPU each frame, and then idling waiting for user input and/or vsync. The GPU is running full-tilt, but unfortunately there's no good OS-level tool to show that.
One of the reasons CPU utilization is so low is that this is a demo. Most of the "logic" is just a straightforward computation of the mouse position, and the particle coordinates are all figured out from that in the shader engines. Real apps and games have real data that needs to be crunched.
(Edit: I just checked the source, the particle positions are actually computed in Javascript, the shaders are just straight rendering. And I don't see anything particularly clever about the implementation, it's just a bunch of array accesses. V8 is doing an amazing job on this.)
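(For context: the kind of per-frame JS loop being described would look something like this. A hypothetical sketch, not fluGL's actual source — flat typed arrays plus a mouse-attraction term, which is exactly the shape of code V8 optimizes well.)

```javascript
// Hypothetical sketch of the kind of per-frame update described above,
// not the demo's actual code. Each particle is pulled toward the mouse.
// Flat Float32Arrays keep the hot loop monomorphic for V8.
function updateParticles(px, py, vx, vy, mouseX, mouseY, dt) {
  for (var i = 0; i < px.length; i++) {
    var dx = mouseX - px[i];
    var dy = mouseY - py[i];
    var d = Math.sqrt(dx * dx + dy * dy) || 1; // avoid divide-by-zero
    vx[i] += (dx / d) * dt;                    // accelerate toward mouse
    vy[i] += (dy / d) * dt;
    px[i] += vx[i];
    py[i] += vy[i];
  }
}
```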
But broadly, you're right: the need for highly optimized native code to drive a modern GPU has mostly disappeared. A javascript interpreter (well, V8) is more than good enough.
The particle positions are indeed computed in Javascript, and even if the version you tried only had 30000 of them, the "real 80000" version [ http://minimal.be/lab/fluGL/index80000.html ] does not raise my CPU usage that much.
A more clever way to code that animation would be to pass the mouse position to the GL shaders and let them compute the particle positions themselves, and I think modern GPUs are very optimized for that kind of stuff.
Please don't be too rude, though; it was my first WebGL experiment.
Apologies if I seemed rude, it's really very nice. But yes: a shader absolutely could compute a particle position based on a previous value (and other state, and inputs like mouse position, etc...) as a mapping between, say, a 1D texture and a 1D output framebuffer that gets used as the next frame's source texture. Honestly, given the apparent performance this is what I had assumed was happening.
Having the GPU compute position updates into a texture would indeed be orders of magnitude faster, but it would require the vertex shader to read from a texture to get the results. Unfortunately, vertex texturing is an extension that is not required by the WebGL standard and is not supported on a significant percentage of machines. It's almost a shame that vertex texturing makes really fun demos so easy to make: every time I see a VT demo, there are dozens of comments crying "Doesn't work for me. WebGL is broken!"
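(To make the ping-pong idea concrete, here's the work a single fragment-shader invocation would do, expressed as plain JS. This is a hypothetical sketch assuming each RGBA texel stores one particle's state as [x, y, vx, vy]; each pass reads the previous frame's state texture, writes the next one, and the two textures swap roles every frame.)

```javascript
// Hypothetical sketch: the per-texel update a fragment shader would perform
// in a ping-pong GPGPU scheme. One RGBA texel = one particle's [x, y, vx, vy];
// the returned texel is written to the output framebuffer, which becomes
// next frame's input texture.
function stepTexel(texel, mouseX, mouseY, dt) {
  var x = texel[0], y = texel[1], vx = texel[2], vy = texel[3];
  var dx = mouseX - x, dy = mouseY - y;
  var d = Math.sqrt(dx * dx + dy * dy) || 1; // avoid divide-by-zero
  vx += (dx / d) * dt;
  vy += (dy / d) * dt;
  return [x + vx, y + vy, vx, vy];
}
```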
No need to read the texture in the vertex shader. Render to a buffer object and use that as your vertex array. But broadly yes: that's the problem with OpenGL support, and WebGL in particular is still very bleeding edge.
I'd be pretty surprised if vertex texture fetch wasn't supported, though. It works on basically all hardware from the PVR SGX on up, and unified shaders are pervasive on both phones and desktops. I did find this, though, which implies that for a while the browsers weren't properly exposing support:
FBOs are absolutely part of ES2, and thus presumably an official part of WebGL. I've used them in embedded contexts, but never in a browser. And as always, this is on the bleeding edge of what the drivers are prepared for, so dragons may lurk. But at least in principle it should work.
I've seen talk of CPU-less render-to-vertexbuffer dating back to 2004, but I've never dug into how to actually do it until now. From what I can dig out, it requires PBOs, which are not available in ES2. I guess copying back and forth over the bus via gl.bufferData(ARRAY_BUFFER, gl.readPixels(...), gl.STREAM_DRAW) is still better than doing the math in JS. I might have to try combining the glReadPixels with the mapped buffer extension that is available on the iPhone :P
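(One wrinkle with the copy-back approach: ES2 only guarantees readPixels in RGBA/UNSIGNED_BYTE, so positions have to survive a round trip through 8-bit channels. The usual workaround is to pack a normalized float into the four bytes of a texel. A hedged sketch, assuming values are normalized to [0, 1):)

```javascript
// Hypothetical sketch: pack a float in [0, 1) into four bytes and back,
// the common workaround when readPixels only offers RGBA/UNSIGNED_BYTE.
// 32-bit fixed point gives far more precision than a single 8-bit channel.
function packFloat(v) {
  var u = Math.floor(v * 0xFFFFFFFF); // scale into 32-bit fixed point
  return [(u >>> 24) & 0xFF, (u >>> 16) & 0xFF, (u >>> 8) & 0xFF, u & 0xFF];
}

function unpackFloat(b) {
  // Reassemble the four bytes and rescale back to [0, 1)
  return (b[0] * 16777216 + b[1] * 65536 + b[2] * 256 + b[3]) / 0xFFFFFFFF;
}
```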