Like someone else said, real-time renderers need to output frames at a reasonable rate, and that is the top priority. Per-frame image quality is therefore traded away for speed, and it can take a fairly severe hit before the loss becomes noticeable.
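For a sense of the budget: at 60 Hz a frame has roughly 16.7 ms from input to pixels. A trivial sketch (the refresh rates listed are just common examples, not from the thread):

    #include <cstdio>

    int main() {
        // Per-frame time budget in milliseconds for some common refresh rates.
        const double rates_hz[] = {30.0, 60.0, 120.0, 144.0};
        for (double hz : rates_hz)
            std::printf("%6.1f Hz -> %5.2f ms per frame\n", hz, 1000.0 / hz);
        return 0;
    }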

For the record, most real-time renderers are rasterisation-based: geometry is assembled, rasterised, and the resulting fragments are shaded. This is what almost all video games have run on since the 1990s. Many so-called 'RTX' games you see today still do the bulk of their rendering using rasterisation and all the associated hacks to approximate photorealism, and only enable ray/path tracing for specular reflections, soft shadows, and diffuse-diffuse global illumination.
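A back-of-the-envelope sketch of those three stages, shrunk down to one flat-shaded triangle in a character framebuffer. Everything here (the edge-function coverage test, the '#' shading) is an illustration of the general idea, not any particular engine's pipeline:

    #include <algorithm>
    #include <cstdio>

    struct Vec2 { float x, y; };

    // Signed-area edge function used for the inside-triangle coverage test.
    static float edge(const Vec2& a, const Vec2& b, const Vec2& p) {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    int main() {
        const int W = 24, H = 12;
        char framebuffer[H][W];
        for (auto& row : framebuffer) std::fill(row, row + W, '.');

        // Stage 1: geometry assembly -- one triangle, already in screen space.
        Vec2 v0{2, 1}, v1{21, 4}, v2{8, 10};

        // Stage 2: rasterisation -- walk the pixels, keep the covered ones.
        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) {
                Vec2 p{x + 0.5f, y + 0.5f};
                float w0 = edge(v1, v2, p);
                float w1 = edge(v2, v0, p);
                float w2 = edge(v0, v1, p);
                bool covered = (w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                               (w0 <= 0 && w1 <= 0 && w2 <= 0);
                // Stage 3: fragment shading -- here just a flat '#'.
                if (covered) framebuffer[y][x] = '#';
            }
        }
        for (auto& row : framebuffer) std::printf("%.*s\n", W, row);
        return 0;
    }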

A high-quality real-time path-traced pipeline was impossible to achieve at playable framerates until very recently (roughly five years ago). We simply didn't have the hardware for it, and denoising algorithms weren't very powerful until learning-based denoisers and upscalers arrived (the OptiX AI denoiser, DLSS, etc.). Even today, any real-time path-traced pipeline renders far fewer samples than an offline render does (usually three or four orders of magnitude fewer), simply because it would be too slow, and a waste, to spend that many samples on a frame that is displayed for several milliseconds and then promptly discarded.
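To make the sample-count gap concrete, here is a toy Monte Carlo pixel estimator. radiance() is a stand-in for tracing a real light path, and the 2-vs-10,000 samples-per-pixel figures are assumed for illustration:

    #include <cstdio>
    #include <random>

    // Stand-in for tracing one light path through a pixel; a real path
    // tracer would bounce rays around the scene here.
    static double radiance(std::mt19937& rng) {
        std::uniform_real_distribution<double> dist(0.0, 1.0);
        return dist(rng);  // pretend this is the path's contribution
    }

    // Average N samples per pixel. The variance (noise) falls as 1/N, so an
    // offline render at ~10,000 spp is far cleaner than a real-time one at
    // ~1-4 spp, which instead leans on a denoiser to clean up the result.
    static double estimate_pixel(int spp, std::mt19937& rng) {
        double sum = 0.0;
        for (int i = 0; i < spp; ++i) sum += radiance(rng);
        return sum / spp;
    }

    int main() {
        std::mt19937 rng(42);
        std::printf("real-time   (2 spp):     %f\n", estimate_pixel(2, rng));
        std::printf("offline     (10000 spp): %f\n", estimate_pixel(10000, rng));
        return 0;
    }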

Offline renderers do jack the quality up, and they run on massive render farms with hundreds of thousands of cores and memory on the order of 10^14-10^15 bytes (100 TB to 1 PB). The scales are completely off the charts; a single frame from an offline renderer can take several hours to render on an average home computer.