
I don't think GPU support is a critical feature for a commercial renderer.

- CPU renders typically use 32 GB of memory or more, well beyond a GPU's VRAM (see the sketch below).

- most rendering is done on a render farm, which won't have GPUs (unless you maintain a GPU render farm).

There are good reasons why commercial renderers are avoiding the GPU, at least for now.
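As a back-of-the-envelope illustration, here is a tiny C++ sketch comparing a film-scale scene's working set against the VRAM of a high-end GPU of the era. All figures are assumed for illustration, not measured from any real production:

    #include <cstdio>

    // Illustrative, order-of-magnitude numbers only: a production scene's
    // working set vs. the VRAM of a contemporary workstation GPU.
    int main() {
        const double geometry_gb = 12.0;  // subdivided meshes, hair, etc. (assumed)
        const double textures_gb = 18.0;  // high-res texture data (assumed)
        const double accel_gb    = 4.0;   // BVH / acceleration structures (assumed)
        const double scene_gb    = geometry_gb + textures_gb + accel_gb;

        const double gpu_vram_gb = 6.0;   // high-end GPU of the era (assumed)

        std::printf("scene working set: %.0f GB, GPU VRAM: %.0f GB\n",
                    scene_gb, gpu_vram_gb);
        if (scene_gb > gpu_vram_gb)
            std::printf("scene does not fit; out-of-core paging or CPU RAM needed\n");
        return 0;
    }

Once the working set exceeds VRAM, assets have to be paged over PCIe, which can easily erase the GPU's compute advantage.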




Xeon Phi may well cement this further. Knights Landing, due in 2015, is going to feature 72 Atom cores (288 threads) with AVX-512, socketed in a standard Xeon motherboard.

In situations where your problem domain doesn't fit comfortably within the memory restrictions of a GPU, or porting a legacy code base is difficult, this could be a very interesting option.

http://en.wikipedia.org/wiki/Xeon_Phi#Knights_Landing
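The appeal is that ordinary x86 SIMD code carries over unchanged. A minimal sketch, assuming an AVX-capable compiler (e.g. g++ -mavx), of the kind of intrinsics loop a CPU renderer might already have, which would run on a socketed Xeon Phi without a GPU-style port:

    #include <immintrin.h>
    #include <cstdio>

    // Scale an array by a constant, 8 floats per AVX instruction. The point
    // is that this is ordinary x86 code: the same source runs on a Xeon or,
    // recompiled for wider vectors, on a socketed Xeon Phi.
    void scale(float* data, int n, float k) {
        __m256 vk = _mm256_set1_ps(k);
        int i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 v = _mm256_loadu_ps(data + i);
            _mm256_storeu_ps(data + i, _mm256_mul_ps(v, vk));
        }
        for (; i < n; ++i)  // scalar tail for leftover elements
            data[i] *= k;
    }

    int main() {
        float a[10] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
        scale(a, 10, 2.0f);
        std::printf("%g %g\n", a[0], a[9]);  // prints: 0 18
        return 0;
    }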


It is a non-starter for the successful commercial renderers because the majority of their market already has render farms with standard CPUs. Also, studios generally use a mix of tools, and if only one of them is optimized for Xeon Phi, that isn't enough motivation to spend money on Xeon Phis.

It is generally a no-go in the mainstream rendering market, although cloud-based renderers can use specialized solutions.


I think that's the right way to look at it as well.

You may be able to accelerate some steps on a GPU, but rendering usually involves a lot of data: if you add up all the resources for a given scene, it can be on the order of gigabytes.

This is not like Bitcoin mining, where you do a lot of math on a small batch of data.
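A rough, purely illustrative comparison of working-set sizes in C++ (the 80-byte figure is the actual Bitcoin block header size; the 30 GB scene is an assumption):

    #include <cstdio>

    // Working-set comparison (illustrative, not a benchmark).
    int main() {
        // Bitcoin mining: double SHA-256 over an 80-byte block header,
        // hashed over and over -- tiny data, enormous math-per-byte.
        const double miner_bytes = 80.0;

        // Path tracing: each ray may touch BVH nodes, geometry and textures
        // scattered across a multi-gigabyte scene (30 GB assumed here).
        const double scene_bytes = 30.0 * 1024 * 1024 * 1024;

        std::printf("working set ratio: %.0fx\n", scene_bytes / miner_bytes);
        return 0;
    }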



