There are too many mature renderers and the competition is intense.
The current popular actively developed ones are: V-Ray (which we use in http://Clara.io), Arnold, Maxwell, Pixar's RenderMan, and KeyShot.
Less popular but actively developed ones are: RedShift, Furry Ball, 3Delight, NVIDIA iRay, Octane/Bridge...
And then the ones that are integrated into the 3D packages themselves like Blender's Cycles, Modo's renderer, Cinema 4D's renderer, Houdini's Mantra, Mental Images (included in most Autodesk products.)
Then the smaller opensource ones: Sunflow, Lux, Corona, Mitsuba, Pixie...
That is a lot of renderers and I am sure that I am missing quite a few.
A similar current company is http://ZyncRender.com, although even today, with great bandwidth, cheap cloud computing, and an amazing sync system, they have still found it a hard market to crack.
While I am a programmer, my first love was animation. I had done some cluster computing with thought experiments around huge render farms. When Screamline came online a bunch of years later, it caught my attention immensely. What a perfect combination of the two!
Aside: I tried doing EC2-type stuff in 1999-2001. It was really hard; there was too much resistance to running anything "sensitive" in the cloud.
What a great way for Intel to sidestep the artificial scarcity and release cycles to the world instead of chips! They could prototype new hardware and use "crippled" CPUs that they couldn't sell. I thought it was a great vertical integration play that sidestepped many of the ridiculous economic factors at the time. Eventually it will get there: when we have massive bandwidth, AMD and Intel, if they are still around, will sell mostly cycles.
I was really bummed to see screamline go out of business. It was like a sad canary.
>Screamline Rendering Services, provided by Internet Computing Services, has enjoyed working with our customers in the computer graphics animation industry to provide outsourced rendering services.
>Due to a deteriorated funding climate for Internet-based businesses and unfavorable market conditions, Internet Computing Services, part of the Intel New Business Group, will exit the outsourced rendering services business, effective October 31, 2001.
How many of those take advantage of GPU floating point power? This is no longer a nice-to-have sort of thing but a critical one, since, as the new Mac Pro shows, there's going to be orders of magnitude more GPU than CPU power available in newer workstations.
I've seen some, like Furry Ball, that support only CUDA, but very few are vendor-agnostic with OpenCL. I hope AMD tries to fix this.
For doing production VFX shots, unless there's fur/hair or volumetrics involved, you're pretty much IO constrained in terms of pulling in textures and paging them in memory.
Often scenes can have > 300GB of textures total. Out-of-core rendering is not something that GPU renderers can do very well yet (redshift can do it a bit), so the 12GB limit on GPU memory is a huge problem. Arnold and PRMan both have deferred shading which means for GI they can re-order texture requests for similar texture mipmap levels to reduce the swapping of textures needed. I'm not aware of any GPU renderers that can do that as of yet.
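To make the re-ordering idea concrete, here's a rough illustrative sketch (my own toy code, not how Arnold or PRMan actually implement it) of batching deferred texture lookups and sorting them by texture/mip/tile, so that each texture tile only has to be paged in once per batch instead of being thrashed as rays arrive in arbitrary order:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Hypothetical deferred texture request: which texture/tile a shading
    // point needs, and which shading point is waiting on the result.
    struct TexRequest {
        uint32_t textureId;   // which texture file
        uint16_t mipLevel;    // requested mipmap level
        uint32_t tileIndex;   // tile within that mip level
        uint32_t shadePoint;  // index of the deferred shading point
    };

    // Sort a batch of requests so that all lookups hitting the same
    // texture/mip/tile are adjacent; the texture cache can then page each
    // tile in from disk once per batch.
    void reorderRequests(std::vector<TexRequest>& batch) {
        std::sort(batch.begin(), batch.end(),
                  [](const TexRequest& a, const TexRequest& b) {
                      if (a.textureId != b.textureId) return a.textureId < b.textureId;
                      if (a.mipLevel  != b.mipLevel)  return a.mipLevel  < b.mipLevel;
                      return a.tileIndex < b.tileIndex;
                  });
    }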
On top of this, geometry assets are always overbuilt these days, with millions of polys/subds, and often multiple displacement layers on top of this. This geometry requires even more memory.
And the biggest studios are rendering deep output, requiring even more memory for the output files as more of the render samples need to be kept in memory for the render duration.
And while GPUs are faster, they're not THAT much faster for raytracing - a top-end dual Xeon CPU can almost match (roughly 85% of) a K6000 at pure ray/triangle intersection speed.
Where GPUs can win is the fact that it's a lot easier to cram multiple GPUs into a single workstation than to get a 4-socket CPU system.
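For context on what "pure ray/triangle intersection speed" is measuring: the inner kernel in these comparisons is usually some variant of Moller-Trumbore. A minimal single-precision sketch (illustrative only; production tracers wrap BVH traversal and SIMD/warp-wide batching around it), written so the same function compiles for both CPU and GPU under CUDA:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    __host__ __device__ inline Vec3 sub(Vec3 a, Vec3 b)   { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
    __host__ __device__ inline Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x}; }
    __host__ __device__ inline float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Moller-Trumbore ray/triangle test: returns true and the hit distance t
    // if the ray (orig, dir) hits triangle (v0, v1, v2).
    __host__ __device__ bool intersect(Vec3 orig, Vec3 dir,
                                       Vec3 v0, Vec3 v1, Vec3 v2, float* t) {
        const float eps = 1e-7f;
        Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
        Vec3 p  = cross(dir, e2);
        float det = dot(e1, p);
        if (fabsf(det) < eps) return false;           // ray parallel to triangle
        float inv = 1.0f / det;
        Vec3 s = sub(orig, v0);
        float u = dot(s, p) * inv;
        if (u < 0.0f || u > 1.0f) return false;
        Vec3 q = cross(s, e1);
        float v = dot(dir, q) * inv;
        if (v < 0.0f || u + v > 1.0f) return false;
        *t = dot(e2, q) * inv;                        // distance along the ray
        return *t > eps;
    }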
In a traditional render farm that's not an issue. As the new supercomputing centres have shown, there's a huge push to add massive amounts of GPU power to these because it's more cost effective.
I'm not saying GPU rendering is easy; it's way harder to do correctly because of memory constraints. But the performance gap between GPUs and CPUs (compute per watt, total compute) is only getting wider. It's easy to add a few thousand more shaders: just add another card. It's not easy to strap on another 12-core chip; you need to engineer the system from the ground up for that.
A GPU also has access to all system memory via DMA, so that shouldn't be an issue. It's just going to be hard to coordinate that data transfer in a performant way.
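The plumbing for that does exist today: with CUDA, a kernel can read pinned, mapped host memory directly over PCIe ("zero-copy"). A minimal sketch below; the hard part the parent mentions, actually scheduling those accesses so they aren't painfully slow, is exactly what it leaves out:

    #include <cuda_runtime.h>
    #include <cstdio>

    // Trivial kernel that reads host-resident data through a mapped pointer.
    // Every access goes across the PCIe bus via DMA, so a real renderer would
    // prefetch/stage tiles rather than touching scene data like this per-ray.
    __global__ void sumKernel(const float* data, int n, float* out) {
        float s = 0.0f;
        for (int i = threadIdx.x; i < n; i += blockDim.x) s += data[i];
        atomicAdd(out, s);
    }

    int main() {
        const int n = 1 << 20;
        cudaSetDeviceFlags(cudaDeviceMapHost);            // enable zero-copy mapping

        float *hostData, *devView, *devOut;
        cudaHostAlloc((void**)&hostData, n * sizeof(float), cudaHostAllocMapped);
        for (int i = 0; i < n; ++i) hostData[i] = 1.0f;
        cudaHostGetDevicePointer((void**)&devView, hostData, 0); // GPU-visible alias of host memory

        cudaMalloc((void**)&devOut, sizeof(float));
        cudaMemset(devOut, 0, sizeof(float));
        sumKernel<<<1, 256>>>(devView, n, devOut);

        float result = 0.0f;
        cudaMemcpy(&result, devOut, sizeof(float), cudaMemcpyDeviceToHost);
        printf("sum = %f\n", result);                     // expect 1048576
        return 0;
    }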
In a traditional render farm that is an issue, as the vast majority of renderfarms for VFX companies are CPU only. Renderfarms aren't just used for 3D rendering; they're used for comp rendering, fluid sims (often using up to 96 GB of RAM), physics sims, etc.
Very little of this is GPU-based currently.
Where GPUs are starting to be used is on the artist workstation to speed up iteration/preview time. But not on renderfarms.
GPU memory access across the PCIe bus is ridiculously slow for smaller jobs. It's often the case that the slower CPU can finish the work in less time than it takes to copy the data to the GPU, have the GPU do the work, and copy the results back again. For longer jobs on smaller data, GPUs make sense.
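For scale, PCIe 3.0 x16 tops out around 16 GB/s in theory and noticeably less in practice, so shipping a gigabyte each way costs real time before the GPU has done any work. A minimal sketch of how you'd measure that crossover yourself with CUDA events, timing the upload, the kernel, and the download separately (the size and kernel here are arbitrary, and the numbers vary a lot by hardware):

    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdlib>

    // Toy kernel standing in for "the work"; the point is the transfer timing.
    __global__ void scale(float* d, int n, float k) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) d[i] *= k;
    }

    int main() {
        const int n = 1 << 24;                       // ~64 MB of floats
        size_t bytes = n * sizeof(float);

        float* h = (float*)malloc(bytes);
        for (int i = 0; i < n; ++i) h[i] = 1.0f;
        float* d;
        cudaMalloc((void**)&d, bytes);

        cudaEvent_t t0, t1, t2, t3;
        cudaEventCreate(&t0); cudaEventCreate(&t1);
        cudaEventCreate(&t2); cudaEventCreate(&t3);

        cudaEventRecord(t0);
        cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);   // upload
        cudaEventRecord(t1);
        scale<<<(n + 255) / 256, 256>>>(d, n, 2.0f);       // work
        cudaEventRecord(t2);
        cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);   // download
        cudaEventRecord(t3);
        cudaEventSynchronize(t3);

        float up, kern, down;
        cudaEventElapsedTime(&up, t0, t1);
        cudaEventElapsedTime(&kern, t1, t2);
        cudaEventElapsedTime(&down, t2, t3);
        printf("upload %.2f ms, kernel %.2f ms, download %.2f ms\n", up, kern, down);

        cudaFree(d); free(h);
        return 0;
    }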
GPUs also have much higher power draw than CPUs and produce much more heat, which is a huge issue for renderfarms with close to 100% utilisation at crunch time.
I've used Octane, which is a GPU-only renderer. Unfortunately, it is VRAM-per-GPU bound. In other words, you cannot render any scene larger than the usable VRAM per GPU. High end video cards with lots of VRAM also tend to have more than one GPU, so the VRAM-per-GPU figure is actually (total VRAM)/(total GPUs), with slightly less than that actually available for the scene.
I've also experimented a bit with Lux which has a hybrid CPU/GPU mode. However I've found it isn't necessarily any faster than CPU only on my system (which has a lot of CPU cores) and it isn't as stable.
AFAIK, there are no video cards currently available with more than 6GB per GPU, since something like an NVIDIA Titan Z with 12GB has to share that between 2 GPUs.
It's conceivable that as GPU rendering becomes more commonplace we'll start to see manufacturers loading more and more RAM onto high end cards, possibly at the expense of compute units if power consumption is a problem. After all, a render farm with many separate cards is just as fast as one card with more compute units, but VRAM per GPU is currently a hard limit that will affect anyone rendering very large, complex scenes.
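If anyone wants to check the real per-GPU budget a renderer like Octane has to live within, the runtime will tell you directly; a dual-GPU card like the Titan Z shows up as two CUDA devices, each reporting only its own half of the VRAM. A quick sketch:

    #include <cuda_runtime.h>
    #include <cstdio>

    // List every CUDA device and its memory: for rendering, the usable scene
    // budget is per device, not the sum across devices on a multi-GPU card.
    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);

            size_t freeMem = 0, totalMem = 0;
            cudaSetDevice(i);
            cudaMemGetInfo(&freeMem, &totalMem);   // "free" is what's actually left for a scene

            printf("GPU %d: %s, %.1f GB total, %.1f GB free\n",
                   i, prop.name,
                   totalMem / (1024.0 * 1024.0 * 1024.0),
                   freeMem  / (1024.0 * 1024.0 * 1024.0));
        }
        return 0;
    }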
Xeon Phi may well cement this further. Knight's Landing, due in 2015, is going to feature 72 Atom cores (288 threads) with AVX, socketed in a standard Xeon motherboard.
In situations where your problem domain doesn't fit comfortably within the memory restrictions of a GPU, or porting a legacy code base is difficult, this could be a very interesting option.
It is a non-starter for the successful commercial renderers because the majority of their market already has render farms with standard CPUs. Also, studios generally use a mix of tools, and if only one of them is optimized for Xeon Phis, that isn't enough motivation to spend the money on them.
It is generally a no go in the mainstream rendering market, although cloud-based renderers can use specialized solutions.
You may be able to accelerate some steps on a GPU, but rendering usually involves "a lot of data"; if you add up all the resources for a given scene, it can be on the order of gigabytes.
This is not like bitcoin mining where you have to do a lot of math in a small batch of data.
There is also the Renderman compatible Aqsis, although the last time I looked, development on that had slowed considerably as they work on version 2.0. I got the mailing from Pixar about the pricing changes and new version yesterday. I'm curious about the technical changes in Renderman and the new RIS architecture. Does it still have RIB? And SL? I can't imagine those going away. I'm also interested in the non-commercial free version. One can never have too many renderers to play with.
Another integrated renderer is the one in Lightwave. I believe that Blue Sky and Dreamworks still use their own proprietary renderers. Sony is using an Arnold based renderer.
Also less than half the price of Arnold and VRay now...
It's possible they were hemorrhaging money thanks to the Arnold and VRay competition over the past few years.
Looks like ILM are being "coerced" into using PRMan again from what I hear...
But either way, it sounds like either they're willing to take a hit on the profits, or they're expecting to make up for it from renderfarm expansions...
I feel like the main takeaway is that they recognized non-commercial use of this crazy expensive software to be legitimate. I think this covers 80% of the use cases where people turn to piracy instead. Good on Pixar. Let's see if Adobe follows suit sometime soon. I know some design students who don't buy a lot of textbooks but have a hole in their pocket from Creative Suite.
>I know some design students who don't buy a lot of textbooks but have a hole in their pocket from Creative Suite.
An educational license for Adobe Creative Cloud, which covers virtually all of Adobe's products, is $300 a year. I'm sorry, but if you are in art school and can't afford to pay that there is something very, very wrong and it's not with the price. Many schools even have a site license and you only pay a nominal fee or nothing at all for Adobe CC while enrolled.
Creative Cloud allows you to purchase a valid license for Adobe products for a reasonable monthly fee. For the first time ever, I have a fully legal copy of Photoshop that I pay $10 a month to use (I still mostly use GIMP and ImageMagick though). More specialized software, like Illustrator, is more than that each month (something like $40/mo iirc), but if you need it for school, you can justify it.
I think Creative Cloud is cool because it actually makes it reasonable to pay for some Adobe software, and you don't have to sink a bunch of time into cracks, activation workarounds, etc. I don't know if it's going to actually be more profitable for Adobe long-term or not, but as a consumer, I'm excited to see them try something new.
KeyShot is more for static product rendering and not the animation market where RenderMan is targeted towards. So I am not sure there is direct competition between the two.
Agreed, but they've been marketing Keyshot at the animation space lately (it's all over their website) and I've heard it labeled as 'poor man's Pixar'.
So, now that RenderMan is free for non-commercial use, what do people consider to be the contenders for best-in-class free software for modeling, animating, lighting, etc?
If you wanted to set up an all-free environment to learn, what's the list of software you would use?
I'm highly biased, but I think our project, http://Clara.io, is the least hassle for learning the basics. But it is more limited than the other more established programs.
When (if at all) will it be feasible for a bunch of kids in a garage to make a feature-length realistic-cgi movie? Assume computing on the public cloud, movie budget $1M.
The big cost driver in making animated movies isn't in the software or tools anymore and hasn't been for some time now; the big cost driver is now in authoring all of the assets and content that you actually need. It takes a ton of really talented people to create all of the photorealistic detailed models and animation and whatnot that you need to make a CG movie look convincing. Artists are way more expensive than computers.
OT: So I was in Marin County in 1991, all excited after my interview with Pixar. Drove my rented car back to the airport and hopped on the shuttle to the main terminal. Sat in my seat just as another guy struggled to get a box I recognized as a small computer onto the shuttle. He looked at me and I got up to help him but he had someone he was with who showed up and they lifted the box together.
The first guy was Ed Catmull in the video. I recognized him but didn't know what to say and he got into some deep discussion with the guy he was with.
I got a job offer from Pixar but turned them down cause my first child was just born and, at the time, I was concerned about Pixar's stability which turned out to be a correct belief back then. Instead, I accepted an offer from Silicon Graphics and sat next to Jim Clark in the lunch room as much as I could.
Do you have any advice for non-graphics programmers (save for a few undergrad/grad classes) who want to get into graphics? (still fairly early in career)
To add a note, if other readers don't know who Catmull and Clark are, you may have heard of Catmull-Clark subdivision, which is (I think) one of the more interesting geometry algorithms out there, as simple as it is.