Unbiased GPU Rendering – What’s the Big Deal?
Following yesterday’s article about Unbiased GPU Rendering with FStorm by Daniel Reuterswärd, here is the first of three articles Johannes Lindqvist is going to publish here about the same topic. So what is the big deal with Unbiased GPU Rendering? Let’s find out!
My name is Johannes Lindqvist, and I’m a Swedish architectural visualization artist focusing mainly on interiors. I’ve been into 3D for the past 14 years, 8 of them professionally.
About 5 of those I spent working for IKEA, doing 3D images for their catalog and website. Today I’m working as a solo 3D artist for an awesome advertising agency in Sweden called “Creative Army”.
This article, in three parts, will focus on two renderers: Octane Render and FStorm Render.
Part one, this one, is fairly general about GPU rendering as a concept, and part two is more of a comparison between the two. The images you see throughout parts one and two are just a selection of my works made in those two renderers.
In part three, I will briefly go through how to use Octane: how to set it up, where to find stuff, how to add an HDRI, how to set up the camera and so on, so you guys know what you need to be able to test it properly.
Also make sure to check out Daniel Reuterswärd’s article about FStorm from yesterday.
What got me into unbiased GPU rendering
Everyone is different. Everyone prefers different ways to work, and everyone needs different tools. As I’ve been in the interior arch-viz market for most of my 3D career, I’ve been struggling for a long time to evolve and become better.
Before resigning from IKEA, I had always used V-Ray. It was the only certain choice of renderer if I wanted to have a future in the business, and for many it still is. V-Ray was, and is, an industry standard, and I never questioned it, nor believed things could be better. It’s a top-of-the-line renderer, and when it comes to features and workflows, my belief was that if V-Ray doesn’t do it, then it can’t be done.
I would, of course, find out that I was very wrong, and my struggle for improvement would soon turn into a really fun, rocket fuelled journey.
In 2014, while working at IKEA, I figured I wanted to try my wings ‘in the real world’, outside of IKEA’s safe and comfortable shelter. So I resigned and took employment at my current workplace, becoming the first and only 3D artist in the company.
This, of course, meant that I had no rules to follow and no demands to live up to. I could write my own rules and make my own decisions. As long as I deliver a result, no one cares how I do it. So I quickly decided to try Octane Render. Made without any high expectations, that very decision turned out to be the best of my whole 3D career.
“In V-Ray, I’m a technician, in Octane/FStorm I’m a digital photographer”
Starting out with Octane, the first thing that hit me as a core V-Ray user for the past 12 years is that there are no settings. Well, of course there are settings, but compared to V-Ray, there are very few.
Unbiased renderers (even CPU ones like Maxwell Render) are meant to be physically correct, meaning that they don’t cheat. And since they don’t cheat, there aren’t really many settings we have to care about. Simply put, they are what they are, and they are bloody good at it.
As I work mainly in the architectural visualization field focusing on interiors, the most important thing for me is to be able to work as a photographer. As I usually say, in V-Ray I always felt like a technician: there are so many settings to take care of, and so much knowledge is needed to solve different bugs and problems.
In Octane, however, I could be that digital photographer. I didn’t have to care about the technical stuff; I could spend all of my project time on shaders, camera angles, lighting, and composition instead of trying to work out the correct image sampler settings. I could focus entirely on the creative part and be an artist, instead of troubleshooting things.
What Octane and FStorm gave me that V-Ray couldn’t
- A fantastic real-time renderer with a live render region.
- Insanely fast previews of the result while working.
- Unbiased: Super easy setup, hardly any settings to care about, it just gives me realism out of the box.
- More or less final images directly from the renderer.
- Change exposure, white balance, and other camera settings without having to re-render.
- Beautiful, error free GI calculations, no splotches or glitches.
- Lens effects like glow and glare calculated in real time.
- White balance and camera focus pickers directly in the frame buffer.
Let me explain a few of those:
Real Time Rendering
The most important feature of Octane and FStorm, the real-time rendering, is in no way unique. V-Ray and Corona have their versions of it as well. Of course, I’ve tried them, but for reasons that I have no space to talk about in this article, I didn’t find them nearly as nice to work with.
However, the real-time rendering in Octane and FStorm is very responsive and allows me to “see my scene” with correct lighting and shading while I work, instead of just a wire frame viewport. It really speeds up the lighting and shading workflow.
With classic V-Ray, my material creation process used to look something like this:
- Make a material.
- Hit render.
- Wait for the scene to load.
- See that the material needs tweaking.
- Abort the render.
- Wait for V-Ray to stop rendering and 3ds Max to unfreeze.
- Tweak my material.
- Hit render…
…and like that, the loop went on until that material was done, then it was time to make the next one.
In Octane and FStorm, however, the process looks more like this:
- Start the RT renderer.
- Make my material.
- Tweak it to perfection, all while seeing the result in real time as I change the values.
- The material is done, and since it goes so fast, I actually have time to perfect it way better than I would in V-Ray.
No waiting, no wasting time, just effectiveness defined. Now, of course, this is possible to do with V-Ray RT as well, but I find the workflow much more convenient and responsive in Octane and FStorm.
Camera, Glow and Glare
A great thing with both of these renderers is that we can change almost every camera setting on the fly without the image having to re-render. The only setting we can’t change without re-rendering is the depth of field.
The same applies to the built-in glow and glare effects, which I’d say are some of the most important features. Having this directly in the render really allows me to almost entirely skip post processing. Usually, my final PSD files consist of 2-4 layers in total, and all of them are usually nothing more than small color corrections.
White Balance and Focus Pickers
Yep, that’s right. We can, directly in the real-time frame buffer, just click on the ongoing rendered image where we want the camera focus point to be, and it jumps there directly. No need at all to care about camera target position or measure the distance from the camera to the object that should be in focus. One click, and it’s done, the image is instantly re-rendered with the focus on the right spot.
Same goes with the white balance. Just select the white balance picker, and click around on the image until the colors are where you want them.
Hardware and Money
For now, both renderers require one or several CUDA-enabled GPUs (Nvidia). The best value so far has been the GeForce GTX 980 Ti, but now Nvidia is rolling out the new 10xx series (1070/1080), which will make a huge improvement. An important thing to consider while building a rig for GPU rendering is that all the geometry and textures will be loaded into GPU memory, so 6 GB of GPU memory is a minimum. Also, GPU memory is not additive, so having two 6 GB GPUs will still give you 6 GB and not 12 GB. This is because each GPU needs to load all scene assets to be able to render the image correctly.
Performance, however, scales linearly, so doubling the number of GPUs will double the render speed. Let’s say that you have one GTX 980 Ti in your workstation, which according to Daniel Reuterswärd’s (not so) scientific tests is more or less equal to how a decent Intel i7 processor performs in Corona, and you want more render power. If you were rendering with V-Ray, you’d have to buy a second computer with all that it means: motherboard, processor, PSU, cooling, hard drives, Windows license, 3ds Max license (or whatever you use), a second V-Ray license, extra licenses for all your plugins, etc. Do the math; it could cost you a small fortune.
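As a back-of-the-envelope sketch of the two hardware rules above (usable VRAM is limited by the smallest card, while speed scales with the number of cards), assuming identical cards and ideal linear scaling:

```python
# Illustrative sketch of the multi-GPU trade-off described above.
# Assumes identical cards and ideal linear scaling; real rigs vary.

def effective_vram_gb(cards_vram_gb):
    """VRAM is NOT additive: every card must hold the whole scene,
    so the usable pool is limited by the smallest card."""
    return min(cards_vram_gb)

def relative_speed(cards_vram_gb):
    """Render speed, by contrast, scales roughly linearly with
    the number of (identical) cards."""
    return len(cards_vram_gb)

rig = [6, 6, 6, 6]  # four hypothetical 6 GB cards, e.g. GTX 980 Ti
print(effective_vram_gb(rig))  # 6 -> still only 6 GB for the scene
print(relative_speed(rig))     # 4 -> roughly 4x the speed of one card
```

In other words, adding cards buys you speed but not headroom, which is why the memory discussion above matters so much.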
With GPU rendering, however (given that you have free PCI-E slots on your motherboard), all you have to do is buy another GPU. And if you want 4x the render power, buy 3 new GPUs instead of 3 computers and 3 of every license you use. And to use network rendering in Octane, all you have to do is install Octane Standalone on that render node. It doesn’t need to be able to reach the file paths or textures, and it doesn’t need any extra plugins, or 3ds Max, or anything else. Octane packs the scene, including textures, into an ORBX file, Octane Standalone’s own file format, and sends that to the render node.
As for FStorm and network rendering, that is not yet supported. Don’t forget that FStorm is much younger than Octane.
Negative aspects of GPU rendering
As you have probably figured out by now, the biggest (and, according to me, the only) real downside of GPU rendering is the limited memory. 6 GB in a 980 Ti compared to, let’s say, 64 GB of DDR memory can be a serious issue. Even though I’ve noticed that GPU renderers seem to be far more memory-efficient than V-Ray, 6 GB is sometimes not enough, which has forced me to do some texture optimizations. By spending 15 minutes optimizing all the scene textures, I’ve managed to lower the memory consumption to about 35-40% of the initial consumption without any visible difference in the image. More about texture optimizing in part three.
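The savings from texture optimization follow simple arithmetic: a texture loaded on the GPU is stored uncompressed, so its footprint is roughly width × height × bytes per pixel, no matter how small the file is on disk. A minimal sketch (the 4-bytes-per-pixel figure is an assumption; real renderers add mip-maps and other overhead):

```python
# Back-of-the-envelope VRAM math behind texture optimization.
# A texture on the GPU is uncompressed, so its footprint is roughly
# width * height * bytes_per_pixel, regardless of the on-disk file size.

def texture_vram_mb(width, height, bytes_per_pixel=4):
    """Approximate VRAM footprint of one uncompressed RGBA texture, in MB."""
    return width * height * bytes_per_pixel / (1024 ** 2)

full = texture_vram_mb(4096, 4096)  # a 4K texture
half = texture_vram_mb(2048, 2048)  # the same map at half resolution

print(full)  # 64.0 MB
print(half)  # 16.0 MB -> halving the resolution cuts VRAM to a quarter
```

This is why downscaling a handful of oversized maps can drop scene memory so dramatically: every halving of texture resolution quarters that texture’s footprint.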
Also, in the very near future, with Nvidia’s new Pascal architecture, we can expect a whole bunch of new GPUs with 12-24 GB or even more, and suddenly the only real downside of GPU rendering will be history.
Many believe that GPU renders are supposed to be super fast. And yes, they can be, in certain situations. What unbiased renderers have the most problem with is indirect lighting. Directly lit scenes, they eat like a beast, but indirect lighting can be a pain for them.
Studio images, product images, and outdoor images are all examples of things that can go super fast with an unbiased GPU renderer. I’ve rendered product shots with heavy metal blend materials at 5000 px resolution in less than 10 seconds, and I’ve rendered a city exterior at 5000 px in just over 4 minutes. But for interiors, which have a lot of indirect light, unbiased renderers tend not to be faster than any other renderer at all, as many still seem to believe. There are of course ways to optimize interior scenes in unbiased GPU renderers, like deleting walls, adding fill lights inside the room, etc., but I don’t like to work like that; I want to make it just as if it was shot in a real room.
But the big thing here, which I’ve explained so many times, is that everyone seems to think that render times are the only thing that matters. And this is where I tell those people to wake up, because they certainly are not.
My render times on 4x GTX 980 Ti are usually between 2-4 hours per image at delivery resolution, sometimes even more. Many shout that my render times are too long, and that V-Ray could render it faster. And yes, they are right, but let’s not forget that the final rendering itself is the smallest, simplest and shortest task of any project.
Even though my render times are long according to some, the GPU real time rendering allows me to set lighting and create shaders so much faster than I ever could in V-Ray, and since there are hardly any settings, I save a lot of time on that as well.
So in the end, even considering long render times, my TOTAL production time is what matters, and it has gotten so much shorter than it was before I started GPU rendering. Now I theoretically can (and actually have) make complete, photorealistic interiors from scratch in no more than 4 hours plus rendering. Of course, that’s not very common, since it all depends on what kind of material you get and how whiny the client is, but it can definitely be done.
Thanks for reading this far, and don’t forget to read the next two parts.
Don’t forget to be awesome!
Link to part three (TBD)
My Facebook page : https://www.facebook.com/JohannesL.Visualisation/
Download FStorm (free) : http://www.fstormrender.com/downloads
Download Octane (watermarked trial) : https://home.otoy.com/render/octane-render/demo/
FStorm Facebook group : www.facebook.com/groups/FStormGroup/
Octane Facebook group : www.facebook.com/groups/OctaneRender
Great write up! Lovely realistic renders too.
I only have a GTX 970 4 GB. Could you give me a ballpark figure of what the polygon count limit would be for this size of card? Could I, for example, render one of the interior scenes you show here with the black leather or grey sofa, but just with longer render times?
That’s all well & good but all the (lovely) scenes shown are basic and are nothing compared to what we render in house on a daily basis in terms of memory consumption.
mohinder Definitely, that is possible. My scenes in an unoptimized state usually take about 3.5-5 GB of memory, but by doing some optimizing I can easily get every single one of them down to 2.5 GB or less without it being too noticeable in the final result.
4 GB is really on the low side; even my 980 Ti 6 GB is on the low side. I am still limited, since I can’t do huge detailed scenes without optimizing.
But since FStorm is still free, I can only suggest you try it out. In part 2 I’m explaining how to optimize the scenes (which usually reduces memory consumption by up to 50-70%). I strongly believe that GPU rendering is the future, and since the amount of VRAM increases every time they launch a new card, those limitations will be gone in a few years, and then it’s not bad if you’ve already tried it out 🙂
macker2021 As long as there are memory limitations, GPU rendering isn’t for everyone; we all choose our tools based on the kind of work we do. Just like V-Ray (or whatever you use) is a better choice than unbiased GPU for your kind of work, unbiased GPU is better than V-Ray for my kind of work. 🙂
Though, we can expect GPU cards with upwards of 24 GB coming in decent price ranges in the coming years, and that development will certainly not stop there, which will undoubtedly make it possible to use GPU renderers even for those really heavy scenes.
Remember, it wasn’t long ago that 24 GB of DDR memory was considered a lot.
GPU renderers are still at the very beginning of their development, but it’s very likely that CPU rendering will be nothing but a memory in the future. Those who have the possibility to change to GPU should definitely try it out. 🙂
“V-Ray and Corona have their versions of it as well. Of course, I’ve tried them, but for reasons that I have no space to talk about in this article, I didn’t find them nearly as nice to work with.”
Maybe you can explain what the big difference is and why I should switch and learn another/new render engine instead of trying to optimize my scene in V-Ray for V-Ray RT, because what I’ve read earlier this week is that Vlado is turning V-Ray into an IPR renderer, which sounds quite interesting so far. http://www.evermotion.org/articles/show/10261/oldest
Hubl That thing must have a huge CPU render farm behind it 🙂
.Pixel Hubl Ok, but what about V-Ray RT? I don’t think that Chaos Group will miss the chance to improve this technology as well. So what is it that makes the author (http://www.livefyre.com/profile/34321787/) feel uncomfortable about it?
Hubl Bear in mind that everything in my article is my highly personal opinion, and there is no guarantee that everyone will agree with what I wrote. The best thing is of course to just try it out and create your own opinion about it, because ultimately, it’s always up to everyone to form their own opinions of things.
But I’ll gladly describe why I personally feel the way I do.
First off, V-Ray becoming an Interactive Photorealistic Renderer is a great step, but I presume it will still contain the exact same things that made me leave V-Ray: a shitload of settings. It’s not that I can’t handle the settings, because I was with V-Ray from maybe 2004 until two years ago, but having to spend time on render settings, tweaking and troubleshooting means less time for the creative parts. In V-Ray I always had to do a lot of tweaking, small test renders etc. before hitting final render, and it wasn’t unusual that I had to abort the final render at some point as well, because I found something was wrong.
In both FStorm and Octane, I never spend more than a minute tops on render settings, and the result always comes out perfect. As I wrote in the article, since an unbiased renderer doesn’t cheat, there is nothing that can go wrong, and nothing to care about tweaking. They just work out of the box and they give a fabulous result.
Now I don’t have to care about that at all and I can spend all my time on making nice lighting and shaders.
A second reason why I think GPU rendering is superior is that I don’t have to buy a new computer or any new licenses just to upgrade my rendering speed. Increasing rendering speed with CPU rendering will always be very expensive.
The third, and probably most important, reason is one I can’t describe; it’s just about the feeling. The interactive rendering in, for example, Octane just feels so much more responsive and quicker than V-Ray RT. I tried V-Ray RT recently, and after half an hour I gave up. I can’t really explain it, but it just didn’t feel as good. It felt sloppy, laggy and unresponsive, like walking in mud, and almost everyone I’ve talked to who has tried both agrees with me.
A side note to this is that I’ve always felt that the GI light looks more real in unbiased renderers than in V-Ray, which of course also applies to Corona and Maxwell.
I can of course not say anything about the video you linked, however, since it’s not released, but it sure looks a bit slow.
All about personal opinions, as mentioned, but as far as I know, every single person that I’ve introduced to Octane has made the change full time 🙂
Thanks for your reply. I look forward to part 2.
I agree with your insights and about GPU rendering being the way forward. Memory restrictions exist for the time being but this is overridden by my desire to create truly photorealistic images with minimal post. And your exceptional renders speak for themselves.
Very interesting insight. The workflow you’re presenting would definitely suit me more, I think. Although, being a Mac user, I have to wait for Octane Render 3.1 for it to support OpenCL. Any idea on when it will be released?
Looking forward to reading the next parts of your article.
Since you are so informed on GPU rendering maybe you can help me out on a hardware choice.
I am trying to decide between a GTX 970 4 GB or the newer GTX 1060 6 GB card.
Does render speed in FStorm depend mostly on the number of CUDA cores the graphics card has? I realise the extra memory the GTX 1060 has will help to render larger scenes, but it has 1280 CUDA cores compared to the 1664 of the GTX 970.
Which card would be faster to render with using FStorm, and how much of a difference in speed can I expect between the 970 & 1060?
If the GTX 1060 is slower, is the extra memory a good trade-off?
Which would you recommend?
Mind if you post your Workstation specifications? I would be interested what kind of hardware you are working with. Thanks in advance.
FlorianH Absolutely. Nothing fancy; the only thing that really matters is the 7x 980 Tis. Sadly, FStorm doesn’t have network rendering at the moment, so when using FStorm I can ‘only’ use 4.
Other than that, it’s an old X79 motherboard with an Intel i7-4930K, 64 GB RAM and two PSUs to power it all.
Horoma No idea. What I do know, however, is that you can of course get a couple of Nvidia cards in an external GPU box and use that with your Mac. It’s nothing that I’ve had any need for, so sadly I can’t give you any names or model suggestions. If you go to the Octane Facebook group and ask for Tom Glimps, however, he can definitely point you in the right direction.
mohinder I don’t know about informed; sure, I’ve been using it for a couple of years, but that doesn’t mean that everyone agrees with what I have to say 🙂
Anyway, at least for me, 4 GB would not be enough. Well, we can optimize our textures heavily, lowering the VRAM consumption by up to 60-70% without it actually being too visible in the render, but it’s not much fun having to think about that all the time.
So I’d say that 6 GB is a minimum, and that may even be too small for some kinds of work.
About render speed, it does. The number of CUDA cores more or less determines how fast it will be, but that’s not the whole truth, because cards can have different clock frequencies, making a card with fewer cores a bit faster. Although there shouldn’t be any gigantic differences.
Sadly, I haven’t actually seen any benchmarks of the 1060 yet. It has fewer CUDA cores than the 970, but a higher clock frequency, so I guess we can expect them to render at more or less the same speed.
However, as the 1060 has 2 GB more memory, which I’d say is crucial, I would personally not hesitate to get the 1060 instead of the 970, even if it might be a little bit slower.
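The cores-times-clock reasoning above can be sketched numerically. This is a naive estimate using the published reference boost clocks; comparing across architectures (Maxwell vs Pascal) this way is unreliable, so treat it purely as an illustration of the reasoning, not a benchmark:

```python
# Naive throughput estimate: speed ~ CUDA cores * clock frequency.
# Clock figures are the Nvidia reference boost clocks (MHz); actual
# cards and actual render speeds will differ.

def naive_throughput(cuda_cores, boost_clock_mhz):
    """Crude relative-speed figure of merit (cores * clock)."""
    return cuda_cores * boost_clock_mhz

gtx_970 = naive_throughput(1664, 1178)   # GTX 970: more cores, lower clock
gtx_1060 = naive_throughput(1280, 1708)  # GTX 1060: fewer cores, higher clock

# The two land in the same ballpark, with a slight edge to the 1060.
print(round(gtx_1060 / gtx_970, 2))
```

Which roughly matches the expectation above that the two cards should render at more or less the same speed.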
JohannesL FlorianH What kind of ventilation do you use to cool down these seven(! – crazy) 980ti’s?
For me, it is really difficult to understand which components work best with each other. Years ago I dealt with it, but nowadays I am out of the business, so I don’t have any clue 😉 Are there any prefabricated workstations with these specs around the Internet? Any ideas?
Thanks! Makes sense to be able to render larger scenes at the expense of some possible loss in render time.
Maybe by the time the 1060s are available from back order, there will be a few benchmarks to compare speeds between the two cards in renderers like Octane/FStorm.
You are producing great renderings! It would be nice to play around with FStorm or Octane a bit soon. I’m a V-Ray RT GPU user myself, so I find the article a bit sad. I’m not that experienced at all yet, but I have to say that I’m really happy with V-Ray GPU with the latest 3.4 update, with the denoiser and great support from the Chaos Group forum. I’m using a Titan X myself and have to say it takes a while to load all the files before the render starts, and since I just have 24 GB of RAM I’m in a bit of trouble; I should buy a 2011-3 motherboard and try 64 GB or even 128 GB soon, I think. I’m still an architecture student, however, so I’m not in a hurry. How much internal RAM do you use when loading, say, 6 GB of VRAM onto your GPU? But is there any V-Ray & V-Ray GPU user still out there? (Of course I know that there are.) Of all the articles I read here and there, nobody promotes V-Ray GPU.
allemyr Thanks mate! 🙂
V-Ray and V-Ray RT are not in any way bad renderers, and of course most people are still using them. It’s an industry standard and will continue to be so. I can’t speak for others, but the reason why I don’t promote V-Ray is because I don’t like V-Ray. In fact, I hate it. I’ve been using V-Ray from maybe 2004 until I left IKEA in 2014, which means 10 years.
In comparison with Octane, for example, V-Ray RT feels unresponsive and sloppy, and even if it produces great results at GPU rendering speeds, for me there are only downsides. Personally, I want it unbiased. Octane and FStorm offer me a world where I don’t have to care about settings.
In Octane/FStorm I spend 30 seconds on render settings and then I more or less never have to touch them again in that project. V-Ray is not unbiased, meaning that it’s cheating. And that’s a good thing for many businesses, like animation, VFX etc. where you need short render times. But I don’t care at all about render times; the only thing I care about is quality. And V-Ray simply just doesn’t give me quality out of the box. There’s always tweaking and tweaking, and even now, when I’ve learned how to reach photorealism better than I ever could before, I still can’t achieve it in V-Ray.
As to the question about how much DDR RAM I use, the answer is none. FStorm doesn’t use DDR RAM. Octane, however, has the possibility to place textures in DDR RAM, but Andrey has chosen not to spend time developing that for FStorm, since GPU VRAM is getting bigger all the time.
I really suggest you try an unbiased GPU renderer. I’m not saying that you will love it, but there’s a big chance you will, even if you’re pleased with V-Ray. As far as I know, every V-Ray user that I’ve helped get started and test Octane (people doing the same kind of work that I do) has abandoned V-Ray completely. 🙂
JohannesL allemyr Thanks for your response! 12 years is a pretty long time using the same render engine. I’ve only spent 5 and not used it regularly. I think V-Ray GPU is unbiased as well if you use Brute Force/Brute Force; with RT they simplified things a lot, and the only setting that is left is sort of a global noise threshold.
Happy to see your great work, and it would be cool to get there some time as well. For now, V-Ray GPU suits me fine, but I will try some other unbiased GPU engine soon. Well, first off I will be upgrading my rig a bit, because internal RAM peaks when doing my GPU renders on my Titan X, plus I might add a second one.
Amazing article. I’ve been using Octane since the first beta and have never been happier. Thank you for this amazing piece.
allemyr Yes, doing BF+BF will be unbiased (I suppose), but then I feel we lose the whole point of using V-Ray 🙂 I tried RT as late as a few weeks ago, and sure, they have done a few things with it, but in my opinion, the feeling is just not there yet. It still feels sloppy and laggy. I’m always careful to say that these are all my personal opinions, but more or less everyone that has tried both Octane and V-Ray RT says the same thing.
Are you doing heavy scenes? If my experiences are correct, Octane is more memory-efficient than V-Ray. In V-Ray I could more or less never get under 10-12 GB of RAM, but fitting my scenes in 6 GB of VRAM is no problem at all.
Try it whenever you can, I don’t think you will be disappointed. It takes years to learn vray, it takes hours to learn Octane. 🙂
Thanks @JohannesL for this one – interesting read.
🙂 What I really find important to say – and you do it also: GPU rendering is not all about speeeedd.
Of course, scaling up your farm quite cost-efficiently by “simply” throwing some GPUs into your rig comes to mind first, but in times of cheap Xeon chips it is quite easy to stay on par with CPU renderers like Corona speed-wise.
I am constantly checking developments in the engine world, also Octane and FStorm, and find it really just a matter of taste for the biggest part. Achieving photorealism (whatever that is) is not a question of the engine you use anymore; it is in fact quite easy, and it more and more depends on the content again, which in my opinion is a good thing.
(For a better understanding of what I mean, simply set up a simple scene in mental ray as you would do in Octane or V-Ray and hit F10 :))
I know a lot of people that “love” the tweaking possibilities in engines like V-Ray as much as you (and me) hate them.
And for some cases, having an open system without the restriction of “realism” is totally justified as well.
And of course it is absolutely possible to use super-hyper-realistic-GGX-PBR-Unbiased-SSS-realisticIOR technology to produce images that look absolutely cr*p.
In this sense: render on 🙂
JohannesL thank you for your reply. I’ll have a better look at the FB page and the possibility of getting extra hardware.
Love it! I’ve been switching to GPU rendering over the past 2 months; I had one GTX 970 and bought a GTX 1080, using the GTX 970 only for display, and I’m planning to get another 1080 now. Once you go GPU there is no way of turning back to the slowish CPU workflow; it’s just too good not to be used. Can’t wait for the parts about optimization!!
JohannesL allemyr I would say it “used to” take years to learn V-Ray. They’re really trying to streamline it, to get it to a point where you can just press render without doing anything. No longer do you have to choose your subdivisions for materials or the GI; it’s all done automatically.
I’m currently testing FStorm with a scene that I used V-Ray for, and I’m amazed at how fast it is. We currently use VRED to light and shade our cars, and it’s an insanely fast CPU real-time render engine, but for the cost of the software you could buy 30 GTX Titan Xs. Then you need a pretty good CPU to boot.
The only big limitation at the moment is GPU memory. We recently had to upgrade all our workstations to 128gb so we could fit all the data in one VRED scene. Once we start seeing affordable 24gb cards then I believe CPU rendering will soon be a thing of the past.
Amazing renderings ! I love the quote “In V-Ray, I’m a technician, in Octane/FStorm I’m a digital photographer”
But you are lucky to have work that allows that, and if some clients give you projects that don’t look good rendered realistically, you’d better be a technician (a cheater, if you want) if you want to make them happy.
I don’t get all the “hate” for V-Ray, and why it makes you want to “puke” (even if you repeat that it’s a personal opinion). I think I haven’t touched our presets in a long time; it’s probably still 5% biased, but they’ve come a long way from irradiance maps & light caches.
Autodesk is a pukable company, not Chaosgroup, please….
Hello! Can you share a leaves (trees) scene (.fbx, .3ds or .abc) including a very realistic texture, along with your render of that scene at your best capacity? For example like this:
I really want to test the same scene (same camera position) between Arnold/Renderman and FStorm/Octane, but I need scenes from an expert FStorm/Octane user. I really want a trustworthy comparison! Please help me if you don’t mind. Thank you!
Hi Johannes, thanks a lot for this article. I saw your work (Old Bedroom) on Behance. There you mentioned those images took almost 2-3 hours. As you explained in this article, GPU rendering is really fast when it comes to setting up shaders, lights, camera angles and composition, and in scenarios where the indirect light contribution to the final image is almost nil. But when it comes to indirect lighting calculation, is the time taken mostly close to CPU? Your reply will help a lot in many ways. For example, a studio which already has shaders and objects like sofas and chairs ready with materials just has to drop them into the interior scene and invest time in lighting; will the rendering time on GPU then be almost the same?
Thank you so much for taking the time to share this amazingly useful info from your experience. I’m in the process of changing my workflow to GPU rendering, and I’ve been a Maxwell Render user for a long time now, so knowing that Octane works pretty well is great to read. I would give Redshift a try too; it’s just that I don’t agree with, or find useful, all that time-consuming and even strange tweaking in V-Ray or Mental Ray. But considering I work in animation, Redshift, with the super fast results I’ve heard about and barely seen for myself, sounds really useful.
Thank you again!