Huge RAM usage during kernel compilation
Forum rules
Please upload a testscene that allows developers to reproduce the problem, and attach some images.
Re: Huge RAM usage during kernel compilation
A screenshot of the node tree would be nice, maybe it can be optimized.
Re: Huge RAM usage during kernel compilation
For future reference: procedural textures sometimes make OpenCL compilers mad, too.
Re: Huge RAM usage during kernel compilation
It was related to the nodes attached to that material. The material itself is a matte one... so nothing problematic.
The "problem" arises because attached to the material there are 30 image nodes + 30 "greater than" nodes + 30 mix RGB nodes and 1 raw object node.
For the moment I've split it into 4 different materials with just the 4 textures I need.
Now, I have some questions for Dade.
1 - When you speak about the "features" of a material to compile, are you referring to quality & quantity, or just quality? I mean, if I have 30 mix nodes in a material, does the compiler "see" 30 mix nodes during kernel compilation, or just "a mix node"?
2 - In my opinion, and from my daily experience and workflow, a node that can handle multiple textures, with the assignment driven by a face ID or object ID, would be really useful and a timesaver. I'm referring to this Vray texture. It's useful because you can reduce the number of materials and switch quickly to another texture just by changing the ID number. In my projects I often have sets that differ only in a graphic or a color. This way I can create a pool of textures attached to a single material and then, following the design needs, just switch the ID and it's done. Or, for example, for the color of the shoes (or apparel) I can use a random script to change the object IDs, greatly reducing the time spent.
If I use separate materials instead, I have to name them one by one and spend more time on the random assignments.
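The workflow described above can be sketched in a few lines of plain Python. This is only an illustration of the idea, not actual BlendLuxCore or Vray API: all names (the texture pool, `resolve_texture`, `randomize_ids`) are hypothetical. The point is that the ID-to-texture lookup can happen once, at export time, so each object ends up with an ordinary single-texture material.

```python
import random

# Hypothetical texture pool: object/face ID -> texture file.
# Switching a set's look means changing an ID, not renaming
# materials one by one.
TEXTURE_POOL = {
    0: "graphic_red.png",
    1: "graphic_blue.png",
    2: "graphic_green.png",
}

def resolve_texture(object_id):
    """Pick the texture for an object at export time (statically),
    so the compiled render kernel only ever sees one image node."""
    return TEXTURE_POOL[object_id % len(TEXTURE_POOL)]

def randomize_ids(object_names, seed=42):
    """Sketch of the 'random script' workflow: assign random IDs
    to objects (e.g. shoes) to vary their textures quickly."""
    rng = random.Random(seed)
    return {name: rng.randrange(len(TEXTURE_POOL)) for name in object_names}

# One material per object, resolved from its ID at export time.
ids = randomize_ids(["shoe_L", "shoe_R", "hat"])
materials = {name: resolve_texture(oid) for name, oid in ids.items()}
```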
About kernel compiling
OK, now I know that Nvidia needs a complete recompile for every new render, or at least every time I add a single new feature... but I really hope you'll find a way to compile the kernel only once, like Vray does. It's super annoying and time-consuming to go through it every time you press render... and I think this blocks users from using the GPU-accelerated viewport render for the same reason, which is a pity, because a fast, less noisy preview would be useful.
Re: Huge RAM usage during kernel compilation
marcatore wrote: ↑Wed Aug 07, 2019 11:54 am
Now, I've questions for Dade.
1 - When you speak about "features" of a material to compile...are you referencing the quality & quantity or just quality? I mean, if I have 30 mix nodes in a material, during the kernel compilation, the compiler "sees" 30 mix nodes or just "a mix node"?

30 different mix nodes.

marcatore wrote: ↑Wed Aug 07, 2019 11:54 am
2 - In my opinion, and for my daily experience and workflow, I consider really useful and timesaver to have a node that can handle multi textures with an assignment driven by faceID or Object ID. [...]

It is something to do statically at export time by BlendLuxCore; doing it at runtime, during the rendering, for every single sample, is very, very wrong and a performance killer. This kind of stuff must be resolved statically.

marcatore wrote: ↑Wed Aug 07, 2019 11:54 am
About kernel compiling
Ok, now I know that Nvidia needs a complete recompiling for every new render...but I really hope you'll find a way for a onetime kernel compilation like Vray did. [...]

This is planned for v2.3.
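The "compile once" idea that is being asked for here boils down to caching compiled kernels keyed by the set of features they support, so pressing render with an already-seen feature combination reuses the binary instead of recompiling. A toy sketch of that mechanism (hypothetical class and method names, not the actual LuxCore kernel cache):

```python
import hashlib

class KernelCache:
    """Toy model of a compiled-kernel cache: the key is a hash of
    the feature set (material/texture types the kernel must support),
    so only a never-before-seen combination triggers a compile."""

    def __init__(self):
        self._cache = {}
        self.compile_count = 0

    def _key(self, features):
        # Order-independent hash of the enabled features.
        blob = ",".join(sorted(features)).encode()
        return hashlib.sha256(blob).hexdigest()

    def get_kernel(self, features):
        key = self._key(features)
        if key not in self._cache:
            self.compile_count += 1  # stands in for the slow OpenCL/CUDA compile
            self._cache[key] = f"binary-for-{key[:8]}"
        return self._cache[key]

cache = KernelCache()
cache.get_kernel({"matte", "mix", "imagemap"})
cache.get_kernel({"imagemap", "matte", "mix"})  # same set -> cache hit, no recompile
```

With such a cache, a viewport render that only toggles already-compiled features starts immediately; adding a genuinely new node type still pays the compile cost once.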
Re: Huge RAM usage during kernel compilation
Dade wrote: ↑Wed Aug 07, 2019 12:52 pm
30 different mix nodes. [...] It is something to do statically at export time by BlendLuxCore, doing it at runtime, during the rendering, for every single sample, is very very wrong and a performance killer. [...] This kind of stuff must be resolved statically. [...] This is planned for v2.3.

1. OK, understood. Now it's clearer.
2. Yes... doing it at runtime really seems useless; I hope it will be done statically. What do you think, BYOB?
3. Nice. It will be a great step for users.
Thank you
Re: Huge RAM usage during kernel compilation
I can't look into it right now; can you create a GitHub issue about it to remind me in the future?