Hi,
When I read that the heterogeneous volume rendering was reworked, I started doing some tests with it.
Here's what I came up with:
It's working really well now, also thanks to the new adaptive Sobol sampler.
One thing I found a bit hard is that exporting a simulation done with an adaptive domain isn't possible yet. Editing the material settings is also a bit tedious because the export takes a while, so in the end I didn't use as high a resolution as would probably have been necessary to make the smoke come out sharper.
Nevertheless, great work: it brings smoke rendering into the range of the possible. It doesn't take forever and looks better than ever.
Render settings:
total depth: 6
engine: Path OpenCL + Sobol
adaptive strength: 0.8
render time: about half an hour
Fire and Smoke test
Re: Fire and Smoke test
Beautiful test!
How much GPU memory does it take to render an image like this? (see below)
Enty wrote: ↑Thu Apr 05, 2018 8:17 am A thing that I found a bit hard was that exporting a simulation done with adaptive domain isn't possible yet. Also editing the material settings is a bit hard because the export takes a while, so in the end I didn't use as much high resolution as would probably have been necessary to make the smoke come out sharper.

The upcoming support for OpenVDB should improve the export/import times a lot. However, OpenVDB will be used only as a storage format; the data will still be baked into memory as a regular grid, so memory usage will still be high. The next steps from there will be:
1) use OpenVDB to store the data in memory too (however, this is a CPU-only option because OpenVDB is not available for GPUs/OpenCL).
2) add support for some kind of adaptive grid to save memory.
#2 looks like the best option, but it is going to be a non-trivial amount of work.
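To get a feel for why a dense regular grid is expensive and why a sparse/adaptive grid (option #2) would help, here is a rough back-of-the-envelope sketch. The resolution and the 5% occupancy figure are invented for illustration; they are not taken from the scene above.

```python
import numpy as np

# Dense (regular) grid: every voxel is stored, empty or not.
nx, ny, nz = 256, 256, 256                 # hypothetical domain resolution
dense = np.zeros((nx, ny, nz), dtype=np.float32)
dense_mb = dense.nbytes / 1024**2          # 256^3 * 4 bytes = 64 MiB per channel

# A smoke sim is mostly empty space; assume only ~5% of voxels are non-zero.
# Sparse storage as (index, value) pairs: 4-byte int32 index + 4-byte float.
occupancy = 0.05
sparse_mb = nx * ny * nz * occupancy * (4 + 4) / 1024**2

print(f"dense:  {dense_mb:.1f} MiB")       # 64.0 MiB
print(f"sparse: {sparse_mb:.1f} MiB")      # 6.4 MiB
```

And that is a single float channel; a fire sim stores several grids (density, heat, flame), so the dense cost multiplies accordingly.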
Re: Fire and Smoke test
Do you have a sample scene? Here, the export with an adaptive domain works as expected.
Re: Fire and Smoke test
Enty wrote: ↑Thu Apr 05, 2018 8:17 am When I read that the heterogeneous volume rendering was reworked, I started doing some tests with it. Here's what I came up with:

That said, this is the best smoke sim I've ever seen in Lux.
Re: Fire and Smoke test
If I remember correctly, I tested with beta1. What version did you use?
I'm also baking again with the adaptive domain to see if it works with beta2.
OK, so in beta2 it also works here with the adaptive domain simulation. Here it is with the adaptive smoke domain. It looks different, but I think that's because the smoke sim itself is different.
(also moved the area light out of view)
Here's a screenshot showing the rendering of the adaptive sim as well as the GPU stats and the node setup for the volume used.
Re: Fire and Smoke test
It would be interesting to check the amount of GPU memory used (only by LuxCore). I'm afraid it is a statistic currently available only in LuxCoreUI and not yet supported in BlendLuxCore. To access it, download the standalone version of LuxCoreRender, export the scene in text or binary format from BlendLuxCore, load the scene in LuxCoreUI to start the render, and finally press "j" to access the complete statistics.
@B.Y.O.B., how is the translation from adaptive grid to regular grid handled? Is the resolution defined by the user or by Blender? Or does Blender's adaptive domain simulation still generate a regular grid at the end?
Re: Fire and Smoke test
It is completely transparent to us. In our code we don't even notice that the user is using an adaptive domain, Blender hides this perfectly from us.
Smoke export code: https://github.com/LuxCoreRender/BlendL ... t/smoke.py
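Since Blender always hands over a plain regular grid, the exporter only has to deal with a flat array of voxel values. A tiny sketch of the usual flat-index convention for such a grid (the function name and the x-fastest layout are illustrative assumptions, not quoted from smoke.py):

```python
def voxel_index(x, y, z, nx, ny):
    """Flat index of voxel (x, y, z) in a regular grid stored
    x-fastest, then y, then z (a common dense-grid layout)."""
    return x + y * nx + z * nx * ny

# Tiny 4x3x2 grid as a flat list, to show the mapping.
nx, ny, nz = 4, 3, 2
grid = [0.0] * (nx * ny * nz)
grid[voxel_index(1, 2, 1, nx, ny)] = 0.7   # set the density of one voxel
print(voxel_index(1, 2, 1, nx, ny))        # 21
```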
About data structures: isn't a mostly-empty smoke simulation basically a sparse matrix?
Maybe there's an efficient data structure for those that we could implement?
Or we could look at OpenVDB.
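To make the sparse-matrix idea concrete, here is a toy block-sparse voxel grid, loosely inspired by the tiled layout OpenVDB uses: only the 8x8x8 blocks that actually contain data are allocated, and everything else reads back as zero. This is a sketch of the general technique, not LuxCore's or OpenVDB's actual implementation.

```python
import numpy as np

class BlockSparseGrid:
    """Toy block-sparse voxel grid: unallocated space is implicitly empty."""

    BLOCK = 8  # edge length of one block, in voxels

    def __init__(self):
        self.blocks = {}  # (bx, by, bz) -> 8x8x8 float32 array

    def set(self, x, y, z, value):
        key = (x // self.BLOCK, y // self.BLOCK, z // self.BLOCK)
        # Allocate the block lazily, only when it first receives data.
        block = self.blocks.setdefault(
            key, np.zeros((self.BLOCK,) * 3, dtype=np.float32))
        block[x % self.BLOCK, y % self.BLOCK, z % self.BLOCK] = value

    def get(self, x, y, z):
        key = (x // self.BLOCK, y // self.BLOCK, z // self.BLOCK)
        block = self.blocks.get(key)
        if block is None:
            return 0.0  # no block allocated here: empty space
        return float(block[x % self.BLOCK, y % self.BLOCK, z % self.BLOCK])

grid = BlockSparseGrid()
grid.set(130, 7, 64, 0.5)
print(grid.get(130, 7, 64))   # 0.5
print(grid.get(0, 0, 0))      # 0.0
print(len(grid.blocks))       # 1 block allocated instead of a full dense volume
```

For a sim where most of the domain is empty, the memory cost scales with the occupied blocks rather than with the full bounding box.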
Re: Fire and Smoke test
Dade wrote: ↑Thu Apr 05, 2018 2:03 pm It would be interesting to check the amount of GPU memory used (only by LuxCore). I'm afraid it is a statistics currently available only in LuxCoreUI and not yet supported BlendLuxCore. To access LuxCoreUI, you have to download the stand alone version of LuxCoreRender, export the scene in text or binary format from BlendLuxCore and then load the scene in LuxCoreUI to start the render and finally press "j" to access the complete statistics.

Here's the scene in LuxCoreUI: Are the 847M in the first line of the used intersection devices the GPU memory used?
Side note:
Before loading my scene, I tried the test scene that ships with LuxCoreUI, but it wasn't loading.
When I then loaded my scene, the headline wasn't there.
Re: Fire and Smoke test
Yes, exactly. It is not a small amount of memory: you can render about 4-5 million triangles plus texture maps with about the same amount of memory. But it is somewhat expected because of the regular grid storage. I'm also going to add another pretty simple feature to reduce memory usage: storing the values in half format (16-bit floating point). It is trivial to do and cuts the amount of memory required in half.
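The half-format idea is easy to illustrate: converting a density grid from float32 to float16 halves its footprint, and since smoke densities live in [0, 1] the lost precision is small. The resolution and values below are invented for illustration.

```python
import numpy as np

# Hypothetical 128^3 density channel, filled with random values in [0, 1).
rng = np.random.default_rng(0)
density32 = rng.random((128, 128, 128), dtype=np.float32)
density16 = density32.astype(np.float16)   # "half format"

print(density32.nbytes // 1024**2)         # 8 MiB
print(density16.nbytes // 1024**2)         # 4 MiB

# float16 keeps ~3 decimal digits of precision, plenty for render-time
# density lookups; the worst-case rounding error here stays below 1e-3.
err = np.max(np.abs(density32 - density16.astype(np.float32)))
print(err < 1e-3)                          # True
```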
It is supposed to be run from the main directory, as explained in https://github.com/LuxCoreRender/LuxCor ... /README.md, with:
Code: Select all
./bin/luxcoreui scenes/cornell/cornell.cfg
Do you mean the menu bar? It seems to be a bug in ImGUI that happens only on Windows (I still have to investigate): the menu is still there and will pop up as soon as you click on it.
Re: Fire and Smoke test
Or you could add a fallback check during scene loading that searches the directory of the .cfg for the other files.
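The suggested fallback could look something like this: try the referenced path as-is first, and if it doesn't exist, look for a file with the same name next to the .cfg. The function name and exact behavior are hypothetical, not LuxCore's actual loader.

```python
import os

def resolve_scene_path(cfg_path, referenced_path):
    """Resolve a file referenced by a .cfg: use it directly if it exists,
    otherwise fall back to the directory the .cfg itself lives in."""
    if os.path.exists(referenced_path):
        return referenced_path
    fallback = os.path.join(os.path.dirname(os.path.abspath(cfg_path)),
                            os.path.basename(referenced_path))
    if os.path.exists(fallback):
        return fallback
    raise FileNotFoundError(referenced_path)
```

With that in place, `luxcoreui scenes/cornell/cornell.cfg` would still find the scene files even when it isn't started from the main directory.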