
Fire and Smoke test

Posted: Thu Apr 05, 2018 8:17 am
by Enty
Hi,

When I read that the heterogeneous volume rendering was reworked, I started doing some tests with it.
Here's what I came up with:
2018-04-03_FireAndSmoke_LCR.png
It's working really well now, also thanks to the new adaptive Sobol sampler.

One thing I found a bit hard was that exporting a simulation done with an adaptive domain isn't possible yet. Editing the material settings is also a bit tedious because the export takes a while, so in the end I didn't use as high a resolution as would probably have been necessary to make the smoke come out sharper.

Nevertheless, great work. It brings smoke rendering into the realm of the possible: it doesn't take forever and it looks better than ever.

Render settings:
total depth: 6
engine: Path OpenCL + Sobol
adaptive strength: 0.8
render time: about half an hour

Re: Fire and Smoke test

Posted: Thu Apr 05, 2018 8:42 am
by Dade
Beautiful test :!:

How much GPU memory does it take to render an image like this (see below) :?:
Enty wrote: Thu Apr 05, 2018 8:17 am One thing I found a bit hard was that exporting a simulation done with an adaptive domain isn't possible yet. Editing the material settings is also a bit tedious because the export takes a while, so in the end I didn't use as high a resolution as would probably have been necessary to make the smoke come out sharper.
The upcoming support for OpenVDB should improve the export/import times a lot. However, OpenVDB will be used only as a storage format and the data will still be baked into memory as a regular grid, so the memory usage will still be high. The further steps from there will be:

1) Use OpenVDB to store the data in memory too (however, this is a CPU-only option because OpenVDB is not available for GPUs/OpenCL).

2) Add support for some kind of adaptive grid to save memory.

#2 looks like the best option, but it is going to be a non-trivial amount of work.
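
As a rough back-of-the-envelope illustration of why the regular grid is so memory hungry (the resolution and channel list below are made-up example values, not taken from Enty's scene), a dense grid costs width x height x depth x channels x 4 bytes:

Code: Select all

import numpy as np

# Toy estimate of the memory needed to bake a smoke sim into a dense regular grid.
nx = ny = nz = 256                                 # voxel resolution of the domain
channels = {"density": 1, "heat": 1, "color": 3}   # float values stored per voxel

voxels = nx * ny * nz
bytes_per_value = np.dtype(np.float32).itemsize    # 4 bytes
total = voxels * sum(channels.values()) * bytes_per_value
print(f"{total / 1024 ** 2:.0f} MiB")              # ~320 MiB for a 256^3 domain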

Re: Fire and Smoke test

Posted: Thu Apr 05, 2018 11:20 am
by neo2068
Enty wrote: Thu Apr 05, 2018 8:17 am One thing I found a bit hard was that exporting a simulation done with an adaptive domain isn't possible yet.
Do you have a sample scene? On my machine, the export with an adaptive domain works as expected.

Re: Fire and Smoke test

Posted: Thu Apr 05, 2018 11:58 am
by Sharlybg
When I read that the heterogeneous volume rendering was reworked, I started doing some tests with it.
Here's what I came up with:
That said, this is the best smoke sim I've ever seen in Lux :ugeek:

Re: Fire and Smoke test

Posted: Thu Apr 05, 2018 1:39 pm
by Enty
neo2068 wrote: Thu Apr 05, 2018 11:20 am
Enty wrote: Thu Apr 05, 2018 8:17 am One thing I found a bit hard was that exporting a simulation done with an adaptive domain isn't possible yet.
Do you have a sample scene? On my machine, the export with an adaptive domain works as expected.
If I remember correctly, I tested with beta1. What version did you use?
I'm also baking again with an adaptive domain to see if it works with beta2.

Ok, so in beta2 it also works here with the adaptive domain simulation.
2018-04-05_FireAndSmoke_adaptive_LCR.png
That's it with the adaptive smoke domain. It looks different, but I think that's because the smoke sim itself is different.
(also moved the area light out of view)

Dade wrote: Thu Apr 05, 2018 8:42 am How much GPU memory does it take to render an image like this (see below) :?:
Here's a screenshot showing the rendering of the adaptive sim as well as the GPU stats and the node setup for the volume used.
FireAndSmoke_snapshot.jpeg

Re: Fire and Smoke test

Posted: Thu Apr 05, 2018 2:03 pm
by Dade
Enty wrote: Thu Apr 05, 2018 1:39 pm Here's a screenshot showing the rendering of the adaptive sim as well as the GPU stats and the node setup for the volume
It would be interesting to check the amount of GPU memory used by LuxCore alone. I'm afraid that is a statistic currently available only in LuxCoreUI and not yet supported in BlendLuxCore. To access LuxCoreUI, you have to download the standalone version of LuxCoreRender, export the scene in text or binary format from BlendLuxCore, load the scene in LuxCoreUI to start the render, and finally press "j" to access the complete statistics.

@B.Y.O.B., how is the translation from adaptive grid to regular grid handled? Is the resolution defined by the user or by Blender? Or does the Blender adaptive domain simulation still generate a regular grid at the end?

Re: Fire and Smoke test

Posted: Thu Apr 05, 2018 3:38 pm
by B.Y.O.B.
Dade wrote: Thu Apr 05, 2018 2:03 pm @B.Y.O.B., how is the translation from adaptive grid to regular grid handled? Is the resolution defined by the user or by Blender? Or does the Blender adaptive domain simulation still generate a regular grid at the end?
It is completely transparent to us: in our code we don't even notice that the user is using an adaptive domain, Blender hides it perfectly from us.
Smoke export code: https://github.com/LuxCoreRender/BlendL ... t/smoke.py

About data structures, isn't a mostly-empty smoke simulation basically a sparse matrix?
Maybe there's an efficient data structure for those that we could implement?
Or we could look at OpenVDB.
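
As a quick sketch of that idea (plain Python with made-up names, nothing LuxCore-specific): store only the tiles of the grid that actually contain smoke and return a background value everywhere else:

Code: Select all

import numpy as np

class BlockSparseGrid:
    """Toy sparse density grid: only the 8x8x8 tiles that hold data are stored."""

    def __init__(self, block=8, background=0.0):
        self.block = block
        self.background = background
        self.tiles = {}   # (tile_x, tile_y, tile_z) -> dense float32 block

    def _locate(self, x, y, z):
        b = self.block
        return (x // b, y // b, z // b), (x % b, y % b, z % b)

    def set(self, x, y, z, value):
        key, (i, j, k) = self._locate(x, y, z)
        tile = self.tiles.setdefault(
            key, np.full((self.block,) * 3, self.background, dtype=np.float32))
        tile[i, j, k] = value

    def get(self, x, y, z):
        key, (i, j, k) = self._locate(x, y, z)
        tile = self.tiles.get(key)
        return self.background if tile is None else float(tile[i, j, k])

    def memory_bytes(self):
        return sum(t.nbytes for t in self.tiles.values())

# A mostly-empty 256^3 volume: dense float32 storage would be 64 MiB,
# but here we only pay 2 KiB for the single occupied tile.
grid = BlockSparseGrid()
grid.set(10, 20, 30, 0.5)
print(grid.get(10, 20, 30), grid.memory_bytes())

Roughly speaking, OpenVDB is a much more sophisticated version of this (a shallow tree of tiles instead of a flat dictionary), which is why it handles mostly-empty volumes so well.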

Re: Fire and Smoke test

Posted: Thu Apr 05, 2018 4:05 pm
by Enty
Dade wrote: Thu Apr 05, 2018 2:03 pm It would be interesting to check the amount of GPU memory used by LuxCore alone. I'm afraid that is a statistic currently available only in LuxCoreUI and not yet supported in BlendLuxCore. To access LuxCoreUI, you have to download the standalone version of LuxCoreRender, export the scene in text or binary format from BlendLuxCore, load the scene in LuxCoreUI to start the render, and finally press "j" to access the complete statistics.
Here's the scene in LuxCoreUI:
FireAndSmoke_LCRUI_snapshot.jpeg
Is the 847M in the first line of the used intersection devices the GPU memory used?

Side note:
Before loading my scene, I tried the test scene that ships with LuxCoreUI, but it wouldn't load.
When I then loaded my scene, the headline wasn't there.

Re: Fire and Smoke test

Posted: Thu Apr 05, 2018 4:55 pm
by Dade
Enty wrote: Thu Apr 05, 2018 4:05 pm Is the 847M in the first line of the used intersection devices the GPU memory used?
Yes, exactly. It is not a small amount of memory: you can render about 4-5 million triangles plus texture maps with about the same amount of memory. But it is somewhat expected because of the regular grid storage. I'm also going to add another pretty simple feature to reduce memory usage: storing the values in half format (16-bit floating point). It is trivial to do and cuts the amount of memory required in half.
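
As a tiny sanity check of the half-float idea (the 256^3 resolution is just an example, not the grid from Enty's scene):

Code: Select all

import numpy as np

# The same density grid stored as 16-bit floats needs exactly half the memory
# of the 32-bit version, at the cost of a little precision.
density32 = np.random.rand(256, 256, 256).astype(np.float32)
density16 = density32.astype(np.float16)

print(density32.nbytes / 1024 ** 2)   # 64.0 MiB
print(density16.nbytes / 1024 ** 2)   # 32.0 MiB
print(np.abs(density32 - density16.astype(np.float32)).max())  # well below 1e-3 for values in [0, 1)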
Enty wrote: Thu Apr 05, 2018 4:05 pm Before loading my scene, I tried the test scene that ships with LuxCoreUI, but it wouldn't load.
It is supposed to be run from the main directory, as explained in https://github.com/LuxCoreRender/LuxCor ... /README.md, with:

Code: Select all

./bin/luxcoreui scenes/cornell/cornell.cfg
However, 101% of users will try to load the render.cfg directly, and the paths referenced from there will then be wrong. I really must change the demo scene paths.
Enty wrote: Thu Apr 05, 2018 4:05 pm When I then loaded my scene, the headline wasn't there.
Do you mean the menu bar? It seems to be a bug in ImGui that happens only on Windows (I still have to investigate): the menu is still there and will pop up as soon as you click on it.

Re: Fire and Smoke test

Posted: Thu Apr 05, 2018 5:19 pm
by B.Y.O.B.
Dade wrote: Thu Apr 05, 2018 4:55 pm However, 101% of users will try to load the render.cfg directly, and the paths referenced from there will then be wrong. I really must change the demo scene paths.
Or you could add a fallback check during the scene loading that searches the directory of the .cfg for the other files.
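
Something like this, maybe (hypothetical names, not the actual loading code): if a file referenced by the .cfg is not found as given, retry relative to the directory that contains the .cfg itself:

Code: Select all

import os

def resolve_scene_file(cfg_path, referenced_path):
    """Hypothetical fallback: look for a missing referenced file next to the .cfg."""
    if os.path.exists(referenced_path):
        return referenced_path

    cfg_dir = os.path.dirname(os.path.abspath(cfg_path))
    for candidate in (os.path.join(cfg_dir, referenced_path),
                      os.path.join(cfg_dir, os.path.basename(referenced_path))):
        if os.path.exists(candidate):
            return candidate

    raise FileNotFoundError(referenced_path)

# e.g. resolve_scene_file("scenes/cornell/cornell.cfg", "scenes/cornell/cornell.scn")

That way cornell.cfg would keep working no matter which directory LuxCoreUI is launched from.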