Can I get the same result as I have now with glossy translucent? It's working fine for the curtains without involving SSS... How should I get the same result with the Disney material?
PhotonGI cache
Re: PhotonGI cache
This is a glossy Cornell box with 8.0:
and this with 0.0:
The difference is pretty obvious; how is it in your case?
The red zones with 0.0 are there because the Cornell scene is open and the rays bouncing off the front faces hit the sky (as I wrote, the indirect cache can kick in only from the second bounce).
Is your room scene open?
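For reference, the value being compared (8.0 vs 0.0) is presumably the PhotonGI indirect cache usage threshold scale. In a LuxCore render config it would be set with properties roughly like the following — a sketch only, so the exact property names should be double-checked against the LuxCore documentation for your version:

```ini
# Enable the PhotonGI indirect cache
path.photongi.indirect.enabled = 1
# 0.0 forces the cache to be used everywhere (useful for debugging);
# larger values fall back to brute force path tracing near geometric detail
path.photongi.indirect.usagethresholdscale = 8.0
```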
Re: PhotonGI cache
This is the scene you sent me with 8.0:
and this with 0.0:
Re: PhotonGI cache
About the room, you've already checked how it's done. In any case, it's two normal closed rooms, each with one open window.
About your image... well... I'd like to have something like you've got.
I've opened the scene I sent you.
To avoid possible problems with 2.9, I've opened the scene with 2.83.4 and, as in the previous tests, I'm using the latest BlendLuxCore 2.5 alpha.
I've changed "normal angle" to the default 10° and set the brute force radius scale to 0, and this is what I've got.
Re: PhotonGI cache
This is with 8.0 inside Blender with your parameters:
and this with 0.0:
The difference is there, and it is large.
There is a difference from the results I posted earlier because those were rendered outside Blender, without the color curve re-mapping/post-processing your scene seems to have.
This is a normal rendering outside vs. inside Blender:
As you can see, you are applying some kind of heavy tone mapping/color curve re-mapping/post-processing. You should really render the debug images without any post-processing, or your perception may be heavily skewed.
Re: PhotonGI cache
You're right. It's the color management.
I'll try to go back to a normal setup, but I'm finding problems with the camera tonemapping. I've tried to use Reinhard, but it's not very intuitive.
The balance between overexposed areas and the general lighting is difficult to find with the camera tonemapping settings.
BYOB, is it possible to avoid this behaviour inside Blender? As it's a debug tool, it shouldn't be modified by the Blender color management.
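For what it's worth, the camera tonemapping discussed here maps to the imagepipeline in the LuxCore config. A minimal sketch of switching between linear and Reinhard tonemapping — property names based on my reading of the LuxCore docs, so verify them for your version:

```ini
# Linear tonemapper (what the debug images should use)
film.imagepipelines.0.0.type = "TONEMAP_LINEAR"
film.imagepipelines.0.0.scale = 1.0
film.imagepipelines.0.1.type = "GAMMA_CORRECTION"
film.imagepipelines.0.1.value = 2.2

# Alternative: Reinhard, where "burn" is the knob that trades
# overexposed areas against the general lighting level
#film.imagepipelines.0.0.type = "TONEMAP_REINHARD02"
#film.imagepipelines.0.0.prescale = 1.0
#film.imagepipelines.0.0.postscale = 1.2
#film.imagepipelines.0.0.burn = 3.75
```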
Re: PhotonGI cache
I already have measures in place to counter Blender's exposure: https://github.com/LuxCoreRender/BlendL ... ine.py#L18
And the tonemapper is set to linear.
What else is happening in your case? It would be good to get a stripped-down version of the scene that shows the color management and imagepipeline settings.
Re: PhotonGI cache
I'm playing with the idea of RTX-accelerated PhotonGI rendering. I'm not sure if I can pull it off, but it should be possible. I will check after I finish a couple of pending tasks.
Re: PhotonGI cache
I'm not sure I understand. Which part do you want to speed up?
The cache computation?
Btw, I still wonder if there is a possibility to try the online learning approach.
Yesterday I was reading about it (the Hyperion renderer, the Corona renderer and the Mitsuba implementation).
It would be very helpful for all the cases where PGI can't be used, like:
Animation with moving objects
Animation with animated lights (color/position)
Stills or animations with mostly reflective surfaces
There are a lot of scenarios where we miss the speed bump provided by PGI.
You're probably not very attracted to this "practical path guiding" thing, maybe because it isn't GPU-friendly, but there is a lot of room it could open up for us. Not only the animation and reflective-surface cases I mentioned: it could also speed up PGI itself.
I also read that Thomas Müller from Nvidia wrote an implementation for Hyperion, and the method has a link with AI, so it can probably be done on GPUs soon.
It could also speed up Bidir (making the door to spectral rendering not so expensive in the future).
Edit
Here is a paper from 2020 on path guiding that also does better at volume rendering:
https://link.springer.com/article/10.10 ... 020-0160-1
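The "practical path guiding" idea mentioned above — learning where the light comes from while rendering, then importance-sampling accordingly — can be illustrated with a deliberately simplified sketch. This is not LuxCore code and not Müller's actual SD-tree method; it is a hypothetical 1D histogram version of the online-learning step, just to show the mechanism:

```python
import random


class GuidingHistogram:
    """Minimal 1D illustration of online path guiding: learn a
    piecewise-constant PDF over direction bins from the radiance
    carried by past samples, then importance-sample new directions
    from it, mixed with a uniform "defensive" term so directions
    that have not yet contributed are never starved."""

    def __init__(self, num_bins=16, uniform_weight=0.25):
        self.num_bins = num_bins
        self.uniform_weight = uniform_weight
        self.bin_radiance = [0.0] * num_bins

    def record(self, u, radiance):
        # Online learning step: accumulate the sample's contribution
        # into the bin covering direction parameter u in [0, 1)
        b = min(int(u * self.num_bins), self.num_bins - 1)
        self.bin_radiance[b] += radiance

    def pdf(self, u):
        # Mixture PDF: uniform term + learned histogram term
        total = sum(self.bin_radiance)
        b = min(int(u * self.num_bins), self.num_bins - 1)
        if total == 0.0:
            learned = 1.0  # nothing learned yet: fall back to uniform
        else:
            learned = self.bin_radiance[b] / total * self.num_bins
        return self.uniform_weight * 1.0 + (1.0 - self.uniform_weight) * learned

    def sample(self, rng=random):
        total = sum(self.bin_radiance)
        if total == 0.0 or rng.random() < self.uniform_weight:
            return rng.random()  # defensive uniform sample
        # Pick a bin proportional to its learned radiance,
        # then a uniform position inside that bin
        r = rng.random() * total
        acc = 0.0
        for b, w in enumerate(self.bin_radiance):
            acc += w
            if r <= acc:
                return (b + rng.random()) / self.num_bins
        return 1.0 - 1e-9
```

The uniform mixing weight is the standard defensive-sampling trick: it keeps the estimator unbiased even while the histogram is still wrong early in the render, which is what makes the approach usable for the animation cases listed above where no precomputed cache exists.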