Adaptive sampling improvements
Re: Adaptive sampling improvements
I imagine the only problem in the case of my lamp scene is the lamp itself: it has overly bright parts that eat up the samples, even though they will simply be white in the final image.
Re: Adaptive sampling improvements
Well, no. I've blocked out most of the lampshade, and the adaptive render still looks noisier than non-adaptive:
(Is there, or could there be, a build timestamp or Git revision hash of LuxCore and BlendLuxCore shown in Blender, to make it easy to verify that we're using the latest version?)
Re: Adaptive sampling improvements
Can you upload your scene?
- alpistinho
- Developer
- Posts: 198
- Joined: Thu Jul 05, 2018 11:38 pm
- Location: Rio de Janeiro
Re: Adaptive sampling improvements
Hi,
I think the issue here is that we're using the pixels from the image after it has been tonemapped.
Am I correct in that assumption? I've noticed that changing the tonemapper gain drastically changes the NOISE channel computed.
Code: Select all
const float *img = film->channel_IMAGEPIPELINEs[0]->GetPixels();
The paper doesn't mention what values it uses for the computation, but I think we should not be using the post-tonemapping values for this. Which Framebuffer on the Film class stores the raw values? I think I've gotten lost.
I believe this is interfering with what @provisory is reporting, the current pathological case being scenes with strong highlights that register almost entirely as noise but actually have very little.
Re: Adaptive sampling improvements
Isn't it the other way around?
Dade wrote: ↑Thu Apr 11, 2019 9:02 pm It is intended and the result of this line of code: https://github.com/LuxCoreRender/LuxCor ... t.cpp#L121
Code: Select all
const float imgSum = imgR + imgG + imgB;
const float diff = (imgSum != 0.f) ? ((dr + dg + db) / sqrt(imgR + imgG + imgB)) : 0.f;
The color "noise" is divided by the square of the color intensity: darker colors have a larger denominator, so they receive less samples. The paper explanation is:
The square root in the denominator is motivated by the logarithmic response of the human visual system to luminance. The term here behaves similarly, is easier to evaluate and was found to yield slightly better results.
The brighter regions have higher values, so the differences are numerically bigger. The division by the square root compensates for that, since the brighter regions are divided by a larger value.
Re: Adaptive sampling improvements
Another thing to consider implementing: the paper proposes using a second framebuffer where every second sample is stored, and using these two buffers to calculate the metric.
This may work better, and it may be easier to use since the step parameter would become less critical. Would it be hard to implement this? If you can give me some pointers on where to look I would be grateful, mainly where the samples are added to the Film.
Re: Adaptive sampling improvements
I've uploaded the (slightly modified) scene. (I've removed the textures, HDRI, and other unrelated things.)
This time I only used 50 samples for the sake of quickness.
The re-rendered test images:
- Attachments
- ElskaLamp-adaptivity.blend
- (4.35 MiB) Downloaded 121 times
Re: Adaptive sampling improvements
alpistinho wrote: ↑Sun Apr 14, 2019 12:28 am I think the issue here is that we're using the pixels from the image after it has been tonemapped.
Am I correct in that assumption? I've noticed that changing the tonemapper gain drastically changes the NOISE channel computed.
Code: Select all
const float *img = film->channel_IMAGEPIPELINEs[0]->GetPixels();
You need to split the "convergence" and "adaptive sampling input" process:
1) convergence test must be done at the end of the image pipeline;
2) adaptive sampling input can be the result of whatever you want;
The first step of this split was to have two separate AOVs (i.e. Convergence + Noise AOVs); now they can be the result of different computations.
Re: Adaptive sampling improvements
alpistinho wrote: ↑Sun Apr 14, 2019 1:12 am Another thing to consider implementing is that the paper proposes using a second framebuffer where every second sample is stored and using these two buffers to calculate the metric.
Yes, it is a way of computing variance, and variance is noise. However, samplers like Metropolis have no concept of even/odd passes, so it is not applicable for the convergence test. It is also impossible to implement in a distributed environment, such as when rendering with multiple GPUs and/or CPU+GPU.
The result was the current implementation.
Re: Adaptive sampling improvements
Dade wrote: ↑Sun Apr 14, 2019 10:08 am You need to split the "convergence" and "adaptive sampling input" process:
1) convergence test must be done at the end of the image pipeline;
2) adaptive sampling input can be the result of whatever you want;
The first step of this split was to have 2 separate AOVs (i.e. Convergence + Noise AOVs); now they can be the result of different computations.
I think I am going to split it into two classes so there is no longer any relationship between the two.