Page 7 of 15

Re: Adaptive sampling improvements

Posted: Sat Apr 13, 2019 7:07 am
by provisory
I can imagine that the only problem in the case of my lamp scene is the lamp itself: it has overly bright parts that eat up the samples, although they will simply be white in the final image.

Re: Adaptive sampling improvements

Posted: Sat Apr 13, 2019 8:01 am
by provisory
provisory wrote: Sat Apr 13, 2019 7:07 am I can imagine that the only problem in the case of my lamp scene is the lamp itself: it has overly bright parts that eat up the samples, although they will simply be white in the final image.
Well, no. I've blocked out most of the lampshade, and the adaptive render still looks noisier than the non-adaptive one:
adaptivity-test3.png

(Is there, or could there be, a build timestamp or Git revision hash of LuxCore and BlendLuxCore shown in Blender, to make it easy to verify that we're using the latest version?)

Re: Adaptive sampling improvements

Posted: Sat Apr 13, 2019 11:46 am
by B.Y.O.B.
provisory wrote: Sat Apr 13, 2019 7:07 am I can imagine that the only problem in the case of my lamp scene is the lamp itself: it has overly bright parts that eat up the samples, although they will simply be white in the final image.
Can you upload your scene?

Re: Adaptive sampling improvements

Posted: Sun Apr 14, 2019 12:28 am
by alpistinho
Hi,

I think the issue here is that we're using the pixels from the image after it has been tonemapped.

Code: Select all

const float *img = film->channel_IMAGEPIPELINEs[0]->GetPixels();
Am I correct in that assumption? I've noticed that changing the tonemapper gain drastically changes the NOISE channel computed.

The paper doesn't mention which values it uses for the computation, but I think we should not be using the post-tonemapping values for this. Which framebuffer in the Film class stores the raw values? I think I've gotten lost.

I believe this is interfering with what @provisory is reporting: the current pathological case is scenes with strong highlights that appear almost pure white but have very little noise.

Re: Adaptive sampling improvements

Posted: Sun Apr 14, 2019 12:43 am
by alpistinho
Dade wrote: Thu Apr 11, 2019 9:02 pm It is intended and the result of this line of code: https://github.com/LuxCoreRender/LuxCor ... t.cpp#L121

Code: Select all

const float imgSum = imgR + imgG + imgB;
const float diff = (imgSum != 0.f) ?
	((dr + dg + db) / sqrt(imgR + imgG + imgB)) : 0.f;
The color "noise" is divided by the square of the color intensity: darker colors have a larger denominator so they receive fewer samples. The paper explanation is:
The square root in the denominator is motivated by the logarithmic response of the human visual system to luminance. The term here behaves similarly, is easier to evaluate and was found to yield slightly better results.
Isn't it the other way around?
The brighter regions have higher values, so the differences are numerically bigger. The division by the square root compensates for that, since the brighter regions are divided by a larger value.

Re: Adaptive sampling improvements

Posted: Sun Apr 14, 2019 1:12 am
by alpistinho
Another thing to consider implementing: the paper proposes using a second framebuffer in which every second sample is stored, and using these two buffers to compute the metric.

This may work better, and it may be easier to use since the step parameter would become less critical. Would it be hard to implement? If you can give me some pointers on where to look I would be grateful, mainly where the samples are added to the Film.

Re: Adaptive sampling improvements

Posted: Sun Apr 14, 2019 8:13 am
by provisory
I've uploaded the (slightly modified) scene. (I've removed the textures, HDRI, and other unrelated things.)
This time I only used 50 samples for the sake of speed.

The re-rendered test images:
adaptivity-test4.png
adaptivity-test4b.png

Re: Adaptive sampling improvements

Posted: Sun Apr 14, 2019 10:08 am
by Dade
alpistinho wrote: Sun Apr 14, 2019 12:28 am I think the issue here is that we're using the pixels from the image after it has been tonemapped.

Code: Select all

const float *img = film->channel_IMAGEPIPELINEs[0]->GetPixels();
Am I correct in that assumption? I've noticed that changing the tonemapper gain drastically changes the NOISE channel computed.
You need to split the "convergence" and "adaptive sampling input" process:

1) convergence test must be done at the end of the image pipeline;

2) adaptive sampling input can be the result of whatever you want;

The first step of this split was to have two separate AOVs (i.e. Convergence + Noise AOVs); now they can be the result of different computations.

Re: Adaptive sampling improvements

Posted: Sun Apr 14, 2019 10:14 am
by Dade
alpistinho wrote: Sun Apr 14, 2019 1:12 am Another thing to consider implementing is that the paper proposes using a second framebuffer where just the every second sample is stored and using these two buffers to calculate the metric.
Yes, it is a way of computing variance, and variance is noise. However, samplers like Metropolis have no concept of even/odd passes, so it is not applicable as a convergence test. It is also impossible to implement in a distributed environment, such as when rendering with multiple GPUs and/or CPU+GPU.
The result was the current implementation.

Re: Adaptive sampling improvements

Posted: Sun Apr 14, 2019 11:37 am
by alpistinho
Dade wrote: Sun Apr 14, 2019 10:08 am You need to split the "convergence" and "adaptive sampling input" process:

1) convergence test must be done at the end of the image pipeline;

2) adaptive sampling input can be the result of whatever you want;

The first step of this split was to have two separate AOVs (i.e. Convergence + Noise AOVs); now they can be the result of different computations.
I think I am going to split it into two classes so there is no relationship between the two anymore.