Adaptive sampling improvements

Discussion related to the engine functionality, implementations, and API.
provisory
Posts: 224
Joined: Wed Aug 01, 2018 4:26 pm

Re: Adaptive sampling improvements

Post by provisory » Sat Apr 13, 2019 7:07 am

I can imagine that the only problem in my lamp scene is the lamp itself: it has overly bright parts that eat up the samples, although they will simply be white in the final image.

provisory
Posts: 224
Joined: Wed Aug 01, 2018 4:26 pm

Re: Adaptive sampling improvements

Post by provisory » Sat Apr 13, 2019 8:01 am

provisory wrote:
Sat Apr 13, 2019 7:07 am
I can imagine that the only problem in my lamp scene is the lamp itself: it has overly bright parts that eat up the samples, although they will simply be white in the final image.
Well, no. I've blocked out most of the lampshade, and the adaptive render still looks noisier than non-adaptive:
adaptivity-test3.png

(Is there, or could there be, a build timestamp or Git revision hash of LuxCore and BlendLuxCore shown in Blender, to make it easy to verify that we are using the latest version?)

B.Y.O.B.
Developer
Posts: 2962
Joined: Mon Dec 04, 2017 10:08 pm
Location: Germany
Contact:

Re: Adaptive sampling improvements

Post by B.Y.O.B. » Sat Apr 13, 2019 11:46 am

provisory wrote:
Sat Apr 13, 2019 7:07 am
I can imagine that the only problem in my lamp scene is the lamp itself: it has overly bright parts that eat up the samples, although they will simply be white in the final image.
Can you upload your scene?
Support LuxCoreRender project with salts and bounties

alpistinho
Developer
Posts: 157
Joined: Thu Jul 05, 2018 11:38 pm
Location: Rio de Janeiro

Re: Adaptive sampling improvements

Post by alpistinho » Sun Apr 14, 2019 12:28 am

Hi,

I think the issue here is that we're using the pixels from the image after it has been tonemapped.

Code: Select all

const float *img = film->channel_IMAGEPIPELINEs[0]->GetPixels();
Am I correct in that assumption? I've noticed that changing the tonemapper gain drastically changes the NOISE channel computed.

The paper doesn't mention which values it uses for the computation, but I think we should not be using the post-tonemapping values for this. Which framebuffer in the Film class stores the raw values? I think I've gotten lost.

I believe this is interfering with what @provisory is reporting, the current pathological case being scenes with some strong highlights that appear as almost pure white but have very little noise.
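To make the tonemapping dependence concrete, here is a standalone sketch (plain C++, not the LuxCore API; the function names are mine) of how a linear tonemapper gain leaks into a metric computed from image-pipeline pixels: the metric scales with the square root of the gain, so changing the gain changes the computed NOISE channel even though the underlying sample variance is unchanged.

```cpp
#include <cmath>

// Per-pixel metric in the style of the convergence test: color difference
// divided by the square root of the summed pixel intensity.
float NoiseMetric(float imgR, float imgG, float imgB,
                  float dr, float dg, float db) {
    const float imgSum = imgR + imgG + imgB;
    return (imgSum != 0.f) ? ((dr + dg + db) / std::sqrt(imgSum)) : 0.f;
}

// A linear tonemapper gain g scales both the pixel values and the
// per-pixel differences, so the metric becomes sqrt(g) times the raw
// metric: the result depends on the tonemapper settings, not only on
// the actual noise.
float NoiseMetricWithGain(float g, float imgR, float imgG, float imgB,
                          float dr, float dg, float db) {
    return NoiseMetric(g * imgR, g * imgG, g * imgB,
                       g * dr, g * dg, g * db);
}
```

With a gain of 4, for example, the metric doubles, which matches the observation that changing the tonemapper gain drastically changes the NOISE channel.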

alpistinho
Developer
Posts: 157
Joined: Thu Jul 05, 2018 11:38 pm
Location: Rio de Janeiro

Re: Adaptive sampling improvements

Post by alpistinho » Sun Apr 14, 2019 12:43 am

Dade wrote:
Thu Apr 11, 2019 9:02 pm
It is intended and the result of this line of code: https://github.com/LuxCoreRender/LuxCor ... t.cpp#L121

Code: Select all

const float imgSum = imgR + imgG + imgB;
const float diff = (imgSum != 0.f) ?
	((dr + dg + db) / sqrt(imgR + imgG + imgB)) : 0.f;
The color "noise" is divided by the square root of the color intensity: darker colors have a larger denominator, so they receive fewer samples. The paper's explanation is:
The square root in the denominator is motivated by the logarithmic response of the human visual system to luminance. The term here behaves similarly, is easier to evaluate and was found to yield slightly better results.
Isn't it the other way around?
The brighter regions have higher values, so the differences are numerically bigger. The division by the square root compensates for that, since the brighter regions are divided by a larger value.
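For what it's worth, the direction of the effect can be checked numerically with the formula alone (standalone sketch, not LuxCore code): for the same absolute color difference, a darker pixel has a smaller denominator and therefore produces a larger metric than a brighter pixel.

```cpp
#include <cmath>

// The per-pixel metric from the snippet above, isolated for a quick check:
// summed color difference divided by the square root of the summed intensity.
float Diff(float imgSum, float deltaSum) {
    return (imgSum != 0.f) ? (deltaSum / std::sqrt(imgSum)) : 0.f;
}
```

For example, with the same delta of 0.03, a dark pixel with intensity sum 0.09 yields 0.03 / 0.3 = 0.1, while a bright pixel with intensity sum 9 yields 0.03 / 3 = 0.01.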

alpistinho
Developer
Posts: 157
Joined: Thu Jul 05, 2018 11:38 pm
Location: Rio de Janeiro

Re: Adaptive sampling improvements

Post by alpistinho » Sun Apr 14, 2019 1:12 am

Another thing to consider implementing: the paper proposes using a second framebuffer where only every second sample is stored, and using these two buffers to calculate the metric.

This may work better, and maybe it is easier to use, since the step parameter would become less critical. Would it be hard to implement this? If you can give me some pointers on where to look I would be grateful, mainly where the samples are added to the Film.
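As a rough sketch of the paper's two-buffer idea (standalone C++; the struct and method names are hypothetical, not the actual Film API): every second sample is additionally accumulated into a half buffer, and the gap between the full average and the half average serves as a per-pixel error estimate.

```cpp
#include <cmath>

// Hypothetical per-pixel accumulator implementing the two-buffer scheme.
struct TwoBufferPixel {
    float full = 0.f;   // sum of all samples
    float half = 0.f;   // sum of every second sample
    unsigned count = 0; // total number of samples

    void AddSample(float value) {
        full += value;
        if (count % 2 == 0) // store every second sample in the half buffer
            half += value;
        ++count;
    }

    // |mean(all samples) - mean(half buffer)| as a simple error estimate;
    // it shrinks toward zero as the pixel converges.
    float Error() const {
        if (count < 2) return 0.f;
        const float meanFull = full / count;
        const float meanHalf = half / ((count + 1) / 2);
        return std::fabs(meanFull - meanHalf);
    }
};
```

A noisy pixel (alternating samples 0, 1, 0, 1) gives an error of 0.5, while a fully converged pixel (constant samples) gives 0, so the estimate directly tracks remaining variance rather than relying on a step parameter.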

provisory
Posts: 224
Joined: Wed Aug 01, 2018 4:26 pm

Re: Adaptive sampling improvements

Post by provisory » Sun Apr 14, 2019 8:13 am

I've uploaded the (slightly modified) scene. (I've removed the textures, HDRI, and other unrelated things.)
This time I only used 50 samples for the sake of speed.

The re-rendered test images:
adaptivity-test4.png
adaptivity-test4b.png
Attachments
ElskaLamp-adaptivity.blend
(4.35 MiB) Downloaded 23 times

Dade
Developer
Posts: 3144
Joined: Mon Dec 04, 2017 8:36 pm
Location: Italy

Re: Adaptive sampling improvements

Post by Dade » Sun Apr 14, 2019 10:08 am

alpistinho wrote:
Sun Apr 14, 2019 12:28 am
I think the issue here is that we're using the pixels from the image after it has been tonemapped.

Code: Select all

const float *img = film->channel_IMAGEPIPELINEs[0]->GetPixels();
Am I correct in that assumption? I've noticed that changing the tonemapper gain drastically changes the NOISE channel computed.
You need to split the "convergence" and "adaptive sampling input" process:

1) convergence test must be done at the end of the image pipeline;

2) adaptive sampling input can be the result of whatever you want;

The first step of this split was to have two separate AOVs (i.e. Convergence and Noise AOVs); now they can be the result of different computations.
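A minimal sketch of that split (standalone C++; the class and method names are hypothetical, not the actual LuxCore classes): the two AOVs live in separate buffers and are updated from different inputs, so the convergence test can keep reading the image-pipeline output while the noise estimate can use raw, pre-tonemapping values.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical container holding the two decoupled per-pixel AOVs.
struct AdaptiveAOVs {
    std::vector<float> convergence; // driven by image-pipeline output
    std::vector<float> noise;       // driven by whatever input we choose

    explicit AdaptiveAOVs(std::size_t pixels)
        : convergence(pixels, 0.f), noise(pixels, 0.f) {}

    // Convergence test: compares tonemapped pixels, since that is what
    // the user actually sees in the final image.
    void UpdateConvergence(std::size_t i, float tonemapped, float prevTonemapped) {
        convergence[i] = std::fabs(tonemapped - prevTonemapped);
    }

    // Adaptive sampling input: computed from raw values, so tonemapper
    // settings (e.g. gain) do not change where samples are directed.
    void UpdateNoise(std::size_t i, float rawDelta, float rawSum) {
        noise[i] = (rawSum != 0.f) ? rawDelta / std::sqrt(rawSum) : 0.f;
    }
};
```

The point of the separation is that each update function can be changed independently without affecting the other.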

Dade
Developer
Posts: 3144
Joined: Mon Dec 04, 2017 8:36 pm
Location: Italy

Re: Adaptive sampling improvements

Post by Dade » Sun Apr 14, 2019 10:14 am

alpistinho wrote:
Sun Apr 14, 2019 1:12 am
Another thing to consider implementing: the paper proposes using a second framebuffer where only every second sample is stored, and using these two buffers to calculate the metric.
Yes, it is a way of computing variance, and variance is noise. However, samplers like Metropolis have no concept of even/odd passes, so it is not applicable as a convergence test. It is also impossible to implement in a distributed environment, such as when rendering with multiple GPUs and/or CPU+GPU.
The result was the current implementation.

alpistinho
Developer
Posts: 157
Joined: Thu Jul 05, 2018 11:38 pm
Location: Rio de Janeiro

Re: Adaptive sampling improvements

Post by alpistinho » Sun Apr 14, 2019 11:37 am

Dade wrote:
Sun Apr 14, 2019 10:08 am
You need to split the "convergence" and "adaptive sampling input" process:

1) convergence test must be done at the end of the image pipeline;

2) adaptive sampling input can be the result of whatever you want;

The first step of this split was to have two separate AOVs (i.e. Convergence and Noise AOVs); now they can be the result of different computations.
I think I am going to split it into two classes so there is no relationship between the two anymore.
