alpistinho wrote: ↑Fri Apr 19, 2019 4:08 pm When adaptive sampling is used, the stochastic nature breaks this: the values used vary across passes and somehow this leads to a worse sampling distribution.
Ok, so just draw the random numbers even if you are skipping the pixel. This should fix the problem at the trivial cost of generating just 3 more random numbers.
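The suggested fix can be sketched like this. This is a toy model, not LuxCore's actual sampler code (the `Pixel`/`render_pass` names are made up for illustration): the point is that every pixel consumes the same number of random draws per pass, whether it is skipped or not, so later pixels always see the same part of the random stream regardless of which earlier pixels converged.

```python
import random

class Pixel:
    def __init__(self, noise):
        self.noise = noise      # current noise estimate for this pixel
        self.samples = []       # random tuples actually used for sampling

    def sample(self, u):
        self.samples.append(u)

def render_pass(pixels, noise_threshold, rng):
    for p in pixels:
        # Always draw the per-pixel random numbers FIRST, even when the
        # pixel will be skipped: this keeps the stream aligned across
        # passes, at the cost of a few wasted draws per skipped pixel.
        u = (rng.random(), rng.random(), rng.random())
        if p.noise < noise_threshold:
            continue  # converged pixel: skip the work, draws already made
        p.sample(u)
```

Without the unconditional draw, skipping a converged pixel would shift the random values seen by every pixel after it, which matches the "worse sampling distribution" symptom described above.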
Adaptive sampling improvements
Re: Adaptive sampling improvements
- alpistinho
- Developer
- Posts: 198
- Joined: Thu Jul 05, 2018 11:38 pm
- Location: Rio de Janeiro
Re: Adaptive sampling improvements
Thanks for the suggestion!
I've just implemented it and I think it is working as expected now. I will send the PR.
This probably affected the old adaptive sampler too, but its adaptiveness possibly compensates for the issue enough that it was hard to notice in most scenes.
@provisory, thanks for the help!
Re: Adaptive sampling improvements
Comparing with the initial results, it seems this was an issue all along:
[Attached comparison images: fixed distribution, wrong distribution, uniform sampling]
Re: Adaptive sampling improvements
Would you be able to redo this test?
Besides, keep in mind that the tonemap you choose is taken into account during the noise estimation, so rendering with given tonemap settings and then changing the image exposure or some other setting afterwards will lead to sub-optimal results.
Last edited by alpistinho on Fri Apr 19, 2019 8:06 pm, edited 1 time in total.
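To make the tonemap remark concrete, here is a toy sketch (not LuxCore's actual estimator; `tonemap` and `noise_estimate` are made-up names): if the noise metric is computed on tonemapped values, the same linear pixel data yields a different noise estimate under a different exposure, so the sample budget chosen for one exposure is sub-optimal for another.

```python
def tonemap(x, exposure):
    # Simple Reinhard-style curve scaled by exposure (illustrative only)
    v = x * exposure
    return v / (1.0 + v)

def noise_estimate(samples, exposure):
    # Variance of the *tonemapped* sample values, standing in for the
    # perceptual noise metric an adaptive sampler might use
    mapped = [tonemap(s, exposure) for s in samples]
    mean = sum(mapped) / len(mapped)
    return sum((m - mean) ** 2 for m in mapped) / len(mapped)
```

With the same linear samples, `noise_estimate(samples, 1.0)` and `noise_estimate(samples, 4.0)` differ, which is why changing exposure after rendering invalidates the adaptive decisions made during it.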
Re: Adaptive sampling improvements
Great news, thank you for your efforts!
I'll test it tomorrow.
alpistinho wrote: ↑Fri Apr 19, 2019 6:51 pm Besides, keep in mind that the tonemap you chose is considered during the noise estimation...
I think this is a good direction and it'll give the best results in most cases.
Re: Adaptive sampling improvements
If I'm right, BlendLuxCore hasn't caught up with the new adaptive sampling parameters yet, so I used LuxCore UI.
I've found the parameters in the code: film.noiseestimation.warmup and film.noiseestimation.step.
Blender's color management (Filmic) is sorely missed here, so these images don't look like the original ones.
I used Gimp's Stretch Contrast for samplecount passes.
renderengine.type = "BIDIRCPU", batch.haltspp = 50, film.noiseestimation.warmup = 5, film.noiseestimation.step = 5
There are interesting dots in the non-adaptive samplecount image.
The two combined outputs are so similar that I wasn't sure if there was any difference at all, so I made a diff of them (I used Stretch Contrast here too):
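For anyone reproducing the comparison, the diff-plus-Stretch-Contrast step boils down to two small operations. This is a minimal stand-in over flat pixel lists (not GIMP's implementation, and real images would use proper buffers):

```python
def diff_image(a, b):
    # Per-pixel absolute difference of two equally sized images
    return [abs(x - y) for x, y in zip(a, b)]

def stretch_contrast(values):
    # Rescale values to fill [0, 1], like GIMP's Stretch Contrast:
    # tiny differences become visible by mapping min->0 and max->1
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```

The stretch is what makes a near-black diff image readable: without it, a faithful diff of two almost-identical renders would look uniformly black.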
Re: Adaptive sampling improvements
Don't do the tests with BIDIRCPU; use PATHCPU instead: the only real/effective adaptive sampler for BiDir is Metropolis, because Adaptive Sobol can work only on eye paths, while light paths are unaffected.
I always write that LuxCore is BIDIRCPU+Metropolis or PATHOCL+Sobol(+PhotonGI); all other combinations are more an "academic" exercise and/or corner cases than something really useful.
Re: Adaptive sampling improvements
Thank you for your advice!
However, in my (not too extensive) experience, it's harder to get a noise-free result with Metropolis than with Sobol (except in the case of caustics). Shadow areas remain noisy for longer.
Couldn't adaptivity be used for the large mutations in MLT?
Re: Adaptive sampling improvements
The problem would still be the light paths: when you connect a light path to the camera, the affected pixel can be anywhere on the image plane. Metropolis is the only sampler that can drive light paths according to a predefined behavior (i.e. the luminance of the obtained sample).
If BIDIRCPU+Sobol is faster than BIDIRCPU+Metropolis, you are probably rendering a scene that will converge even faster with PATHCPU+Sobol.
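The "drive light paths according to the luminance of the obtained sample" idea can be illustrated with a toy 1D Metropolis sampler (made-up names, not LuxCore's Metropolis): mutations are accepted in proportion to the target (here, luminance), so bright regions automatically receive more samples regardless of which pixel a path happens to land on.

```python
import random

def metropolis_samples(luminance, n, rng, large_step_prob=0.3):
    # Toy 1D Metropolis sampler over [0, 1]. Illustrative only:
    # the stationary distribution of the accepted states is
    # proportional to luminance(x), which is how Metropolis
    # concentrates work on important regions.
    x = rng.random()
    samples = []
    for _ in range(n):
        if rng.random() < large_step_prob:
            y = rng.random()  # large step: fresh uniform sample (keeps ergodicity)
        else:
            # small mutation around the current state, clamped to [0, 1]
            y = min(1.0, max(0.0, x + rng.gauss(0.0, 0.05)))
        # standard Metropolis acceptance ratio on the target
        a = min(1.0, luminance(y) / max(luminance(x), 1e-12))
        if rng.random() < a:
            x = y
        samples.append(x)
    return samples
```

Because acceptance depends only on the sample's luminance, this mechanism works even when the pixel is an output of the path rather than an input, which is exactly why Metropolis can steer light paths while a per-pixel adaptive map cannot.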