lacilaci wrote: Sat Feb 02, 2019 7:57 am
In my last example you see a pretty big difference because in the extremely noisy render there are simply not enough samples to properly show the lighting.
But I assure you (since I've rendered this scene fully) that the lighting in the denoised result is much closer to how a final raw render looks than the observable lighting in the noisy example. It looks weird because no postprocessing aside from a shitty tonemapping preset is used, but if I rendered to 4000 samples, the denoised and non-denoised results would look pretty much the same in terms of lighting.
That makes sense.
And as you have the comparison with higher sample renders this is very encouraging.
I'm just not used to this behaviour from my experience with the Cycles denoiser, which is optimised for consistent light balance in animation.
Cycles' denoiser doesn't do any prediction, so if you have very few samples and things look dark because of that, so will the denoised result. That is expected, but there is no lighting recovery.
This denoiser does recover lighting, so while it may look like you're getting different lighting, you are actually getting predicted lighting, which from my limited testing is actually very accurate! And let's not forget these tests of mine are extreme cases, so a quick preview at best; you wouldn't use this as your final output, so no need to worry about lighting imperfections from a ~30-sample render.
lacilaci wrote: Sat Feb 02, 2019 6:59 am
And one more example.
This is a crop from a 10-minute 4K rendering (on my outdated 4770K!) that was denoised in 7.5 seconds.
Again, I think the shading normal pass is rendering very slowly; texture bump maps in particular stay noisy for too long. If it were cleaner, the details would be even better.
So think about your own interior benchmark scene. It is now set for final quality at 3000 samples. With this denoiser you could get the same quality with only 300(!) samples; that's a 10x speedup for rendering. But the shading normal pass needs to provide cleaner edges; it's still crap at 300 samples and it shows in the denoising.
The 10x speedup to get a good-looking rendered image is a huge step forward, and it comes just from integrating (congrats, developers! this happened in a very short time) another open-source tool that uses long-available 'side' information for its denoising algorithm. Anyhow, the result and the speed are really awesome!
I guess there is still more data computed (but forgotten) during the rendering process waiting to be mined.
Side note: well, the i7 4770K is more a veteran than outdated; it still serves well, doesn't it? Its price per unit of computing power was, and still is, reasonable. Computing speed aside, the prices and thermal design power (TDP) of 2019's deca-core CPUs are, relative to their computing power, neither economical nor ecological.
Light and Word designing Creator - aka quantenkristall || #luxcorerender
MacBook Air with M1
It's not implemented yet, AFAIK. I wonder how that's going, though.
PS: I wanted to upgrade a month or so back to some Ryzen build + some 2080... but I'll wait to see how the Vega 7 does, and later probably for even newer Ryzens...
lacilaci wrote: Sat Feb 02, 2019 6:10 am
However, the shading normal pass is very jagged/aliased, so it actually ruins edges instead of helping them. This is hopefully because I did this test in LDR... So next I'll try EXRs and we'll see. (Maybe it would still make sense to have an option for an antialiased normal pass?)
Could you possibly render a scene, then render it again at doubled/quadrupled resolution, extract the shading normal AOV and scale it down (bicubic in PS or something) to the original resolution (that would effectively make an antialiased normal AOV), and try with the original scene whether it works better?
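The supersample-then-downscale idea can be sketched outside of Photoshop too. A minimal version, assuming the high-resolution normal AOV is loaded as a float array (note that averaging unit vectors shortens them, so the result should be renormalized):

```python
import numpy as np

def downscale_normals(normals, factor):
    # normals: (H, W, 3) float array of unit-length shading normals,
    # rendered at `factor` times the target resolution.
    h, w, _ = normals.shape
    assert h % factor == 0 and w % factor == 0
    # Box filter: average each factor x factor block of normals.
    blocks = normals.reshape(h // factor, factor, w // factor, factor, 3)
    avg = blocks.mean(axis=(1, 3))
    # Averaged unit vectors are shorter than 1; renormalize so the
    # result is again a valid (antialiased) normal map.
    length = np.linalg.norm(avg, axis=-1, keepdims=True)
    return avg / np.maximum(length, 1e-8)
```

A simple box filter is used here instead of bicubic; for normals that is arguably safer, since bicubic can overshoot and produce vectors pointing in directions that never occurred in the source.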
lacilaci wrote: Sat Feb 02, 2019 9:26 am It's not implemented yet, AFAIK. I wonder how that's going, though.
PS: I wanted to upgrade a month or so back to some Ryzen build + some 2080... but I'll wait to see how the Vega 7 does, and later probably for even newer Ryzens...
My fault; it looks like I did not read some important details, like the implementation status. Sometimes hope exceeds reality...
PS: This little mobile Ryzen 3 2300U at 2.0 GHz, with a TDP of max. 25 W, renders CPU bidir pictures at about half the speed of the 4770K at 3.5 GHz with its 84 W TDP.
I can't do that now.
Also, I'd rather wait and see if Dade or B.Y.O.B. have something to say about that normal AOV. Maybe I'm just wrong and it's all good. I just feel like the pass is too noisy and progresses very slowly.
I looked into the code, the shading normal AOV does not converge. Every new sample just replaces the old value: https://github.com/LuxCoreRender/LuxCor ... e.cpp#L203
(SetPixel() replaces, AddPixel() adds to the old value, and AddWeightedPixel() adds with anti-aliasing)
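The difference between replacing and accumulating can be illustrated with a toy model of a film pixel (hypothetical and much simplified; the names mirror the LuxCore calls above, not their real signatures):

```python
import numpy as np

class AOVPixel:
    """Toy model of the two film update rules discussed above."""

    def __init__(self):
        self.total = np.zeros(3)
        self.weight = 0.0

    def set_pixel(self, v):
        # SetPixel-style: the new sample replaces everything,
        # so the stored value never converges, it just jitters.
        self.total = np.asarray(v, dtype=float)
        self.weight = 1.0

    def add_pixel(self, v):
        # AddPixel-style: samples accumulate, so the stored
        # value converges to the mean over all samples.
        self.total = self.total + np.asarray(v, dtype=float)
        self.weight += 1.0

    def value(self):
        return self.total / max(self.weight, 1e-8)
```

With `add_pixel`, feeding in noisy normals averages the noise away over time; with `set_pixel`, the pixel always shows only the most recent noisy sample, which matches the "does not converge" behaviour described above.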
I also did a test with albedo and shading normal AOV, and the latter made the result worse.
At the moment, it looks like our shading normal AOV is not yet suited for this denoiser.
Has anyone found example data from Intel? Do they explain what the shading normal AOV should look like and how it should be computed?
But maybe I also made a mistake saving the normal AOV? I saved as EXR, then converted to PFM with imagemagick.
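If the imagemagick step is a suspect, PFM is simple enough to write directly and rule it out. A small sketch of a writer (PFM stores scanlines bottom-to-top, and a negative scale value marks little-endian floats):

```python
import numpy as np

def write_pfm(path, img):
    # img: (H, W, 3) float32 array, e.g. a normal or albedo AOV.
    img = np.asarray(img, dtype=np.float32)
    h, w, _ = img.shape
    with open(path, "wb") as f:
        f.write(b"PF\n")                    # "PF" = color PFM
        f.write(f"{w} {h}\n".encode())
        f.write(b"-1.0\n")                  # negative scale = little-endian
        # PFM scanlines run bottom-to-top, so flip vertically first.
        f.write(np.flipud(img).astype("<f4").tobytes())
```

The bottom-to-top row order is an easy thing for a converter to get wrong, and a vertically flipped normal buffer would certainly make denoising results worse.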
(the test below was done in HDR mode)
Attachments
noisy, 64 samples
albedo
normal (note: since this is a PNG, you can't use this for denoising, it's just for illustrative purposes)
Well, wouldn't it be enough if it actually did converge?
Right now you're using a noisy normal pass to drive details, so maybe just a clean shading normal pass would work fine. Or do you think the whole pass should be computed differently?
This is the only info regarding normal in oidn docs:
"input image containing the normal (world-space or view-space, arbitrary length) of the first hit per pixel; optional, requires setting the albedo image too"
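Since the docs allow arbitrary length but say nothing about invalid values, one thing worth checking before feeding the AOV to the denoiser is sanitizing it. A hypothetical pre-pass (the function name and the NaN handling are my assumptions, not anything from the OIDN docs):

```python
import numpy as np

def prepare_normal_aov(normals):
    # OIDN accepts world- or view-space normals of arbitrary length,
    # but NaN/Inf values from unsampled pixels would poison the filter,
    # so clear them first.
    n = np.nan_to_num(np.asarray(normals, dtype=np.float32),
                      nan=0.0, posinf=0.0, neginf=0.0)
    length = np.linalg.norm(n, axis=-1, keepdims=True)
    # Normalize real normals to unit length; leave zero vectors
    # (e.g. background pixels) untouched.
    return np.where(length > 1e-8, n / np.maximum(length, 1e-8), n)
```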
B.Y.O.B. wrote: Sat Feb 02, 2019 10:39 am
I looked into the code, the shading normal AOV does not converge. Every new sample just replaces the old value
Just found the same. I think this is a bug in LuxCore.
I made a comparison with the Cycles normal pass, which somehow doesn't contain texture normals, only the object's, but it is definitely smooth and not aliased like the Lux AOV.