CGI tech news box

General project and community related discussions and offtopic threads.
Dade
Developer
Posts: 3288
Joined: Mon Dec 04, 2017 8:36 pm
Location: Italy

Re: Adaptive Environment Sampling on CPU and GPU

Post by Dade » Tue Oct 15, 2019 1:06 pm

epilectrolytics wrote:
Tue Oct 15, 2019 10:51 am
It sounds like this helps not only with direct light, like ELVC, but with indirect light too?
Would this work with BiDir(VM) too?
(Looks like a simple way of doing path guiding (MIS) to me.)
Direct light sampling is done at each path vertex, so after the first bounce it contributes to indirect lighting too (here and in PhotonGI).
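Dade's point, that direct light sampling at every path vertex also feeds indirect lighting, can be illustrated with a toy next-event-estimation loop (a sketch with a made-up constant-light scene, not LuxCore code):

```python
def direct_light(vertex_depth):
    """Toy direct-light estimate at a path vertex (hypothetical scene:
    the environment light is always visible and contributes 1.0)."""
    return 1.0

def trace_path(max_depth, albedo=0.5):
    """Minimal sketch of next-event estimation at every path vertex.

    The direct-light sample taken at vertex 0 is classic direct
    lighting; the same kind of sample taken at deeper vertices has
    already been attenuated by one or more bounces, so it counts as
    indirect lighting, which is why a per-vertex direct-light (or env.
    visibility) strategy helps indirect light too.
    """
    radiance = 0.0
    throughput = 1.0
    for depth in range(max_depth):
        radiance += throughput * direct_light(depth)  # NEE at this vertex
        throughput *= albedo  # BSDF attenuation before the next bounce
    return radiance
```

With `albedo=0.5` and three vertices this sums the geometric series 1 + 0.5 + 0.25: two thirds of the result came from direct-light samples taken *after* the first bounce.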
Support LuxCoreRender project with salts and bounties

FarbigeWelt
Donor
Posts: 877
Joined: Sun Jul 01, 2018 12:07 pm
Location: Switzerland
Contact:

Re: Adaptive Environment Sampling on CPU and GPU

Post by FarbigeWelt » Thu Oct 17, 2019 12:51 pm

Dade wrote:
Mon Oct 14, 2019 11:13 pm
However, the good news is that this paper has the piece of the puzzle I was missing for ELVC: the last step. I'm using an env. light visibility map (i.e. a pixel image). ...

I sample the tile according to the usual light intensity (i.e. classic importance sampling), I can use visibility maps that are a loooot smaller and so also a loooot faster to build.

Now, it is optimal: ...

I ... need ... to ... write this code ....
Maybe you understand there is further puzzle 🧩 for an additional performing boost, at least according to my thought experiments.
I know the following is bit short. I hope you catch my idea anyway.
If you replace the classic importance sampling in a second step by convergence forecast sampling (i.e. probability map of fastest convergence) then rendering focuses first on areas converging fast and after these parts are finished rendering cares gradually for lower probable converging areas. The setup in the paper explains pretty close but indirectly the required ‚learning phase‘: the connection of camera to converging probability tiles.
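This convergence-forecast idea could be sketched roughly like this (a hypothetical scheduler with an invented per-tile error model, not any renderer's actual code):

```python
def adaptive_schedule(rates, err0=1.0, threshold=0.1):
    """Greedy 'convergence forecast' scheduler (illustration only).

    rates[i] is the forecast fractional error reduction per sample
    batch for tile i.  Tiles forecast to converge fastest are rendered
    first; slower tiles are picked up gradually once the fast ones are
    done.  Returns the order in which tiles finished.
    """
    err = [err0] * len(rates)
    finished = []
    while len(finished) < len(rates):
        # forecast step: pick the unfinished tile expected to finish soonest
        i = max((j for j in range(len(rates)) if j not in finished),
                key=lambda j: rates[j])
        while err[i] > threshold:
            err[i] *= (1.0 - rates[i])  # simulate one batch of samples
        finished.append(i)
    return finished
```

For example, with forecast rates `[0.2, 0.8, 0.5]` the scheduler finishes tile 1 first, then tile 2, and spends the remaining effort on the slowly converging tile 0, matching the "fast areas first, low-probability areas gradually" behaviour described above.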
Light and Word designing Creator - www.farbigewelt.ch - aka quantenkristall || #luxcorerender
160.8 | 42.8 (10.7) Gfp | Windows 10 Pro, intel i7 4770K@3.5, 32 GB
2 AMD Radeon RX 5700 XT, 8 GB || Gfp = SFFT Gflops

kintuX
Posts: 474
Joined: Wed Jan 10, 2018 2:37 am

Re: CGI tech news box

Post by kintuX » Fri Oct 18, 2019 1:25 am

Oh, oh...
Clarisse IFX Rendering improvements: Fireflies Filtering

Some information from Sam:

The attribute is called Fireflies Filtering; this new solution works all the time, even on LPEs. It removes 1st-generation fireflies and other generations too.


Which means that we will remove clamping: it gives results that are far beyond clamping while minimizing the loss of energy. More importantly, the more samples, the more it converges to the ideal result. It can also considerably reduce noise (albeit losing some energy). The great thing is that it always gives results that are way better than clamping and is only applied when a problematic path is detected, which means it has no effect whatsoever on general paths that are "easy" to sample, unlike clamping. It's the best solution to the problem, plus no render hit. Not planned for the next SP.
8-) eagerly awaiting further news :D

Sharlybg
Donor
Posts: 1531
Joined: Mon Dec 04, 2017 10:11 pm
Location: Ivory Coast

Re: CGI tech news box

Post by Sharlybg » Fri Oct 18, 2019 9:02 am

This is very special :idea:
Support LuxCoreRender project with salts and bounties

Portfolio : https://www.behance.net/DRAVIA

FarbigeWelt
Donor
Posts: 877
Joined: Sun Jul 01, 2018 12:07 pm
Location: Switzerland
Contact:

Re: CGI tech news box

Post by FarbigeWelt » Fri Oct 18, 2019 1:19 pm

kintuX wrote:
Fri Oct 18, 2019 1:25 am
8-) eagerly awaiting further news :D
Hhm.
Actually, fireflies can be avoided with a simple probability check of a sample's energy. It only needs information about each material involved in the bounces or dispersion, plus the energy of the light source. Additionally, the material's properties must be analysed to get the probability of each output direction, i.e. roughness disperses energy in space.
With this information you can calculate the theoretically correct energy maximum. If a sample exceeds this maximum by several factors, it is most probably a firefly.
Then you can clamp the sample to the expected maximum.

The check calculations are just a chain multiplication. The usual material output and its probability of output per angle (respectively its dispersion contribution) can be precalculated; exceptions such as anisotropic materials or coating films require calculations that depend on the input angle, i.e. they cannot be precalculated.
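The chain-multiplication bound described above might look like this (hypothetical helper names; `factor` is an arbitrary tolerance I made up for illustration, not a known renderer parameter):

```python
def energy_ceiling(light_energy, path_albedos):
    """Theoretical upper bound on a sample's energy: the light's energy
    attenuated by each material's maximum reflectance along the path
    (the 'chain multiplication' from the post)."""
    bound = light_energy
    for albedo in path_albedos:
        bound *= albedo
    return bound

def clamp_firefly(sample, light_energy, path_albedos, factor=4.0):
    """If a sample exceeds the theoretical maximum by `factor`, treat
    it as a probable firefly and clamp it to the expected maximum;
    otherwise leave it untouched."""
    ceiling = energy_ceiling(light_energy, path_albedos)
    return ceiling if sample > factor * ceiling else sample
```

For example, with a 10-unit light and albedos 0.8 and 0.5 the ceiling is 4.0, so a 100-unit sample gets clamped to 4.0 while a 10-unit sample passes unchanged.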

I wonder why Clarisse developed a filter adjustable from x to 100%.

kintuX
Posts: 474
Joined: Wed Jan 10, 2018 2:37 am

Re: CGI tech news box

Post by kintuX » Fri Oct 18, 2019 1:57 pm

IDK, but I imagine that, with such a vast number of path tracers around and minds tackling the same issues, if fireflies & noise were so simple to solve, then everyone would be doing it... right? ;)

BTW, now I see that my post is written in the wrong order; I was tired and losing attention... In short: I'm eagerly awaiting Dade's news on ELVC 8-)

FarbigeWelt
Donor
Posts: 877
Joined: Sun Jul 01, 2018 12:07 pm
Location: Switzerland
Contact:

Re: CGI tech news box

Post by FarbigeWelt » Fri Oct 18, 2019 4:15 pm

kintuX wrote:
Fri Oct 18, 2019 1:57 pm
, if fireflies & noise were so simple to solve, then everyone would be doing it... right? ;)

BTW, now I see that my post is written in the wrong order; I was tired and losing attention... In short: I'm eagerly awaiting Dade's news on ELVC 8-)
Well, sometimes you need a simple mind to find a simple solution. Smart people care about complex questions; they see the horizon but not the fruit in the bushes along their way.
Noise is different: it is obviously easier to (learn to) see it than to get rid of it using mathematics/statistics, although those are good at quantifying noise.

Sure! Who isn't looking forward to getting Dade's improvement for environment-lighting renders: no portals, no settings, and flashing fast. 😀

patrickawalz
Supporting Users
Posts: 34
Joined: Tue Dec 05, 2017 1:45 pm

Re: CGI tech news box

Post by patrickawalz » Wed Oct 30, 2019 3:15 am

Surface Gradient Based Bump Mapping Framework


Since better bump mapping is planned as an upcoming feature, this might actually be a useful read...

lacilaci
Donor
Posts: 1637
Joined: Fri May 04, 2018 5:16 am

Re: CGI tech news box

Post by lacilaci » Wed Oct 30, 2019 1:30 pm

+1 for good firefly removal; it's a bit of a problem to mix denoised and non-denoised results when there are strong fireflies.

User avatar
Sharlybg
Donor
Donor
Posts: 1531
Joined: Mon Dec 04, 2017 10:11 pm
Location: Ivory Coast

Re: Adaptive Environment Sampling on CPU and GPU

Post by Sharlybg » Fri Nov 08, 2019 3:06 pm

Dade wrote:
Mon Oct 14, 2019 11:13 pm
Ohoh, their solution, when compared to LuxCore's Env. Light Visibility Cache, is a classic trade-off between memory usage and quality: ELVC requires a LOT more memory/pre-processing to work well but it delivers better results (if you use enough memory/pre-processing). Otherwise the two solutions are quite similar.

However, the good news is that this paper has the piece of the puzzle I was missing for ELVC: the last step. I'm using an env. light visibility map (i.e. a pixel image). The ELVC problem is that you need very high-resolution maps for it to work well. This costs both a LOT of memory and pre-processing time: more pixels, more shadow rays to trace.

This is a 1-level hierarchy solution. If I use a 2-level hierarchy, where one map pixel points to an env. light tile and I sample the tile according to the usual light intensity (i.e. classic importance sampling), I can use visibility maps that are a loooot smaller and so also a loooot faster to build.

Now, it is optimal:

- one visibility map pixel => one HDR pixel

While, for instance, with tiles:

- one visibility map pixel => one 8x8 HDR tile (64 times fewer pixels to store and 64 times faster pre-processing !!!!!!) => an HDR tile pixel picked according to importance sampling

I ... need ... to ... write this code ....
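The two-level scheme in the quote could be sketched as follows (a hypothetical illustration, not LuxCore's actual ELVC code; `vis_map` is assumed to hold one visibility weight per 8x8 tile and `env_tiles` the HDR intensities inside each tile):

```python
import random

def sample_env(vis_map, env_tiles, rng):
    """Two-level environment-light sampling sketch.

    Level 1: pick a tile proportionally to the visibility-map value;
    one visibility-map pixel stands for a whole 8x8 HDR tile, so the
    map is 64x smaller than a per-pixel one.
    Level 2: pick a pixel inside the chosen tile proportionally to its
    HDR intensity (classic importance sampling).
    Returns (tile_index, pixel_index, pdf).
    """
    def pick(weights):
        # Discrete sampling by cumulative weight; returns (index, prob).
        total = sum(weights)
        u = rng.random() * total
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if u < acc:
                return i, w / total
        return len(weights) - 1, weights[-1] / total

    tile, p_tile = pick(vis_map)          # level 1: visibility map
    pixel, p_pixel = pick(env_tiles[tile])  # level 2: intensity in tile
    return tile, pixel, p_tile * p_pixel
```

For instance, if the visibility map says only the second tile is ever visible (`vis_map = [0.0, 1.0]`), every sample lands in that tile and is then distributed inside it by pixel intensity.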
Does the last build contain this improvement?
