epilectrolytics wrote: ↑Tue Oct 15, 2019 10:51 am
It sounds like this does not only help with direct light like ELVC but with indirect light also?
Would this work with BiDir(VM) too?
(Looks like a simple way of path guiding (MIS) to me.)

Direct light sampling is done for each path vertex, so after the first bounce it contributes to indirect lighting too (both here and in PhotonGI).
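For context, here is a toy sketch of why per-vertex direct light sampling also feeds indirect lighting. Everything here is a stand-in stub (not LuxCore's API): the point is only that the direct-light term added at depth >= 1 is, from the camera's point of view, indirect illumination.

```python
LIGHT_RADIANCE = 1.0    # stub: the shadow ray always reaches the light
ALBEDO = 0.5            # stand-in diffuse reflectance per bounce

def sample_direct_light(depth):
    # Stub for next-event estimation at one path vertex.
    return LIGHT_RADIANCE

def trace_path(max_depth=4):
    radiance = 0.0
    throughput = 1.0
    for depth in range(max_depth):
        # Next-event estimation at THIS vertex. For depth == 0 this is
        # direct lighting; for depth >= 1 it contributes indirect light.
        radiance += throughput * sample_direct_light(depth)
        throughput *= ALBEDO   # continue the walk with the BSDF weight
    return radiance
```

With these stub values the result is the geometric series 1 + 0.5 + 0.25 + 0.125: three of the four direct-light connections are indirect contributions.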
Re: Adaptive Environment Sampling on CPU and GPU
- FarbigeWelt
- Donor
- Posts: 1046
- Joined: Sun Jul 01, 2018 12:07 pm
- Location: Switzerland
- Contact:
Dade wrote: ↑Mon Oct 14, 2019 11:13 pm
However the good news is this paper has the piece of the puzzle I was missing for ELVC: the last step. I'm using an env. light visibility map (i.e. a pixel image). ...
I sample the tile according to the usual light intensity (i.e. classic importance sampling), I can use visibility maps that are a loooot smaller and so also a loooot faster to build.
Now, it is optimal: ...
I ... need ... to ... write this code ....

Maybe you can see there is a further piece of the puzzle for an additional performance boost, at least according to my thought experiments.
I know the following is a bit short; I hope you catch my idea anyway.
If you replace the classic importance sampling in a second step with convergence forecast sampling (i.e. a probability map of fastest convergence), rendering focuses first on areas that converge fast and, once these parts are finished, gradually takes care of areas less likely to converge. The setup in the paper explains the required "learning phase" quite closely, if indirectly: the connection of the camera to converging-probability tiles.
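The "convergence forecast" idea could be sketched roughly like this (all names are hypothetical, not LuxCore code): allocate the next batch of samples by how fast each image tile's error estimate has been shrinking, and stop spending on tiles that are already finished.

```python
def allocate_samples(prev_error, curr_error, budget, finished_eps=1e-3):
    """Distribute `budget` samples over tiles, favouring tiles whose
    error dropped fastest between the last two passes (a crude
    convergence forecast)."""
    weights = []
    for prev, curr in zip(prev_error, curr_error):
        if curr < finished_eps:
            weights.append(0.0)              # tile converged: spend nothing
        else:
            # Forecast: tiles improving fastest get priority first.
            weights.append(max(prev - curr, 0.0))
    total = sum(weights)
    if total == 0.0:
        # No tile improved: fall back to uniform over unfinished tiles.
        weights = [1.0 if c >= finished_eps else 0.0 for c in curr_error]
        total = sum(weights) or 1.0
    return [int(budget * w / total) for w in weights]
```

As fast tiles finish, their weight drops to zero and the budget automatically shifts toward the slower-converging areas, matching the "gradually cares for lower probable converging areas" behaviour described above.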
Light and Word designing Creator - www.farbigewelt.ch - aka quantenkristall || #luxcorerender
MacBook Air with M1
Re: CGI tech news box
Oh, oh... eagerly awaiting further news.

Clarisse IFX Rendering improvements: Fireflies Filtering
Some information from Sam:
The attribute is called Fireflies Filtering; this new solution works all the time, even on LPEs. It removes 1st generation fireflies and later generations too.
Which means that we will remove clamping: it gives results that are far beyond clamping while minimizing the loss of energy. More importantly, the more samples, the more it converges to the ideal result. It can also considerably reduce noise (albeit losing some energy). The great thing is that it always gives results that are way better than clamping and is only applied when a problematic path is detected. Which means that it has no effect whatsoever on general paths that are "easy" to sample, unlike clamping. It's the best solution to the problem, plus no render hit. Not planned for the next SP.
Re: CGI tech news box
This is very special
Re: CGI tech news box
Hmm.
Actually, fireflies can be avoided with a simple probability check of a sample's energy. It only needs information about each material involved in the bounces or dispersion, and the energy of the light source. Additionally, the material's properties must be analysed to get the probability of the output direction, i.e. roughness disperses energy in space.
With this information you can calculate the theoretically correct energy maximum. If a sample exceeds this maximum by factors, the sample is most probably a firefly.
Now you can cut the sample down to the expected maximum.
The check calculations are just a chain multiplication, for which the usual material output and its probability of output per angle (respectively its dispersion contribution) can be precalculated; exceptions such as anisotropic materials or coat films require calculations that depend on the input angle, i.e. they cannot be precalculated.
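A minimal sketch of the energy-bound check described above, under the simplifying assumption that each bounce can at most multiply energy by the material's maximum reflectance (angle-dependent materials like coatings or anisotropics would need per-angle bounds, as noted). All names are illustrative.

```python
def max_expected_energy(light_power, reflectances):
    """Chain multiplication: light power attenuated by each bounce's
    maximum reflectance gives a theoretical upper bound for a sample."""
    bound = light_power
    for r in reflectances:
        bound *= r
    return bound

def suppress_firefly(sample, light_power, reflectances, factor=4.0):
    """A sample exceeding the bound 'by factors' is most probably a
    firefly: cut it back to the expected maximum."""
    bound = max_expected_energy(light_power, reflectances)
    return min(sample, bound) if sample > factor * bound else sample
```

For example, a light of power 10 seen through two bounces of reflectance 0.5 can legitimately contribute at most 2.5; a sample of 100 on that path would be cut back, while a sample of 5 (under the factor threshold) is left alone.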
I wonder why Clarisse developed a filter adjustable from x to 100%.
Re: CGI tech news box
IDK, but I imagine that, with such a vast number of PTs around and minds tackling the same issues, if fireflies & noise were so simple to solve, then everyone would be doing it... right?
BTW, now I see that my post is written in the wrong order; I was tired and losing attention... in short: I'm eagerly awaiting Dade's news on ELVC.
Re: CGI tech news box
Well, sometimes you need a simple mind to get a simple solution. Smart people care about complex questions; they see the horizon but not the fruits in the bushes along their way.
Noise is different: it is obviously easier to (learn to) see it than to get rid of it with mathematics / statistics, although those are good at quantifying noise.
Sure! Who is not looking forward to getting Dade's improvement for environment-lighting renders: no portals, no settings, and flashing fast.
- Supporting Users
- Posts: 35
- Joined: Tue Dec 05, 2017 1:45 pm
Re: CGI tech news box
Surface Gradient Based Bump Mapping Framework
Since better bump mapping is planned as an upcoming feature, this might actually be a useful read...
Re: CGI tech news box
+1 for a good firefly removal; it's a bit of a problem to mix denoised and non-denoised results when there are strong fireflies.
Re: Adaptive Environment Sampling on CPU and GPU
Dade wrote: ↑Mon Oct 14, 2019 11:13 pm
Ohoh, their solution, when compared to LuxCore Env. Light Visibility Cache, is a classic trade-off between memory usage and quality: ELVC requires a LOT more memory/pre-processing to work well but it delivers better results (if you use enough memory/pre-processing). Otherwise the two solutions are quite similar.
However the good news is this paper has the piece of the puzzle I was missing for ELVC: the last step. I'm using an env. light visibility map (i.e. a pixel image). The ELVC problem is you need very high resolution maps to work well. This costs both a LOT of memory and pre-processing time: more pixels, more shadow rays to trace.
This is a 1-level hierarchy solution. If I use a 2-level hierarchy, where one map pixel points to an env. light tile and I sample the tile according to the usual light intensity (i.e. classic importance sampling), I can use visibility maps that are a loooot smaller and so also a loooot faster to build.
Now, it is optimal:
- one visibility map pixel => one HDR pixel
While, for instance, with tiles:
- one visibility map pixel => one HDR 8x8 tile (64 times fewer pixels to store and 64 times faster pre-processing!) => a HDR tile pixel picked according to importance sampling
I ... need ... to ... write this code ....

Does the latest build contain this improvement?
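For readers following along, the two-level scheme Dade describes could be sketched like this. The data structures are illustrative stand-ins, not LuxCore's ELVC code: a low-resolution visibility map first selects an environment-map tile, then classic luminance importance sampling picks a pixel inside that tile.

```python
import random

def sample_env(visibility_map, hdr_tiles, rng=random):
    """Two-level environment sampling sketch.

    visibility_map: {tile_id: cached visibility weight} (coarse, cheap)
    hdr_tiles:      {tile_id: list of per-pixel HDR luminances} (e.g. 8x8)
    """
    # Level 1: pick a tile proportionally to its cached visibility weight
    # (this is the part the small visibility map provides).
    tiles = list(visibility_map.keys())
    tile = rng.choices(tiles, weights=[visibility_map[t] for t in tiles], k=1)[0]
    # Level 2: inside the tile, pick a pixel by classic importance
    # sampling on the HDR luminance.
    pixels = hdr_tiles[tile]
    idx = rng.choices(range(len(pixels)), weights=pixels, k=1)[0]
    return tile, idx
```

With 8x8 tiles the visibility map needs 64 times fewer entries than a per-pixel map, which is exactly where the claimed memory and pre-processing savings come from; the per-pixel resolution is recovered at level 2 by the luminance weights.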