
Re: Adaptive Environment Sampling on CPU and GPU

Posted: Tue Oct 15, 2019 1:06 pm
by Dade
epilectrolytics wrote: Tue Oct 15, 2019 10:51 am It sounds like this does not only help with direct light, like ELVC, but with indirect light also?
Would this work with BiDir(VM) too?
(Looks like a simple way of path guiding (MIS) to me.)
Direct light sampling is done at each path vertex, so after the first bounce it contributes to indirect lighting too (here and in PhotonGI).
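A minimal toy sketch of that loop structure (Python with stub functions, purely illustrative, not LuxCore's actual code): next-event estimation runs at every vertex, so from the second vertex on its contribution, scaled by the accumulated throughput, is indirect lighting.

```python
import random

def trace_path(max_depth=4):
    radiance = 0.0
    throughput = 1.0
    for depth in range(max_depth):
        hit = intersect_scene()               # toy stub below
        if hit is None:
            break
        # Direct light sampling at this vertex: depth 0 gives classic direct
        # lighting, depth >= 1 contributes to indirect lighting.
        radiance += throughput * sample_one_light(hit)
        # Continue the path with a BSDF sample.
        throughput *= sample_bsdf(hit)
    return radiance

# Toy stubs so the sketch runs; a real renderer traces rays instead.
def intersect_scene():
    return object() if random.random() < 0.9 else None

def sample_one_light(hit):
    return random.random()          # unoccluded light contribution estimate

def sample_bsdf(hit):
    return 0.5                      # BRDF * cos / pdf for a dull grey surface

print(sum(trace_path() for _ in range(1000)) / 1000.0)
```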

Re: Adaptive Environment Sampling on CPU and GPU

Posted: Thu Oct 17, 2019 12:51 pm
by FarbigeWelt
Dade wrote: Mon Oct 14, 2019 11:13 pm
However, the good news is that this paper has the piece of the puzzle I was missing for ELVC: the last step. I'm using an env. light visibility map (i.e. a pixel image). ...

I sample the tile according to the usual light intensity (i.e. classic importance sampling), I can use visibility maps that are a loooot smaller and so also a loooot faster to build.

Now, it is optimal: ...

I ... need ... to ... write this code ....
Maybe you see that there is a further puzzle piece 🧩 for an additional performance boost, at least according to my thought experiments.
I know the following is a bit short; I hope you catch my idea anyway.
If, in a second step, you replace the classic importance sampling with convergence-forecast sampling (i.e. a probability map of fastest convergence), then rendering focuses first on the areas that converge fast and, after these parts are finished, gradually takes care of the areas that are less likely to converge. The setup in the paper explains pretty closely, but only indirectly, the required 'learning phase': the connection of the camera to convergence-probability tiles.
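A minimal sketch of how such a convergence-forecast sampler could look (toy Python; the tile count, learning model and all names are assumptions for illustration, not part of LuxCore or the paper):

```python
import numpy as np

def pick_tiles(conv_rate, noise, noise_target, batch, rng):
    """Return the tile indices to sample in the next batch.

    conv_rate:    estimated error reduction per sample for every tile (learned
                  during a short learning phase)
    noise:        current per-tile noise estimate
    noise_target: tiles below this are considered finished
    """
    active = noise > noise_target
    if not active.any():
        return np.empty(0, dtype=int)          # everything converged
    p = np.where(active, conv_rate, 0.0)       # forecast: fast tiles first
    p = p / p.sum()
    return rng.choice(conv_rate.size, size=batch, p=p)

# Toy usage with 6 tiles: tiles 0-2 converge fast and get most early samples;
# once their noise falls below the target, the budget shifts to tiles 3-5.
rng = np.random.default_rng(0)
conv_rate = np.array([0.9, 0.8, 0.7, 0.1, 0.05, 0.02])
noise = np.ones(6)
for step in range(5):
    tiles = pick_tiles(conv_rate, noise, noise_target=0.05, batch=100, rng=rng)
    counts = np.bincount(tiles, minlength=6)
    noise *= np.exp(-conv_rate * counts / 50.0)  # crude convergence model
    print(step, counts, np.round(noise, 3))
```

In the toy run the budget goes almost entirely to the fast-converging tiles first and, once they are finished, drifts over to the slow ones, which is the behaviour described above.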

Re: CGI tech news box

Posted: Fri Oct 18, 2019 1:25 am
by kintuX
Oh, oh...
Clarisse IFX Rendering improvements: Fireflies Filtering

Some information from Sam:

The attribute is called Fireflies Filtering; this new solution works all the time, even on LPEs. It removes 1st generation fireflies and later generations too.


Which means that we will remove clamping: it gives results that are far beyond clamping while minimizing the loss of energy. More importantly, the more samples, the more it converges to the ideal result. It can also reduce noise considerably (albeit losing some energy). The great thing is that it always gives results that are way better than clamping and is only applied when a problematic path is detected, which means that it has no effect whatsoever on general paths that are "easy" to sample, unlike clamping. It's the best solution to the problem, plus no render hit. Not planned for the next SP.
8-) eagerly awaiting further news :D

Re: CGI tech news box

Posted: Fri Oct 18, 2019 9:02 am
by Sharlybg
This is very special :idea:

Re: CGI tech news box

Posted: Fri Oct 18, 2019 1:19 pm
by FarbigeWelt
kintuX wrote: Fri Oct 18, 2019 1:25 am 8-) eagerly awaiting further news :D
Hhm.
Actually, fireflies can be avoided with a simple probability check of a sample's energy. It only needs information about each material involved in the bounces or dispersion and the energy of the light source. Additionally, the material properties must be analysed to get the probability of the output direction, i.e. roughness disperses energy in space.
With this information you can calculate the theoretically correct energy maximum. If a sample exceeds this maximum by a large factor, it is most probably a firefly.
Now you can clamp the sample to the expected maximum.

The check calculations are just a chain multiplication, for which the usual material output and its output probability per angle (i.e. its dispersion contribution) can be precalculated; exceptions, e.g. anisotropic materials or coat films, require calculations that depend on the input angle, i.e. they cannot be precalculated.
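A minimal sketch of that check (toy Python; the clamping factor and the per-material maxima are assumptions, and this is not Clarisse's or LuxCore's actual filter):

```python
FIREFLY_FACTOR = 4.0  # "exceeds this maximum by factors"

def clamp_firefly(sample_value, light_energy, max_reflectances,
                  factor=FIREFLY_FACTOR):
    """Clamp a path sample against its theoretical energy maximum.

    max_reflectances: precomputed maximum reflectance (0..1) of every material
    hit along the path, i.e. the chain multiplication mentioned above.
    Anisotropic or coated materials would need an angle-dependent bound
    computed on the fly instead of a precalculated constant.
    """
    bound = light_energy
    for r in max_reflectances:
        bound *= r                     # chain multiplication of material maxima
    if sample_value > factor * bound:  # most probably a firefly
        return bound                   # cut the sample to the expected maximum
    return sample_value

# Toy usage: a 100 W light seen through two bounces on ~80% / ~60% reflective
# materials can legitimately contribute at most 48; a 900-value sample is cut.
print(clamp_firefly(900.0, 100.0, [0.8, 0.6]))  # -> 48.0
print(clamp_firefly(30.0, 100.0, [0.8, 0.6]))   # -> 30.0 (kept)
```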

I wonder why Clarisse developed a filter adjustable from x to 100%.

Re: CGI tech news box

Posted: Fri Oct 18, 2019 1:57 pm
by kintuX
IDK, but I imagine that, with such a vast number of PTs around and minds tackling the same issues, if fireflies & noise were so simple to solve, then everyone would be doing it... right? ;)

BTW, now I see that my post is written in the wrong order, was tired and losing attention... in short - I'm eagerly awaiting Dade's news on ELVC 8-)

Re: CGI tech news box

Posted: Fri Oct 18, 2019 4:15 pm
by FarbigeWelt
kintuX wrote: Fri Oct 18, 2019 1:57 pm ... if fireflies & noise were so simple to solve, then everyone would be doing it... right? ;)

BTW, now I see that my post is written in the wrong order, was tired and losing attention... in short - I'm eagerly awaiting Dade's news on ELVC 8-)
Well, sometimes you need a simple mind to get a simple solution. Smart people care about complex questions; they see the horizon but not the fruit in the bushes along their way.
Noise is different: it is obviously easier to (learn to) see it than to get rid of it using mathematics / statistics, although those are good at putting a value on noise.

Sure! Who is not looking forward to getting Dade's improvement for environment lighting renders - no portals, no settings - and flashing fast. 😀

Re: CGI tech news box

Posted: Wed Oct 30, 2019 3:15 am
by patrickawalz
Surface Gradient Based Bump Mapping Framework


Since better bump mapping is planned as an upcoming feature, this might actually be a useful read...

Re: CGI tech news box

Posted: Wed Oct 30, 2019 1:30 pm
by lacilaci
+1 for a good firefly removal; it's a bit of a problem to mix denoised and non-denoised results when there are strong fireflies.

Re: Adaptive Environment Sampling on CPU and GPU

Posted: Fri Nov 08, 2019 3:06 pm
by Sharlybg
Dade wrote: Mon Oct 14, 2019 11:13 pm
Ohoh, their solution, when compared to LuxCore Env. Light Visibility Cache, is a classic trade-off between memory usage and quality: ELVC requires a LOT more memory/pre-processing to work well but it delivers better results (if you use enough memory/pre-processing). Otherwise the 2 solutions are quite similar.

However, the good news is that this paper has the piece of the puzzle I was missing for ELVC: the last step. I'm using an env. light visibility map (i.e. a pixel image). The ELVC problem is that you need very high resolution maps for it to work well. This costs both a LOT of memory and pre-processing time: more pixels, more shadow rays to trace.

This is a 1-level hierarchy solution. If I use a 2-level hierarchy, where one map pixel points to an env. light tile and I sample the tile according to the usual light intensity (i.e. classic importance sampling), I can use visibility maps that are a loooot smaller and so also a loooot faster to build.

Now, it is optimal:

- one visibility map pixel => one HDR pixel

While, for instance, with tiles:

- one visibility map pixel => one HDR 8x8 tile (64 times fewer pixels to store and 64 times faster pre-processing !!!!!!) => an HDR tile pixel picked according to importance sampling

I ... need ... to ... write this code ....
Does the latest build contain this improvement?
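For illustration, a minimal sketch of the 2-level lookup described in the quote above (toy Python/NumPy with hypothetical names, not LuxCore's actual ELVC code): the visibility map keeps one entry per 8x8 env. map tile, the tile is chosen from visibility times tile luminance, and the pixel inside the tile is then picked by classic importance sampling.

```python
import numpy as np

TILE = 8  # env. map tile size covered by one visibility-map pixel

def build_tile_distribution(env_luminance, tile_visibility):
    """Level 1: one PMF over tiles = visibility estimate x summed tile luminance."""
    h, w = env_luminance.shape
    tiles = env_luminance.reshape(h // TILE, TILE, w // TILE, TILE).sum(axis=(1, 3))
    pmf = tiles * tile_visibility            # combine intensity with visibility
    return pmf / pmf.sum()

def sample_env_direction(env_luminance, tile_pmf, rng):
    """Level 2: pick a tile, then a pixel inside it by classic importance sampling."""
    th, tw = tile_pmf.shape
    flat = rng.choice(th * tw, p=tile_pmf.ravel())
    ty, tx = divmod(flat, tw)

    tile = env_luminance[ty * TILE:(ty + 1) * TILE, tx * TILE:(tx + 1) * TILE]
    pix_pmf = (tile / tile.sum()).ravel()
    py, px = divmod(rng.choice(TILE * TILE, p=pix_pmf), TILE)

    pdf = tile_pmf[ty, tx] * pix_pmf[py * TILE + px]   # joint discrete pdf
    return (ty * TILE + py, tx * TILE + px), pdf

# Toy usage: 64x128 env. map, visibility map is only 8x16 (64x fewer entries).
rng = np.random.default_rng(0)
env = rng.random((64, 128)) + 1e-3
vis = rng.random((8, 16))      # e.g. fraction of unoccluded shadow rays per tile
tile_pmf = build_tile_distribution(env, vis)
pixel, pdf = sample_env_direction(env, tile_pmf, rng)
print(pixel, pdf)
```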