Different Works in Progress

Post your tests, experiments and unfinished renderings here.
User avatar
FarbigeWelt
Donor
Posts: 883
Joined: Sun Jul 01, 2018 12:07 pm
Location: Switzerland
Contact:

What about a Volume Cache?

Post by FarbigeWelt » Wed Jun 19, 2019 12:15 pm

Caching greatly improves the look of openCL path renders. There are already three caches in LuxCoreRender: direct light, indirect light and caustics.
Looking at the images in my former post, I got the idea that another cache is missing: one for scattering volumes.

Obviously openCL without any cache cannot render scenes with scattering volumes well. Although OIDN did a good job, you should have seen the raw image: nervous noise patterns remain. The indirect cache renders a rather foggy image, but without the beams of the reflector spots and without caustic spots on the floor, whereas the caustic cache renders these spots almost like BiDir does, but again without the beams. Interestingly, the default spot lamps' beams are rendered with or without caches. Please note that BiDir is much less foggy than openCL with the indirect cache.

These observations lead me to the following question:
Is it possible to improve openCL further with the help of a cache for scattering volumes? If yes, the openCL path engine gets another boost and, thinking of the BiDir SDS issue, moves ahead of BiDir in both render time and realism.
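For reference, here is a sketch of how the existing caches are switched on in a LuxCore render configuration. The property names are my assumption based on LuxCore 2.x conventions and should be checked against the current documentation:

```
# direct light: the direct light sampling cache (DLSC) light strategy
lightstrategy.type = DLS_CACHE
# indirect light and caustics: PhotonGI cache
path.photongi.indirect.enabled = 1
path.photongi.caustic.enabled = 1
```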
Light and Word designing Creator - www.farbigewelt.ch - aka quantenkristall || #luxcorerender
160.8 | 42.8 (10.7) Gfp | Windows 10 Pro, intel i7 4770K@3.5, 32 GB
2 AMD Radeon RX 5700 XT, 8 GB || Gfp = SFFT Gflops

User avatar
epilectrolytics
Donor
Posts: 594
Joined: Thu Oct 04, 2018 6:06 am

Re: What about a volume cache?

Post by epilectrolytics » Wed Jun 19, 2019 3:46 pm

It's already part of indirect cache! :D

I have yet to test it and don't even know how to activate it :?
MBPro 15" 16GB i7-4850HQ GT750M, MacOS 10.13.6 & Win10Pro PC 16GB Ryzen 2700X, 2 x RTX 2070

User avatar
B.Y.O.B.
Developer
Posts: 3010
Joined: Mon Dec 04, 2017 10:08 pm
Location: Germany
Contact:

Re: Different Works in Progress

Post by B.Y.O.B. » Wed Jun 19, 2019 4:14 pm

epilectrolytics wrote:
Wed Jun 19, 2019 3:46 pm
I have yet to test it and even don't know how to activate it
It's in the volume output node (only visible if cache is enabled).
Attachments
2019-06-19_18-13-42.png
Support LuxCoreRender project with salts and bounties

User avatar
epilectrolytics
Donor
Posts: 594
Joined: Thu Oct 04, 2018 6:06 am

Re: Different Works in Progress

Post by epilectrolytics » Wed Jun 19, 2019 5:44 pm

Thanks for explaining!
MBPro 15" 16GB i7-4850HQ GT750M, MacOS 10.13.6 & Win10Pro PC 16GB Ryzen 2700X, 2 x RTX 2070

User avatar
FarbigeWelt
Donor
Posts: 883
Joined: Sun Jul 01, 2018 12:07 pm
Location: Switzerland
Contact:

Volume cache requires implementation of path depth > 0

Post by FarbigeWelt » Sat Jun 22, 2019 11:59 am

B.Y.O.B. wrote:
Wed Jun 19, 2019 4:14 pm
epilectrolytics wrote:
Wed Jun 19, 2019 3:46 pm
It's already part of indirect cache! :D

I have yet to test it and even don't know how to activate it :?
It's in the volume output node (only visible if cache is enabled).
Thank you for the reminder. I recognized Dade's picture.

Indeed, the volume cache is already implemented, but in my opinion an improvement is required. Obviously the volume cache only works well before the first bounce and only fairly after one bounce. More than one bounce is definitely not supported.

Camera and world volume are the same clear volume. There is a solid cube with a Null material around the spots; its inner volume is homogeneous.
Material_Null
Material_Mirror
Spots in Scattering Volume, the scene
Spots in Scattering Volume, the scene.png (10.52 KiB)
Light sources are spot lamps and an area lamp of type laser. Four of the spots point down, one points up. In two cases mirrors are used to reflect the spot's beam. The laser is reflected several times between two mirrors.
Spots in Scattering Volume, BiDir depth 10, OIDN 0.85, 252 samples, 6m04s
With BiDir the mirror seems to be almost blind to the scattered laser beam. All reflections get scattered by the homogeneous volume. Color shades are smooth over the whole image.
Spots in Scattering Volume, openCL depth 10, PGIC, OIDN 0.75, 4135 samples, 6m32s
With openCL the mirror shows the reflection of the scattered laser beam. However, the reflected beam itself is barely visible as a scattered beam, even though the reflection is rendered visibly where the spots hit the floor. Even with 4000 samples the color shades are quite hard, the image is rather noisy, and without clamping there are firefly-like artifacts.

Conclusion: the openCL volume cache requires support for path depth > 0.
Light and Word designing Creator - www.farbigewelt.ch - aka quantenkristall || #luxcorerender
160.8 | 42.8 (10.7) Gfp | Windows 10 Pro, intel i7 4770K@3.5, 32 GB
2 AMD Radeon RX 5700 XT, 8 GB || Gfp = SFFT Gflops

User avatar
FarbigeWelt
Donor
Posts: 883
Joined: Sun Jul 01, 2018 12:07 pm
Location: Switzerland
Contact:

Rainbow Chains V0.6, Interactive Blender Fun Script

Post by FarbigeWelt » Sun Jun 23, 2019 4:22 pm

Since the first post of Rainbow Chains V0.1, I have revised and extended the script as described in a former post.

There are several classes and methods containing all the data and logic. Three classes get registered; the others are used to instantiate objects at the main level of the script. If layout properties are changed manually or by an animation key, the methods for updating data and drawing objects get called.

The attached blender file comes with 1200 objects (cube primitives) and a material. The glass material's color is defined using a LuxCoreRender "Object ID" node. There are also keyframes set (outer frequency, camera position, clamping circle position). The camera tracks an empty object and is clamped to a Bezier circle.

The open blender file shows the script and object editors as well as the timeline, outliner and properties.
Clicking * "Run Script" in the script editor's menu bar adds the new tab "Misc" to the tool panel of the object editor's window.
Tooltips are provided for the properties to help with using Rainbow Chains.
Objects can be deleted manually and easily added again with the script. After checking "Add objects", re-enter the value of any property.
Depending on the computer's performance, the recommended loop settings lead to fewer than ** 1'250 objects.
RainbowChainsV0.6
* Because the script is not an add-on yet, it has to be run each time the blender file is opened. Avoid running the script more than once: every run adds another animation event handler, and each handler is called whenever the frame changes, i.e. the objects could be drawn several times, which causes delays.
** An Intel Core i7 4770K animates 1'200 objects at ~5.7 fps (wireframe) or ~0.6 fps (LuxCoreRender viewport). Adding 1'200 objects takes circa 23 s; adding 2'400 objects takes, for unknown reasons, 123 s (a factor of 5 instead of 2).
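The handler stacking described in footnote * can be avoided by registering the callback idempotently. A minimal sketch, using a plain Python list to stand in for Blender's `bpy.app.handlers.frame_change_post` (which is also just a list of callables); the function and handler names are hypothetical:

```python
def register_unique_handler(handler_list, func):
    """Remove any previously registered callback with the same name, then append func.

    Running the script a second time then replaces the old handler
    instead of stacking another one on top of it.
    """
    for cb in list(handler_list):
        if getattr(cb, "__name__", None) == func.__name__:
            handler_list.remove(cb)
    handler_list.append(func)


# Demo with a plain list standing in for bpy.app.handlers.frame_change_post:
handlers = []

def rainbow_chains_update(scene):
    pass  # update data and redraw objects here

register_unique_handler(handlers, rainbow_chains_update)
register_unique_handler(handlers, rainbow_chains_update)  # simulated second run
print(len(handlers))  # -> 1, the handler is registered only once
```

In the real script, `handlers` would be `bpy.app.handlers.frame_change_post`, so the objects are drawn only once per frame change no matter how often "Run Script" is clicked.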
Rainbow Chains V0.6.blend.zip
(587.98 KiB)
Light and Word designing Creator - www.farbigewelt.ch - aka quantenkristall || #luxcorerender
160.8 | 42.8 (10.7) Gfp | Windows 10 Pro, intel i7 4770K@3.5, 32 GB
2 AMD Radeon RX 5700 XT, 8 GB || Gfp = SFFT Gflops

User avatar
FarbigeWelt
Donor
Posts: 883
Joined: Sun Jul 01, 2018 12:07 pm
Location: Switzerland
Contact:

Suzannes Wave

Post by FarbigeWelt » Tue Jun 25, 2019 4:36 pm

One frame took 3m10s at 300 samples per pixel for the array of 900 Suzannes.
Enjoy the short animated GIFs after clicking on their images.
Suzannes-Wave
There are really lots of Suzannes, as you can see.
Suzannes-Wave, zoomed
During the last days I revised the code of the Rainbow Chains V0.6 script, removed small mistakes, cleaned up most of the code according to "Don't repeat yourself" and added an option to build the object array based on the active object. After implementing a more flexible core part that can draw different base patterns, I plan to make the final step: turning the script into an add-on.
Light and Word designing Creator - www.farbigewelt.ch - aka quantenkristall || #luxcorerender
160.8 | 42.8 (10.7) Gfp | Windows 10 Pro, intel i7 4770K@3.5, 32 GB
2 AMD Radeon RX 5700 XT, 8 GB || Gfp = SFFT Gflops

User avatar
FarbigeWelt
Donor
Posts: 883
Joined: Sun Jul 01, 2018 12:07 pm
Location: Switzerland
Contact:

A Kitchen. Why to prefer Metropolis for openCL

Post by FarbigeWelt » Tue Jul 02, 2019 5:54 pm

From reading many posts on the LuxCoreRender forums I learned two main approaches:
CPU BiDir with Metropolis,
openCL Path with Sobol.

Following this basic knowledge, I rendered my kitchen scene, after a few improvements to the models, with one of the latest builds.
Well, this was and is an unbelievable experience. The light, the caustics, the reflections, the glasses and bottles: it all looks so great now, just with the default openCL settings. Sitting in the summer heat, plus two graphics cards in a small, badly ventilated room, I suddenly felt odd.
Is what I am seeing correct, or just the first effects of an approaching heat stroke?
Then I recalled some experience with Sobol. Dade played down my observations a few weeks ago, but Sobol still renders in a way I do not like. Sobol as currently implemented produces scanline-like patterns. One does not need much eye training to see these Sobol lines jump out of any picture, at any sample count at least up to 1500. You do not know what I am talking about? Never mind, just have a look at the pictures in this post.
As expected, OIDN is not a fan of these Sobol lines either. OIDN develops more or less creative patterns instead of denoising the image. It is hard to guess the thoughts of arrays of artificial neurons, stacked in processing layers, maybe even with feedback loops in their network, most probably trained on stochastically noisy pictures and their noise-free counterparts. But Sobol lines are not stochastic, and obviously these horizontal line patterns lead OIDN to interpret the incoming data as a pattern of some kind.
First, have a look at the raw pictures. I have added red horizontal lines in some pictures to show what I mean by Sobol lines.
In another setup of the scene I got many point-sized caustics spread over half of the scene. Guess what: most of these caustics were horizontal lines of 2 to 4 pixels. Only a few were vertical, and those no more than 2 pixels. Reducing the clamping value seemed to do the trick to get rid of most of them.
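A side note on why such patterns are structured rather than stochastic: Sobol is a deterministic low-discrepancy sequence, and its first dimension is simply the base-2 radical inverse (the van der Corput sequence). A short Python sketch makes the regularity visible:

```python
def radical_inverse_base2(i: int) -> float:
    """Mirror the bits of i around the binary point: 1 -> 0.5, 2 -> 0.25, 3 -> 0.75, ..."""
    result, f = 0.0, 0.5
    while i:
        if i & 1:
            result += f
        i >>= 1
        f *= 0.5
    return result

# The sequence fills [0, 1) in a strictly regular, stratified order;
# nothing like white noise, which is why residual error can show up
# as regular line patterns rather than random grain.
samples = [radical_inverse_base2(i) for i in range(8)]
print(samples)  # -> [0.0, 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875]
```

Whether this is the actual cause of the lines seen here is a guess on my part, but it illustrates that a denoiser trained on stochastic noise may treat such deterministic residue as image content.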
openCL Sobol, 250 to 1500 samples, raw
And can you guess how the denoised pictures will look, based on these raw pictures?
Well, in any case, have a look at the OIDN results here.
openCL Sobol, 250 to 1500 samples, OIDN
I trust LuxCoreRender's capabilities, and trust me, they are far beyond what the basic knowledge above advises.
No matter what Dade told us, I switched to Metropolis. My first mistake was decreasing the maximum consecutive rejects: the result was almost what I had guessed, but light cone intersections and the spots' surfaces were too dark. Based on this observation I increased the maximum consecutive rejects to 2048, even if this is not the perfect value for openCL. My next render experiment showed exactly what I was looking for: well-distributed noise patterns. Even with as few as 250 samples, OIDN gets a perfect meal. (There must have been an auditory fata morgana, or the ball bearing of my graphics card is worn out, but it seemed as if OIDN tittered while processing the following raw image.)
openCL Metropolis, 250 samples
openCL Metropolis, 250 samples, OIDN
Side note
The scene is part of a much larger scene. When I modeled it, I never thought about render time optimization. The first render took a session init of approx. 9 minutes.
A second render, based on the PGIC cache file (indirect light only), took a session init time of approx. 3m30s.
A third render, finally using cached kernel files, took a session init time of an unbelievable 11.2 s! :mrgreen:
A 2*full-HD picture took 13m03s for 250 samples with openCL and Metropolis (large mutation probability 75%, max consecutive rejects 2048, image mutation rate 7.5%).
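Spelled out as LuxCore render-config properties, those sampler settings would look roughly like this. The property names are my assumption based on LuxCore 2.x conventions, so verify them against the current documentation:

```
sampler.type = METROPOLIS
# lmp 75%: probability of a large mutation
sampler.metropolis.largesteprate = 0.75
# mcr: cap on consecutively rejected mutations
sampler.metropolis.maxconsecutivereject = 2048
# imr 7.5%: image mutation rate
sampler.metropolis.imagemutationrate = 0.075
```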

Conclusion
Before LuxCoreRender 2.2 can be released, the Sobol-line issue must be resolved.
Otherwise the output of the standard openCL Sobol render does not meet the quality LuxCoreRender can deliver, e.g. with openCL Metropolis.
Light and Word designing Creator - www.farbigewelt.ch - aka quantenkristall || #luxcorerender
160.8 | 42.8 (10.7) Gfp | Windows 10 Pro, intel i7 4770K@3.5, 32 GB
2 AMD Radeon RX 5700 XT, 8 GB || Gfp = SFFT Gflops

kintuX
Posts: 477
Joined: Wed Jan 10, 2018 2:37 am

Re: A Kitchen. Why to prefer Metropolis for openCL

Post by kintuX » Tue Jul 02, 2019 9:09 pm

FarbigeWelt wrote:
Tue Jul 02, 2019 5:54 pm
Conclusion
Before LuxCoreRender 2.2 can be released, the Sobol-line issue must be resolved.
Otherwise the output of the standard openCL Sobol render does not meet the quality LuxCoreRender can deliver, e.g. with openCL Metropolis.
Yay... yup, I am seeing the same deterioration in quality. But I guess in a democracy, where the majority (or mostly the loud ones :P) cheer for and want speed & quantity, we must all accept trading them for less quality - it's a universal law. :|

User avatar
FarbigeWelt
Donor
Posts: 883
Joined: Sun Jul 01, 2018 12:07 pm
Location: Switzerland
Contact:

openCL Metropolis, a closer Look

Post by FarbigeWelt » Tue Jul 02, 2019 10:47 pm

Okay, 57m07s for a 2HD image rendered with openCL is not what you would call fast, or is it :?:
Also, the scene does not look hungry for computing power. Well, the configuration specifies total depth 12, diffuse 6, glossy 8 and specular 12. With the glasses and bottles covering each other, paths quickly add up to 12 bounces along specular chains.
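For reference, those depth settings correspond to render-config properties along these lines (property names assumed from LuxCore 2.x conventions; check them against your build):

```
path.pathdepth.total = 12
path.pathdepth.diffuse = 6
path.pathdepth.glossy = 8
path.pathdepth.specular = 12
```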
However, 250 samples should be enough in most cases, i.e. approx. 14m45s for 2HD or 3m41s for 1HD. Less than 4 minutes seems okay again (if we forget the 9 minutes of session init for this kitchen, which sits in a larger room of a four-room apartment incl. bathroom and balcony inside a four-storey building; only one apartment has walls and furniture, but all the apartments' windows, balconies, six flights of stairs, the elevator and the roof are there, including some sketched surrounding houses, streets and railways).
There is one part of the image that is not very easy for OIDN; I think the explanation is obvious. I added a red line to mark the difficult part. With 1000 samples this part is almost clean, but not perfectly. Glasses standing on glass do not tend to render clearly shaped shadow lines, again for obvious reasons.
openCL Metropolis, compare 250 samples raw with OIDN, 2HD
openCL Metropolis, compare 1000 samples raw with OIDN, 2HD
I cannot say it often enough: the resulting render is more than I expected. The image quality is a hit.

As the result of many ideas and of much work and time from many developers and testers, I think this renderer can compete in quality and speed with many others. And with today's LuxCoreRender hybrid solution, the handling is so easy it is almost ridiculously simple. To render a similar picture, one only needs the lights, the objects and their material definitions in a Blender 2.79 file. A few clicks later everything is set up in the BlendLuxCore UI and the render is ready to start. :D
Light and Word designing Creator - www.farbigewelt.ch - aka quantenkristall || #luxcorerender
160.8 | 42.8 (10.7) Gfp | Windows 10 Pro, intel i7 4770K@3.5, 32 GB
2 AMD Radeon RX 5700 XT, 8 GB || Gfp = SFFT Gflops

Post Reply