OIDN animations

epilectrolytics
Donor
Posts: 790
Joined: Thu Oct 04, 2018 6:06 am

Re: OIDN animations

Post by epilectrolytics »

IIRC the "session init time" was 3s for a frame of this animation, including the simple visibility map.
I think the initialisation for the very first frame was longer though (without the persistent cache building, which was done separately).
Then there is denoising (~3s) and PNG file saving (?s), which altogether makes the render time ~10s longer.
I think that's ok for HD resolution.
FarbigeWelt
Donor
Posts: 1046
Joined: Sun Jul 01, 2018 12:07 pm
Location: Switzerland

Re: OIDN animations

Post by FarbigeWelt »

B.Y.O.B. wrote: Wed Jun 05, 2019 11:12 am
FarbigeWelt wrote: Wed Jun 05, 2019 11:05 am If only the camera is moving, why compile the OpenCL kernel each time?
I'm pretty sure the kernels are cached and re-used if only the camera is moving.
I compared the OpenCL console logs of the first 4 frames of the animation and found no difference, which means the first and the following frames are handled the same way.

The differences were here:
[Device Hawaii Intersect] BVH mesh vertices buffer size: 1687Kbytes
[Device Hawaii Intersect] BVH nodes buffer size: 3453Kbytes
[Device Hawaii Intersect] BVH mesh vertices buffer size: 1687Kbytes
[Device Hawaii Intersect] BVH nodes buffer size: 3453Kbytes
The above took 1 s longer for the first frame than for the following ones.

and the first frame took
[PathOCLBaseRenderThread::0] Kernels compilation time: 46ms
instead of, for the third,
[PathOCLBaseRenderThread::0] Kernels compilation time: 32ms
while the second and fourth again had
[PathOCLBaseRenderThread::1] Kernels compilation time: 47ms

EDIT
I just started another animation and observed that the first frame took several seconds longer than the following ones. I will take a closer look at the console log later.
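
To check this systematically, one could keep a single render session open and only parse the camera between frames, so the compiled kernels and BVH buffers are re-used. A minimal pyluxcore sketch, based on the API used in the LuxCore demo scripts (the scene file name, camera coordinates and halt condition are placeholder assumptions):

import time
import pyluxcore

pyluxcore.Init()

# Kernels are compiled (or loaded from the kernel cache) once, here:
config = pyluxcore.RenderConfig(pyluxcore.Properties("scene.cfg"))
session = pyluxcore.RenderSession(config)
session.Start()

for frame in range(4):
    # Only the camera moves, so only the camera is re-parsed:
    session.BeginSceneEdit()
    camProps = pyluxcore.Properties()
    camProps.SetFromString("""
        scene.camera.lookat.orig = %f 5.0 1.0
        scene.camera.lookat.target = 0.0 0.0 0.0
    """ % (frame * 0.1))
    config.GetScene().Parse(camProps)
    session.EndSceneEdit()

    # Wait for the frame's halt condition (e.g. batch.haltspp):
    while not session.HasDone():
        time.sleep(1)
        session.UpdateStats()

    session.GetFilm().Save()  # output file names come from scene.cfg

session.Stop()

If the first frame still takes several seconds longer in such a loop, the difference is presumably not kernel compilation but the initial scene/BVH upload.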
Light and Word designing Creator - www.farbigewelt.ch - aka quantenkristall || #luxcorerender
MacBook Air with M1
epilectrolytics
Donor
Posts: 790
Joined: Thu Oct 04, 2018 6:06 am

subtext: studying animation

Post by epilectrolytics »

epilectrolytics wrote: Fri Feb 08, 2019 1:49 pm First attempt, 500 samples PathOCL, 6min/frame.
New attempt, 400 samples PGI+OIDN, 1.4min/frame.
Last attempt, 500 samples PGI+OIDN, Disney materials (except translucent lampshades), 197s/frame.
1000 frames in 3k resolution, no DaVinci Resolve.

Disney renders faster (samples/s) but noisier.
Still, it is possible to render a 3k animation at around 3 min per frame with top graphics, thanks to Dade's ingenious persistent PhotonGI cache, if one can be bothered to launch Blender 2.79.
And no monthly fee is required, unlike with E-Cycles, let alone Octane.

(Warning: video contains inappropriate student humor!!)
FarbigeWelt
Donor
Posts: 1046
Joined: Sun Jul 01, 2018 12:07 pm
Location: Switzerland

Re: subtext: studying animation

Post by FarbigeWelt »

epilectrolytics wrote: Fri Jun 14, 2019 6:35 am 500 samples PGI+OIDN, Disney materials (except translucent lampshades), 197s/frame.
1000 frames in 3k resolution, no DaVinci Resolve.
This means a total render time of approx. 2 days and 7 hours.
Would it be possible to render this in a cloud?
Does anybody have experience with cloud computing and prices?
Theoretically, a big enough cloud could render the whole animation in no more than the time needed for a single frame, since the frames are independent.
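
A quick back-of-the-envelope check in Python, using only the numbers quoted above:

import math

SECONDS_PER_FRAME = 197
FRAMES = 1000

total_s = SECONDS_PER_FRAME * FRAMES
print(total_s / 3600.0)  # ~54.7 h, i.e. roughly 2 days and 7 hours

# Frames are independent, so with N cloud nodes the wall-clock time
# is roughly ceil(FRAMES / N) * SECONDS_PER_FRAME:
for nodes in (10, 100, 1000):
    print(nodes, math.ceil(FRAMES / nodes) * SECONDS_PER_FRAME / 60.0, "min")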

However, epilectrolytics, you have done a good job with your migration to the Disney material shader, and the resolution and sharpness of your video are amazing.
Light and Word designing Creator - www.farbigewelt.ch - aka quantenkristall || #luxcorerender
MacBook Air with M1
epilectrolytics
Donor
Posts: 790
Joined: Thu Oct 04, 2018 6:06 am

Re: OIDN animations

Post by epilectrolytics »

Thanks!

I think LuxCore cloud rendering on Azure is possible: whenever a new build is made, it takes 2-3 hours because many test scenes are rendered automatically.
But I have no clue how a normal user could initiate this from their own file.
For small projects like these I prefer my own hardware.

And concerning hardware, there is still a lot of untapped potential, such as the ray tracing and tensor cores in Turing RTX cards, though I'm sceptical LuxCore can utilize it.
These can't be accessed via OpenCL but have their own API (NVIDIA OptiX), so I guess a translation of the render code would be necessary, a massive effort that would need a new developer.
But maybe DLSS could be integrated as a post process like OIDN?
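
For reference, OIDN already sits at the end of LuxCore's film image pipeline, which is where such a post process would plug in. Roughly like this via pyluxcore (the plugin and property names here are from memory, so treat them as assumptions and check the docs):

import pyluxcore

pyluxcore.Init()
props = pyluxcore.Properties("scene.cfg")
# Append the OIDN denoiser as an image pipeline plugin; a DLSS-style
# upscaler could in principle be hooked in at the same stage.
props.SetFromString("""
    film.imagepipeline.0.type = "TONEMAP_LINEAR"
    film.imagepipeline.1.type = "INTEL_OIDN"
    film.imagepipeline.2.type = "GAMMA_CORRECTION"
    film.imagepipeline.2.value = 2.2
""")
config = pyluxcore.RenderConfig(props)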

Speaking of anti-aliasing: when animating with DOF and motion blur, I notice that most parts of a frame aren't sharp anyway and could possibly be rendered without supersampling, using a filter instead, if those regions could be properly detected.
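
Just to illustrate the detection idea: even a simple Laplacian magnitude threshold on a luminance buffer separates in-focus detail from DOF/motion-blurred regions. A toy numpy sketch, not anything LuxCore does today:

import numpy as np

def sharpness_mask(gray, threshold=0.05):
    # High Laplacian magnitude = in-focus edges that need supersampling;
    # low magnitude = blurred regions a cheap filter might cover.
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
           np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4.0 * gray)
    return np.abs(lap) > threshold

# Toy usage on a random "frame":
frame = np.random.rand(270, 480)
mask = sharpness_mask(frame)
print("%.1f%% of pixels would get full sampling" % (100.0 * mask.mean()))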

But the main point of my work here is to show that Dade has created a massive speed improvement that comes for free and can be used by everyone, now even with the principled shader!
How that is not getting a bigger response (compared with the E-Cycles hype, for instance) is beyond me :?
Dade
Developer
Posts: 5672
Joined: Mon Dec 04, 2017 8:36 pm
Location: Italy

Re: OIDN animations

Post by Dade »

epilectrolytics wrote: Sat Jun 15, 2019 8:36 am How that is not getting a bigger response (compared with the E-Cycles hype, for instance) is beyond me :?
Because Cycles is included in Blender and LuxCore is not. Every single Blender user first learns to use Cycles; then, maybe, if they find some limit, they will look for something else. It isn't exactly an even match.

Eevee may change the situation a bit: people may start by learning Eevee first and then look for something else. Cycles will still be there as the first, easy-to-pick candidate, but there may be some more room for others.
Support LuxCoreRender project with salts and bounties
Sharlybg
Donor
Posts: 3101
Joined: Mon Dec 04, 2017 10:11 pm
Location: Ivory Coast

Re: OIDN animations

Post by Sharlybg »

How that is not getting a bigger response (compared with the E-Cycles hype, for instance) is beyond me :?
That same observation drives me crazy too. But I think things could have been different without the Blender 2.8 project launch. Now we have to be compatible with the new Blender to be considered a valid option, no matter how fast we are.
Support LuxCoreRender project with salts and bounties

Portfolio : https://www.behance.net/DRAVIA
epilectrolytics
Donor
Posts: 790
Joined: Thu Oct 04, 2018 6:06 am

Re: OIDN animations

Post by epilectrolytics »

Also, I forgot that we are still beta testing and manually patching the Blender add-on with the latest builds;
once v2.2 is finally released and can be easily installed from the regular download page, things may change.
Sharlybg
Donor
Posts: 3101
Joined: Mon Dec 04, 2017 10:11 pm
Location: Ivory Coast

Re: OIDN animations

Post by Sharlybg »

epilectrolytics wrote: Sat Jun 15, 2019 3:29 pm Also, I forgot that we are still beta testing and manually patching the Blender add-on with the latest builds;
once v2.2 is finally released and can be easily installed from the regular download page, things may change.
You just reminded me that we still have to find a viable solution for rendering good-quality animations with online cloud services, now that the LuxCore 2.2 final is coming.
We will need a serious demo reel, and an easy-to-set-up cloud computing service will matter a lot here.
Support LuxCoreRender project with salts and bounties

Portfolio : https://www.behance.net/DRAVIA
epilectrolytics
Donor
Posts: 790
Joined: Thu Oct 04, 2018 6:06 am

first light traced

Post by epilectrolytics »

First try with PathOCL + light tracing + PGI persistent indirect cache + OIDN.
220 samples, 1.5min/frame with Ryzen 2700X + RTX 2070.
Motion blur didn't work yet with light tracing, so I doubled the frame rate with DaVinci Resolve.

Persistent indirect cache and light tracing fit together perfectly and make a real killer combination.
Both are insanely fast, stable in animation, and don't get in each other's way; it's like BiDir on steroids.
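
For anyone who wants to try the combination, the render config properties involved look roughly like this (the property names are my assumption from the 2.2 release notes, so please check the wiki before relying on them):

import pyluxcore

pyluxcore.Init()
props = pyluxcore.Properties("scene.cfg")
props.SetFromString("""
    renderengine.type = "PATHOCL"
    # Light tracing (the hybrid back/forward path tracer):
    path.hybridbackforward.enable = 1
    # PhotonGI indirect cache, persisted to disk so the following
    # frames of the animation can re-use it:
    path.photongi.indirect.enabled = 1
    path.photongi.persistent.file = "scene.pgi"
""")
config = pyluxcore.RenderConfig(props)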