Re: OIDN animations

Posted: Wed Jun 05, 2019 12:03 pm
by epilectrolytics
IIRC the "session init time" was 3s for a frame of this animation, including simple visibility map.
I think the initialisation for the very first frame was longer though (without persistent cache building which was done separately).
Then there is denoising (~3s) and PNG file saving (?s), which altogether adds ~10s to the render time.
I think that's ok for HD resolution.

Re: OIDN animations

Posted: Wed Jun 05, 2019 10:19 pm
by FarbigeWelt
B.Y.O.B. wrote: Wed Jun 05, 2019 11:12 am
FarbigeWelt wrote: Wed Jun 05, 2019 11:05 am If only the camera is moving, why compile the OpenCL kernels each time?
I'm pretty sure the kernels are cached and re-used if only the camera is moving.
I have compared the OpenCL console logs of the first 4 frames of the animation and have not found any difference. This means the first and following frames are dealt with in the same way.

The differences were here:
[Device Hawaii Intersect] BVH mesh vertices buffer size: 1687Kbytes
[Device Hawaii Intersect] BVH nodes buffer size: 3453Kbytes
[Device Hawaii Intersect] BVH mesh vertices buffer size: 1687Kbytes
[Device Hawaii Intersect] BVH nodes buffer size: 3453Kbytes
The above took 1s longer for the first frame than for the following ones.

and the first frame takes
[PathOCLBaseRenderThread::0] Kernels compilation time: 46ms
while the third takes
[PathOCLBaseRenderThread::0] Kernels compilation time: 32ms
but the second and fourth had it again:
[PathOCLBaseRenderThread::1] Kernels compilation time: 47ms

EDIT
Just started another animation; my observation was that the first frame took several seconds longer than the following frames. I will have a closer look at the console log later.

subtext: studying animation

Posted: Fri Jun 14, 2019 6:35 am
by epilectrolytics
epilectrolytics wrote: Fri Feb 08, 2019 1:49 pm First attempt, 500 samples PathOCL, 6min/frame.
New attempt, 400 samples PGI+OIDN, 1.4min/frame.
Last attempt, 500 samples PGI+OIDN, Disney materials (except translucent lampshades), 197s/frame.
1000 frames in 3k resolution, no DaVinci Resolve.

Disney renders faster (samples/s) but noisier.
Still, it is possible to render a 3k animation in around 3 min per frame with top graphics, thanks to Dade's ingenious persistent PhotonGI cache, if one can be bothered to launch Blender 2.79.
And no monthly fee required, as with E-Cycles, let alone Octane.

(Warning: video contains inappropriate student's humor!!)

Re: subtext: studying animation

Posted: Fri Jun 14, 2019 6:57 am
by FarbigeWelt
epilectrolytics wrote: Fri Jun 14, 2019 6:35 am 500 samples PGI+OIDN, Disney materials (except translucent lampshades), 197s/frame.
1000 frames in 3k resolution, no DaVinci Resolve.
This means a total render time of approx. 2 days and 7 hours.
Would it be possible to render this in a cloud?
Does anybody have experience with cloud computing and prices?
Theoretically, a big enough cloud could render the whole animation in, at most, the time needed for one frame.
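The totals above can be double-checked with a few lines of arithmetic (frame count and per-frame time taken from the post):

```python
# Back-of-the-envelope check of the render time quoted above.
frames = 1000
seconds_per_frame = 197

total_s = frames * seconds_per_frame   # 197,000 s
days, rem = divmod(total_s, 86400)     # whole days
hours = rem / 3600                     # remaining hours

print(f"{total_s} s = {days} days and {hours:.1f} hours")
# 197000 s = 2 days and 6.7 hours, i.e. roughly "2 days and 7 hours"
```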

However, epilectrolytics, you have done a good job with your migration to the Disney material shader, and the resolution and sharpness of your video are amazing.

Re: OIDN animations

Posted: Sat Jun 15, 2019 8:36 am
by epilectrolytics
Thanks!

I think LuxCore cloud rendering on Azure is possible: whenever a new build is made, it takes 2-3 hours because many test scenes are rendered automatically.
But I have no clue how a normal user could initiate this with their own file.
For small projects like these I prefer my own hardware.

And concerning hardware, there is still a lot of potential, though I'm sceptical LuxCore can utilize it, such as the ray-tracing and tensor cores in Turing RTX cards.
These can't be accessed via OpenCL but have their own API (NVIDIA OptiX), so I guess a translation of the render code would be necessary, a massive effort that would need a new developer.
But maybe DLSS could be integrated as a post process like OIDN?

Speaking of anti-aliasing: when animating with DOF and motion blur I notice that most parts of a frame aren't sharp and could possibly be rendered without supersampling, with a filter instead, if those regions could be properly detected.
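A minimal sketch of that detection idea (not LuxCore code; the function name and threshold are made up for illustration): estimate local contrast with a 4-neighbour Laplacian and flag low-contrast pixels as "blurred", i.e. candidates for cheaper sampling.

```python
# Toy sketch: flag pixels whose local contrast is low (likely blurred by
# DOF/motion blur) so a renderer could skip supersampling there.
# Illustration only; names and threshold are hypothetical.

def blur_mask(gray, threshold=0.05):
    """Return True where a 4-neighbour Laplacian is below threshold.
    Border pixels are left flagged as blurred for simplicity."""
    h, w = len(gray), len(gray[0])
    mask = [[True] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y-1][x] + gray[y+1][x] +
                   gray[y][x-1] + gray[y][x+1] - 4 * gray[y][x])
            mask[y][x] = abs(lap) < threshold  # low contrast -> "blurred"
    return mask

# A flat region is flagged as blurred; a hard edge is not.
flat = [[0.5] * 5 for _ in range(5)]
edge = [[0.0, 0.0, 1.0, 1.0, 1.0] for _ in range(5)]
print(blur_mask(flat)[2][2])  # True
print(blur_mask(edge)[2][2])  # False
```

A real implementation would work on render passes (depth, motion vectors) rather than the final pixels, but the principle is the same.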

But the main point of my work here is to show that Dade has created a massive speed improvement that comes for free and can be used by everyone, now even with principled shader!
Why that is not getting a bigger response (compared with the E-Cycles hype, for instance) is beyond me :?

Re: OIDN animations

Posted: Sat Jun 15, 2019 9:52 am
by Dade
epilectrolytics wrote: Sat Jun 15, 2019 8:36 am How that is not getting a bigger response (compared with the E-Cycles hype f.i.) is beyond me :?
Because Cycles is included in Blender and LuxCore is not. Every single Blender user first learns to use Cycles; then, maybe, if they hit some limit, they will look for something else. It isn't exactly an even match.

Eevee may change the situation a bit: people may start to learn Eevee first and then look for something else. Cycles will still be there as the first, easy-to-pick candidate, but there may be some more room for others.

Re: OIDN animations

Posted: Sat Jun 15, 2019 2:05 pm
by Sharlybg
How that is not getting a bigger response (compared with the E-Cycles hype f.i.) is beyond me :?
That same observation drives me crazy too. But I think things could have been different without the Blender 2.8 project launch. Now we have to be compatible with this new Blender to be considered a valid option, no matter how fast we are.

Re: OIDN animations

Posted: Sat Jun 15, 2019 3:29 pm
by epilectrolytics
Also I forgot that we are still beta testing and manually patching the Blender add-on with latest builds,
once v2.2 is finally released and can be easily installed from the regular download page things may change.

Re: OIDN animations

Posted: Sat Jun 15, 2019 3:35 pm
by Sharlybg
epilectrolytics wrote: Sat Jun 15, 2019 3:29 pm Also I forgot that we are still beta testing and manually patching the Blender add-on with latest builds,
once v2.2 is finally released and can be easily installed from the regular download page things may change.
You just reminded me that we still have to find a viable solution for rendering good-quality animations with online cloud services, as LuxCore 2.2 final is coming.
We will need a serious demo reel, and an easy-to-set-up cloud computing service will matter a lot here.

first light traced

Posted: Tue Jun 25, 2019 7:31 pm
by epilectrolytics
First try with PathOCL + light tracing + PGI persistent indirect cache + OIDN.
220 samples, 1.5min/frame with Ryzen 2700X + RTX 2070.
Motion blur doesn't work with light tracing yet, so I doubled the frame rate with DaVinci Resolve.

Persistent indirect cache and light tracing fit together perfectly and make a real killer combination.
Both are insanely fast, stable in animation and don't get in each other's way; it's like BiDir on steroids.
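For reference, a render config combining these features might look roughly like the fragment below. The property names are written from memory of LuxCore 2.2 and should be treated as assumptions to check against the current documentation:

```
# Sketch of a LuxCore render config combining the features above.
# Property names from memory; verify against the LuxCore docs.
renderengine.type = "PATHOCL"
# Light tracing (hybrid back/forward path tracing)
path.hybridbackforward.enable = 1
# PhotonGI indirect cache, persisted to disk for re-use across frames
path.photongi.indirect.enabled = 1
path.photongi.persistent.file = "scene.pgi"
# OIDN denoising as an image-pipeline post process
film.imagepipeline.1.type = "INTEL_OIDN"
```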