Dade wrote: ↑Wed Mar 18, 2020 1:07 pm
However, this thread was about reducing lag, and you seem to want to move in exactly the opposite direction: increasing the bandwidth (and lag).
To an extent, yes, but to quote the first post:
My only complaint is how laggy the whole software becomes while using OpenCL during viewport rendering.
Renders really fast, but it's just impossible to tweak things while rendering. So much delay.
From my understanding, this is about the responsiveness of the Blender interface, for example still being able to move sliders around, open/close menus etc. while the viewport render is running, not so much about the latency when editing the scene. With the old settings I used for RTPATHOCL, the whole interface was indeed very laggy, which is fixed by the settings you suggested.
Dade wrote: ↑Wed Mar 18, 2020 1:07 pm
I also have the feeling that you are using too simple a test scene (you are talking about 10+M samples/sec), and if you test the same settings with a complex scene, the lag will be insane.
I'm using the DanishMood example scene. While samples/sec might seem high, it resolves pretty slowly.
Sometimes I also use the scene I posted, with the lake and clouds, which has about 3.5M samples/sec.
Indeed, I should test it with a more complex scene.
Dade wrote: ↑Wed Mar 18, 2020 1:10 pm
This is what the "preview" settings are about: you can use a preview phase where, for instance, you render 1 sample every 8x8 pixels, and then a normal phase where you render 1 sample every pixel: low lag, low bandwidth preview and high lag, high bandwidth normal rendering.
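Concretely, that two-phase scheme maps onto the RT path properties roughly like this (a minimal sketch; the 8x8 preview and the four-frame step count are example values I'm assuming, not recommendations):
Code:
# Preview phase: 1 sample per 8x8 pixel block (fast, blocky)
definitions["rtpath.resolutionreduction.preview"] = 8
# Spend the first 4 frames in the preview phase
definitions["rtpath.resolutionreduction.preview.step"] = 4
# Normal phase: 1 sample per pixel (full bandwidth, full lag)
definitions["rtpath.resolutionreduction"] = 1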
That is why my settings for RTPATHOCL were like this for the longest time:
Code:
# The user can choose how low-res the first frame is
definitions["rtpath.resolutionreduction.preview"] = resolutionreduction
# Only do one frame at low-resolution, render full-res afterwards
definitions["rtpath.resolutionreduction.step"] = 1
definitions["rtpath.resolutionreduction"] = 1
But it does not really work. With step = 1, I get no low-res preview at all even during camera movements. And with higher steps, e.g. 2, it takes forever to blend from blocky to fine-grained.
I think what would work best for me is if the viewport stopped refreshing for 0.5 seconds after the blocky preview, then replaced the blocks all at once with the samples that were rendered during those 0.5 seconds at 100% GPU load.
(And no matter how long it takes to render 1 sample/pixel, the Blender interface should stay fluid.)
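Expressed as a rough Python sketch (all names here are hypothetical; this only illustrates the policy I have in mind, not an actual BlendLuxCore or LuxCore API):
Code:
import time

PREVIEW_HOLD = 0.5  # seconds to leave the blocky preview on screen

def viewport_refresh(session, draw):
    # Hypothetical policy sketch; session and draw are stand-ins.
    # This would run on the render thread, so the Blender UI
    # stays responsive the whole time.
    draw(session.preview_framebuffer())  # 1. show the blocky frame
    time.sleep(PREVIEW_HOLD)             # 2. no viewport refreshes; the GPU
                                         #    keeps rendering at full load
    draw(session.framebuffer())          # 3. replace the blocks in one go
                                         #    with the accumulated samples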