NVIDIA DLSS in 3D rendering

General computer graphic news and papers.
Theo_Gottwald
Posts: 109
Joined: Fri Apr 24, 2020 12:01 pm

Re: NVIDIA DLSS in 3D rendering

Post by Theo_Gottwald »

If we spin this DLSS thing forward, we could render at 640x400 and have the neural networks "expand it" to 8000x6000 pixels.
However, you have to understand that these neural networks generate surfaces based on their "learning", meaning they will just complete the image using things they have seen before.

Is it really "rendering" in the classic sense then?

It's not denoising; maybe it's "upscaling", but either way the AI makes the final picture.

The result will never be identical to an actually rendered result, unless you have already trained the NN on the real outcome.

Will we get a neural rendering engine in the future that does not really trace the light, but just combines pre-rendered elements into a scene using AI (maybe with a bit of light tracing)?

Spinning this forward will lead to completely new rendering technology, where the outcome looks great (though it may have interesting bugs) and can only be predicted to some degree.
Get Blender-Quickbuttons and configure over 100 Blender buttons individually to your needs.
Visit my YouTube channel, Theo's Fun Videos, and watch several Blender-related videos.
Join @Dreamstime and sell your Renderings to the world.
epilectrolytics
Donor
Posts: 790
Joined: Thu Oct 04, 2018 6:06 am

Re: NVIDIA DLSS in 3D rendering

Post by epilectrolytics »

If the albedo and shading normal passes were rendered at double/quadruple resolution, it should be possible to scale up the render without missing details, similar to what denoisers already do.
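One way to exploit such high-resolution guide passes would be a joint-bilateral-style upsample, where each low-resolution beauty sample is weighted by how well its albedo and normal match the high-resolution AOVs at the output pixel. A slow, purely illustrative numpy sketch (all names and parameters are made up; this is not what DLSS or any existing denoiser actually does):

Code: Select all

import numpy as np

def guided_upsample(beauty_lo, albedo_lo, normal_lo, albedo_hi, normal_hi,
                    scale=2, radius=1, sigma=0.1):
    # Joint-bilateral-style upscale of a low-res beauty pass, guided by
    # high-res albedo/normal AOVs.  All arrays are floats in [0, 1]:
    # the *_lo passes have shape (h, w, 3), the *_hi guides are `scale`
    # times larger.
    H, W = albedo_hi.shape[:2]
    h, w = beauty_lo.shape[:2]
    out = np.zeros((H, W, 3), dtype=np.float32)
    for y in range(H):
        for x in range(W):
            cy, cx = y // scale, x // scale   # matching low-res pixel
            acc = np.zeros(3)
            wsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ly = min(max(cy + dy, 0), h - 1)
                    lx = min(max(cx + dx, 0), w - 1)
                    # Weight each low-res sample by how well its albedo and
                    # normal match the high-res guides at this output pixel.
                    d = (np.sum((albedo_lo[ly, lx] - albedo_hi[y, x]) ** 2) +
                         np.sum((normal_lo[ly, lx] - normal_hi[y, x]) ** 2))
                    wgt = np.exp(-d / (2.0 * sigma ** 2))
                    acc += wgt * beauty_lo[ly, lx]
                    wsum += wgt
            out[y, x] = acc / max(wsum, 1e-8)
    return out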

Also, I don't really understand why DOF rendering isn't much faster.
Anything inside the out-of-focus blur doesn't need anti-aliasing supersampling and could be handled with a simple mean filter instead.
A more sophisticated algorithm could render with variable resolution within a frame ...
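As a minimal illustration of that mean-filter idea, here is a toy post-process that assumes a hypothetical depth pass (function and parameter names are invented for the example):

Code: Select all

import numpy as np
from scipy.ndimage import uniform_filter

def cheap_dof(beauty, depth, focus_dist, focus_range, blur_size=9):
    # Toy post-process DOF: keep in-focus pixels as rendered and replace
    # out-of-focus pixels with a mean-filtered (box-blurred) copy, so those
    # regions never needed heavy supersampling in the first place.
    # beauty is (H, W, 3) float, depth is (H, W) in scene units.
    blurred = np.stack([uniform_filter(beauty[..., c], size=blur_size)
                        for c in range(3)], axis=-1)
    out_of_focus = np.abs(depth - focus_dist) > focus_range
    return np.where(out_of_focus[..., None], blurred, beauty)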

For animations there are already algorithms to interpolate frames.
Again, these could be improved by rendering only cheap passes like albedo or object ID for the intermediate frames.
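For instance, a cheaply rendered object-ID pass for the in-between frame could gate a simple keyframe blend. A hypothetical numpy sketch (the names and logic are invented for illustration, not an existing interpolation algorithm):

Code: Select all

import numpy as np

def cheap_inbetween(frame_a, frame_b, id_a, id_b, id_mid, t=0.5):
    # Hypothetical in-between frame: blend the two rendered keyframes, but
    # use a cheaply rendered object-ID pass of the intermediate frame to
    # avoid ghosting where a pixel only matches one of the keyframes.
    # frame_* are (H, W, 3) floats, id_* are integer (H, W) object-ID passes.
    blend = (1.0 - t) * frame_a + t * frame_b
    match_a = (id_mid == id_a)
    match_b = (id_mid == id_b)
    out = np.where((match_a & ~match_b)[..., None], frame_a, blend)
    out = np.where((match_b & ~match_a)[..., None], frame_b, out)
    return out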

NVIDIA OptiX probably won't provide simple libraries for such things; from my limited knowledge, DLSS isn't flexible enough.

Someone should try to nudge the Intel OIDN guys in those directions ...
provisory
Posts: 235
Joined: Wed Aug 01, 2018 4:26 pm

Re: NVIDIA DLSS in 3D rendering

Post by provisory »

epilectrolytics wrote (Tue Nov 03, 2020 8:26 am):
If the albedo and shading normal passes were rendered at double/quadruple resolution, it should be possible to scale up the render without missing details, similar to what denoisers already do.

I think DLSS isn't needed when we have such good quality denoisers.
I agree that the full-resolution albedo and normal passes are needed anyway (or at least recommended), so I don't see the point of making a noiseless smaller image first and then upscaling it, when you can render a noisy large image and denoise it in one step.
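To make that point concrete, the ray budget is the same either way (illustrative numbers only):

Code: Select all

def rays_traced(width, height, spp):
    # Total primary samples are just pixels * samples-per-pixel, so a
    # half-resolution render at 4x the samples costs the same ray budget
    # as a full-resolution render at 1x the samples.
    return width * height * spp

print(rays_traced(960, 540, 64))    # half resolution, more samples
print(rays_traced(1920, 1080, 16))  # full resolution, fewer samples: equal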
Sharlybg
Donor
Posts: 3101
Joined: Mon Dec 04, 2017 10:11 pm
Location: Ivory Coast

Re: NVIDIA DLSS in 3D rendering

Post by Sharlybg »

The point of upscaling technology is that it saves a lot of computation power: going from a 1K render to 2K increases render time by 4X.
That means if you can render at 1K inside Lux and output at 2K, you get a 4X speed-up, and upscaling 1K to 4K gives a 16X speed-up.
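The back-of-the-envelope behind those numbers, assuming render time is roughly proportional to pixel count (the resolutions here are only illustrative):

Code: Select all

def upscale_speedup(render_height, target_height):
    # Render time scales roughly with pixel count, so rendering low and
    # upscaling saves about (target / render) ** 2 in traced samples.
    return (target_height / render_height) ** 2

print(upscale_speedup(1080, 2160))  # render 1K, output 2K: ~4x
print(upscale_speedup(1080, 4320))  # render 1K, output 4K: ~16x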

I hope AMD's upscaling is much more general and can easily be incorporated into any renderer, path tracer or not.
Support LuxCoreRender project with salts and bounties

Portfolio : https://www.behance.net/DRAVIA
provisory
Posts: 235
Joined: Wed Aug 01, 2018 4:26 pm

Re: NVIDIA DLSS in 3D rendering

Post by provisory »

It's clear, but why not render the full-resolution image (with albedo + normal passes) for 1/4 or 1/16 of the time required for a noise-free image, then use a denoiser?
Sharlybg
Donor
Posts: 3101
Joined: Mon Dec 04, 2017 10:11 pm
Location: Ivory Coast

Re: NVIDIA DLSS in 3D rendering

Post by Sharlybg »

provisory wrote (Tue Nov 03, 2020 5:58 pm):
It's clear, but why not render the full-resolution image (with albedo + normal passes) for 1/4 or 1/16 of the time required for a noise-free image, then use a denoiser?
Not sure I understand :?
Support LuxCoreRender project with salts and bounties

Portfolio : https://www.behance.net/DRAVIA
provisory
Posts: 235
Joined: Wed Aug 01, 2018 4:26 pm

Re: NVIDIA DLSS in 3D rendering

Post by provisory »

You wrote:
The point of upscaling technology is that it saves a lot of computation power.
Put simply, I think you can get the same computational-power saving with denoising instead of upscaling.
Sharlybg
Donor
Posts: 3101
Joined: Mon Dec 04, 2017 10:11 pm
Location: Ivory Coast

Re: NVIDIA DLSS in 3D rendering

Post by Sharlybg »

provisory wrote (Tue Nov 03, 2020 6:23 pm):
You wrote: "The point of upscaling technology is that it saves a lot of computation power."
Put simply, I think you can get the same computational-power saving with denoising instead of upscaling.
Ahh yes, for sure. My issue with denoising is quality: I'm still not impressed by OIDN and OptiX. :|
Support LuxCoreRender project with salts and bounties

Portfolio : https://www.behance.net/DRAVIA
CodeHD
Donor
Posts: 437
Joined: Tue Dec 11, 2018 12:38 pm
Location: Germany

Re: NVIDIA DLSS in 3D rendering

Post by CodeHD »

Theo_Gottwald wrote (Tue Nov 03, 2020 7:06 am):
The result will never be identical to an actually rendered result, unless you have already trained the NN on the real outcome.
From what I have read about this topic, the above statement is exactly the dealbreaker, and the reason why NVIDIA doesn't offer a general DLSS API: all games featuring it have to be pre-trained on the game content; you can't just use it on random content.
epilectrolytics
Donor
Posts: 790
Joined: Thu Oct 04, 2018 6:06 am

Re: NVIDIA DLSS in 3D rendering

Post by epilectrolytics »

provisory wrote (Tue Nov 03, 2020 4:56 pm):
What is the point of making a noiseless smaller image first and then upscaling it, when you can render a noisy large image and denoise it in one step?
Good point.
I'm assuming that rendering at a smaller resolution with more samples (vs. full resolution with fewer samples) results in less noise with a bigger (2x2) grain, and that this would be easier to denoise and then upscale with true detail from the albedo etc. passes.
Rendering everything at the same resolution yields no detail advantage in the AOVs, so the result would be worse with fewer samples per pixel.

Rendering to a low noise level (ready for good denoising results) is computationally expensive and slow.
With a low resolution you get there earlier, and can then use not only quick denoising but also quick AI upscaling (with true-detail AOVs) instead of rendering 3x more pixels until an acceptable noise level is reached.

DLSS probably won't do this, but generally with AI it should be possible to get good upscaling results without scene-dependent training once full-resolution detail is available from the AOVs.

Basically, the render provides the lighting/shading that is missing in an albedo pass.
This light distribution should be easier to upscale than detail.
Therefore, getting the light distribution from a low-resolution render and the detail from a high-resolution AOV should be advantageous.
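A toy numpy sketch of that demodulate/remodulate idea (illustrative only; nothing here is an existing DLSS or OIDN feature, and a real pipeline would use a proper resampler or AI upscaler for the lighting term):

Code: Select all

import numpy as np

def upscale_lighting(beauty_lo, albedo_lo, albedo_hi, scale=2, eps=1e-4):
    # Divide the low-res beauty by its albedo to isolate the (smooth)
    # lighting term, upscale only that term, then multiply by the
    # full-resolution albedo pass to put the texture detail back.
    lighting_lo = beauty_lo / (albedo_lo + eps)
    # Nearest-neighbour upscale for brevity; a smarter resampler or an AI
    # upscaler would go here.
    lighting_hi = np.kron(lighting_lo, np.ones((scale, scale, 1)))
    return lighting_hi * albedo_hi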
Last edited by epilectrolytics on Tue Nov 03, 2020 6:55 pm, edited 1 time in total.