Texture caching... amazing improvement in memory with Cycles, could be great in LuxCore!

Discussion related to LuxCore functionality, implementation and the API.
juangea
Donor
Posts: 332
Joined: Thu Jan 02, 2020 6:23 pm

Re: Texture caching... amazing improvement in memory with Cycles, could be great in LuxCore!

Post by juangea »

What do you mean by "baking"? The good old texture baking, or something more modern that stores light information and reuses it in a very efficient way?
Dade
Developer
Posts: 5672
Joined: Mon Dec 04, 2017 8:36 pm
Location: Italy

Re: Texture caching... amazing improvement in memory with Cycles, could be great in LuxCore!

Post by Dade »

juangea wrote: Thu May 06, 2021 8:07 am However, what do you think about using mip mapping, something that is native to GPUs and maybe can be easily handled?
I think the current Blender code only uses mip maps (not tiles as well, like Arnold). Anyway, the answer is no: when the GPU detects a missing mip map level, it has to:

1) stop the rendering;
2) raise a flag;
3) the CPU detects the raised flag and loads the missing mip map level;
4) the CPU tells the GPU it can resume the rendering.

It is all very cumbersome: the CPU<->GPU round trip is very slow.

Actually, there isn't really any difference between accessing a missing mip map level and accessing a missing tile, so I would say the feature would have about the same implementation complexity.
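
Just to give an idea, a very rough sketch of that round trip in CUDA (all names are made up for illustration, this is not LuxCore code): the kernel bails out and raises a flag when the mip level it needs isn't resident, and the CPU has to notice it, upload the data and relaunch.

#include <cuda_runtime.h>
#include <cstdio>

__device__ int g_missingLevel = -1;             // flag raised by the GPU

__global__ void renderKernel(int loadedUpTo) {
    int neededLevel = 3;                        // level this sample's texture lookup would need
    if (neededLevel > loadedUpTo) {
        // 1) + 2) stop and raise the flag: record the missing level and bail out
        atomicMax(&g_missingLevel, neededLevel);
        return;
    }
    // ... shade using the already-resident mip level ...
}

int main() {
    renderKernel<<<64, 64>>>(/*loadedUpTo=*/2); // only levels 0..2 are resident
    cudaDeviceSynchronize();

    int missing = -1;
    cudaMemcpyFromSymbol(&missing, g_missingLevel, sizeof(int));
    if (missing >= 0) {
        printf("GPU asked for mip level %d\n", missing);
        // 3) the CPU detects the flag and would now load/upload that level
        // 4) ...and relaunch the kernel so the GPU can resume
    }
    return 0;
}

Every one of those synchronize/copy/relaunch steps is a full CPU<->GPU round trip, which is where the time goes.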
Support LuxCoreRender project with salts and bounties
juangea
Donor
Posts: 332
Joined: Thu Jan 02, 2020 6:23 pm

Re: Texture caching... amazing improvement in memory with Cycles, could be great in LuxCore!

Post by juangea »

Oh, I was under the impression that mipmaps could be generated on the fly by the GPU as they were needed; I thought it was an internal, hardware-based process. I didn't know the CPU needed to be involved.

What a pity, something to reduce memory consumption on the GPU would be very welcome. Right now I'm rendering a project with a 2080 Ti and I have to disable several things to be able to render it on the GPU. Here is a picture of the project: it goes up to more than 9 GB, and of course on Windows I can't efficiently access more than about 9-9.2 GB of VRAM.
test_exterior_003.jpg
Sharlybg
Donor
Posts: 3101
Joined: Mon Dec 04, 2017 10:11 pm
Location: Ivory Coast

Re: Texture caching... amazing improvement in memory with Cycles, could be great in LuxCore!

Post by Sharlybg »

How does out-of-core work on your side when GPU memory isn't enough? On my side, every time the GPU can't fit the scene, even out-of-core rendering crashes, with plenty of DDR4 memory left.
Support LuxCoreRender project with salts and bounties

Portfolio : https://www.behance.net/DRAVIA
Dade
Developer
Posts: 5672
Joined: Mon Dec 04, 2017 8:36 pm
Location: Italy

Re: Texture caching... amazing improvement in memory with Cycles, could be great in LuxCore!

Post by Dade »

Sharlybg wrote: Thu May 06, 2021 11:01 am How does out-of-core work on your side when GPU memory isn't enough? On my side, every time the GPU can't fit the scene, even out-of-core rendering crashes, with plenty of DDR4 memory left.
It is all up to CUDA, which does it transparently for the application; however, I have some vague memory of a big limitation on Windows (it may be something that really works only on Linux, I would have to check).
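
If it relies on CUDA managed ("unified") memory, the basic mechanism is cudaMallocManaged(): the driver pages data between system RAM and VRAM behind the application's back. A minimal stand-alone example of the idea (nothing LuxCore-specific, just an assumption about the mechanism):

#include <cuda_runtime.h>
#include <cstdio>

__global__ void touch(float* data, size_t n) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;                 // GPU access triggers page migration
}

int main() {
    // Allocate a buffer that may exceed free VRAM; with managed memory the
    // driver migrates pages on demand (full oversubscription is Linux-only
    // per the CUDA docs; Windows uses the basic model).
    size_t n = size_t(1) << 28;                 // 1 GiB of floats
    float* data = nullptr;
    if (cudaMallocManaged(&data, n * sizeof(float)) != cudaSuccess) {
        printf("allocation failed\n");
        return 1;
    }
    for (size_t i = 0; i < n; i++) data[i] = 1.0f;  // CPU writes, pages live in RAM

    touch<<<(unsigned int)((n + 255) / 256), 256>>>(data, n); // GPU reads, pages migrate
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);
    cudaFree(data);
    return 0;
}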
Support LuxCoreRender project with salts and bounties
Dade
Developer
Posts: 5672
Joined: Mon Dec 04, 2017 8:36 pm
Location: Italy

Re: Texture caching... amazing improvement in memory with Cycles, could be great in LuxCore!

Post by Dade »

juangea wrote: Thu May 06, 2021 10:52 am Oh, I was under the impression that mipmaps could be generated on the fly by the GPU as they were needed; I thought it was an internal, hardware-based process. I didn't know the CPU needed to be involved.
Oh, no, the point is to not have the original (8K, etc.) images in GPU memory at all, but to use only the smallest mip map level adequate for the current rendering resolution (i.e. requiring a lot less RAM in most cases).

You would need the original image in GPU memory to generate the mip map levels on the fly, ending up using more RAM (original image plus mip maps), not less.
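
Just to put rough numbers on it (my own back-of-the-envelope example, nothing LuxCore-specific): each mip level has a quarter of the pixels of the previous one, so keeping only a lower level instead of the full-resolution image saves most of the memory.

#include <cstdio>

// Memory of a square RGBA8 texture at a given mip level
// (each level halves both dimensions, i.e. a quarter of the pixels).
size_t mipLevelBytes(size_t width, size_t height, int level, size_t bytesPerPixel = 4) {
    return (width >> level) * (height >> level) * bytesPerPixel;
}

int main() {
    // An 8K RGBA8 texture: 256 MiB at level 0, only 16 MiB at level 2
    printf("level 0: %zu MiB\n", mipLevelBytes(8192, 8192, 0) >> 20);
    printf("level 2: %zu MiB\n", mipLevelBytes(8192, 8192, 2) >> 20);
    return 0;
}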
Support LuxCoreRender project with salts and bounties
Dade
Developer
Posts: 5672
Joined: Mon Dec 04, 2017 8:36 pm
Location: Italy

Re: Texture caching... amazing improvement in memory with Cycles, could be great in LuxCore!

Post by Dade »

Dade wrote: Thu May 06, 2021 11:31 am
Sharlybg wrote: Thu May 06, 2021 11:01 am How does out-of-core work on your side when GPU memory isn't enough? On my side, every time the GPU can't fit the scene, even out-of-core rendering crashes, with plenty of DDR4 memory left.
It is all up to CUDA, which does it transparently for the application; however, I have some vague memory of a big limitation on Windows (it may be something that really works only on Linux, I would have to check).
It is this stuff: https://docs.nvidia.com/cuda/cuda-c-pro ... ramming-hd

Windows-related limits:
Note that currently these features are only supported on Linux operating systems. Applications running on Windows (whether in TCC or WDDM mode) will use the basic Unified Memory model as on pre-6.x architectures even when they are running on hardware with compute capability 6.x or higher. See Data Migration and Coherency for details.
Support LuxCoreRender project with salts and bounties
Sharlybg
Donor
Posts: 3101
Joined: Mon Dec 04, 2017 10:11 pm
Location: Ivory Coast

Re: Texture caching... amazing improvement in memory with Cycles, could be great in LuxCore!

Post by Sharlybg »

So OOC is completely irrelevant on Windows? :shock:
Support LuxCoreRender project with salts and bounties

Portfolio : https://www.behance.net/DRAVIA
juangea
Donor
Posts: 332
Joined: Thu Jan 02, 2020 6:23 pm

Re: Texture caching... amazing improvement in memory with Cycles, could be great in LuxCore!

Post by juangea »

As far as I understand from that link, the Unified Memory system works on Windows too; it's just that some advanced features, such as "on-demand page migration and GPU memory oversubscription", are supported only on Linux. So Unified Memory still works on Windows, just not as efficiently as on Linux, it seems.

Or that is what I understand.

Regarding the mip maps: ok, I understand. Could it be possible (just as an idea) to do some kind of pre-pass for such calculations and then upload to the GPU only the required mip-mapped textures, based on the camera being rendered?
Dade
Developer
Posts: 5672
Joined: Mon Dec 04, 2017 8:36 pm
Location: Italy

Re: Texture caching... amazing improvement in memory with Cycles, could be great in LuxCore!

Post by Dade »

juangea wrote: Thu May 06, 2021 12:27 pm Regarding the mip maps: ok, I understand. Could it be possible (just as an idea) to do some kind of pre-pass for such calculations and then upload to the GPU only the required mip-mapped textures, based on the camera being rendered?
You cannot predict where paths will bounce, so it is not something you can do "before", only "after".

A viable solution is to "invalidate" (i.e. throw away the result of) the current sample if it requires a mip map level that isn't available, and to add a request for the required level to a list. The CPU downloads the list and uploads the missing levels. The GPU retries rendering the sample (and it may again miss something). Rinse and repeat.

It involves wasted work and a loop of CPU<->GPU interaction; like I said, it is never going to be fast.

It is a cache: like all caches, it works well if, 99% of the time, it already holds all the data required; otherwise it is quite slow (and particularly slow in this case because of the CPU<->GPU interaction).

That is the reason why having a GPU with an SSD attached, like the model AMD proposed some time ago, wasn't such a bad idea (though, as usual, AMD lacked the software ecosystem required to use that kind of specialized hardware).
I think NVIDIA has also recently introduced a technology to access an SSD directly from the GPU (in the RTX 3xxx series, maybe?).
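
A very rough sketch of that invalidate-and-request loop (hypothetical names and layout, not actual LuxCore code): the kernel throws the sample away and appends the missing level to a request list, then the CPU drains the list, uploads the levels and relaunches.

#include <cuda_runtime.h>
#include <cstdio>

struct MipRequest { int textureId; int level; };

__device__ int g_numRequests = 0;

__global__ void renderPass(MipRequest* requests, int maxRequests,
                           int residentLevel /* deepest mip level on the GPU */) {
    int textureId = 7;   // texture this sample would look up (made up)
    int needed = 4;      // mip level the hit's footprint needs (made up)

    if (needed > residentLevel) {
        // Invalidate the sample and queue a request for the missing level
        int slot = atomicAdd(&g_numRequests, 1);
        if (slot < maxRequests)
            requests[slot] = { textureId, needed };
        return;          // the sample result is thrown away
    }
    // ... otherwise shade normally with the resident levels ...
}

int main() {
    const int maxRequests = 1024;
    MipRequest* requests = nullptr;
    cudaMalloc(&requests, maxRequests * sizeof(MipRequest));

    // One pass: render, then let the CPU drain the request list, upload the
    // missing levels and relaunch; each such round trip is what makes it slow.
    renderPass<<<128, 128>>>(requests, maxRequests, /*residentLevel=*/2);
    cudaDeviceSynchronize();

    int numRequests = 0;
    cudaMemcpyFromSymbol(&numRequests, g_numRequests, sizeof(int));
    printf("%d samples missed their mip level this pass\n", numRequests);
    // ... download requests[], upload the levels, reset g_numRequests, relaunch ...

    cudaFree(requests);
    return 0;
}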
Support LuxCoreRender project with salts and bounties