CPU-only features

Discussion related to the LuxCore functionality, implementations and API.
CPU-only features

Post by Dade »

B.Y.O.B. wrote: Tue Jan 23, 2018 8:20 pm It would be much nicer to have a generic "double sided material" which accepts two materials as inputs for front and back, like the mix material.
But I don't know if that's possible to do in the material sampling code in case of translucent materials or glass.
The material is easy to implement and could simply throw an error when used with sub-materials supporting any type of transmission. It would be easy on the CPU ... because any recursive material/texture is a killer for GPUs.
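A minimal sketch of the idea (hypothetical names and data layout, not the actual LuxCore API): a two-sided material that picks the front or back sub-material based on which side of the surface is being shaded, and rejects transmissive sub-materials at construction time rather than during sampling.

```python
# Hypothetical sketch, not the actual LuxCore implementation.
TRANSMISSIVE_TYPES = {"glass", "roughglass", "archglass", "glossytranslucent"}

class TwoSidedMaterial:
    def __init__(self, front, back):
        # Transmission cannot be supported: light crossing the surface
        # would have to switch between the two sub-materials mid-path,
        # which the sampling code cannot express.
        for mat in (front, back):
            if mat["type"] in TRANSMISSIVE_TYPES:
                raise ValueError(
                    "two-sided material does not support transmissive "
                    "sub-material '%s'" % mat["type"])
        self.front = front
        self.back = back

    def select(self, cos_theta):
        # cos_theta: dot product of the geometric normal with the
        # direction toward the viewer; > 0 means the front side faces us.
        return self.front if cos_theta > 0.0 else self.back

m = TwoSidedMaterial({"type": "matte"}, {"type": "mirror"})
m.select(0.5)   # front side visible -> matte sub-material
m.select(-0.5)  # back side visible -> mirror sub-material
```

Checking the sub-material types once, up front, keeps the per-sample hot path to a single sign test, which is also why a non-recursive variant like this would stay GPU-friendly.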

I plan to add support for OpenVDB because I have used it in the past and it is absolutely amazing; Embree hair support is another good candidate. They are both going to be CPU-only features. The strategic decision here is if/how to let the CPU and GPU render engines have different features.

The key point here is the user interface because having different feature sets for CPU and GPU can be very confusing for end users.
Support LuxCoreRender project with salts and bounties

Re: BlendLuxCore Development

Post by blibli »

Dade wrote: Tue Jan 23, 2018 8:48 pm I plan to add support for OpenVDB ... The key point here is the user interface because having different feature sets for CPU and GPU can be very confusing for end users.
OpenVDB is awesome, so I would be happy to have differences between the CPU and GPU feature sets if it brings such powerful tools. And maybe in the long term OpenVDB could be ported to the GPU? (I have no idea how OpenVDB works.)

Re: BlendLuxCore Development

Post by Dade »

blibli wrote: Wed Jan 24, 2018 11:43 am And maybe in the long term OpenVDB could be ported to the GPU? (I have no idea how OpenVDB works.)
It is a HUUUUUUUUUGE task :!:

Re: BlendLuxCore Development

Post by B.Y.O.B. »

Dade wrote: The key point here is the user interface because having different feature sets for CPU and GPU can be very confusing for end users.
It should be doable. It is a bit like in the past, when we had warnings like "this node does not work in Classic API" all over the place.
The feature set already differs a bit between Bidir and Path engines (e.g. indirect ray visibility).
We can add warning labels and explanations directly in the UI.
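One way such per-engine warnings could be driven is from a single compatibility table queried while drawing the UI. This is only a sketch with hypothetical feature and engine names, not the actual BlendLuxCore code:

```python
# Hypothetical feature -> supported-engines table (illustrative entries only).
FEATURE_SUPPORT = {
    "openvdb_volumes": {"PATHCPU", "BIDIRCPU"},         # CPU-only
    "embree_hair": {"PATHCPU", "BIDIRCPU"},             # CPU-only
    "indirect_ray_visibility": {"PATHCPU", "PATHOCL"},  # Path engines only
}

def warning_for(feature, engine):
    """Return a warning string for the UI, or None if the feature works."""
    supported = FEATURE_SUPPORT.get(feature)
    if supported is None or engine in supported:
        return None
    return "%s is not supported by the %s engine" % (feature, engine)

# In a Blender node's draw() method one might then do something like:
#   msg = warning_for("openvdb_volumes", current_engine)
#   if msg:
#       layout.label(text=msg, icon='ERROR')
```

Centralizing the table means the exporter can reuse the same data to decide whether to raise an error or silently skip a feature at export time.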

Re: BlendLuxCore Development

Post by blibli »

Dade wrote: Wed Jan 24, 2018 11:50 am It is a HUUUUUUUUUGE task :!:
Then let's do it in a LOOOOOOOOONG LONG TIME (in a far far away galaxy).
First, anyway, we should see if it could make any sense. IIRC, the most used operations in OpenVDB are bitwise operations, right? It seems those can be done pretty fast with CUDA http://docs.nvidia.com/cuda/cuda-c-prog ... structions, not sure about OpenCL. So if that's true, OpenVDB could be really fast on a GPU?

Re: BlendLuxCore Development

Post by Dade »

blibli wrote: Wed Jan 24, 2018 12:58 pm IIRC, the most used operations in OpenVDB are bitwise operations, right? ... So if that's true, OpenVDB could be really fast on a GPU?
The core problem is porting thousands of lines of code written in C++ with heavy use of templates, etc. to OpenCL C ... we are talking about something in the range of 10+ man-years (1 person working on this for 10 years, a team of 5 people working for 2 years, a team of 10 people working for 1 year, etc.).

Re: BlendLuxCore Development

Post by blibli »

Dade wrote: Wed Jan 24, 2018 2:53 pm The core problem is porting thousands of lines of C++ code with heavy use of templates to OpenCL C ... something in the range of 10+ man-years.
Yep, even if porting is less time-consuming than writing from scratch, I understand it's still a lot of work. And AMD will make consumer CPUs with 32 cores/64 threads this year, so who cares about GPUs anyway :)

Re: BlendLuxCore Development

Post by Dade »

blibli wrote: Wed Jan 24, 2018 3:36 pm Yep, even if porting is less time-consuming than writing from scratch, I understand it's still a lot of work. And AMD will make consumer CPUs with 32 cores/64 threads this year, so who cares about GPUs anyway :)
Unfortunately, it is even more true if you factor in the current (crazy) GPU prices due to the bitcoin/crypto-mining bubble.

Re: CPU-only features

Post by Sharlybg »

blibli wrote: And AMD will make consumer CPUs with 32 cores/64 threads this year, so who cares about GPUs anyway :)
CPUs are not going to 12 nm alone; GPUs are also going to get the same technological boost. Look at what may be possible with the next Nvidia 2080 Ti.
[Attachment: Volta.png]
But for me the major point about development is:

1/ Get the most-used features working noticeably better.

2/ Create a compute strategy to address everyday 3D scene cases:
- Open environments (outdoor, product viz, studio lighting)
- Enclosed spaces (indoor / heavily indirectly-lit 3D scenes)
Then, once you have an efficient solution for these typical scenarios,

3/ You can try to improve things by adding complicated/time-consuming features.

Dade wrote: Unfortunately, it is even more true if you factor in the current (crazy) GPU prices due to the bitcoin/crypto-mining bubble.
This situation is not going to last forever; the new GDDR6 generation is coming.