Windows Build FAILED

Discussion related to LuxCore functionality, implementation and API.
neo2068
Developer
Posts: 260
Joined: Tue Dec 05, 2017 6:06 pm
Location: Germany

Re: Windows Build FAILED

Post by neo2068 »

Yes, it is the same with the CUDA version and also with the OpenCL-only version.
i7 5820K, 32 GB RAM, NVIDIA Geforce RTX 2080 SUPER + GTX 1080, Windows 10 64bit, Blender 2.83.5
Support LuxCoreRender project with salts and bounties
epilectrolytics
Donor
Posts: 790
Joined: Thu Oct 04, 2018 6:06 am

Re: Windows Build FAILED

Post by epilectrolytics »

Just updated BlendLuxCore with the content of LuxCore commit 071e1cb (tentative fix for an OpenCL crash), but I still get a crash when trying to render the Benchmark scene with the GPU devices selected as OpenCL (CUDA works).
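In case anyone wants to reproduce this outside Blender, here is a minimal pyluxcore sketch that forces the PATHOCL engine with an explicit device mask; the property names follow the LuxCore documentation, while the .cfg path and the mask value are placeholders for your own setup:

Code: Select all

# Minimal repro sketch, assuming pyluxcore is importable and that
# "renderengine.type" / "opencl.devices.select" behave as documented.
import pyluxcore

pyluxcore.Init()
props = pyluxcore.Properties("scenes/benchmark/render.cfg")  # placeholder path
props.Set(pyluxcore.Property("renderengine.type", "PATHOCL"))
# One character per detected device: "1" = enabled, "0" = disabled.
props.Set(pyluxcore.Property("opencl.devices.select", "10"))  # placeholder mask

config = pyluxcore.RenderConfig(props)
session = pyluxcore.RenderSession(config)
session.Start()
# ...let it render for a bit, then shut down cleanly.
session.Stop()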
neo2068
Developer
Posts: 260
Joined: Tue Dec 05, 2017 6:06 pm
Location: Germany

Re: Windows Build FAILED

Post by neo2068 »

I installed CUDA SDK 10.2 to compile the new code. The old one I had installed was 8.0, IIRC. Could that be the source of the error messages?
i7 5820K, 32 GB RAM, NVIDIA Geforce RTX 2080 SUPER + GTX 1080, Windows 10 64bit, Blender 2.83.5
Support LuxCoreRender project with salts and bounties
kintuX
Posts: 809
Joined: Wed Jan 10, 2018 2:37 am

Re: Windows Build FAILED

Post by kintuX »

epilectrolytics wrote: Tue Apr 28, 2020 3:11 pm Just updated BlendLuxCore with the content of LuxCore commit 071e1cb (tentative fix for an OpenCL crash), but I still get a crash when trying to render the Benchmark scene with the GPU devices selected as OpenCL (CUDA works).
Experience on my side...

1. OpenCL renders only the final image, but the image gets darker as rendering progresses; the viewport goes instantly black.
OCL.jpg

2. OpenCL+CUDA together show mixed results.
OCL+CUDA.jpg
The viewport is striped/tiled :lol: I had this happening before device selection was added.
OCL+CUDA_viewport.jpg

3. CUDA is fine.
CUDA.jpg

4. CPU is fine.
CPU.jpg

BTW
UI-wise, I think there should simply be a CUDA device option added to the dropdown menu, which would automatically deselect NVIDIA's OpenCL devices to prevent confusion and extra settings.
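Something like this rough sketch on the addon side could do it (assuming pyluxcore.GetOpenCLDeviceList() returns (name, type, ...) tuples and a "CUDA_GPU" type string; both details are assumptions, and the enabled-flag store is hypothetical):

Code: Select all

# Rough sketch of the suggested behaviour: enabling CUDA automatically
# disables the matching NVIDIA OpenCL entries to avoid double selection.
# pyluxcore.Init() must have been called before using the device list.
import pyluxcore

def prefer_cuda(enabled):
    """enabled: dict mapping (device_name, device_type) -> bool (hypothetical store)."""
    for dev in pyluxcore.GetOpenCLDeviceList():
        name, dev_type = dev[0], dev[1]  # tuple layout assumed
        if dev_type == "CUDA_GPU":
            enabled[(name, dev_type)] = True
        elif dev_type == "OPENCL_GPU" and ("NVIDIA" in name.upper()
                                           or "GEFORCE" in name.upper()):
            # The same physical board is already covered by its CUDA entry.
            enabled[(name, dev_type)] = False
    return enabled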
B.Y.O.B.
Developer
Posts: 4146
Joined: Mon Dec 04, 2017 10:08 pm
Location: Germany
Contact:

Re: Windows Build FAILED

Post by B.Y.O.B. »

kintuX wrote: Tue Apr 28, 2020 8:48 pm UI-wise, I think there should simply be a CUDA device option added to the dropdown menu, which would automatically deselect NVIDIA's OpenCL devices to prevent confusion and extra settings.
The UI will be improved, of course.
kintuX
Posts: 809
Joined: Wed Jan 10, 2018 2:37 am

Re: Windows Build FAILED

Post by kintuX »

B.Y.O.B. wrote: Tue Apr 28, 2020 8:54 pm
kintuX wrote: Tue Apr 28, 2020 8:48 pm UI-wise, I think there should simply be a CUDA device option added to the dropdown menu, which would automatically deselect NVIDIA's OpenCL devices to prevent confusion and extra settings.
The UI will be improved, of course.
Indeed, I have faith in you... I was :twisted: just stating the obvious.

I find it simply amazing that this engine is becoming the first truly hybrid one (all vendors).
On a side note (off topic), I'm quite puzzled about how you'll define "Devices" while still offering a choice between OpenCL and CUDA for NVIDIA cards in a mixed system. ;)
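For illustration, this is the double listing I mean (a sketch assuming pyluxcore.GetOpenCLDeviceList() reports name and type first; the exact tuple layout and type strings are assumptions):

Code: Select all

import pyluxcore

pyluxcore.Init()
for dev in pyluxcore.GetOpenCLDeviceList():
    name, dev_type = dev[0], dev[1]  # layout assumed
    print(name, dev_type)
# On a mixed system one NVIDIA board could appear twice,
# e.g. once as "OPENCL_GPU" and once as "CUDA_GPU".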

Respect

BTW
Should this topic be split? It already builds fine on Windows, and we've started bug-fixing the version in development.
mib2berlin
Posts: 53
Joined: Fri Apr 06, 2018 6:29 pm

Re: Windows Build FAILED

Post by mib2berlin »

Hi, I have only tested small scenes, but does anybody see the same performance boost as me?
CUDA is about 1.8 times faster than OpenCL on my stone-age GTX 760.
Thank you for implementing it @developers. :)

Cheers, mib
Dade
Developer
Posts: 5672
Joined: Mon Dec 04, 2017 8:36 pm
Location: Italy

Re: Windows Build FAILED

Post by Dade »

mib2berlin wrote: Tue Apr 28, 2020 9:28 pm Hi, I have only tested small scenes, but does anybody see the same performance boost as me?
CUDA is about 1.8 times faster than OpenCL on my stone-age GTX 760.
There seems to be a pattern: small GPUs gain a LOT from CUDA :?:
Support LuxCoreRender project with salts and bounties
zuljin3d
Posts: 76
Joined: Sun Apr 08, 2018 12:13 pm
Location: Moscow

Re: Windows Build FAILED

Post by zuljin3d »

mib2berlin wrote: Tue Apr 28, 2020 9:28 pm Hi, I have only tested small scenes, but does anybody see the same performance boost as me?
CUDA is about 1.8 times faster than OpenCL on my stone-age GTX 760.
Thank you for implementing it @developers. :)

Cheers, mib
I see a similar boost on my old GTX 960 4GB.

Tell me, is CUDA now also used for viewport rendering? The viewport also feels faster (no delays or lags, faster compiling).
Anyway, sorry for my Google Translate English :)
acasta69
Developer
Posts: 472
Joined: Tue Jan 09, 2018 3:45 pm
Location: Italy

Re: Windows Build FAILED

Post by acasta69 »

zuljin3d wrote: Tue Apr 28, 2020 11:56 pm
mib2berlin wrote: Tue Apr 28, 2020 9:28 pm Hi, I have only tested small scenes, but does anybody see the same performance boost as me?
CUDA is about 1.8 times faster than OpenCL on my stone-age GTX 760.
I see a similar boost on my old GTX 960 4GB.
neo2068 wrote: Mon Apr 27, 2020 6:21 pm I get errors with the latest build with OpenCL and the food demo scene too, e.g. PATHOCL: clFinish(CL_INVALID_COMMAND_QUEUE) or TILEPATHOCL: clEnqueueReadBuffer(CL_OUT_OF_RESOURCES). With CUDA my GTX 780 is 1.5 times faster than the GTX 1080. Is that normal behaviour?
acasta69 wrote: Tue Apr 21, 2020 5:36 pm Now, without native threads and with samples/sec as reference, I get 12% faster performance with OpenCL [than with CUDA on a GTX 970]
It seems we have a wide variety of results: mostly improvements, but not all.
Also, a GTX 780 faster than a 1080 is quite strange, isn't it? The 1080 has more CUDA cores, nearly double the clock speed, and higher memory bandwidth than the 780.
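A quick back-of-the-envelope check makes the oddity concrete (a minimal sketch; core counts and base clocks are NVIDIA's published reference specs, rounded):

Code: Select all

# Theoretical FP32 throughput = cores * 2 FLOP/cycle * clock (rough estimate).
cards = {
    "GTX 780":  (2304, 0.863e9),   # CUDA cores, reference base clock in Hz
    "GTX 1080": (2560, 1.607e9),
}
for name, (cores, clock) in cards.items():
    tflops = cores * 2 * clock / 1e12
    print(f"{name}: ~{tflops:.1f} TFLOPS")
# GTX 780: ~4.0 TFLOPS, GTX 1080: ~8.2 TFLOPS, so on paper the 1080
# should be roughly twice as fast, not 1.5 times slower.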
Support LuxCoreRender project with salts and bounties

Windows 10 64 bits, i7-4770 3.4 GHz, RAM 16 GB, GTX 970 4GB v445.87