
Re: MultiGPU stresstest - GPU(s) stop(s) calculating kernel

Posted: Fri Feb 12, 2021 7:09 am
by Wumme
Martini wrote: Wed Feb 10, 2021 10:02 am
Wumme wrote: Wed Feb 10, 2021 7:15 am
Dade wrote: Tue Feb 09, 2021 11:30 am The latest LuxCore uses C++ code for CPU rendering by default (i.e. there is no need to use/have an OpenCL CPU device). To disable CPU usage, you have to set the number of CPU threads to 0 by setting "native.threads.count" to 0.
...

Code: Select all

LuxCoreUI v2.4 (LuxCore demo: http://www.luxcorerender.org)
[LuxCore][0.000] Configuration: 
[LuxCore][0.000]   opencl.cpu.use = "0"
[LuxCore][0.000]   native.threads.count = "0"
I tried exporting from BlendLuxCore and what it output for me (and worked) is:

Code: Select all

opencl.native.threads.count = 0
I think you are missing the opencl. prefix.
Wumme wrote: Wed Feb 10, 2021 7:15 am

Code: Select all

[LuxCore][0.000]   opencl.cpu.use = "0"
[LuxCore][0.000]   native.threads.count = "0"
[LuxCore][0.000]   opencl.gpu.use = "1"
[LuxCore][0.000]   opencl.devices.select = "1000"
I notice that all your values are quoted. I'm not sure if it makes a difference, but I think if the value is a pure int or float, then it should not be quoted?

Code: Select all

opencl.cpu.use = 0
opencl.native.threads.count = 0
opencl.gpu.use = 1
opencl.devices.select = "1000"
Hope this helps :)
Hey Martini,

Yeah, that is only because the attached output was log output; the settings in the render.cfg itself are without quotes.
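
For reference, here is roughly what those lines look like in the render.cfg itself (the same properties Martini listed, with quotes only around the devices.select string):

Code: Select all

# GPU-only rendering: no native CPU threads, OpenCL GPU devices enabled
opencl.cpu.use = 0
opencl.gpu.use = 1
opencl.native.threads.count = 0
opencl.devices.select = "1000"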

But thanks anyway. :D

Re: MultiGPU stresstest - GPU(s) stop(s) calculating kernel

Posted: Fri Feb 12, 2021 7:15 am
by Wumme
Dade wrote: Wed Feb 10, 2021 10:48 am
Martini wrote: Wed Feb 10, 2021 10:02 am I think you are missing the opencl. prefix.
Yes, I was wrong: "native.threads.count" is for PATHCPU; PATHOCL requires "opencl.native.threads.count".
Yeah, that did it. Now the GPUs are stressed on their own.
I did three 12-hour tests and it failed twice. The next thing I am going to try is a different stress test. Do you have one to recommend? I need to stress all GPUs at the same time, and I do not have a display attached to all of them.

I already tried FurMark, but as far as I know I would need a display attached to each GPU.
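
As for keeping all GPUs busy with LuxCore itself: as far as I understand it, opencl.devices.select has one character per device LuxCore lists at startup (1 = enabled, 0 = disabled), so the mask just needs a 1 in each GPU's position, something along these lines (mask length and positions depend on the device list in the log):

Code: Select all

opencl.gpu.use = 1
opencl.native.threads.count = 0
# enable all listed devices (here assuming four are detected)
opencl.devices.select = "1111"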

Re: MultiGPU stresstest - GPU(s) stop(s) calculating kernel

Posted: Fri Feb 12, 2021 11:32 am
by Dade
Wumme wrote: Fri Feb 12, 2021 7:15 am The next thing I am going to try is a different stress test. Do you have one to recommend? I need to stress all GPUs at the same time, and I do not have a display attached to all of them.
Maybe AMD ProRender? :?: