Is 1070 gpu slower than it should be?

Use this forum for general user support and related questions.
Forum rules
Please upload a testscene that allows developers to reproduce the problem, and attach some images.
TomekK
Posts: 9
Joined: Fri Jan 29, 2021 3:33 pm

Is 1070 gpu slower than it should be?

Post by TomekK » Thu Jun 10, 2021 6:05 pm

So I just figured out two things: one, that my scene had 90 million triangles, which is why it didn’t work on the GPU, and two, that GPU rendering is so much faster than CPU that I can actually get something done. I am using a GTX 1650, but, as my dad was willing, we added a GTX 1070 to my computer, so now I have 2 GPUs. Both are in x16 PCIe 2.0 slots.

However, for my scene, it seems that the 1070 is quite a bit slower than the 1650. Specific details are below, but the crux is that the 1070 is some 0.8 million samples per second slower. When I made a test render with just Suzanne and a plane, both in Cycles and LuxCore, everything went as expected: the 1070 a bit faster than the 1650, and about double the speed with both together.

Just want to ask if this is normal. I would think a more powerful GPU would give faster rendering in every scene, but maybe that’s not the case. Let me know what to do, if anything. GPU drivers were completely reinstalled, if that makes a difference.

Sorry, I can’t attach the Blender file now, but I can attach a photo of a render of my scene (512 samples, both GPUs + CPU).

Another bit: the slower 1070 render of my scene seemed to be a bit less noisy in the indirect lighting areas, but I would attribute that to my CPU having more time to render the difficult areas with light tracing. Most areas look the same.

Thanks,
Tomek


My specs:
Blender 2.83
BlendLuxCore 2.4
AMD FX-8320 CPU
GTX 1070 and GTX 1650 GPUs
650 W PSU

Rendering results: test scenes to 200 samples, my scene to 512 samples and 5.45 clamping.

1650 super:
1:03 blender test
Simple Luxcore no light tracing (~4.2M): 1:43
My scene (with cpu 100 percent light tracing): (~2.6M): 7:16

1070:
0:42 blender test
Simple Luxcore no light tracing (~4.2M): 1:43
Simple Luxcore light tracing (~4.8M): 1:33
My scene (with cpu 100 percent light tracing): (~1.8M): 10:28

Both 1070 and 1650 together:
0:26 blender test
Simple Luxcore no light tracing (~7.9M): 0:56
Simple Luxcore light tracing (~8.4M): 0:53
My scene (photo attached) (with cpu 100 percent light tracing): (~3.7M): 5:11
Attachments
14B9A856-7606-448C-B6EC-966527619CA7.png

User avatar
Dade
Developer
Posts: 5343
Joined: Mon Dec 04, 2017 8:36 pm
Location: Italy

Re: Is 1070 gpu slower than it should be?

Post by Dade » Thu Jun 10, 2021 6:53 pm

It doesn't look right. What LuxCore version are you using? Can you try to run LuxMark on the 1650 and the 1070 alone? What results do you get?

Note: your tests are very short (about 1 min); they may be affected by many factors like loading time, etc. Do you get the same results with a 5-minute-long test?
Support LuxCoreRender project with salts and bounties
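Dade's point about short runs can be put into rough numbers. The sketch below assumes a fixed per-run overhead of 15 seconds (scene load, kernel compile) and a true rate of 4.2 M samples/s; both figures are illustrative assumptions, not measurements from this thread.

```python
# Illustration: a fixed per-run overhead (scene loading, kernel compilation)
# drags down the samples/s a stopwatch-style benchmark reports, and the
# effect is much bigger for short runs. The 4.2 M samples/s rate and the
# 15 s overhead are assumed example figures.

def measured_rate(true_rate, render_seconds, overhead_seconds):
    """Samples/s as a naive wall-clock benchmark would report it."""
    total_samples = true_rate * render_seconds
    return total_samples / (render_seconds + overhead_seconds)

true_rate = 4.2e6   # samples/s the GPU actually sustains while rendering
overhead = 15.0     # seconds spent before any sample is produced

short = measured_rate(true_rate, 60.0, overhead)    # ~1 min test
long = measured_rate(true_rate, 300.0, overhead)    # ~5 min test

print(f"1 min test: {short / 1e6:.2f} M samples/s")   # noticeably below 4.2
print(f"5 min test: {long / 1e6:.2f} M samples/s")    # close to the true rate
```

Under these assumed numbers the 1-minute run under-reports the true rate by about 20%, the 5-minute run by only about 5%, which is why a longer test is a fairer comparison.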

User avatar
TAO
Developer
Posts: 673
Joined: Sun Mar 24, 2019 4:49 pm
Location: France
Contact:

Re: Is 1070 gpu slower than it should be?

Post by TAO » Thu Jun 10, 2021 6:56 pm

In addition to Dade's answer, you need to remember that all GPUs, and potentially other devices like SSDs, use the same bus: they all go over PCI Express, so the more devices you put on it, the less bandwidth each one gets. Legacy PCI used a shared parallel bus architecture, in which the PCI host and all devices shared a common set of address, data, and control lines.
Omid Ghotbi (TAO)
Latest build Download link👇👇
https://github.com/LuxCoreRender/MaxToLux/releases
Last update information
https://forums.luxcorerender.org/viewto ... 590#p30154

TomekK
Posts: 9
Joined: Fri Jan 29, 2021 3:33 pm

Re: Is 1070 gpu slower than it should be?

Post by TomekK » Thu Jun 10, 2021 7:07 pm

Dade wrote:
Thu Jun 10, 2021 6:53 pm
It doesn't look right. What LuxCore version are you using? Can you try to run LuxMark on the 1650 and the 1070 alone? What results do you get?

Note: your tests are very short (about 1 min); they may be affected by many factors like loading time, etc. Do you get the same results with a 5-minute-long test?
I’ll try to run LuxMark on both GPUs later today when I get a chance.

I understand the base tests are too short, but I’m really comparing the samples per second, which seems comparable for both GPUs: the 1070 slightly higher than the 1650, but not a big difference. About 4.2 million samples per second on the simple Suzanne scene for both GPUs. For my more complex scene, the 1650 hovers around 2.6M and the 1070 around 1.8M. Same results with higher-sample tests of my scene. I’ll check longer runs of my test scene later today.

Thanks!
Last edited by TomekK on Thu Jun 10, 2021 7:10 pm, edited 1 time in total.

TomekK
Posts: 9
Joined: Fri Jan 29, 2021 3:33 pm

Re: Is 1070 gpu slower than it should be?

Post by TomekK » Thu Jun 10, 2021 7:10 pm

TAO wrote:
Thu Jun 10, 2021 6:56 pm
In addition to Dade's answer, you need to remember that all GPUs, and potentially other devices like SSDs, use the same bus: they all go over PCI Express, so the more devices you put on it, the less bandwidth each one gets. Legacy PCI used a shared parallel bus architecture, in which the PCI host and all devices shared a common set of address, data, and control lines.
Do you mean that connecting both GPUs together will not take full advantage of both? That I’d expect. But would that explain why the 1070 by itself is significantly slower rendering my scene than the 1650 by itself, when the 1070 is slightly faster in a simple test scene (and should be faster on paper)?

Thanks!

User avatar
TAO
Developer
Posts: 673
Joined: Sun Mar 24, 2019 4:49 pm
Location: France
Contact:

Re: Is 1070 gpu slower than it should be?

Post by TAO » Thu Jun 10, 2021 7:24 pm

TomekK wrote:
Thu Jun 10, 2021 7:10 pm
TAO wrote:
Thu Jun 10, 2021 6:56 pm
In addition to Dade's answer, you need to remember that all GPUs, and potentially other devices like SSDs, use the same bus: they all go over PCI Express, so the more devices you put on it, the less bandwidth each one gets. Legacy PCI used a shared parallel bus architecture, in which the PCI host and all devices shared a common set of address, data, and control lines.
Do you mean that connecting both GPUs together will not take full advantage of both? That I’d expect. But would that explain why the 1070 by itself is significantly slower rendering my scene than the 1650 by itself, when the 1070 is slightly faster in a simple test scene (and should be faster on paper)?

Thanks!
As far as I know, PCI Express is a shared resource: the more devices you connect to it, the more performance you lose when they all use it at the same time.
If you read your motherboard manual, you will see the same explanation there too. Most motherboards will run 1 device at x16 and 2 devices at x8.
Imagine a network: with more connected nodes, each one gets less data when all nodes use the full bandwidth at the same time.
PCI Express has evolved a lot, and it is based on the same concepts as a network. To get the best from PCIe devices, it is necessary to understand the fundamentals of the underlying infrastructure.

Failing to choose the right motherboard, CPU, or bus can lead to major performance bottlenecks and GPU underperformance.

TomekK
Posts: 9
Joined: Fri Jan 29, 2021 3:33 pm

Re: Is 1070 gpu slower than it should be?

Post by TomekK » Thu Jun 10, 2021 7:53 pm

TAO wrote:
Thu Jun 10, 2021 7:24 pm
TomekK wrote:
Thu Jun 10, 2021 7:10 pm
TAO wrote:
Thu Jun 10, 2021 6:56 pm
In addition to Dade's answer, you need to remember that all GPUs, and potentially other devices like SSDs, use the same bus: they all go over PCI Express, so the more devices you put on it, the less bandwidth each one gets. Legacy PCI used a shared parallel bus architecture, in which the PCI host and all devices shared a common set of address, data, and control lines.
Do you mean that connecting both GPUs together will not take full advantage of both? That I’d expect. But would that explain why the 1070 by itself is significantly slower rendering my scene than the 1650 by itself, when the 1070 is slightly faster in a simple test scene (and should be faster on paper)?

Thanks!
As far as I know, PCI Express is a shared resource: the more devices you connect to it, the more performance you lose when they all use it at the same time.
If you read your motherboard manual, you will see the same explanation there too. Most motherboards will run 1 device at x16 and 2 devices at x8.
Imagine a network: with more connected nodes, each one gets less data when all nodes use the full bandwidth at the same time.
PCI Express has evolved a lot, and it is based on the same concepts as a network. To get the best from PCIe devices, it is necessary to understand the fundamentals of the underlying infrastructure.

Failing to choose the right motherboard, CPU, or bus can lead to major performance bottlenecks and GPU underperformance.
My motherboard is an Asus M5A99FX Pro R2.0. I believe it has two x16 slots running at x16, and two more x16 slots running at x4. Both GPUs are in the x16 slots. I believe both GPUs are running at x16, but even if they’re not, they are running at the same bandwidth. So shouldn’t the 1070 still be faster than the 1650?

Actually, I first used a different motherboard, with the 1070 in what turned out to be an x16 slot running at x4. That’s why I originally thought it was slower: as you say, a motherboard bottleneck. But now, with both GPUs in same-bandwidth slots, I’m getting the exact same performance, the 1070 0.8 million samples per second slower than the 1650 in my scene, but not in a simple test scene.

I’ll do a LuxMark test later today to see if it’s a problem with the system, or just with my particular scene (that’s how it looks to me currently).
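The slot-width figures discussed in this exchange can be put into rough numbers. This is a sketch using PCIe 2.0 theoretical rates (5 GT/s per lane with 8b/10b encoding); real transfer throughput is lower, and bus bandwidth matters much less once the scene is resident in VRAM.

```python
# Rough theoretical one-direction bandwidth for PCIe 2.0 links.
# PCIe 2.0 signals at 5 GT/s per lane; 8b/10b encoding carries 8 data bits
# per 10 transmitted, so usable throughput is ~500 MB/s per lane.

GT_PER_LANE = 5.0          # gigatransfers/s per lane (PCIe 2.0)
ENCODING_EFFICIENCY = 0.8  # 8b/10b encoding: 8 useful bits out of 10

def bandwidth_gb_s(lanes):
    """Theoretical one-direction bandwidth in GB/s for a given lane count."""
    # lanes * GT/s * efficiency gives Gb/s; divide by 8 for GB/s
    return lanes * GT_PER_LANE * ENCODING_EFFICIENCY / 8.0

for lanes in (16, 8, 4):
    print(f"x{lanes}: {bandwidth_gb_s(lanes):.1f} GB/s")
```

So an x16 slot running at x4 gives roughly a quarter of the transfer bandwidth (about 2 GB/s vs 8 GB/s on PCIe 2.0), which mostly affects scene upload time rather than steady-state render speed.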

User avatar
TAO
Developer
Posts: 673
Joined: Sun Mar 24, 2019 4:49 pm
Location: France
Contact:

Re: Is 1070 gpu slower than it should be?

Post by TAO » Thu Jun 10, 2021 9:06 pm

Normally the 1070 should be faster.
https://gpu.userbenchmark.com/Compare/N ... 4039vs3609
You can simply test them by removing one GPU and testing one at a time, so there will be no confusion of any kind about a hardware conflict.

TomekK
Posts: 9
Joined: Fri Jan 29, 2021 3:33 pm

Re: Is 1070 gpu slower than it should be?

Post by TomekK » Thu Jun 10, 2021 11:46 pm

Tested both GPUs with LuxMark. Results below. It seems there is no problem with my GPUs; the problem must be with my specific scene. Any ideas what could be the problem?

Luxball:
1650: 10,109
1070: 17,211
Both: 27,408

Microphone:
1650: 7,109
1070: 9,805
Both: 16,970

Lobby:
1650: 2,605
1070: 3,850
Both: 6,449
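A quick sanity check on the LuxMark scores above (the scene names and scores are taken directly from the post): the 1070 beats the 1650 in every scene, and the combined score is close to the sum of the individual scores, which supports the conclusion that the hardware itself is fine.

```python
# LuxMark scores from the post: per-scene scores for each GPU alone
# and for both together.

scores = {
    "LuxBall":    {"1650": 10_109, "1070": 17_211, "both": 27_408},
    "Microphone": {"1650":  7_109, "1070":  9_805, "both": 16_970},
    "Lobby":      {"1650":  2_605, "1070":  3_850, "both":  6_449},
}

for scene, s in scores.items():
    ratio = s["1070"] / s["1650"]          # how much faster the 1070 is
    scaling = s["both"] / (s["1650"] + s["1070"])  # multi-GPU efficiency
    print(f"{scene}: 1070/1650 = {ratio:.2f}, both/sum = {scaling:.2f}")
```

The 1070 comes out 1.4x to 1.7x faster per scene, and the two-GPU scores scale at essentially 100% of the sum, so the slowdown really does look scene-specific rather than a hardware or bus problem.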

User avatar
Dade
Developer
Posts: 5343
Joined: Mon Dec 04, 2017 8:36 pm
Location: Italy

Re: Is 1070 gpu slower than it should be?

Post by Dade » Thu Jun 10, 2021 11:51 pm

TomekK wrote:
Thu Jun 10, 2021 11:46 pm
Tested both GPUs with LuxMark. Results below. It seems there is no problem with my GPUs; the problem must be with my specific scene. Any ideas what could be the problem?
What LuxCore version are you using? Have you tried the in-development v2.6? There may be a problem on old cards when using OptiX (i.e. the NVIDIA library that uses RTX on new GPUs).

Post Reply