RTX Vulkan OCL 2.0 INTEL AMD GPU IA

General project and community related discussions and offtopic threads.
Sharlybg
Donor
Posts: 3101
Joined: Mon Dec 04, 2017 10:11 pm
Location: Ivory Coast

RTX Vulkan OCL 2.0 INTEL AMD GPU IA

Post by Sharlybg »

Things are moving pretty fast in the tech industry right now. I remember talking in past years about how close game graphics were getting to offline renderers while staying realtime. I was also wondering whether we were going to see the gap between offline and realtime solutions collapse.


Realtime engines tend to become more realistic

Offline engines tend to become more realtime-like

And with the recent advancements in the tech industry it is so close, and happening so fast, that I wonder how this will impact every area involving CG. Many questions remain unanswered:



. Are realtime renderers boosted with AI and dedicated ASICs (RTX) going to replace offline solutions, like Blender Render / Mental Ray?


. How could offline renderers keep an edge over realtime? Is there still room for improvement in terms of realism?


. More accurate caustics? Spectral rendering?



Some principles remain true:



New tech from the computer industry boosts both offline and realtime engines:


AI denoisers

RTX-specific hardware

Silicon node improvements



Some lines stay blurred:


Is Vulkan the next OpenCL?

Is it really like OpenGL and OpenCL mixed together?

How on earth can hardware accelerate path tracing operations? I understand why GPUs are faster than CPUs (many small cores doing parallel operations), but these dedicated path tracing cores are a different beast.


Do you think we are going to see a CUDA-like split, with closed RTX versus an open AMD/Intel version?

Could special cores be added to CPUs to speed up path tracing operations?

Is animation going to be the next standard?





HOW DO YOU SEE THE FUTURE?
Support LuxCoreRender project with salts and bounties

Portfolio : https://www.behance.net/DRAVIA
Sharlybg
Donor
Posts: 3101
Joined: Mon Dec 04, 2017 10:11 pm
Location: Ivory Coast

Re: RTX Vulkan OCL 2.0 INTEL AMD GPU IA

Post by Sharlybg »

Even Minecraft is on track :mrgreen: :

https://www.youtube.com/watch?v=k1zxxyjo6gE

unbelievable :!:
Support LuxCoreRender project with salts and bounties

Portfolio : https://www.behance.net/DRAVIA
Sharlybg
Donor
Posts: 3101
Joined: Mon Dec 04, 2017 10:11 pm
Location: Ivory Coast

Re: RTX Vulkan OCL 2.0 INTEL AMD GPU IA

Post by Sharlybg »

A nice answer to these RTX questions, from here: https://corona-renderer.com/forum/index ... c=21433.30
At some point I’ll be preparing a longer post than this but just wanted to quickly offer some insight on ray tracing hardware acceleration and ensure that user expectations are reasonable.

Most renderers work by executing the following three basic operations:

1) They generate rays (initially from the camera),
2) They shoot these rays into the scene (i.e. they do ray tracing),
3) They run shaders at the intersection points of the rays.

Shading typically spawns new rays for reflection/refraction/GI/etc purposes which means going back to step 1.
This 1-2-3 process happens as many times as there are ray bounces.
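To make that structure concrete, here is a minimal sketch of the loop in Python syntax. The callables (generate_camera_rays, trace, shade) are hypothetical placeholders, not any renderer's real API; step 2 is the part that RT hardware accelerates.

```python
# Minimal sketch of the generate / trace / shade loop described above.
# The callables passed in are hypothetical placeholders, not a real renderer API.

def render(camera, scene, generate_camera_rays, trace, shade, max_bounces=4):
    """Returns a dict mapping pixel -> accumulated colour."""
    framebuffer = {}
    rays = generate_camera_rays(camera)           # step 1: ray generation
    for _ in range(max_bounces):
        if not rays:
            break
        hits = trace(scene, rays)                 # step 2: ray tracing (what RT cores speed up)
        rays = []
        for hit in hits:
            pixel, colour, new_rays = shade(hit)  # step 3: shading (runs on the CUDA/shader cores)
            framebuffer[pixel] = framebuffer.get(pixel, 0.0) + colour
            rays.extend(new_rays)                 # secondary rays feed the next bounce
    return framebuffer
```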

Hardware accelerated ray tracing primarily speeds up the second step: i.e. the ‘core’ ray tracing. If the renderer uses really simple shading, then the ray tracing step becomes the most expensive part of the renderer. For example, if you use extremely simple shaders that (say) just read a flat texture and return it, you could easily find out that the ray tracing step takes 99% of the entire render time and shading just takes 1%. In that case, accelerating ray tracing 10 times means that the frame renders 10 times faster, since ray tracing takes most of the time.

Unfortunately, production scenes do not use quite as simple shading as that.

We and other pro renderer vendors have both found cases where shading takes a considerable chunk of the render time. I remember reading a Pixar paper (or maybe it was a presentation) where they were claiming that their (obviously complicated) shaders were actually taking *more* time than ray tracing! Let’s say that, in such a scenario, shading takes 50% of the entire frame time and tracing takes the other 50% (I intentionally ignore ray generation here). In that scenario, speeding up the ray tracer a hundred million times means that you make that 50% ray tracing time go away but you are still left with shading taking the other 50% of the frame! So even though your ray tracer became a hundred million times faster, your entire frame only rendered twice as fast!
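This is essentially Amdahl's law: the overall frame speedup is capped by the fraction of frame time the accelerated part actually occupies. A tiny worked example in Python, using the 50/50 and 99/1 splits mentioned above:

```python
# Amdahl's-law style estimate: overall frame speedup when only the
# ray tracing portion of the frame time gets accelerated.

def frame_speedup(tracing_fraction, tracing_speedup):
    remaining = (1.0 - tracing_fraction) + tracing_fraction / tracing_speedup
    return 1.0 / remaining

print(frame_speedup(0.50, 1e8))  # heavy shading: ~2.0x, no matter how fast tracing gets
print(frame_speedup(0.99, 10))   # trivial shading: ~9.2x, close to the full 10x
```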

All this is to say that when you read claims about a new system making rendering several times faster, you have to ask yourself: was this with simple shading? Like the kind you see in videogames? Or was it in a scene which (for whatever reason) was spending a lot of time tracing rather than shading?

In more technical terms: the RT cores accelerate ray tracing, while the CUDA cores accelerate shading and ray generation. The RT hardware cannot do volume rendering and I think no hair/curve tracing either - so these two techniques would probably fall back to the CUDA cores too - which means no benefit from the RT hardware.

All this is not to say that we’re not excited to see developments on the ray tracing front! On the contrary! But, at the same time, we wanted to ensure that everyone has a clear idea on what they’ll be getting when ray tracing hardware (and the necessary software support) arrives. We have, as explained in other forum posts, already started on supporting it by re-architecting certain parts of Redshift. In fact, this is something we’ve been doing silently (for RS 3.0) during the last few months and in-between other tasks. Hopefully, not too long from now, we’ll get this all working and will have some performance figures to share with you.


Thanks

-Panos
Support LuxCoreRender project with salts and bounties

Portfolio : https://www.behance.net/DRAVIA
lacilaci
Donor
Posts: 1969
Joined: Fri May 04, 2018 5:16 am

Re: RTX Vulkan OCL 2.0 INTEL AMD GPU IA

Post by lacilaci »

Small VRAM is always a problem, so good out-of-core features would definitely be great. From what I heard, Octane's performance halves when it runs out of memory, but at least it is still able to render.

With an RTX 2070 I have to be careful to build everything only for a specific view, and this means some extra time planning the shots. I still wonder what would happen if I needed a large amount of diverse vegetation. I was thinking about getting a 1080 Ti as a secondary GPU to get at least 11 GB of VRAM, but that's not really that big of a difference.

It only gets interesting with the 2080 and NVLink; 22 GB of VRAM is something that could be worth it. I'd still like to see some good out-of-core support in LuxCore though. Maybe some smart proxy system, or I don't know...
Sharlybg
Donor
Posts: 3101
Joined: Mon Dec 04, 2017 10:11 pm
Location: Ivory Coast

Re: RTX Vulkan OCL 2.0 INTEL AMD GPU IA

Post by Sharlybg »

It only gets interesting with the 2080 and NVLink; 22 GB of VRAM is something that could be worth it. I'd still like to see some good out-of-core support in LuxCore though. Maybe some smart proxy system, or I don't know...
I wonder why Nvidia didn't go with a Vega-like native shared memory system?
Support LuxCoreRender project with salts and bounties

Portfolio : https://www.behance.net/DRAVIA
FarbigeWelt
Donor
Posts: 1046
Joined: Sun Jul 01, 2018 12:07 pm
Location: Switzerland

Re: RTX Vulkan OCL 2.0 INTEL AMD GPU IA

Post by FarbigeWelt »

Sharlybg wrote: Thu Jun 20, 2019 2:43 pm
It only gets interesting with the 2080 and NVLink; 22 GB of VRAM is something that could be worth it. I'd still like to see some good out-of-core support in LuxCore though. Maybe some smart proxy system, or I don't know...
I wonder why Nvidia didn't go with a Vega-like native shared memory system?
Did you look at the transfer rates of VRAM versus the PCIe interface?
It is roughly 480 to 1000 GB/s vs. 16 to 32 GB/s (PCIe 3.0 resp. 4.0).
I think RAM sharing is not an option.
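A rough back-of-the-envelope illustration of that gap, applying the bandwidth figures above to a purely hypothetical 8 GB working set:

```python
# Time to stream a hypothetical 8 GB working set once, at the bandwidths quoted above.

working_set_gb = 8.0                      # hypothetical scene data size
bandwidths_gb_s = {
    "VRAM (high end)": 900.0,             # within the 480-1000 GB/s range above
    "PCIe 3.0 x16": 16.0,
    "PCIe 4.0 x16": 32.0,
}

for name, bw in bandwidths_gb_s.items():
    print(f"{name}: {working_set_gb / bw * 1000:.0f} ms per full pass")
# VRAM: ~9 ms, PCIe 3.0: ~500 ms, PCIe 4.0: ~250 ms -- and a path tracer touches
# the scene data many times per frame, not just once.
```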
Light and Word designing Creator - www.farbigewelt.ch - aka quantenkristall || #luxcorerender
MacBook Air with M1
FarbigeWelt
Donor
Posts: 1046
Joined: Sun Jul 01, 2018 12:07 pm
Location: Switzerland

Re: RTX Vulkan OCL 2.0 INTEL AMD GPU IA

Post by FarbigeWelt »

Sharlybg wrote: Thu Jun 20, 2019 12:38 pm A nice answer to these RTX questions, from here: https://corona-renderer.com/forum/index ... c=21433.30
That is interesting to know. RTX support would not help much in a lot of rendering situations.
Light and Word designing Creator - www.farbigewelt.ch - aka quantenkristall || #luxcorerender
MacBook Air with M1
lacilaci
Donor
Posts: 1969
Joined: Fri May 04, 2018 5:16 am

Re: RTX Vulkan OCL 2.0 INTEL AMD GPU IA

Post by lacilaci »

FarbigeWelt wrote: Fri Jun 21, 2019 1:22 am
Sharlybg wrote: Thu Jun 20, 2019 2:43 pm
It only gets interesting with the 2080 and NVLink; 22 GB of VRAM is something that could be worth it. I'd still like to see some good out-of-core support in LuxCore though. Maybe some smart proxy system, or I don't know...
I wonder why Nvidia didn't go with a Vega-like native shared memory system?
Did you look at the transfer rates of VRAM versus the PCIe interface?
It is roughly 480 to 1000 GB/s vs. 16 to 32 GB/s (PCIe 3.0 resp. 4.0).
I think RAM sharing is not an option.
Nvidia does allow using VRAM as a cache and keeping the main data in system RAM. I think that's how Octane works when the scene data doesn't fit into memory, and that's also why the performance drops to around 50% (although I haven't tested myself how bad the performance actually is in that case). The last time I ran out of VRAM in LuxCore, Blender crashed, so I don't know whether this basic out-of-core behaviour should work on its own or whether the renderer has to support it in some way. That said, Octane also does some crazy texture compression, and something with geometry as well, so I don't think they are just relying on Nvidia drivers when you run out of VRAM.

For GPUs it is simply necessary to have good out-of-core setups; even bigger GPUs are nowhere close to the 32-128 GB of RAM people use in bigger archviz projects. On the other hand, if performance "only" halves when you run out of VRAM, it can still be really good: with something like 2x 2080 and 64 GB of RAM, plus a semi-decent CPU in the mix, you can still squeeze a lot of nice performance out of it and do relatively complex scenes with that setup.
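A quick sanity check of that argument, with purely hypothetical relative throughput numbers (arbitrary units, just to show the arithmetic):

```python
# Purely hypothetical relative throughputs, just to illustrate the arithmetic.

gpu_in_core = 10.0          # one GPU with the scene fully in VRAM (arbitrary units)
out_of_core_penalty = 0.5   # "performance only halves" when spilling to system RAM
cpu = 2.0                   # a semi-decent CPU added to the mix (arbitrary units)

setup = 2 * gpu_in_core * out_of_core_penalty + cpu
print(setup)                # 12.0 -> still ahead of a single in-core GPU at 10.0
```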
Sharlybg
Donor
Posts: 3101
Joined: Mon Dec 04, 2017 10:11 pm
Location: Ivory Coast

Re: RTX Vulkan OCL 2.0 INTEL AMD GPU IA

Post by Sharlybg »

Over at Indigo they made a good comparison of GPU versus CPU price/performance value; here are the results:
GPU CPU analysis.jpg

GPUs are far ahead, and this is for a renderer without caching abilities. It becomes hard to justify CPU rendering. The main remaining CPU advantages are still:

_Good Bidir

_Memory
Support LuxCoreRender project with salts and bounties

Portfolio : https://www.behance.net/DRAVIA
lacilaci
Donor
Posts: 1969
Joined: Fri May 04, 2018 5:16 am

Re: RTX Vulkan OCL 2.0 INTEL AMD GPU IA

Post by lacilaci »

Sharlybg wrote: Fri Jun 21, 2019 4:05 pm Over at Indigo they made a good comparison of GPU versus CPU price/performance value; here are the results:

GPU CPU analysis.jpg


GPUs are far ahead, and this is for a renderer without caching abilities. It becomes hard to justify CPU rendering. The main remaining CPU advantages are still:

_Good Bidir

_Memory
OK, but that's Indigo.

In practice, we should factor in the different render engines, their usability, feature sets, etc...

Fully featured and heavily optimized production renderers like V-Ray or Corona can squeeze a lot from a modern CPU and are also able to use system RAM, which in today's workstations can range from 32 to 128 GB. That is simply not available for GPU rendering.

Another thing is bidir. I think Corona will soon make bidir useless with its caustics development, which is already fantastic. We simply need smart ways around complex problems that give plausible results, instead of insane simulations.

All in all, price for performance is a shitty argument when your scene won't fit into VRAM or your renderer can't do what the CPU one does.

Don't get me wrong, I like GPU rendering, there is huge potential. But it needs to be smart and convincing, instead of some bidir insanity (which has its own limitations anyway).