CGI tech news box

General project and community related discussions and offtopic threads.
User avatar
Sharlybg
Donor
Posts: 1441
Joined: Mon Dec 04, 2017 10:11 pm
Location: Ivory Coast

Re: CGI tech news box

Post by Sharlybg » Tue Jul 23, 2019 2:45 pm

I like this part:
Painting the reflectivity parameter from reference is also straightforward. This parameter is equal to the reflective color of the metal at 0 degrees, and is also very close to the reflective colors over a wide range of angles (for most metals, up to 60 degrees or so). This means that the visible color of most of the surface (anything but the edges) can be observed by a skilled artist and painted as a valid reflectivity value.
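A minimal Python sketch of why this works, using Schlick's Fresnel approximation (the F0 values below are illustrative gold-like numbers, not measured data): the painted reflectivity color is F0, the reflectance at 0 degrees, and the computed reflectance stays very close to F0 out to roughly 60 degrees.

```python
import math

def schlick(f0, cos_theta):
    """Reflectance for one color channel: F0 plus a falloff toward 1.0 at grazing angles."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Illustrative (hypothetical) per-channel F0 values for a gold-like metal
f0_gold = (1.00, 0.77, 0.34)

for deg in (0, 30, 60, 85):
    c = math.cos(math.radians(deg))
    print(deg, tuple(round(schlick(f0, c), 3) for f0 in f0_gold))
```

At 0 degrees the result is exactly F0; at 60 degrees (cos = 0.5) the falloff term is only (0.5)^5 ≈ 0.031, so the reflectance is still within a few percent of F0, which is why most of the visible surface can be painted directly as the reflectivity value.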
Support LuxCoreRender project with salts and bounties

Portfolio : https://www.behance.net/DRAVIA

User avatar
FarbigeWelt
Donor
Posts: 789
Joined: Sun Jul 01, 2018 12:07 pm
Location: Switzerland

Re: CGI tech news box

Post by FarbigeWelt » Thu Jul 25, 2019 5:32 pm

Sharlybg wrote:
Tue Jul 23, 2019 2:45 pm
like this part :
This means that the visible color of most of the surface (anything but the edges) can be observed by a skilled artist and painted as a valid reflectivity value.
Do not forget the conclusions ;-)
F5E5F2AC-08FC-4076-859D-E7313CC3B759.png
Conclusions
Just to recall...
BD78432A-BECD-4E37-8BAE-58CB4B7ED953.png
parameterized Schlick's equation
...and visualize
A0B22014-5F7D-4A10-B1E8-C5BCCD8E91BB.png
h independent of r
:geek:
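For reference, the standard form of Schlick's approximation that the attached images parameterize and plot (the attachment's exact parameterization may differ) can be written as:

```latex
% Schlick's Fresnel approximation:
% F_0 is the reflectance at normal incidence (the painted reflectivity color),
% \theta is the angle between the view direction and the surface normal.
F(\theta) = F_0 + (1 - F_0)\,(1 - \cos\theta)^5
```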
160.8 | 42.8 (10.7) Gfp / Windows 10 Pro, intel i7 4770K@3.5, 32 GB | AMD R9 290x+RX 5700 XT, 4/8 GB
17.3 | 19.0 ( 4.7) Gfp / macOS X 13.6, iMac 27'', 2010, intel i7 870@2.93, 24 GB | ATI Radeon HD 5750, 1 GB
#luxcorerender | Gfp = SFFT Gflops

User avatar
epilectrolytics
Donor
Posts: 498
Joined: Thu Oct 04, 2018 6:06 am

Re: CGI tech news box

Post by epilectrolytics » Tue Jul 30, 2019 1:03 pm

Looks like Cycles will soon have an integration for nVidia OptiX (accelerating ray tracing with RT-cores on Turing cards).
RTX hardware is spreading among Lux users too, so I wonder how complicated a translation of PathOCL code to OptiX would be?
MBPro 15" 16GB i7-4850HQ GT750M, MacOS 10.13.6 & Win10Pro PC 16GB Ryzen 2700X, 2 x RTX 2070

User avatar
lacilaci
Donor
Posts: 1380
Joined: Fri May 04, 2018 5:16 am

Re: CGI tech news box

Post by lacilaci » Tue Jul 30, 2019 1:08 pm

epilectrolytics wrote:
Tue Jul 30, 2019 1:03 pm
Looks like Cycles will soon have an integration for nVidia OptiX (accelerating ray tracing with RT-cores on Turing cards).
RTX hardware is spreading among Lux users too, so I wonder how complicated a translation of PathOCL code to OptiX would be?
there's already a build if you are willing to also use pre-release NVIDIA drivers...
https://blender.community/c/graphicall/Cfbbbc/

I haven't tried it because of the driver issue, but one of the tests shows a 2x performance boost!

User avatar
Sharlybg
Donor
Posts: 1441
Joined: Mon Dec 04, 2017 10:11 pm
Location: Ivory Coast

Re: CGI tech news box

Post by Sharlybg » Tue Jul 30, 2019 1:39 pm

I haven't tried it because of the driver issue, but one of the tests shows a 2x performance boost!
It is only 25% faster in Sponza. I'm a bit afraid of the NVIDIA BLACK BOX STRATEGY.

User avatar
Dade
Developer
Posts: 2820
Joined: Mon Dec 04, 2017 8:36 pm

Re: CGI tech news box

Post by Dade » Tue Jul 30, 2019 2:50 pm

epilectrolytics wrote:
Tue Jul 30, 2019 1:03 pm
Looks like Cycles will soon have an integration for nVidia OptiX (accelerating ray tracing with RT-cores on Turing cards).
The next step, after OptiX, is to use iRay directly...
epilectrolytics wrote:
Tue Jul 30, 2019 1:03 pm
RTX hardware is spreading among Lux users too, so I wonder how complicated a translation of PathOCL code to OptiX would be?
It is not possible: OptiX is a CUDA library. Well, the other option is to throw PathOCL out of the window and write PathCUDA (and use the CUDA RTX support directly; OptiX is a cage I don't like, it was very rigid and cumbersome the last time I looked into it).

User avatar
lacilaci
Donor
Posts: 1380
Joined: Fri May 04, 2018 5:16 am

Re: CGI tech news box

Post by lacilaci » Tue Jul 30, 2019 2:59 pm

AMD might bring some realtime raytracing solution... Vulkan maybe?

User avatar
FarbigeWelt
Donor
Posts: 789
Joined: Sun Jul 01, 2018 12:07 pm
Location: Switzerland

Re: CGI tech news box

Post by FarbigeWelt » Tue Jul 30, 2019 3:30 pm

lacilaci wrote:
Tue Jul 30, 2019 2:59 pm
AMD might bring some realtime raytracing solution... Vulkan maybe?
AMD's current official position is that hardware ray tracing is not flexible enough; they argue a mix of hardware and software building blocks is more appropriate. I guess AMD means one can split ray tracing into generic hardware-accelerated subtasks, something like RISC vs. CISC. But that step is for the next or after-next generation.

User avatar
Sharlybg
Donor
Posts: 1441
Joined: Mon Dec 04, 2017 10:11 pm
Location: Ivory Coast

Re: CGI tech news box

Post by Sharlybg » Tue Jul 30, 2019 4:45 pm

It is only 25% faster in Sponza. I'm a bit afraid of the NVIDIA BLACK BOX STRATEGY.
The next step, after OptiX, is to use directly iRay...

The answer from a guy here : https://code.blender.org/2019/07/accele ... w-homepage

nVidia is like most other companies, with Intel and AMD also having a history of being horrible.

However, the base technologies of real-time RT and new AI/ML features are not nVidia’s, and won’t be locked to nVidia – as the next AMD GPUs will have similar hardware support. (See PS5/XBox 2020)

As for ‘cornering the market’ – RTX features and technologies are already in Vulkan and other full OSS frameworks. These are NOT nVidia specific, even though nVidia uses their CUDA and OptiX technologies in the implementation.

The technology nVidia is using in the RTX GPUs is hardware based on Microsoft's research technologies, which were publicly released as DXR for ray tracing and WinML(+) for ML/AI in early 2018. nVidia RTX cards are built from Microsoft's work, and it isn't exclusive to nVidia at all.

This might sound more dubious, but remember Microsoft in the 2010s is not Microsoft from the 00s – and even though they had a lot of OSS projects and contributions in the old days, they now are shoving tons of proprietary code out with unrestricted OSS licensing.

Which is why it is important to notice that Microsoft has also been working to help Vulkan use the DXR/WinML technologies – making the technologies fully cross platform. *Except for OS X that is still trying to break/beat Vulkan while also killing support for OpenGL or OpenCL.

OpenCL was rather good, but even after Apple pushed it, it was a problem for them, as OS X couldn't handle ubiquitous OpenCL calls through the OS or in several applications. I also don't like to see OpenCL go away, but right now it has problems, as it can't support the faster GPU technologies that are only being implemented through Vulkan and DX12. OpenCL needs a revamp, or we all need to move to Vulkan.

Linux also suffers from the issues Apple had with heavy OpenCL and GPU usage. Linux and OS X both need full GPU preemption and SMP features like Windows has offered since 2007. It is this core lack of GPU technologies in all non-Windows operating system technologies that has handed the graphical future to Microsoft. Windows can hit faster GPU performance while still allowing tons of GPU code to execute in tons of applications and throughout the OS without the worry of locks or cooperative multitasking scheduling like Linux and OS X hit.

I digress…
So, the ‘technologies’ that RTX hardware is showcasing aren't technically pure nVidia, and nVidia doesn't have control over them. This is how/why AMD is planning RT and AI hardware for their next GPUs next year, along with basic support in upcoming drivers, just as Intel is planning GPUs and embedded support in the future. Like nVidia, both Intel's and AMD's technologies come from Microsoft Research, which Microsoft has provided to the entire hardware industry for free.

The XBox and PS5 stories map out AMD’s RT and ML/AI upcoming features, as those AMD GPUs will have the RT technologies, via Vulkan on PS5 and both DX12 and Vulkan on XBox.

So OptiX and CUDA are nVidia implementations, but the originating concepts exist outside nVidia for anyone to use, for free, and are already really strong and doing well in the Vulkan implementation.

Take care of yourself, this isn’t worth worrying about. Also note you will be able to do this stuff on AMD and possibly Intel GPUs next year – so just hold on and let nVidia do the heavy lifting, which seems to be AMD’s strategy as well.

User avatar
lacilaci
Donor
Posts: 1380
Joined: Fri May 04, 2018 5:16 am

Re: CGI tech news box

Post by lacilaci » Tue Jul 30, 2019 5:38 pm

So hybrid rendering is where AMD goes with raytracing..

https://twitter.com/bhsavery/status/115 ... 72065?s=09


these hybrid modes are becoming a thing..
Unreal has a great renderer + RTX today
Blender has Eevee for fast realtime feedback + Cycles, which uses the same shaders and just switches to path tracing..

I think for LuxCore, Eevee could be attractive as a previz renderer/render mode, so that users can preview maps, roughness and bumps before they pull the trigger and do a full LuxCore rendering.... I guess this is on B.Y.O.B. mostly

I don't consider Eevee even close to Unreal, but it is a fantastic previz tool when working with Cycles...
Aside from Cycles, Unreal and now ProRender, no other render engine has proper realtime feedback; most require a ton of preview renderings.
