
Re: CGI tech news box

Posted: Tue Jul 23, 2019 2:45 pm
by Sharlybg
like this part :
Painting the reflectivity parameter from reference is also straightforward. This parameter is equal to the reflective color of the metal at 0 degrees, and is also very close to the reflective colors over a wide range of angles (for most metals, up to 60 degrees or so). This means that the visible color of most of the surface (anything but the edges) can be observed by a skilled artist and painted as a valid reflectivity value.
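To make the quoted claim concrete, here is a quick numeric sketch (mine, not from the article) of Schlick's approximation, F(θ) = F0 + (1 − F0)(1 − cos θ)⁵. For a metal channel with F0 = 0.9, reflectance stays within about 1% of F0 out to 60 degrees and only climbs toward 1.0 at grazing angles:

```python
import math

def schlick(f0, theta_deg):
    """Schlick's approximation of Fresnel reflectance.

    f0: reflectance at normal incidence (0 degrees), per color channel
    theta_deg: angle of incidence in degrees
    """
    cos_t = math.cos(math.radians(theta_deg))
    return f0 + (1.0 - f0) * (1.0 - cos_t) ** 5

# Reflectance barely moves out to ~60 degrees, then rises sharply
# toward 1.0 at grazing angles -- which is why an artist can paint
# most of the surface (everything but the edges) as F0 directly.
for angle in (0, 30, 60, 80, 89):
    print(f"{angle:2d} deg -> {schlick(0.9, angle):.4f}")
```

At 60 degrees the (1 − cos θ)⁵ term is only 0.5⁵ ≈ 0.03, so the deviation from F0 is a few thousandths for a bright metal.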

Re: CGI tech news box

Posted: Thu Jul 25, 2019 5:32 pm
by FarbigeWelt
Sharlybg wrote: Tue Jul 23, 2019 2:45 pm like this part :
This means that the visible color of most of the surface (anything but the edges) can be observed by a skilled artist and painted as a valid reflectivity value.
Do not forget the conclusions ;-)
Just to recall...
parameterized Schlick's equation
...and visualize
Conclusions
:geek:

Re: CGI tech news box

Posted: Tue Jul 30, 2019 1:03 pm
by epilectrolytics
Looks like Cycles will soon have an integration for nVidia OptiX (accelerating ray tracing with RT-cores on Turing cards).
RTX hardware is spreading among Lux users too, so I wonder how complicated a translation of PathOCL code to OptiX would be?

Re: CGI tech news box

Posted: Tue Jul 30, 2019 1:08 pm
by lacilaci
epilectrolytics wrote: Tue Jul 30, 2019 1:03 pm Looks like Cycles will soon have an integration for nVidia OptiX (accelerating ray tracing with RT-cores on Turing cards).
RTX hardware is spreading among Lux users too, so I wonder how complicated a translation of PathOCL code to OptiX would be?
there's already a build if you are willing to also use pre-release nvidia drivers...
https://blender.community/c/graphicall/Cfbbbc/

I haven't tried because of the driver issue, but one of the tests shows a 2x performance boost!

Re: CGI tech news box

Posted: Tue Jul 30, 2019 1:39 pm
by Sharlybg
I haven't tried because of the driver issue, but one of the tests shows a 2x performance boost!
It is only 25% faster in Sponza. I'm a bit afraid of the NVIDIA BLACK BOX STRATEGY.

Re: CGI tech news box

Posted: Tue Jul 30, 2019 2:50 pm
by Dade
epilectrolytics wrote: Tue Jul 30, 2019 1:03 pm Looks like Cycles will soon have an integration for nVidia OptiX (accelerating ray tracing with RT-cores on Turing cards).
The next step, after OptiX, is to use iRay directly...
epilectrolytics wrote: Tue Jul 30, 2019 1:03 pm RTX hardware is spreading among Lux users too, so I wonder how complicated a translation of PathOCL code to OptiX would be?
It is not possible: OptiX is a CUDA library. Well, the other option is to throw PathOCL out of the window and write PathCUDA (and use CUDA's RTX support directly; OptiX is a cage I don't like, it was very rigid and cumbersome the last time I looked into it).

Re: CGI tech news box

Posted: Tue Jul 30, 2019 2:59 pm
by lacilaci
amd might bring some realtime raytracing solution... vulkan maybe?

Re: CGI tech news box

Posted: Tue Jul 30, 2019 3:30 pm
by FarbigeWelt
lacilaci wrote: Tue Jul 30, 2019 2:59 pm amd might bring some realtime raytracing solution... vulkan maybe?
AMD's current official opinion is that hardware ray tracing is not flexible enough. They explain that a mix of hardware and software is more appropriate. I guess AMD means one can split ray tracing into generic hardware-accelerated subtasks, something like RISC vs. CISC. But this step is for the next generation or the one after that.

Re: CGI tech news box

Posted: Tue Jul 30, 2019 4:45 pm
by Sharlybg
It is only 25% faster in Sponza. I'm a bit afraid of the NVIDIA BLACK BOX STRATEGY.
The next step, after OptiX, is to use iRay directly...

The answer from a guy here: https://code.blender.org/2019/07/accele ... w-homepage

nVidia is like most other companies, with Intel and AMD also having a history of being horrible.

However, the base technologies of real-time RT and new AI/ML features are not nVidia’s, and won’t be locked to nVidia – as the next AMD GPUs will have similar hardware support. (See PS5/XBox 2020)

As for ‘cornering the market’ – RTX features and technologies are already in Vulkan and other full OSS frameworks. These are NOT nVidia specific, even though nVidia uses their CUDA and OptiX technologies in implementation.

The technology nVidia is using in the RTX GPUs is hardware based from Microsoft’s research technologies that were publicly released as DXR technologies for Ray Tracing and WinML(+) for ML/AI in early 2018. nVidia RTX cards are built from Microsoft’s work, and it isn’t exclusive to nVidia at all.

This might sound more dubious, but remember Microsoft in the 2010s is not Microsoft from the 00s – and even though they had a lot of OSS projects and contributions in the old days, they now are shoving tons of proprietary code out with unrestricted OSS licensing.

Which is why it is important to notice that Microsoft has also been working to help Vulkan use the DXR/WinML technologies – making the technologies fully cross platform. *Except for OS X that is still trying to break/beat Vulkan while also killing support for OpenGL or OpenCL.

OpenCL was rather good, but even with Apple pushing it, it was a problem for them, as OS X couldn’t handle ubiquitous OpenCL calls through the OS or in several applications. I also don’t like to see OpenCL go away, but right now it has problems, as it can’t support faster GPU technologies that are only being implemented through Vulkan and DX12. OpenCL needs a revamp or we all need to move to Vulkan.

Linux also suffers from the issues Apple had with heavy OpenCL and GPU usage. Linux and OS X both need full GPU preemption and SMP features like Windows has offered since 2007. It is this core lack of GPU technologies in all non-Windows operating system technologies that has handed the graphical future to Microsoft. Windows can hit faster GPU performance while still allowing tons of GPU code to execute in tons of applications and throughout the OS without the worry of locks or cooperative multitasking scheduling like Linux and OS X hit.

I digress…
So, the ‘technologies’ that RTX hardware is showcasing aren’t technically pure nVidia, and nVidia doesn’t have control over them. This is how/why AMD is planning on RT and AI hardware for their next GPUs next year, along with basic support in upcoming drivers, just as Intel is planning on GPUs and embedded support in the future. Like nVidia’s, both Intel’s and AMD’s technologies come from Microsoft Research, which Microsoft has provided to the entire hardware industry for free to use.

The XBox and PS5 stories map out AMD’s RT and ML/AI upcoming features, as those AMD GPUs will have the RT technologies, via Vulkan on PS5 and both DX12 and Vulkan on XBox.

So OptiX and CUDA are nVidia implementations, but the originating concepts exist outside nVidia for anyone to use, for free, and are already really strong and doing well with the Vulkan implementation.

Take care of yourself, this isn’t worth worrying about. Also note you will be able to do this stuff on AMD and possibly Intel GPUs next year – so just hold on and let nVidia do the heavy lifting, which seems to be AMD’s strategy as well.

Re: CGI tech news box

Posted: Tue Jul 30, 2019 5:38 pm
by lacilaci
So hybrid rendering is where amd goes with raytracing..

https://twitter.com/bhsavery/status/115 ... 72065?s=09


these hybrid modes are becoming a thing..
unreal has a great renderer + rtx today
blender has eevee for fast realtime feedback + cycles that uses same shaders just switches to pathtracing..

I think for luxcore, eevee could be attractive as a previz renderer / render mode, so that users can preview maps, roughness and bumps before they pull the trigger and do a full luxcore rendering.... I guess this is on B.Y.O.B. mostly

I don't consider eevee even close to unreal, but it is a fantastic previz tool when working with cycles...
Aside from cycles, unreal and now prorender, no other render engine has proper realtime feedback, so a ton of preview renderings is usually required.