Interrupt Async Imagepipeline

Discussion related to LuxCore functionality, implementation and API.
B.Y.O.B.
Developer
Posts: 4146
Joined: Mon Dec 04, 2017 10:08 pm
Location: Germany

Interrupt Async Imagepipeline

Post by B.Y.O.B. »

Hi,

to use the denoiser in the viewport, I need the ability to interrupt the denoising process (all threads).
The easiest way would of course be to use the already existing imagepipeline plugin, but I would need an InterruptAsyncExecuteImagePipeline() function for it. I tried to implement it myself, but found that with Boost threads you can only use boost::thread::interrupt() if you define interruption points yourself. So, if I'm not mistaken, I would need to add these to the OIDN source code, and we would have to integrate our custom fork of OIDN, which would make everything a lot more complicated.
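For what it's worth, the interruption-point pattern that boost::thread requires can be sketched with a plain cancellation flag. Here is a minimal Python analogue (the tile loop is a hypothetical stand-in for the denoiser, not OIDN code):

```python
import threading
import time

def denoise_tiles(cancel, n_tiles):
    """Stand-in for a denoise pass: between tiles we poll the cancel
    flag - each poll is the equivalent of a Boost interruption point."""
    done = 0
    for _ in range(n_tiles):
        if cancel.is_set():      # cooperative interruption point
            break
        time.sleep(0.01)         # pretend to denoise one tile
        done += 1
    return done

cancel = threading.Event()
result = []
t = threading.Thread(target=lambda: result.append(denoise_tiles(cancel, 100)))
t.start()
time.sleep(0.05)                 # let a few tiles run
cancel.set()                     # request interruption ...
t.join()                         # ... and the thread exits promptly
print(result[0], "of 100 tiles done before interruption")
```

The catch is exactly the one described above: OIDN never polls such a flag internally, so a flag like this only helps between filter invocations, not in the middle of one.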

Do you have other ideas how to approach this problem?
In the experimental Optix integration I launched an Optix process and just killed it via Popen's terminate(): https://github.com/LuxCoreRender/BlendL ... rt.py#L142
But with a thread this is not as easy apparently.
Dade
Developer
Posts: 5672
Joined: Mon Dec 04, 2017 8:36 pm
Location: Italy

Re: Interrupt Async Imagepipeline

Post by Dade »

B.Y.O.B. wrote: Wed Feb 13, 2019 11:29 pm to use the denoiser in the viewport, I need the ability to interrupt the denoising process (all threads).
The easiest way would of course be to use the already existing imagepipeline plugin, but I would need an InterruptAsyncExecuteImagePipeline() function for it. I tried to implement it myself, but found that with Boost threads you can only use boost::thread::interrupt() if you define interruption points yourself. So, if I'm not mistaken, I would need to add these to the OIDN source code, and we would have to integrate our custom fork of OIDN, which would make everything a lot more complicated.
We can file a request to add the capability to interrupt the denoising process to OIDN; it is something they will have to add sooner or later. However, I would not hold my breath.
B.Y.O.B. wrote: Wed Feb 13, 2019 11:29 pm Do you have other ideas how to approach this problem?
In the experimental Optix integration I launched an Optix process and just killed it via Popen's terminate(): https://github.com/LuxCoreRender/BlendL ... rt.py#L142
But with a thread this is not as easy apparently.
Too cumbersome and slow: the overhead alone of executing the external command, passing the data, killing the process and so on is really huge. You could instead fork the process and use one of the many inter-process communication mechanisms to return the result; at that point you can just kill the process.

https://www.boost.org/doc/libs/1_65_1/d ... ocess.html is the kind of stuff that could be used.

It can be done in C++, with a couple of dedicated Blender methods callable from Python as usual.
Support LuxCoreRender project with salts and bounties

Re: Interrupt Async Imagepipeline

Post by B.Y.O.B. »

Dade wrote: Thu Feb 14, 2019 12:16 am You could fork the process and use one of the many inter-process communication ways to return the result.
It looks like Boost.Process does not support forking, because it is not possible on Windows.
So I think we would have to compile an additional binary that is called by Boost.Process?

Re: Interrupt Async Imagepipeline

Post by Dade »

B.Y.O.B. wrote: Thu Feb 14, 2019 8:07 am It looks like Boost.Process does not support forking, because it is not possible on Windows.
So I think we would have to compile an additional binary that is called by Boost.Process?
At that point we could just include Intel's OIDN command line utility. The big advantage of a Unix fork is that the input would have been copied during the fork too, so you would only have to transfer the output back in some way, for instance with shared memory.

Re: Interrupt Async Imagepipeline

Post by B.Y.O.B. »

Dade wrote: Thu Feb 14, 2019 10:22 am At that point we can just include Intel Oidn command line utility
The problem is that it only supports the PFM file format. We would have to save as EXR, convert to PFM, denoise, and then do the whole thing backwards again. Or add PFM support to LuxCore.
I think it might be easier to write a small command line application myself and, as you suggested, use shared memory to transfer the data back and forth (https://www.boost.org/doc/libs/1_35_0/d ... esses.html).
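Boost.Interprocess would be the C++ route; for illustration only, Python's multiprocessing.shared_memory (3.8+) shows the same named-segment idea in a few lines (the pixel values and both "sides" living in one process are just for the demo):

```python
from multiprocessing import shared_memory
import struct

# "Writer" side: pack a float pixel buffer into a named shared memory block.
pixels = [0.25, 0.5, 0.75]
payload = struct.pack("%df" % len(pixels), *pixels)
shm = shared_memory.SharedMemory(create=True, size=len(payload))
shm.buf[:len(payload)] = payload

# "Reader" side (normally another process): attach to the block by name.
reader = shared_memory.SharedMemory(name=shm.name)
readback = struct.unpack("%df" % len(pixels), bytes(reader.buf[:len(payload)]))
print(readback)  # the values fit float32 exactly, so the round trip is exact

reader.close()
shm.close()
shm.unlink()  # frees the segment; on Windows it vanishes with the last handle
```

The real denoiser binary would attach by a name passed on its command line, read the input buffer, and write the denoised result into a second segment.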

Re: Interrupt Async Imagepipeline

Post by Dade »

B.Y.O.B. wrote: Fri Feb 15, 2019 12:25 pm
Dade wrote: Thu Feb 14, 2019 10:22 am At that point we can just include Intel Oidn command line utility
Problem is that it only supports the PFM file format. We would have to save as EXR, convert to PFM, denoise, and do the whole thing backwards again. Or add PFM support to LuxCore.
I think it might be easier to write a small command line application myself and, as you suggested, use shared memory to transfer the data back and forth (https://www.boost.org/doc/libs/1_35_0/d ... esses.html).
It may be a lot easier to just write a .pfm file from Python once you have the RGB buffer from LuxCore, and then read the result back from the new .pfm file. PFM is a trivial file format to write/read: http://www.pauldebevec.com/Research/HDR/PFM
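Based on that spec (a 'PF' header line, a 'width height' line, a scale whose sign gives the endianness, then raw float32 rows stored bottom-to-top), a stdlib-only writer/reader is indeed short; the function names here are mine:

```python
import struct

def write_pfm(path, width, height, rgb):
    """Write a color PFM; the negative scale marks little-endian floats."""
    assert len(rgb) == width * height * 3
    with open(path, "wb") as f:
        f.write(b"PF\n%d %d\n-1.0\n" % (width, height))
        for y in reversed(range(height)):        # PFM rows go bottom-to-top
            row = rgb[y * width * 3:(y + 1) * width * 3]
            f.write(struct.pack("<%df" % (width * 3), *row))

def read_pfm(path):
    """Read a color PFM back into (width, height, flat top-to-bottom RGB)."""
    with open(path, "rb") as f:
        assert f.readline().strip() == b"PF"
        width, height = map(int, f.readline().split())
        endian = "<" if float(f.readline()) < 0 else ">"
        fmt = endian + "%df" % (width * 3)
        rows = [struct.unpack(fmt, f.read(width * 12)) for _ in range(height)]
        rgb = []
        for row in reversed(rows):               # restore top-to-bottom order
            rgb.extend(row)
        return width, height, rgb
```

For full-resolution buffers, a numpy-based version (tofile/fromfile on a float32 array) would avoid the per-row struct packing and be considerably faster.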

Re: Interrupt Async Imagepipeline

Post by B.Y.O.B. »

Dade wrote: Fri Feb 15, 2019 1:00 pm It may be a lot easier to just write a .pfm file from Python once you have the RGB buffer from LuxCore, and then read the result back from the new .pfm file. PFM is a trivial file format to write/read: http://www.pauldebevec.com/Research/HDR/PFM
I tested this; writing the PFM takes about 45 milliseconds:

Code:

size: 964 * 1757 * 3
creating numpy buffer 0.0
GetOutputFloat into numpy buffer 0.02200460433959961
save pfm 0.02323603630065918
total time: 0.04524064064025879
I think I'll go with it for now.

Re: Interrupt Async Imagepipeline

Post by B.Y.O.B. »

I have this solution (writing PFM files to disk) working now.
What I don't like about it, though, is the amount of data that is written. Since PFM files are uncompressed, they can get pretty big: turn around in the viewport a bit and denoise a few times, and a few hundred megabytes have been written and read.
I save the files in the temp directory, which on modern computers is usually located on an SSD, and it's not exactly great to use up SSD write cycles like this. Granted, modern SSDs can far exceed their specified write cycles, but in this case it's not even technically necessary to write anything to disk; we could pass the data around in memory.

Ideas for solutions:
- Let the user decide where to save the temp files. This might be acceptable for the first alpha releases, but in the long run I would like to avoid it.
- Python has a SpooledTemporaryFile that is only saved to disk as a last resort and is otherwise kept in memory, but on Windows it can only be opened once and is destroyed on close, so it can't be passed to another process (because Python opens it when it is created).
- Write my own denoiser binary with boost shared memory after all?
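The SpooledTemporaryFile rollover behaviour mentioned above can be observed directly (the _rolled flag is a CPython internal, checked here only for demonstration):

```python
import tempfile

f = tempfile.SpooledTemporaryFile(max_size=1024)
f.write(b"x" * 100)
in_memory = not f._rolled     # _rolled stays False while data is in memory
f.write(b"x" * 2000)          # crossing max_size spills it to a real file
rolled = f._rolled
f.close()
print(in_memory, rolled)      # True True
```

So the spooling itself works everywhere; the problem is only that another process cannot open the in-memory incarnation by name.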

Re: Interrupt Async Imagepipeline

Post by Dade »

B.Y.O.B. wrote: Fri Feb 22, 2019 10:04 am - Write my own denoiser binary with boost shared memory after all?
Check whether shared memory is supported on Windows; some of this stuff is Unix-only.

Re: Interrupt Async Imagepipeline

Post by B.Y.O.B. »

I'm not sure if I understand this correctly:
https://www.boost.org/doc/libs/1_35_0/doc/html/interprocess/sharedmemorybetweenprocesses.html#interprocess.sharedmemorybetweenprocesses.sharedmemory.emulation wrote:Boost.Interprocess provides portable shared memory in terms of POSIX semantics.
Some operating systems don't support shared memory as defined by POSIX:
  • Windows operating systems provide shared memory using memory backed by the paging file but the lifetime semantics are different from the ones defined by POSIX (see Native windows shared memory section for more information).
  • Some UNIX systems don't support shared memory objects at all. MacOS is one of these operating systems.
In those platforms, shared memory is emulated with mapped files created in the temporary files directory.
Because of this emulation, shared memory has filesystem lifetime in those systems.
(emphasis by me)
What is a "mapped file" in this context? A bit further below they call it a "memory mapped file". Is it saved to disk?
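As far as I can tell, a memory-mapped file is a regular file that the OS maps into the process's address space, so I/O becomes plain memory access and the OS writes dirty pages back to disk lazily. So yes, on those platforms the "shared memory" is ultimately disk-backed. A quick sketch with Python's mmap module (file name made up):

```python
import mmap
import os
import tempfile

# Create the backing file; it must already have the size we want to map.
path = os.path.join(tempfile.gettempdir(), "mapped_demo.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 4096)   # map the whole file into memory
    mm[0:5] = b"hello"                 # an ordinary memory write
    mm.flush()                         # explicitly push the page to disk
    mm.close()

with open(path, "rb") as f:            # the write is visible in the file
    data = f.read(5)
print(data)  # b'hello'
os.remove(path)
```

Two processes mapping the same file see the same pages, which is how the emulation provides sharing; the disk writes are the price.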