IRRADIANCE & GetOutputFloat


Post by mick »

Hi,

I followed the suggestion to use the output type IRRADIANCE.

Code: Select all

        renderengine.seed = 11
        renderengine.type = PATHCPU
        sampler.type = SOBOL
        film.width = 800
        film.height = 600
        # The first plugin: the linear tonemapper multiplies the pixel colors with the scale
        film.imagepipelines.0.0.type = TONEMAP_LINEAR
        film.imagepipelines.0.0.scale = 5e-5
        # The second plugin: gamma correction
        film.imagepipelines.0.1.type = GAMMA_CORRECTION
        film.imagepipelines.0.1.value = 2.2
        
        film.imagepipeline.2.type = CONTOUR_LINES
        film.imagepipeline.2.range = 20
        film.imagepipeline.2.steps = 10
        film.imagepipeline.2.zerogridsize = 8
        
        film.outputs.0.type = RGB_IMAGEPIPELINE
        film.outputs.0.filename = image0.png
        film.outputs.1.type = RGB
        film.outputs.1.filename = image1.hdr
        film.outputs.2.type = IRRADIANCE
        film.outputs.2.filename = image2.hdr
        film.outputs.3.type = RAYCOUNT
        film.outputs.3.filename = image3.png
But image2 is just a white-on-black mask of my irradiated objects, and image1 is just white. Maybe I need to parametrize the plugin with the right scale?

Also I wanted to check the output actual values.

Code: Select all

  buffer = bgl.Buffer(bgl.GL_FLOAT, [800 * 600 * 3])
  print(session.GetFilm().GetOutputFloat(pyluxcore.OUTPUT_IRRADIANCE, buffer, 0))
But where can I import e.g. Buffer from? Do I need to install bgl? Is this Boost GL or something from Blender?

Where can I find a good description of the imagepipeline/output system to fully understand it? ATM I'm just guessing how it might work.

Thanks,
Mick

Re: IRRADIANCE & GetOutputFloat

Post by B.Y.O.B. »

Your code has some problems:

1. The imagepipeline with the contour lines uses the old, deprecated imagepipeline syntax.
film.imagepipeline.2.type = CONTOUR_LINES
You should use the new syntax, where you specify the imagepipeline index and then the plugin index:
film.imagepipelines.1.2.type = CONTOUR_LINES
Note the "s" at the end of "imagepipelines", then comes the pipeline index, then the plugin index.

2. The contour lines plugin should be used within the usual imagepipeline (first the tonemapper, then contour lines, and gamma correction at the end).

3. The RGB_IMAGEPIPELINE output in your config is missing the "index" property that specifies the imagepipeline to use.

This is how I would set up the imagepipelines/outputs:

Code: Select all

        ###################################
        # Imagepipelines
        ###################################
        
        # Pipeline 0, plugin 0
        film.imagepipelines.0.0.type = TONEMAP_LINEAR
        film.imagepipelines.0.0.scale = 0.0001
        # Pipeline 0, plugin 1
        film.imagepipelines.0.1.type = GAMMA_CORRECTION
        film.imagepipelines.0.1.value = 2.2
        
        # Pipeline 1, plugin 0
        film.imagepipelines.1.0.type = TONEMAP_LINEAR
        film.imagepipelines.1.0.scale = 0.0001
        # Pipeline 1, plugin 1
        film.imagepipelines.1.1.type = CONTOUR_LINES
        film.imagepipelines.1.1.range = 20
        film.imagepipelines.1.1.steps = 10
        film.imagepipelines.1.1.zerogridsize = 8
        # Pipeline 1, plugin 2
        film.imagepipelines.1.2.type = GAMMA_CORRECTION
        film.imagepipelines.1.2.value = 2.2
        
        ###################################
        # Film outputs
        ###################################
        
        # This is the pipeline with the tonemapped image as usual
        film.outputs.0.type = RGB_IMAGEPIPELINE
        film.outputs.0.filename = pipeline_tonemapped.png
        # You have to specify which imagepipeline you want as output
        film.outputs.0.index = 0
        
        # This is the pipeline with the visible contour lines
        film.outputs.1.type = RGB_IMAGEPIPELINE
        film.outputs.1.filename = pipeline_contour_lines.png
        # You have to specify which imagepipeline you want as output
        film.outputs.1.index = 1
        
        film.outputs.2.type = IRRADIANCE
        film.outputs.2.filename = IRRADIANCE.exr
        
        film.outputs.3.type = RAYCOUNT
        film.outputs.3.filename = RAYCOUNT.exr
        
        film.outputs.4.type = RGB
        film.outputs.4.filename = RGB.exr
And this is what the outputs look like (I forgot to display the irradiance output, but it also looks as expected):
scrn_2018-05-23_18-58-12.png
But where can I import e.g. Buffer from?
bgl is a Blender module.
Instead of bgl.Buffer you can use the Python built-in array module:

Code: Select all

    # IRRADIANCE stores 3 floats per pixel (R, G, B)
    import array
    bufferdepth = 3
    buffer = array.array("f", [0.0] * (WIDTH * HEIGHT * bufferdepth))
    session.GetFilm().GetOutputFloat(pyluxcore.FilmOutputType.IRRADIANCE, buffer, 0)
    # Get the values of the pixel in the middle
    x, y = 400, 300
    i = (y * WIDTH + x) * bufferdepth
    r = buffer[i]
    g = buffer[i + 1]
    b = buffer[i + 2]
    print("Irradiance at x: %d, y: %d is: (%.2f, %.2f, %.2f) (R, G, B)" % (x, y, r, g, b))
This prints Irradiance at x: 400, y: 300 is: (4985.00, 11546.90, 14904.85) (R, G, B).
By the way, you can get the available output types and their names from the dictionary pyluxcore.FilmOutputType.names.
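For example, a quick way to list them (a minimal sketch; I'm assuming here that the names attribute can be iterated like a normal dict):

Code: Select all

import pyluxcore

pyluxcore.Init()
# Print all film output type names known to this pyluxcore build
for name in sorted(pyluxcore.FilmOutputType.names):
    print(name)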

Here is the updated demo script: https://gist.github.com/Theverat/0707eb ... db2793ab67
And the changes: https://gist.github.com/Theverat/0707eb ... /revisions

By the way, how experienced are you with Python?
If you say "beginner", I'll try to keep the high-level Python stuff out of my examples so you don't have to google every statement.

Re: IRRADIANCE & GetOutputFloat

Post by mick »

Thanks again for the sample code. It shows the contour lines for the plane, as it is the only irradiated surface. I put a pillar on it, which creates a shadow. This difference is not picked up. I guess I have to tweak the parameters of the CONTOUR_LINES plug-in. Unfortunately the SDL Manual just explains them with "?".

My final objective is to measure the irradiance per physical area (e.g. m²) on the projection surface. For measuring with a camera I see two issues:
1. perspective distortion if the camera is not very distant
2. objects between the camera and the surface

Is there a way to measure the irradiance of a surface independently of a camera looking at it?

BTW: No need to restrict the Python. I know enough about it, and I will just look up what I don't know. It's just that I know zero about LuxCore or Blender. And the last thing I did with ray tracing was to write a simple ray tracer in C on paper 35+ years ago. I couldn't even test it because I had no computer. :)

Re: IRRADIANCE & GetOutputFloat

Post by B.Y.O.B. »

mick wrote: Wed May 23, 2018 9:03 pm I guess I have to tweak the parameters of the CONTOUR_LINES plug-in. Unfortunately the SDL Manual just explains them with "?".
I had a look at the IRRADIANCE announcement thread in the old forum; the contour lines parameters are explained in this post: http://www.luxrender.net/forum/viewtopi ... 10#p109794
mick wrote: Wed May 23, 2018 9:03 pm My final objective is to measure the irradiance per physical area (e.g. m²) on the projection surface. For measuring with a camera I see two issues:
1. perspective distortion if the camera is not very distant
2. objects between the camera and the surface
Sounds like the best thing for you would be "baking". Unfortunately LuxCore does not support this.
However, you can try some other things:
  • Use an orthographic camera (the available camera types are listed in the SDL manual), which has no perspective distortion
  • Use camera clipping to hide any obstacles between the camera and the surface from camera rays
  • Make any obstacle between the camera and the surface "camerainvisible" (SDL manual, see the bottom of the object section); a short sketch of the first and last point follows below
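Here is a rough, untested sketch of how I would set that up. The object, mesh and material names and the camera position are placeholders; the property names (scene.camera.type, camerainvisible) are taken from the SDL manual:

Code: Select all

# Rough sketch (untested): orthographic top-down camera plus a
# camera-invisible obstacle. Names and positions are placeholders.
import pyluxcore

def setup_measurement_view(scene):
    # Orthographic camera looking straight down at the surface:
    # no perspective distortion.
    cam_props = pyluxcore.Properties()
    cam_props.SetFromString("""
        scene.camera.type = orthographic
        scene.camera.lookat.orig = 0 0 5
        scene.camera.lookat.target = 0 0 0
        scene.camera.up = 0 1 0
        # scene.camera.screenwindow = -1 1 -1 1  (visible area, see the SDL manual)
        """)
    scene.Parse(cam_props)

    # Define the obstacle as usual (mesh and material defined elsewhere),
    # but keep it out of camera rays. It still blocks light and casts shadows.
    obj_props = pyluxcore.Properties()
    obj_props.SetFromString("""
        scene.objects.obstacle.shape = obstacle_mesh
        scene.objects.obstacle.material = obstacle_material
        scene.objects.obstacle.camerainvisible = 1
        """)
    scene.Parse(obj_props)
    # Camera clipping (scene.camera.clippingplane.*) could be added on top,
    # see the SDL manual for the exact properties.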

Re: IRRADIANCE & GetOutputFloat

Post by mick »

After session.GetFilm().GetOutputFloat(pyluxcore.FilmOutputType.IRRADIANCE, buffer, 0) the buffer holds some strange outliers. Visualized, these are extremely bright pixels:
image5.png
Where do they come from? I don't see them in the normal RGB output. Is it an artifact of the ray tracing that is smoothed out for the normal RGB output, a numeric issue in the irradiance plug-in, or something else?

BTW1: I could not find appropriate parameters for the IRRADIANCE plugin (AOV?). I tried e.g. range = max(GetOutputFloat), or with filtered outliers, even steps = 1000. Whatever I tried, there was just one contour line.

BTW2: GetOutputFloat fills the buffer with an early state if called before session.Stop(). I have some theories about the reason. But why do you call session.GetFilm().SaveOutputs() before Stop()?

Re: IRRADIANCE & GetOutputFloat

Post by B.Y.O.B. »

About the outliers in the IRRADIANCE AOV: no idea, to be honest. I have never used this output before (and I doubt anyone else has; you are probably the first serious user).
Which material did you use for the pillar? Can you post the code for creating the pillar so I can try to reproduce it?
Do the outliers also show up in the image saved with SaveOutputs(), or only when using GetOutputFloat()?
mick wrote: Thu May 24, 2018 1:39 pm BTW2: GetOutputFloat fills the buffer with an early state if called before session.Stop(). I have some theories about the reason.
Actually I forgot to call session.UpdateStats(), my bad.
You have to call this method to update the film before getting/saving outputs.
But why do you call session.GetFilm().SaveOutputs() before Stop()?
This is how I do it in BlendLuxCore.
I only have a vague memory of Dade mentioning that it is illegal to call some methods on a stopped session, so I do this to stay on the safe side.
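So the call order I would use is roughly this (a sketch, reusing the session and the buffer from the earlier array example):

Code: Select all

# Sketch of the call order: update the film first, then read/save the
# outputs, and stop the session only after all film access is done.
session.UpdateStats()
film = session.GetFilm()
film.SaveOutputs()
film.GetOutputFloat(pyluxcore.FilmOutputType.IRRADIANCE, buffer, 0)
session.Stop()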

As you can see, the documentation is lacking and even I don't know everything I should know about the Python API.
We need more people who actually use the API to uncover these issues.
I'll try to find some time and motivation to write a tutorial about the LuxCore Python API.

Re: IRRADIANCE & GetOutputFloat

Post by B.Y.O.B. »

I replicated the example scene in Blender so I could adjust the contour line settings in real time during the render.
I also added a hollow cylinder that creates some shadows; otherwise the contour lines would be pretty pointless.
Through trial & error I found that with a scale value of 0.01 you start to get visible contours in the shadows.
The default values are probably suited for indoor scenes; a bright sky in an outdoor scene requires very different values.

I have also observed that you need a very high number of samples to get noise-free contour lines.
The image itself has long been free of visible noise by the time the contour lines just barely emerge from the noise.
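In property form (my assumption: the "scale" mentioned above is the CONTOUR_LINES plugin's own scale property, and config_props is the Properties object used to build the RenderConfig, as in the scripts in this thread), pipeline 1 from the earlier example would then look roughly like this:

Code: Select all

# Sketch only: pipeline 1 with a lowered contour scale for a bright
# sun/sky scene; the other values are kept from the earlier example.
config_props.SetFromString("""
    film.imagepipelines.1.0.type = TONEMAP_LINEAR
    film.imagepipelines.1.0.scale = 0.0001
    film.imagepipelines.1.1.type = CONTOUR_LINES
    film.imagepipelines.1.1.scale = 0.01
    film.imagepipelines.1.1.range = 20
    film.imagepipelines.1.1.steps = 10
    film.imagepipelines.1.1.zerogridsize = 8
    film.imagepipelines.1.2.type = GAMMA_CORRECTION
    film.imagepipelines.1.2.value = 2.2
    """)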
Attachments: scrn_2018-05-24_16-31-30.png, example.blend

Re: IRRADIANCE & GetOutputFloat

Post by mick »

Here is my experimentation code:

Code: Select all

import sys
from time import sleep, time
WIDTH = 800
HEIGHT = 600
DEPTH = 3

sys.path.append('/home/mick/dev/DLO/lib')
import pyluxcore


def build_scene():
  scene = pyluxcore.Scene()

  # First, the camera
  cam_props = pyluxcore.Properties()
  cam_props.SetFromString("""
        scene.camera.lookat.orig = 2 2 2
        scene.camera.lookat.target = 0 0 0
        scene.camera.up = 0 0 1
        """)
  scene.Parse(cam_props)

  # An object with a plane mesh and a grey matte material
  mat_name = "test_material"
  obj_props = pyluxcore.Properties()
  obj_props.SetFromString("""
        scene.materials.{name}.type = matte
        scene.materials.{name}.kd = 0.8 0.8 0.8
        """.format(name=mat_name))
  scene.Parse(obj_props)

  # You could pass a transformation matrix here
  transform = None

  vertices = [
    (1, 1, 0),
    (1, -1, 0),
    (-1, -1, 0),
    (-1, 1, 0)
  ]
  faces = [
    (0, 1, 2),
    (2, 3, 0)
  ]
  mesh_name = "test_mesh"
  # You could pass UV coordinates, vertex colors and other stuff here, additionally
  scene.DefineMesh(mesh_name, vertices, faces, None, None, None, None, transform)

  # Define an object that uses the shape (mesh) and the matte material
  add_object(scene, "plane", mesh_name, mat_name)

  vertices = [
    (.01, .01, 0),
    (.01, -.01, 0),
    (-.01, -.01, 0),
    (-.01, .01, 0),
    (.01, .01, .3),
    (.01, -.01, .3),
    (-.01, -.01, .3),
    (-.01, .01, .3),
  ]
  faces = [
    (0, 1, 4),
    (1, 5, 4),
    (1, 2, 5),
    (2, 6, 5),
    (2, 3, 6),
    (3, 7, 6),
    (3, 0, 7),
    (0, 4, 7),
  ]
  mesh_name = "pillar"
  scene.DefineMesh(mesh_name, vertices, faces, None, None, None, None, transform)
  add_object(scene, "pillar", mesh_name, mat_name)

  # A light source is also needed
  light_props = pyluxcore.Properties()
  for n in ["sun", "sky2"]:
    light_props.SetFromString("""
        scene.lights.{name}.type = {name}
        scene.lights.{name}.gain = {gain}
        scene.lights.{name}.turbidity = 3
        scene.lights.{name}.dir = {dir}
        """.format(name=n, gain="1 1 1", dir="5 3 3"))
  scene.Parse(light_props)

  return scene


def add_object(scene, obj_name, mesh_name, mat_name):
  obj_props = pyluxcore.Properties()
  obj_props.Set(pyluxcore.Property("scene.objects." + obj_name + ".shape", mesh_name))
  obj_props.Set(pyluxcore.Property("scene.objects." + obj_name + ".material", mat_name))
  scene.Parse(obj_props)


def build_session(scene):
  config_props = pyluxcore.Properties()
  config_props.SetFromString("""
        renderengine.seed = 11
        renderengine.type = PATHCPU
        sampler.type = SOBOL
        film.width = """ + str(WIDTH) + """
        film.height = """ + str(HEIGHT) + """
        # The first plugin: the linear tonemapper multiplies the pixel colors with the scale
        film.imagepipelines.0.0.type = TONEMAP_LINEAR
        film.imagepipelines.0.0.scale = 5e-5
        # The second plugin: gamma correction
        film.imagepipelines.0.1.type = GAMMA_CORRECTION
        film.imagepipelines.0.1.value = 2.2
        
        # Pipeline 1, plugin 0
        #film.imagepipelines.1.0.type = TONEMAP_LINEAR
        #film.imagepipelines.1.0.scale = 5e-5
        # Pipeline 1, plugin 1
        film.imagepipelines.1.1.type = CONTOUR_LINES
        film.imagepipelines.1.1.range = 30000
        film.imagepipelines.1.1.steps = 10
        film.imagepipelines.1.1.zerogridsize = 8
        # Pipeline 1, plugin 2
        film.imagepipelines.1.2.type = GAMMA_CORRECTION
        film.imagepipelines.1.2.value = 2.2
        
        # This is the pipeline with the tonemapped image as usual
        film.outputs.0.type = RGB_IMAGEPIPELINE
        film.outputs.0.filename = image0.png
        # You have to specify which imagepipeline you want as output
        film.outputs.0.index = 0
        
        # This is the pipeline with the visible contour lines
        film.outputs.1.type = RGB_IMAGEPIPELINE
        film.outputs.1.filename = image1.png
        # You have to specify which imagepipeline you want as output
        film.outputs.1.index = 1
        
        film.outputs.2.type = IRRADIANCE
        film.outputs.2.filename = image2.hdr
        
        film.outputs.3.type = RAYCOUNT
        film.outputs.3.filename = image3.hdr
        
        film.outputs.4.type = RGB
        film.outputs.4.filename = image4.hdr
        """)

  renderconfig = pyluxcore.RenderConfig(config_props, scene)
  session = pyluxcore.RenderSession(renderconfig)
  return session


def render(scene):
  session = build_session(scene)
  session.Start()
  startTime = time()
  while True:
    sleep(1)
    elapsedTime = time() - startTime
    session.UpdateStats()
    stats = session.GetStats()
    print("[Elapsed time: %3d/5sec][Samples %4d][Avg. samples/sec % 3.2fM on %.1fK tris]" % (
      stats.Get("stats.renderengine.time").GetFloat(),
      stats.Get("stats.renderengine.pass").GetInt(),
      (stats.Get("stats.renderengine.total.samplesec").GetFloat() / 1000000.0),
      (stats.Get("stats.dataset.trianglecount").GetFloat() / 1000.0)))
    if elapsedTime > 5.0:
      break
  session.Stop()
  return session


def main():
  pyluxcore.Init()
  scene = build_scene()
  session = render(scene)
  session.GetFilm().Save()
  import array
  import numpy as np
  from scipy import stats, signal
  import imageio
  buffer = array.array("f", [0.0]) * (HEIGHT * WIDTH * DEPTH)
  session.GetFilm().GetOutputFloat(pyluxcore.FilmOutputType.IRRADIANCE, buffer, 1)
  npa = np.array(buffer).reshape(HEIGHT, WIDTH, DEPTH)[::-1]
  print(stats.describe(signal.medfilt(npa.reshape(HEIGHT * WIDTH, DEPTH))))
  imageio.imwrite("image5.png", npa)


if __name__ == "__main__":
  main()