Light tracing with cache?
Posted: Wed Sep 18, 2019 9:31 am
This is probably another stupid idea that won’t work, but I’m posting it anyway.
While experimenting with LuxCoreUi I discovered that LightCPU is, in some respects, the most powerful of all the render engines. A hint can be taken from the performance of light tracing within the hybrid engine: caustics appear almost instantly, while BiDir, Path+Metropolis or the caustics cache all take their time to resolve them.
When working with LightCPU I consistently find that indirect lighting on diffuse surfaces resolves similarly fast. What takes a lot of time in path tracing is done very quickly in light tracing, even on the CPU alone.
The only problem with LightCPU is that it doesn’t do specular surfaces.
Why is that?
Because we assume the camera is an infinitesimally small point that can never be hit when randomly sampling specular surfaces.
On closer inspection this is not entirely correct, because hidden in it is the assumption that the camera samples screen space continuously, whereas in reality it does so step-wise, according to the image resolution.
When moving to the next pixel, an angular shift is made for the camera ray cast. That means the camera de facto has a pixel-wide extension (which is not constant but depends on ray length, surface curvature, etc.).
This extension can be calculated and used to make the camera visible to specular bounces of light rays.
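Just to illustrate what I mean, here is a minimal sketch (plain C++, nothing from the actual LuxCore code base, all names made up) of how that pixel-wide extension could be estimated as a world-space footprint at a given ray length:

```cpp
// A minimal sketch (not LuxCore code, all names made up) of how the
// camera's "pixel-wide extension" could be estimated: the world-space
// width one pixel covers at a given distance along the camera ray.
#include <cmath>
#include <cstdio>

const double kPi = 3.14159265358979323846;

// Approximate angular size of one pixel for a pinhole camera with a
// given horizontal field of view; exact only near the image center.
double PixelAngle(double fovXRadians, int imageWidth) {
    return fovXRadians / imageWidth;
}

// World-space width the pixel covers at distance rayLength, assuming a
// locally flat surface seen head-on; surface curvature and grazing
// angles would stretch this footprint further.
double PixelFootprint(double fovXRadians, int imageWidth, double rayLength) {
    return 2.0 * rayLength * std::tan(0.5 * PixelAngle(fovXRadians, imageWidth));
}

int main() {
    const double fovX = 90.0 * kPi / 180.0;
    std::printf("pixel footprint at 5 units: %g\n",
                PixelFootprint(fovX, 1920, 5.0));
    return 0;
}
```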
Sadly this is no solution, because a small intersectable camera has the same problem as small intersectable lights have in path tracing of caustics: it is not efficient.
Replacing a probability of zero with a very low probability just means very slow convergence, even with Metropolis sampling.
I have rendered glass bodies with low roughness in LightCPU, which is a similar situation: instead of being black they appear very dark and stay that way for a long time.
So this is a dead end, but realising the discrete way the camera scans the scene led me to another idea.
Basically the image resolution makes the camera project a grid onto all surfaces in the scene where the grid nodes represent pixels.
Could those projected grid nodes be collected in a cache?
Let’s assume a scene with a mirror right in front of the camera, 100 pixels wide and 100 pixels high on screen, that shows the objects located behind the camera.
We could now trace rays from the camera through “the middle of each pixel”, bounce them off the mirror, and register where they meet the surfaces of objects in the scene, collecting the data in a cache of 100 x 100 = 10,000 entries.
An entry needs to contain the shading data of all specular bounces, such as reflection or refraction colors, and of course the pixel identity.
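To make that concrete, here is a rough sketch (hypothetical types, not LuxCore’s real data structures) of what such an entry and the camera pre-pass that fills the cache could look like:

```cpp
// A rough sketch (hypothetical types, not LuxCore's actual data
// structures) of a pixel-grid-cache entry and the camera pre-pass.
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct PGCEntry {
    Vec3 worldPos;            // where the camera ray lands after its specular bounces
    Vec3 normal;              // surface normal there, useful for validity tests
    Vec3 specularThroughput;  // accumulated reflection/refraction colors along the chain
    int  pixelX, pixelY;      // the pixel identity this entry belongs to
};

// Placeholder for the real tracer: follow one ray through the middle of
// pixel (px, py), continue through specular bounces only, and report the
// first non-specular hit. Returns false if the ray never hit a specular
// surface (those pixels don't need cache entries).
static bool TraceSpecularChain(int px, int py, PGCEntry &out) {
    (void)px; (void)py; (void)out;
    return false;  // stub; a real implementation would call the scene intersector
}

std::vector<PGCEntry> BuildPixelGridCache(int width, int height) {
    std::vector<PGCEntry> cache;
    cache.reserve(static_cast<size_t>(width) * height);
    for (int py = 0; py < height; ++py)
        for (int px = 0; px < width; ++px) {
            PGCEntry e;
            if (TraceSpecularChain(px, py, e))
                cache.push_back(e);
        }
    return cache;
}
```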
Now when we do light tracing and hit the wall behind the camera, normally a next event estimation is made, with the result that the camera is in line of sight but pointing the wrong way; the ray is then continued, or terminated if the bounce limit was reached, but in either case no render result is calculated for this point.
But with a pixel-grid cache (PGC) available, nearby entries could be gathered, and if the hit point is found to lie between two neighbouring pixel entries, we can assume indirect (specular) visibility to the camera and calculate and hand down a shading result to the respective pixel.
If a pixel entry is found nearby but no neighbouring pixel entry, that would indicate a border case (e.g. the last pixel to the right) and could be neglected.
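And here is a naive sketch of that gather step during light tracing, reusing the PGCEntry type from the sketch above; a real version would obviously need a spatial index over the entries instead of a linear search, and SplatToFilm() is just a made-up stand-in for writing to the film:

```cpp
// Naive sketch of the gather step, reusing Vec3 and PGCEntry from the
// sketch above. Linear search keeps it self-contained; a real version
// would use a hash grid or kd-tree over the cache entries.
#include <cstdlib>
#include <vector>

static float Dist2(const Vec3 &a, const Vec3 &b) {
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

static void SplatToFilm(int pixelX, int pixelY, const Vec3 &contribution) {
    // Stub: the real renderer would accumulate into the film buffer here.
    (void)pixelX; (void)pixelY; (void)contribution;
}

// Called when a light path hits a surface: if the hit point lies between
// an entry and one of its pixel-space neighbours, we assume indirect
// (specular) visibility to the camera and splat through that pixel.
void GatherPGC(const std::vector<PGCEntry> &cache, const Vec3 &hitPoint,
               const Vec3 &lightThroughput, float radius) {
    const float r2 = radius * radius;
    for (size_t i = 0; i < cache.size(); ++i) {
        if (Dist2(cache[i].worldPos, hitPoint) > r2)
            continue;
        // Require a pixel-space neighbour that is also spatially nearby,
        // so the hit point really sits between two grid nodes; entries
        // without one are border cases and get neglected.
        bool hasNeighbour = false;
        for (size_t j = 0; j < cache.size() && !hasNeighbour; ++j) {
            const int dpx = std::abs(cache[j].pixelX - cache[i].pixelX);
            const int dpy = std::abs(cache[j].pixelY - cache[i].pixelY);
            if (i != j && dpx + dpy == 1 &&
                Dist2(cache[j].worldPos, hitPoint) <= r2)
                hasNeighbour = true;
        }
        if (!hasNeighbour)
            continue;
        // Modulate the light path's throughput by the specular chain's
        // colors and hand the result down to the entry's pixel.
        const Vec3 &s = cache[i].specularThroughput;
        const Vec3 c = { lightThroughput.x * s.x,
                         lightThroughput.y * s.y,
                         lightThroughput.z * s.z };
        SplatToFilm(cache[i].pixelX, cache[i].pixelY, c);
    }
}
```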
Suddenly, in light tracing, the wall behind the camera would appear in the mirror, and its brick structure would be displayed sharply, because the pixel-grid nature of the cache guarantees sharpness!
Now I wonder if this is feasible or if there are deal-breakers I have overlooked?
If the mirror was not a flat plane but a sphere, the pixel cache entries would be spread far apart; maybe there is a limit beyond which they cannot be collected and evaluated properly, when the next entry is on a different object or simply too far away.
A bump map on the mirror would distort the grid, including the respective entries: neighbouring entries would be spread apart and maybe distant entries jumbled closely together, which is probably difficult or impossible to handle?
A glass body forces a ray-split event (reflection plus refraction), which means two and probably more grid entries per pixel, maybe leading to a very big cache and memory problems. For a rough idea: at 1920x1080 with, say, 48 bytes per entry, even a single entry per pixel is already around 100 MB, before any splits.
And then there is the problem of glossy materials, or “specular roughness”.
LightCPU does a superior job with indirect diffuse lighting, but once there are glossy materials in the mix it starts to struggle, just like path tracing.
How would one define pixel-grid entries when their position becomes an area due to glossy roughness?
In the range where the “pixel-position blur areas” don’t overlap there would still be a defined correlation, but once the pixel grid gets too blurred I have no idea how to deal with it.
Probably it won’t work, but I felt like I had to post this.