FarbigeWelt wrote: Fri Jul 05, 2019 3:11 pm
acasta69 wrote: Fri Jul 05, 2019 7:42 am
I think I'm losing your point a bit... Are you suggesting that raytracers should move towards simulating real camera systems, with real lenses and sensors?
Wouldn't this be computationally a lot more expensive?
The "thin lens" camera model has limitations for sure, but is very effective.
As far as I can see, you got my point. I don't mean to simulate a real optics system. [...]
Diffraction and interference would be pretty cool, but that is a completely different story. [...] (Could be something for Version 3.0.)
Actually, a benefit of simulating real lens systems could be a realistic lens flare effect.

However, for this purpose, it would probably be much more computationally efficient to implement the camera by sequential ray tracing, i.e. geometric raytracing using matrix calculations.
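To make that concrete, here is a rough Python/numpy sketch of paraxial sequential ray tracing with 2×2 ray transfer (ABCD) matrices; the lens data are completely made up and this is not code from any renderer. Each free-space gap and each thin lens is one matrix acting on a (height, angle) ray, and the whole system collapses into a single matrix product:

```python
import numpy as np

def free_space(d):
    # paraxial propagation over a distance d (same length unit everywhere)
    return np.array([[1.0, d],
                     [0.0, 1.0]])

def thin_lens(f):
    # refraction at a thin lens with focal length f
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

# Made-up two-lens system, units in mm: a 50 mm lens, a 10 mm air gap,
# a -200 mm lens, then 45 mm to the sensor. The matrix for the element
# the ray meets first goes on the right of the product.
system = free_space(45.0) @ thin_lens(-200.0) @ free_space(10.0) @ thin_lens(50.0)

# A ray entering 5 mm above the optical axis, parallel to it: (height, angle)
ray_in = np.array([5.0, 0.0])
ray_out = system @ ray_in
print("height = %.3f mm, angle = %.4f rad" % (ray_out[0], ray_out[1]))
```

For a camera model you would push a handful of rays per pixel through a matrix like this, which is still orders of magnitude cheaper than any wave-optics simulation.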
Diffraction is just outright hopeless, at least if you want to compute general effects.

To give you an impression:
I currently work on a project to develop an instrument where the scattered light is highly diffraction-limited. Naturally, we needed simulations for that.
The final simulations we did (which have been verified quite well experimentally) were made using an open-source package called PROPER (http://proper-library.sourceforge.net/), which you can download for Python and try out if you are interested. It was written for stellar coronagraphs, where you observe at very long focal lengths and are interested in diffraction at angles of arcseconds or below. That case works quite easily.

We needed to simulate a full-frame sensor at around 100mm focal length. The computation requires 500GB of RAM and takes a few hours. And that wasn't even enough to simulate our full instrument, as a wider input beam diameter would mean larger angles and even more RAM ^^ (Also, one such simulation covers only a single wavelength and a plane wave, i.e. light from infinity with source size 0. I'm not sure if you can easily do point sources or something else with this method?)
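To give a feeling for why the numbers explode, here is a back-of-the-envelope estimate (not PROPER's actual memory model; the 50mm beam diameter and 500nm wavelength are made-up assumptions, only the full-frame sensor and ~100mm focal length come from our case). The grid has to sample both the beam across the pupil and the largest diffraction angle you care about, so the number of points per axis scales roughly with beam diameter times maximum angle over wavelength:

```python
import numpy as np

# Made-up assumptions (only the sensor geometry matches our case):
wavelength = 500e-9          # m
beam_diameter = 0.05         # m, input beam across the pupil
focal_length = 0.1           # m, ~100 mm
sensor_half_diag = 0.0216    # m, half diagonal of a full-frame sensor

# Largest field angle the simulation must carry to the sensor corner
theta_max = sensor_half_diag / focal_length          # ~0.22 rad

# Space-bandwidth product: samples per axis needed to hold both the
# beam extent and the angular spectrum up to theta_max (Nyquist)
n = int(np.ceil(2 * beam_diameter * theta_max / wavelength))

# One complex128 wavefront array of that size
bytes_per_array = n * n * 16
print(f"grid: {n} x {n} samples")
print(f"one complex array: {bytes_per_array / 1e9:.0f} GB")
```

A single wavefront array is already tens of GB under these assumptions, and a propagation code keeps several such arrays plus FFT workspace around, so you quickly end up in the hundreds of GB. And since n grows linearly with the beam diameter, a wider input beam means even more RAM.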
The scientist who does all this just implemented another semi-analytical method that uses less RAM, but takes longer. It has been running for 4 weeks now on 60 cores, and has not yet finished.
