Humboldt-Universität zu Berlin - Mathematisch-Naturwissenschaftliche Fakultät - Institut für Informatik

Practice talk for the dissertation of Dominik Rueß

“Smooth Central and Non-Central Camera Models in Object Space”

The talk will take place digitally via Zoom. A Zoom invitation can be found here (only with an Informatik account).


Abstract:

Photogrammetry and Computer Vision provide the mathematical framework for describing image-to-object relations, which allows object properties to be measured from images. Cameras are conventionally described by the pinhole camera model with a fixed focal length.
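The pinhole model mentioned above can be sketched in a few lines; this is a minimal illustration of the standard projection equations (the function name and the principal-point parameter are chosen here for illustration, not taken from the dissertation):

```python
import numpy as np

def pinhole_project(point_3d, f, c=(0.0, 0.0)):
    """Project a 3D point in camera coordinates to image coordinates
    under the ideal pinhole model: u = f*X/Z + cx, v = f*Y/Z + cy."""
    X, Y, Z = point_3d
    if Z <= 0:
        raise ValueError("point must lie in front of the camera (Z > 0)")
    return np.array([f * X / Z + c[0], f * Y / Z + c[1]])

# A point 2 m in front of the camera, 0.5 m to the right, f = 1000 px:
uv = pinhole_project((0.5, 0.0, 2.0), f=1000.0)
print(uv)  # u = 250.0, v = 0.0
```

Lenses that deviate from this model, as discussed below, cannot be captured by such a single projection with one focal length.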
In recent years, more and more affordable sensors with an increasing variety of optical imaging features have become available. Low-cost optics may deviate from the desired metric properties due to larger manufacturing tolerances and different optical materials. Wide-angle and fisheye lenses, distorting catadioptric systems (combining mirrors and refractive elements) and other unusual lenses deviate from the single-focal-length pinhole assumption, sometimes intentionally.
Action cameras can capture the entire environment using two lenses; such cameras usually no longer correspond to the pinhole model. Cameras are also used for measuring tasks behind additional optical elements, for example behind windscreens, with unpredictable deviations in the line of sight.

The present work extends early findings in the field of unconstrained differentiable (smooth) camera models. Many existing models are specialized to certain types of lenses; truly general models have so far been rare and still suffer from several disadvantages. In this work, several such general models are introduced that require neither fixed focus, nor a fixed focal length, nor radial symmetry.
The introduction of alternative error metrics in object space also yields substantial computational advantages, since one imaging direction can be computed analytically and many intermediate results can be kept constant.
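One common form such an object-space error can take is the perpendicular distance between an observed 3D point and the camera's viewing ray; this is only an illustrative sketch of that idea, not the specific metric proposed in the dissertation:

```python
import numpy as np

def point_to_ray_distance(point, origin, direction):
    """Perpendicular distance from a 3D point to a ray given by an
    origin and a (not necessarily unit-length) direction vector."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)  # unit viewing direction
    v = np.asarray(point, dtype=float) - np.asarray(origin, dtype=float)
    # Remove the component of v along the ray; what remains is the
    # orthogonal offset, whose length is the object-space residual.
    return np.linalg.norm(v - np.dot(v, d) * d)

# Ray along +Z from the origin; a point offset 3 units sideways:
print(point_to_ray_distance([3.0, 0.0, 5.0], [0, 0, 0], [0, 0, 1]))  # 3.0
```

Because the ray direction belongs to a fixed observation, quantities like the normalized direction can be computed once and reused across optimization iterations, which hints at the computational advantage described above.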
For generating meaningful starting values for such models (initialization), this work introduces a generic linear camera. Its essential feature is an artificial transformation into higher dimensions; the transformed coordinates can then still be processed with linear methods while already modelling non-linear distortions and asymmetries. A multi-camera calibration software that efficiently implements these models is also described.
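The idea of a transformation into higher dimensions followed by linear methods can be sketched with a simple polynomial lifting; the particular basis below is a hypothetical choice for illustration and need not match the lifting used in the dissertation:

```python
import numpy as np

def lift(uv):
    """Hypothetical polynomial lifting of 2D image coordinates into a
    6-dimensional space. A purely linear map applied to the lifted
    coordinates can then represent non-linear distortions and
    asymmetries of the original coordinates."""
    u, v = uv[..., 0], uv[..., 1]
    return np.stack([np.ones_like(u), u, v, u * v, u**2, v**2], axis=-1)

# Fit a linear map A from lifted pixels to 3D viewing directions by
# least squares, using synthetic correspondences (illustrative data):
rng = np.random.default_rng(0)
uv = rng.uniform(-1, 1, size=(50, 2))
# Synthetic "ground truth" with a quadratic (non-linear) distortion term:
dirs = np.stack([uv[:, 0] + 0.1 * uv[:, 0]**2, uv[:, 1], np.ones(50)], axis=-1)
A, *_ = np.linalg.lstsq(lift(uv), dirs, rcond=None)
pred = lift(uv) @ A
print(np.allclose(pred, dirs, atol=1e-6))  # True
```

The fit is exact here because the synthetic distortion lies in the span of the lifted basis; in practice such a linear solution would serve as the initialization for the subsequent non-linear refinement of the smooth camera models.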

The result of this work is a theoretical framework for smooth camera models formulated in object space itself, instead of the established mapping into image space, together with several concrete model proposals, implementations, and an adapted and extended calibration process. The experiments show that each of these models can describe all tested camera types with high accuracy, and the results compare very well with those of other established models.