Inverse depth parametrization
Computational method for constructing 3D models
In computer vision, the inverse depth parametrization is a parametrization used in methods for 3D reconstruction from multiple images, such as simultaneous localization and mapping (SLAM).[1][2] Given a point $p$ in 3D space observed by a monocular pinhole camera from multiple views, the inverse depth parametrization of the point's position is a 6D vector that encodes the optical centre $c_0$ of the camera when it first observed the point, and the position of the point along the ray passing through $c_0$ and $p$.[3]

Inverse depth parametrization generally improves numerical stability and allows points observed with zero parallax, i.e. points at or near infinity, to be represented. Moreover, the error associated with the observation of the point's position can be modelled with a Gaussian distribution when expressed in inverse depth. This is an important property required to apply methods, such as Kalman filters, that assume a normally distributed measurement error. The major drawback is the larger memory consumption, since the dimensionality of the point's representation is doubled from three to six.[3]
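The behaviour at low parallax can be made concrete with the pinhole measurement equation: the homogeneous image coordinates of a point written in inverse depth are linear in the inverse depth $\rho$, and remain well defined as $\rho \to 0$. The following Python sketch is our own illustration of this property, with assumed intrinsics and camera placement, not code from the cited sources:

import numpy as np

# Minimal numeric sketch: the homogeneous image of a point parametrized by
# inverse depth rho is linear in rho, so the measurement stays well behaved
# even at zero parallax (rho = 0, a point at infinity). All values assumed.

K = np.array([[500.0,   0.0, 320.0],     # assumed intrinsics: focal 500 px,
              [  0.0, 500.0, 240.0],     # principal point (320, 240)
              [  0.0,   0.0,   1.0]])
c0 = np.zeros(3)                         # optical centre at first observation
m  = np.array([0.0, 0.0, 1.0])           # unit ray direction at first observation
c1 = np.array([0.5, 0.0, 0.0])           # second camera centre (identity rotation)

for rho in [1.0, 0.1, 0.01, 0.0]:
    # p = c0 + m/rho is undefined at rho = 0, but the homogeneous image
    # K (rho (c0 - c1) + m) is linear in rho and defined everywhere.
    x = K @ (rho * (c0 - c1) + m)
    print(f"rho = {rho:4.2f} -> pixel ({x[0]/x[2]:6.1f}, {x[1]/x[2]:6.1f})")

As $\rho$ shrinks, the projected pixel converges smoothly to the vanishing point of the ray, whereas the Euclidean point $c_0 + m/\rho$ diverges.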
Definition
Given a 3D point $p = (x, y, z)$ with coordinates expressed in a world reference frame $w$, observed from different views, the inverse depth parametrization of $p$ is the 6D vector

$$y = (x_0, y_0, z_0, \theta, \phi, \rho)^{\top}$$

where the first five components encode the camera pose at the first observation of the point: $c_0 = (x_0, y_0, z_0)^{\top}$ is the optical centre, $\theta$ the azimuth, $\phi$ the elevation angle, and $\rho = 1/d$ the inverse of the distance $d = \|p - c_0\|$ from the optical centre to $p$ at the first observation.[3]
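The two angles determine the direction of the ray along which the point lies, so $p$ can be recovered from $y$ by back-projection. A common convention in the monocular SLAM literature defines the unit ray as

$$p = c_0 + \frac{1}{\rho}\, m(\theta, \phi), \qquad m(\theta, \phi) = \big(\cos\phi\,\sin\theta,\; -\sin\phi,\; \cos\phi\,\cos\theta\big)^{\top}$$

(an assumption here: an axis convention with $y$ pointing down and $z$ forward; other conventions permute the components of $m$). A minimal Python sketch of the round trip between the two parametrizations, under that same convention and with hypothetical function names of our own, might look as follows:

import numpy as np

# Sketch of the conversion between Euclidean and inverse depth
# parametrizations, assuming the axis convention above (y down, z forward).
# Function names are ours, not from any established library.

def to_inverse_depth(p, c0):
    """Encode point p as (x0, y0, z0, theta, phi, rho) w.r.t. first camera centre c0."""
    r = p - c0
    d = np.linalg.norm(r)
    theta = np.arctan2(r[0], r[2])                  # azimuth about the y axis
    phi = np.arctan2(-r[1], np.hypot(r[0], r[2]))   # elevation above the xz plane
    return np.array([*c0, theta, phi, 1.0 / d])

def to_euclidean(y):
    """Recover p = c0 + m(theta, phi) / rho from the 6D vector y."""
    c0, (theta, phi, rho) = y[:3], y[3:]
    m = np.array([np.cos(phi) * np.sin(theta),
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    return c0 + m / rho

p = np.array([1.0, -2.0, 10.0])
y = to_inverse_depth(p, c0=np.zeros(3))
print(y, to_euclidean(y))   # round-trips to the original point

In a filter such as an EKF, the state keeps the 6D form and only the measurement model performs this back-projection, which is what preserves the near-linearity at low parallax.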
References