Self-calibration of a camera using multiple images
- 2 January 2003
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
The problem of calibrating cameras is extremely important in computer vision. Existing work is based on the use of a calibration pattern whose 3D model is known a priori. The authors present a complete method for calibrating a camera that requires only point matches from image sequences. Using experiments with noisy data, they show that it is possible to calibrate a camera simply by pointing it at the environment, selecting points of interest, and tracking them in the image while moving the camera with an unknown motion. The camera calibration is computed in two steps. In the first step, the epipolar transformation is found via the estimation of the fundamental matrix. The second step uses the so-called Kruppa equations, which link the epipolar transformation to the intrinsic parameters. These equations are integrated in an iterative filtering scheme.
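The first step above, recovering the epipolar geometry from point matches alone, is commonly done with a linear fundamental-matrix estimator. A minimal sketch using the classic normalized eight-point algorithm follows (a standard method in this setting, not necessarily the exact estimator the authors use; all function names are illustrative):

```python
import numpy as np

def normalize(pts):
    # Hartley-style normalization: centroid at origin,
    # mean distance from origin equal to sqrt(2).
    centroid = pts.mean(axis=0)
    d = np.sqrt(((pts - centroid) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0.0, -s * centroid[0]],
                  [0.0, s, -s * centroid[1]],
                  [0.0, 0.0, 1.0]])
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ ph.T).T, T

def fundamental_8pt(x1, x2):
    # x1, x2: (N, 2) arrays of matched image points (N >= 8).
    # Returns F such that x2_h^T F x1_h = 0 for each match.
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each match contributes one row of the linear system A f = 0.
    A = np.array([[u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
                  for (u1, v1, _), (u2, v2, _) in zip(p1, p2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint (a valid F has a zero singular value).
    U, S, Vt2 = np.linalg.svd(F)
    S[2] = 0.0
    F = U @ np.diag(S) @ Vt2
    # Undo the normalization.
    return T2.T @ F @ T1
```

From the estimated F one can read off the epipoles and the epipolar transformation, which feed the Kruppa equations in the second step; in the common simplified form these constrain the image of the absolute conic via F and the epipole, and solving them yields the intrinsic parameters.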