Arbitrary view generation for three-dimensional scenes from uncalibrated video cameras
- 19 November 2002
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- Vol. 4 (ISSN 1520-6149), pp. 2455-2458
- https://doi.org/10.1109/icassp.1995.480045
Abstract
This paper focuses on the representation and arbitrary view generation of three-dimensional (3-D) scenes. In contrast to existing methods that construct a full 3-D model or those that exploit geometric invariants, our representation consists of dense depth maps at several preselected viewpoints from an image sequence. Furthermore, instead of using multiple calibrated stationary cameras or range data, we derive our depth maps from image sequences captured by an uncalibrated camera. We propose an adaptive matching algorithm which assigns various confidence levels to different regions. Nonuniform bicubic spline interpolation is then used to fill in low-confidence regions in the depth maps. Once the depth maps are computed at preselected viewpoints, the intensity and depth at these locations are used to reconstruct arbitrary views of the 3-D scene. Experimental results are presented to verify our approach.
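The abstract describes two computational steps: filling low-confidence regions of a depth map with a bicubic spline fitted to the high-confidence samples, and re-projecting intensity plus depth from a reference viewpoint into an arbitrary new view. The Python sketch below illustrates the general idea only; it is not the authors' implementation. The intrinsics `K`, relative pose `(R, t)`, and the confidence threshold are illustrative assumptions, and a standard smoothing bicubic spline stands in for the paper's nonuniform spline fit.

```python
# A minimal sketch of depth-map hole filling and depth-based view synthesis.
# Assumptions (not from the paper): known intrinsics K, relative pose (R, t),
# a grayscale image, and a simple confidence threshold of 0.5.
import numpy as np
from scipy.interpolate import SmoothBivariateSpline


def fill_low_confidence(depth, confidence, threshold=0.5):
    """Replace low-confidence depth values with a bicubic spline (kx=ky=3)
    fitted to the high-confidence samples."""
    h, w = depth.shape
    ys, xs = np.nonzero(confidence >= threshold)
    spline = SmoothBivariateSpline(xs, ys, depth[ys, xs], kx=3, ky=3)
    gy, gx = np.mgrid[0:h, 0:w]
    filled = depth.copy()
    low = confidence < threshold
    filled[low] = spline(gx[low], gy[low], grid=False)
    return filled


def render_view(image, depth, K, R, t):
    """Forward-warp a reference image into a new view using per-pixel depth.
    A z-buffer keeps the nearest surface when several source pixels land on
    the same target pixel."""
    h, w = depth.shape
    gy, gx = np.mgrid[0:h, 0:w]
    pix = np.stack([gx.ravel(), gy.ravel(), np.ones(h * w)])   # homogeneous pixel coords (3 x N)
    pts = (np.linalg.inv(K) @ pix) * depth.ravel()             # back-project to 3-D points
    cam = R @ pts + t[:, None]                                  # points in the new camera frame
    proj = K @ cam
    u = np.rint(proj[0] / proj[2]).astype(int)
    v = np.rint(proj[1] / proj[2]).astype(int)
    z = proj[2]
    src = image.ravel()

    out = np.zeros(h * w, dtype=image.dtype)
    zbuf = np.full(h * w, np.inf)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (z > 0)
    for i in np.flatnonzero(ok):
        j = v[i] * w + u[i]
        if z[i] < zbuf[j]:          # keep the nearest surface
            zbuf[j] = z[i]
            out[j] = src[i]
    return out.reshape(h, w)


# Synthetic usage example: a fronto-parallel plane viewed under a small translation.
h, w = 64, 64
depth = np.full((h, w), 5.0)
conf = np.ones((h, w))
conf[20:30, 20:30] = 0.0                      # pretend this patch failed to match
img = np.random.rand(h, w)
K = np.array([[100.0, 0.0, w / 2], [0.0, 100.0, h / 2], [0.0, 0.0, 1.0]])
novel = render_view(img, fill_low_confidence(depth, conf), K, np.eye(3), np.array([0.2, 0.0, 0.0]))
```

Forward splatting with a z-buffer is one straightforward way to realize "reconstruct arbitrary views from intensity and depth"; the paper's own rendering and its handling of disocclusions across several preselected viewpoints are not specified in the abstract.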