Image based view synthesis of articulated agents
- 22 November 2002
- Conference paper (CVPR 1997)
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- pp. 1047-1053
- https://doi.org/10.1109/cvpr.1997.609459
Abstract
Using a combination of techniques from visual representations, view synthesis, and visual-motor model estimation, we present a method for animating the movements of an articulated agent (e.g. a human or robot arm) without the use of any prior models or explicit 3D information. The information needed to generate simulated images can be acquired either on-line or off-line, by watching the agent perform an arbitrary, possibly unrelated task. We present experimental results synthesizing image sequences of the simulated movement of a human arm and a PUMA 760 robot arm. Control is in either image (camera), motor (joint), or Cartesian world coordinates. We have created a user interface in which a user can input a movement program and then, upon execution, view movies of the (simulated) agent executing the program, along with the instantaneous values of the dynamic variables.
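The abstract describes synthesizing novel images of an arm directly from observed example frames, indexed by the agent's configuration, with no 3D model. The paper's actual pipeline relies on view interpolation over learned visual-motor correspondences; the sketch below is only a crude stand-in for that idea, blending the stored frames whose joint configurations lie nearest the query configuration. All names here (`synthesize_view`, the toy data) are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def synthesize_view(query_angles, example_angles, example_images, k=2):
    """Blend the k stored example images whose joint configurations
    are closest to the query, weighting by inverse distance.

    query_angles:   (d,) joint configuration to render
    example_angles: (n, d) joint configurations observed during training
    example_images: (n, h, w) grayscale frames recorded at those configurations
    """
    dists = np.linalg.norm(example_angles - query_angles, axis=1)
    nearest = np.argsort(dists)[:k]
    # Inverse-distance weights (epsilon guards against an exact match).
    w = 1.0 / (dists[nearest] + 1e-8)
    w /= w.sum()
    # Weighted pixel-wise blend of the selected frames.
    return np.tensordot(w, example_images[nearest], axes=1)

# Toy usage: 10 example frames of a 2-joint arm, 32x32 pixels each.
rng = np.random.default_rng(0)
angles = rng.uniform(-np.pi, np.pi, size=(10, 2))
frames = rng.random(size=(10, 32, 32))
novel = synthesize_view(np.array([0.1, -0.3]), angles, frames)
print(novel.shape)  # (32, 32)
```

A pixel-wise blend ghosts badly when neighboring examples are far apart; the view-interpolation methods the paper builds on instead warp along dense image correspondences before blending, which is what makes the synthesized poses physically plausible.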