 The goal of this work is to develop basic tools for controlling, in real time, either autonomously or interactively, a virtual camera within a real environment.  The input is a set of images or video streams acquired from fixed or mobile cameras around a site; the output is a panoramic visualization of the scene in which a virtual, user-controlled camera can be moved through the environment.  With this technology a user could interactively navigate through a real environment, following a customized path of views of the site that is not predetermined by the input images.  The main research question is how to adaptively combine a set of basis images to synthesize new views of the scene without 3D models or 3D scene reconstruction as an intermediate step.  Recently we have developed an innovative technique, which we call <A HREF="http://www.cs.wisc.edu/~seitz/interp/vmorph.html"><I>view morphing</I></A>, that takes two basis images and interpolates a continuous range of in-between images corresponding to views along the linear path connecting the two camera centers. <P>
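To make the interpolation idea concrete, here is a minimal sketch of its core geometric step, restricted to the simplest case of two parallel (rectified) views: for such views, linearly interpolating the image positions of corresponding points gives their projection in the view whose camera center lies the same fraction of the way along the segment joining the two centers. The function name and the point coordinates below are hypothetical illustrations, not code from this project, and the full view-morphing method additionally prewarps non-parallel views with projective transforms before interpolating.

```python
import numpy as np

def interpolate_views(pts0, pts1, s):
    """Interpolate matched image points between two rectified views.

    pts0, pts1: (N, 2) arrays of corresponding point positions in the
    two basis images; s in [0, 1] selects a view on the linear path
    between the two camera centers (s=0 gives view 0, s=1 gives view 1).
    """
    pts0 = np.asarray(pts0, dtype=float)
    pts1 = np.asarray(pts1, dtype=float)
    # For parallel views, the in-between projection of each scene point
    # is the convex combination of its two basis projections.
    return (1.0 - s) * pts0 + s * pts1

# Hypothetical matched feature points in two rectified basis images.
pts0 = np.array([[100.0, 50.0], [200.0, 80.0]])
pts1 = np.array([[120.0, 50.0], [260.0, 80.0]])

# Point positions in the halfway view (s = 0.5).
mid = interpolate_views(pts0, pts1, 0.5)
```

A full morph would warp and blend the pixel values of both images according to such interpolated correspondences; this sketch shows only the position interpolation that makes the in-between views geometrically consistent.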
