Video gives us a compelling view into another part of the real world, such as a sporting event, political rally, or Broadway play. While this capability is great, each viewer gets the same view, whether they want it or not, and viewers have no power to control their viewpoint. In contrast, virtual reality (VR) immerses viewers in virtual worlds even though their bodies remain in the real world. Each viewer may move independently and freely throughout this world, and see events from their own viewpoint. VR, however, has focused on creating purely virtual worlds that do not correspond to anything in the real world.

Virtualization is the process of making real scenes and real events virtual --- what we call Virtualized Reality™ event models. These models can then be used to construct views of the real events from nearly any viewpoint, without interfering with the events! Like VR, Virtualized Reality dynamic event models allow viewers to see whatever they want to; unlike VR, this "other world" is an actual real event, and the views of this event are photo- and phono-realistic.

We are currently in the process of building the largest virtualization sensor in history. The sensor will house 1000 cameras, with an overall visual sensor resolution of approximately 307 megapixels, and 200 microphones for sound localization and association. This terasensor (so called because it will be capable of producing 5.7 terabytes of data per second) will be used to robustly create photo- and phono-realistic reconstructions, both spatial and temporal, of events occurring in the event space.
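To give a feel for these numbers, the following back-of-the-envelope sketch derives the per-camera resolution implied by the figures above, and an aggregate raw video rate under assumed values for frame rate and bit depth (neither is stated in the text, and the quoted 5.7 TB/s presumably also reflects data beyond raw camera output):

```python
# Rough sizing of the sensor described above.
# Only NUM_CAMERAS and TOTAL_PIXELS come from the text;
# FPS and BYTES_PER_PIXEL are assumptions for illustration.
NUM_CAMERAS = 1000
TOTAL_PIXELS = 307e6      # ~307 megapixels aggregate (from the text)
FPS = 30                  # assumed frame rate
BYTES_PER_PIXEL = 3       # assumed 24-bit RGB

# Per-camera resolution implied by the aggregate figure:
per_camera_pixels = TOTAL_PIXELS / NUM_CAMERAS   # ~307k pixels, roughly VGA (640x480)

# Aggregate raw video rate under the assumptions above:
raw_rate_bytes = TOTAL_PIXELS * FPS * BYTES_PER_PIXEL

print(f"per-camera resolution: {per_camera_pixels:,.0f} pixels")
print(f"aggregate raw video:   {raw_rate_bytes / 1e9:.1f} GB/s under these assumptions")
```

The point of the sketch is that even the raw, uncompressed video stream from 1000 VGA-class cameras is tens of gigabytes per second; the full terasensor figure additionally accounts for whatever intermediate data the reconstruction pipeline generates.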