Overview of CMU Command Post of The Future
MultiModal Command and Control

June, 1998

For the command post of the future to be effective and efficient, the participants will need new ways to collaborate with each other and to interact with supporting information assets and sources. We propose to create a broad range of advanced human-systems interaction technologies, along with the supporting toolkits, that will provide an order of magnitude greater speed and efficiency for interacting with computers. These include conversational speech recognition, continuous handwriting recognition, person tracking and face tracking using inconspicuous cameras, virtual reality and 3D graphics technologies, integration of personal handheld input-output devices (such as the PalmPilot PDA) with the other personal and shared input-output mechanisms in the command post, and new interaction techniques that facilitate joint work on shared displays. New 3D interaction techniques using conventional and novel input devices will be developed and integrated with the environment, so that interacting with 3D displays will be as fluid and natural as interacting with 2D displays. For example, 3D "props" such as a model of a building or a tank will be used to change the view easily. These techniques will be applied to map-based and data visualization tasks. Because the interactions will be more natural, we expect higher accuracy and fewer errors in addition to the increased speed and efficiency.
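To give a concrete flavor of multimodal interaction, the sketch below pairs a spoken deictic command with a near-simultaneous pointing gesture when both fall inside a fusion time window (in the spirit of "put that there" interfaces). The event format, function names, and window size are all hypothetical illustrations, not part of the actual CPOF toolkits.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    modality: str     # e.g. "speech" or "gesture"
    payload: str      # recognized word, or the object pointed at
    timestamp: float  # seconds since the start of the session

def fuse(speech: Event, gesture: Event, window: float = 1.5) -> Optional[dict]:
    """Pair a deictic spoken command ("move that") with a pointing
    gesture if the two events occur within `window` seconds."""
    if abs(speech.timestamp - gesture.timestamp) <= window:
        return {"command": speech.payload, "target": gesture.payload}
    return None

# Saying "move" at t=10.2s while pointing at "tank-7" at t=10.6s:
order = fuse(Event("speech", "move", 10.2), Event("gesture", "tank-7", 10.6))
```

Temporal proximity is the simplest fusion criterion; a fuller system would also weigh recognition confidence from each modality before committing to a combined interpretation.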

In addition, automatic recordings of actions and histories of interactions will make briefings and evaluations of those actions more efficient. This rich multi-modal record will provide an organizational memory of command post activity that can be used, for example, to provide rapid drill-down access in decision briefings. These histories will also be used to create "macros" or "intelligent agents" that automate routine tasks. Information about the participants' attention will also be used to provide non-verbal cues that index the history for later review. Note taking will be supported by public joint notes or action items that can be entered by voice, handwriting, or typing. Private notes and side conversations will also be supported through local written or spoken messages. The joint activity will be recorded, tracked, and processed to provide indexing, rapid browsing, and summaries for review by participants who join late. The histories will be automatically summarized and presented in an interactive "meeting browser" tool to facilitate rapid understanding and evaluation of the activities. Similarly, the comments and views of earlier participants can be replayed at relevant times during the discussion for a later group of participants. The history can be linked directly with commercial presentation tools, enabling briefing material to be tied to the supporting information assets. When information in the command post information record is updated, briefings can, where appropriate, be updated automatically.
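One way to picture the time-indexed history behind such a meeting browser is as an append-only event log that can be queried around a moment of interest, for example when drilling down from a line in a decision briefing to its surrounding discussion. This is only an illustrative sketch under that assumption; the class and method names are invented here, not taken from the actual system.

```python
from bisect import bisect_left
from dataclasses import dataclass

@dataclass
class Entry:
    timestamp: float  # seconds since the start of the session
    speaker: str
    modality: str     # "speech", "handwriting", "note", ...
    content: str

class MeetingHistory:
    """Append-only, time-ordered record of command post activity."""
    def __init__(self):
        self.entries = []

    def record(self, entry):
        self.entries.append(entry)  # entries are assumed to arrive in time order

    def around(self, t, window=30.0):
        """Return all activity within `window` seconds of time t,
        e.g. the context behind one item in a decision briefing."""
        times = [e.timestamp for e in self.entries]
        lo = bisect_left(times, t - window)
        hi = bisect_left(times, t + window)
        return self.entries[lo:hi]

history = MeetingHistory()
history.record(Entry(12.0, "S3", "speech", "recommend moving bravo north"))
history.record(Entry(95.0, "S2", "note", "action item: confirm supply route"))
context = history.around(100.0)  # activity near t=100s
```

Because the log is kept in time order, a binary search (`bisect_left`) finds the relevant window without scanning the whole record, which matters when a session produces hours of multimodal events.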

All of this will be supported by flexible, open tools that will enable developers to rapidly create new systems and adapt solutions to emerging situations. Because of high-level interactive editors and tools, significantly fewer people will be needed to configure the systems and to interface with new databases and external data sources. Many capabilities will be available for end-users to customize, and others will be easily changed by developers using the high-level toolkits for multimodal, 3D, and collaborative interactions.
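To suggest how a high-level toolkit could let a developer attach a new external data source with little code, here is a hypothetical registry pattern: a fetch function is registered under a name and then looked up uniformly by the rest of the system. The registry, decorator, and source names are all invented for illustration and do not describe the actual toolkits.

```python
from typing import Callable, Dict, List

# Hypothetical registry mapping a source name to its fetch function.
SOURCES: Dict[str, Callable[[str], List[dict]]] = {}

def register_source(name: str):
    """Decorator: plug a new external data source into the command post."""
    def wrap(fetch):
        SOURCES[name] = fetch
        return fetch
    return wrap

@register_source("unit-positions")
def fetch_unit_positions(query: str) -> List[dict]:
    # A real adapter would query an external database here.
    return [{"unit": "alpha", "grid": "NV1234"}]

def lookup(source: str, query: str) -> List[dict]:
    """Uniform access to any registered source."""
    return SOURCES[source](query)

positions = lookup("unit-positions", "all")
```

The point of the pattern is that adapting to a new data source means writing one small fetch function, rather than modifying the surrounding system.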

All of this will be possible by building on our substantial existing technology and knowledge base. Our JANUS speech recognition system, NPen++ handwriting recognition system, and person and gaze tracking are among the world's most accurate. Our Multimodal Toolkit will be used to control these. Our Ariadne map tool will make visualizations over maps easier to build. Our Pebbles work on multi-user interaction with shared displays and personal digital assistants will be integrated to facilitate multi-user shared interactions. The Alice toolkit has demonstrated how easily 3D environments and visualizations can be created. The CSPACE system (developed in the ITO IC&V program) will be used to interlink commercial presentation tools with the meeting record and to support evolving documents (through versioning and update events). Each of these has already been demonstrated on individual tasks, some of which have been map-based. The proposed work will provide revolutionary productivity gains by integrating these technologies and further enhancing them to make them more practical, more accurate, and more effective for real users.
