Project Overview


Head-Mounted vs. Scene-Oriented Cameras

Goal: To understand the value of different sources of visual information for collaboration on physical tasks.

Method: Participants collaborated to build a large toy robot. One partner (the "worker") built the robot under the guidance of the other partner (the "helper"). Pairs completed five tasks, one in each of five media conditions:

  • side-by-side (participants worked together in the same room)
  • head-mounted camera with eye-tracking (remote helper saw the output from the head-mounted camera on his/her monitor)
  • scene camera showing wider view of work space (remote helper saw output from scene camera on his/her monitor)
  • head and scene cameras at the same time
  • audio-only

Findings: The scene camera was more valuable than the head-mounted camera or audio alone, but not as valuable as working side-by-side.

Publication: Fussell, S. R., Setlock, L. D., & Kraut, R. E. (in press).



Robot used in our studies




Remote helper's screen




Cursor Pointing Tool

Goal: To assess whether giving remote helpers a cursor-controlled pointing device, visible simultaneously to the worker building the robot, improves collaboration.

Method: Participants collaborated to build the same large toy robot, under one of three media conditions:

  • side-by-side
  • scene camera alone
  • scene camera plus cursor pointing device

Findings: Participants valued the cursor pointing device, rating it significantly easier to refer to task objects and locations with the cursor tool. However, the cursor pointer did not improve performance times over the scene camera alone.

Preliminary Report: Fussell, Setlock, Parker & Yang (in press).





Helper screen with manual and cursor pointer.


Video Drawing Tools

Goals: To design a new tool to allow remote helpers to communicate gestural information by drawing on a video feed, and to assess the value of this tool for remote collaboration on physical tasks.

Development: We designed a simple drawing tool that allows users to draw lines and shapes on top of the output from a video camera. The tool also performs simple shape recognition (circles, arrows, lines, squares): recognized shapes are normalized, and the output is seen by both the helper and the worker.
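To illustrate the kind of shape recognition described above, here is a minimal sketch in Python. It is hypothetical, not the project's actual recognizer: it classifies a freehand stroke as a circle or a line using simple geometric heuristics, and "normalizes" a recognized circle by replacing the stroke with an ideal center and radius.

```python
import math

def classify_stroke(points):
    """Classify a freehand stroke (list of (x, y) points) as 'circle',
    'line', or 'unknown'. Hypothetical heuristic, for illustration only."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    # Distance of each sample point from the stroke's centroid.
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    mean_r = sum(radii) / len(radii)
    spread = max(radii) - min(radii)
    # Distance between the stroke's first and last points.
    closure = math.hypot(points[0][0] - points[-1][0],
                         points[0][1] - points[-1][1])
    # Nearly constant radius plus endpoints close together => circle.
    if mean_r > 0 and spread / mean_r < 0.2 and closure < mean_r:
        return "circle"
    # Total path length close to the endpoint distance => straight line.
    path = sum(math.hypot(points[i + 1][0] - points[i][0],
                          points[i + 1][1] - points[i][1])
               for i in range(len(points) - 1))
    if closure > 0 and path / closure < 1.05:
        return "line"
    return "unknown"

def normalize_circle(points):
    """Replace a recognized circle stroke with its ideal center and radius."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    r = sum(math.hypot(x - cx, y - cy) for x, y in points) / len(points)
    return (cx, cy, r)

# Example strokes: a rough circle of radius ~10, and a straight line.
circle = [(10 * math.cos(t / 10.0), 10 * math.sin(t / 10.0))
          for t in range(63)]
line = [(i, 2 * i) for i in range(20)]
```

A full system would also need heuristics for arrows and squares, but the idea is the same: match the raw stroke against an idealized shape, then redraw the idealized version for both participants.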

Behavioral studies: We will evaluate communication and performance on the robot construction task with and without the gesture drawing tool. We will also compare performance with and without the drawing recognition component.



Examples of gesture drawing





Example of drawing recognition


Gesture in Face-to-Face Communication

Goals: To understand the timing and nature of gestures during collaborative physical tasks, and to understand when workers focus their attention on helpers' gestures.

Method: Videotaping and eye-tracking of gestures during collaborative physical tasks.



 

This work is funded by the National Science Foundation, grant #0208903. The opinions and findings expressed on this site are those of the investigators, and do not reflect the views of the National Science Foundation.