Detailed Human Data Acquisition of Kitchen Activities: the CMU-Multimodal Activity Database (CMU-MMAC)

Abstract

Over the past decade, researchers in computer graphics, computer vision, and robotics have begun to work with significantly larger collections of data. A number of sizable databases have been collected and made available to researchers: faces, motion capture, natural scenes, and changes in weather and lighting. These and other databases have done a great deal to facilitate research and to provide standardized test datasets for new algorithms; however, they are limited by the constrained settings within which they were collected. We propose a focused effort to capture detailed (high spatial and temporal resolution) human data in the kitchen while cooking several recipes. The database contains multimodal measures of the activity of subjects performing the tasks involved in cooking and food preparation. Currently, we record video from five external cameras and one wearable camera, audio from five balanced microphones and a wearable watch, motion capture with a 12-camera Vicon system, and accelerometer, gyroscope, and magnetometer readings from five IMUs. Several computers record the various modalities and are synchronized using the Network Time Protocol (NTP). Preliminary data can be downloaded from http://kitchen.cs.cmu.edu/ and is currently being used for multimodal temporal segmentation of activities and activity recognition.
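Because each modality is recorded on a different NTP-disciplined machine, cross-modal analysis starts by putting all streams on a common timeline. The Python sketch below illustrates one way to do this: IMU samples are snapped to the nearest video frame timestamp with a nearest-neighbor lookup. The function name, array layout, and sampling rates are illustrative assumptions, not part of the released database's API or file format.

import numpy as np

def align_imu_to_video(imu_times, imu_samples, video_times):
    """Nearest-neighbor alignment of IMU samples to video frame timestamps.

    Assumes both machines' clocks were disciplined by NTP, so the
    timestamps share a common epoch (e.g., Unix time in seconds).

    imu_times    : (N,) sorted array of IMU sample timestamps
    imu_samples  : (N, D) array of IMU readings (e.g., 3-axis accelerometer)
    video_times  : (M,) array of video frame timestamps
    Returns an (M, D) array with one IMU reading per video frame.
    """
    # For each frame time, find its insertion point in the IMU timeline...
    idx = np.searchsorted(imu_times, video_times)
    idx = np.clip(idx, 1, len(imu_times) - 1)
    # ...then pick whichever neighboring IMU sample is closer in time.
    left, right = imu_times[idx - 1], imu_times[idx]
    idx -= (video_times - left) < (right - video_times)
    return imu_samples[idx]

# Toy usage: a 125 Hz IMU stream aligned to 30 fps video over one second.
imu_t = np.arange(0.0, 1.0, 1 / 125)
imu_x = np.random.randn(len(imu_t), 3)   # placeholder 3-axis accelerometer data
vid_t = np.arange(0.0, 1.0, 1 / 30)
aligned = align_imu_to_video(imu_t, imu_x, vid_t)
print(aligned.shape)                     # (30, 3): one IMU reading per frame

Nearest-neighbor lookup is sufficient when the IMU rate is much higher than the video frame rate; for lower-rate streams, linear interpolation between the two neighboring samples would be a natural refinement.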

Citation

Fernando de la Torre, Jessica Hodgins, Javier Montano and Sergio Valcarcel.
Detailed Human Data Acquisition of Kitchen Activities: the CMU-Multimodal Activity Database (CMU-MMAC)
CHI 2009 Workshop on Developing Shared Home Behavior Datasets to Advance HCI and Ubiquitous Computing Research, Boston, April 4, 2009.

Results

CMU Multi-Modal Activity Database

Acknowledgements and Funding

We thank Adam Bargteil, Xavi Martin, Alex Collado, and Pep Beltran for a preliminary version of the capture system. The data collection was funded in part by the National Science Foundation under Grant No. EEEC-0540865.
