Facial Expression Analysis


September 8, 2015
2:00 pm – 6:00 pm
7th IEEE International Conference on Biometrics: Theory, Applications and Systems
BTAS 2015
Arlington, Virginia


Description

The face is one of the most powerful channels of nonverbal communication. Facial expression provides cues about emotion, intention, alertness, pain, and personality; regulates interpersonal behavior; and communicates psychiatric and biomedical status, among other functions. Within the past 15 years, there has been increasing interest in automated facial expression analysis in the computer vision and machine learning communities. This tutorial will review fundamental approaches to facial measurement by behavioral scientists and current efforts in automated facial expression recognition. We consider the challenges involved; review databases available to the research community; and cover approaches to feature detection, tracking, and representation, as well as both supervised and unsupervised learning. In addition, we will discuss applications of facial image analysis to clinical depression, personality processes, and pain assessment, as well as the use of facial expression as a biometric.


Program

Introduction

Facial Action Coding System (FACS)

Computational models for facial expression analysis

  • Registration
  • Features
  • Supervised facial expression analysis
  • Unsupervised facial expression analysis

Applications to:

  • Clinical depression
  • Pain assessment
  • Facial expression as biometrics
  • Personality and facial expression

Conclusions and open problems


Target participants

The tutorial is targeted at researchers and practitioners who use video to analyze facial behavior. The tutorial will be self-contained. Basic knowledge of computer vision is required to understand some of the methods, but the second part of the tutorial (application areas) requires no computer vision background.


Relevant literature

Alghowinem, S., Goecke, R., Cohn, J. F., Wagner, M., Parker, G., & Breakspear, M. (2015). Nonverbal social withdrawal in depression: Evidence from manual and automatic analyses. Image and Vision Computing, 32(10), 641–647.

Hammal, Z., & Cohn, J. F. (2014). Towards multimodal pain assessment for research and clinical use. Proceedings of the 2014 Workshop on Road-mapping the Future of Multimodal Interaction Research including Business Opportunities and Challenges, Istanbul, Turkey.

Girard, J. M., Cohn, J. F., Sayette, M. A., Jeni, L. A., & De la Torre, F. (2015). How much training data for facial action unit detection? Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, Ljubljana, Slovenia.

De la Torre, F., Chu, W.-S., Xiong, X., Vicente, F., Ding, X., & Cohn, J. F. (2015). IntraFace. Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, Ljubljana, Slovenia.

Girard, J. M., Cohn, J. F., Sayette, M. A., Jeni, L., & De la Torre, F. (2014). Spontaneous facial expression can be measured automatically. Behavior Research Methods.

Girard, J. M., Cohn, J. F., & De la Torre, F. (2014). Estimating smile intensity: A better way. Pattern Recognition Letters.

Cohn, J. F., & De la Torre, F. (2014). Automated face analysis for affective computing. The Oxford Handbook of Affective Computing.

Hoai, M., & De la Torre, F. (2014). Max-margin early event detectors. International Journal of Computer Vision, 107(2), 191–202.

Jeni, L. A., Cohn, J. F., & De la Torre, F. (2013). Facing imbalanced data: Recommendations for the use of performance metrics. Affective Computing and Intelligent Interaction (ACII).

Chu, W.-S., De la Torre, F., & Cohn, J. F. (2013). Selective transfer machine for personalized facial action unit detection. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

Chu, W.-S., Zhou, F., & De la Torre, F. (2012). Unsupervised temporal commonality discovery. European Conference on Computer Vision (ECCV).

De la Torre, F., Simon, T., Ambadar, Z., & Cohn, J. F. (2011). FAST-FACS: A computer-assisted system to increase speed and reliability of manual FACS coding. Affective Computing and Intelligent Interaction (ACII).

De la Torre, F., & Cohn, J. F. (2011). Facial expression analysis. Guide to Visual Analysis of Humans: Looking at People. Springer.

Zhu, Y., De la Torre, F., Cohn, J. F., & Zhang, Y. (2011). Dynamic cascades with bidirectional bootstrapping for spontaneous facial action unit detection. IEEE Transactions on Affective Computing, 2(2), 79–91.

Cohn, J. F., Simon, T., Matthews, I., Yang, Y., Nguyen, M. H., Tejera, M., Zhou, F., & De la Torre, F. (2009). Detecting depression from facial actions and vocal prosody. Affective Computing and Intelligent Interaction (ACII).


Lecturers


Fernando De la Torre
Associate Research Professor
Carnegie Mellon University

Fernando De la Torre is an Associate Research Professor in the Robotics Institute at Carnegie Mellon University. He received his B.Sc. degree in Telecommunications and his M.Sc. and Ph.D. degrees in Electronic Engineering from La Salle School of Engineering at Ramon Llull University, Barcelona, Spain, in 1994, 1996, and 2002, respectively. His research interests are in computer vision and machine learning. He currently directs the Component Analysis Laboratory (http://ca.cs.cmu.edu) and the Human Sensing Laboratory (http://humansensing.cs.cmu.edu) at Carnegie Mellon University. He has over 150 publications in refereed journals and conferences, has organized and co-organized several workshops, and has given tutorials at international conferences on the use and extensions of component analysis. He is an Associate Editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence.


Jeff Cohn
Professor
University of Pittsburgh

Jeffrey Cohn is Professor of Psychology and Psychiatry at the University of Pittsburgh and Adjunct Professor of Computer Science at the Robotics Institute at Carnegie Mellon University. He leads interdisciplinary and inter-institutional efforts to develop advanced methods for the automatic analysis and synthesis of facial expression and prosody, and applies those tools to research in human emotion, social development, nonverbal communication, psychopathology, and biomedicine. He has served as Co-Chair of the 2008 and 2015 IEEE International Conferences on Automatic Face and Gesture Recognition (FG 2008, FG 2015), the 2009 International Conference on Affective Computing and Intelligent Interaction (ACII 2009), the Steering Committee for the IEEE International Conference on Automatic Face and Gesture Recognition, and the 2014 ACM International Conference on Multimodal Interaction (ICMI 2014). He has co-edited special issues of Image and Vision Computing and is a Co-Editor of the IEEE Transactions on Affective Computing (TAC). His research has been supported by grants from the National Institutes of Health, the National Science Foundation, the Autism Foundation, the Office of Naval Research, the Defense Advanced Research Projects Agency, and other sponsors.


Jeff Girard
Ph.D. student
University of Pittsburgh

Jeffrey Girard is a Ph.D. student in Clinical Psychology at the University of Pittsburgh. He received his B.A. degree in Psychology and Philosophy from the University of Washington in 2009, and his M.S. degree in Clinical Psychology from the University of Pittsburgh in 2013. His research interests are in applying computational and statistical methods to the study of affective and interpersonal behavior. Currently, he is contributing to research at the Affect Analysis Group and the Personality Processes and Outcomes Laboratory at the University of Pittsburgh. His recent contributions to the field of affective computing include the development and assessment of methods for supervised facial expression analysis and their application to the study of clinical depression and personality processes.