Research Interests

Dr. De la Torre's research interests include machine learning, signal processing, and computer vision, with a focus on understanding human behavior from multimodal sensors (e.g., video, body sensors). He is particularly interested in three main topics:

      Human Sensing: Modeling and understanding human behavior from sensory data (e.g., video, motion capture, audio). This work is motivated by applications in human health, computer graphics, machine vision, biometrics, and human-machine interfaces. He co-leads the Human Sensing Lab at CMU; for more information, see Human Sensing Lab.

      Component Analysis (CA): CA methods (e.g., kernel PCA, Normalized Cuts, Multidimensional Scaling) are algebraic techniques that decompose a signal into components relevant for classification, clustering, modeling, or visualization. He is keen on using CA methods to efficiently and robustly learn models from large amounts of high-dimensional data. The theoretical focus of this work is to develop a unifying theory for many component analysis methods. He leads the Component Analysis Lab at CMU; see Component Analysis Lab.
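The simplest instance of a component analysis method is PCA, which decomposes centered data into orthogonal directions of maximal variance. A minimal sketch via the SVD (illustrative only, not the lab's unified formulation):

```python
import numpy as np

def pca(X, k):
    """Return the top-k principal components and projections of X (n x d)."""
    Xc = X - X.mean(axis=0)              # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                  # k x d orthonormal basis
    projections = Xc @ components.T      # n x k low-dimensional coordinates
    return components, projections

# Toy data: 100 samples in 5 dimensions, reduced to 2 components.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
comps, Z = pca(X, 2)
print(comps.shape, Z.shape)  # (2, 5) (100, 2)
```

Other CA methods (kernel PCA, Normalized Cuts, MDS) can be seen as variations on this decomposition with different kernels, constraints, or objectives, which is what makes a unified treatment attractive.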

      Face Analysis: Developing algorithms for real-time face tracking, recognition, and expression/emotion analysis.

Current projects

Human Sensing

Depression assessment

This project aims to compute quantitative behavioral measures related to depression severity from facial expression, body gestures and vocal prosody in clinical interviews.

Deception detection

Learning facial indicators of deception.

Hot flash detection

Machine learning algorithms to detect hot flashes in women using physiological measures.

Forecasting Anterior Cruciate Ligament Rupture Patterns

Use of machine learning techniques to predict the injury pattern of the Anterior Cruciate Ligament (ACL) using non-invasive methods.

Intelligent diabetes assistant

We are working to create an intelligent assistant that helps patients and clinicians work together to manage diabetes at a personal and social level. This project uses machine learning to predict the effect that patient-specific behaviors have on blood glucose.

Indoor people localization

Tracking multiple people in indoor environments using the connectivity of Bluetooth devices.

Quality of Life Technology Center (QoLT)

QoLT is a unique partnership between Carnegie Mellon and the University of Pittsburgh that brings together a cross-disciplinary team of technologists, clinicians, industry partners, end users, and other stakeholders to create revolutionary technologies that will improve and sustain the quality of life for all people.

Multimodal data collection

A multimodal database of subjects performing the tasks involved in cooking, captured with several sensors (audio, video, motion capture, accelerometer/gyroscope).

Component Analysis (CA) Methods

Unification of Component Analysis

This project aims to find the fundamental set of equations that unifies all component analysis methods.

Feature Selection

A convex optimization relaxation framework for feature selection.

Low dimensional embeddings

Finding low-dimensional embeddings of signals that are optimal for modeling, classification, visualization, and clustering.
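As a concrete example of one classical embedding technique (not the lab's specific algorithms), classical Multidimensional Scaling recovers low-dimensional coordinates from pairwise Euclidean distances alone:

```python
import numpy as np

def classical_mds(D, k):
    """Embed points in k dims from an n x n matrix of Euclidean distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]         # take the top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Points on a line embed exactly in one dimension (up to sign/translation).
X = np.array([[0.0], [1.0], [3.0]])
D = np.abs(X - X.T)                       # pairwise distances
Y = classical_mds(D, 1)
```

The embedding is unique only up to rotation, reflection, and translation, but the pairwise distances among the recovered coordinates match the input distances exactly when they are Euclidean.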

Learning optimal representations

Learning optimal representations for classification, image alignment, visualization and clustering.

Face Analysis

Image Alignment with Parameterized Appearance Models (PAMs)

Image alignment with parameterized appearance models (e.g., Active Appearance Models, Morphable Models, Eigentracking).
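At the core of this family of methods is the minimization of an appearance (sum-of-squared-differences) error over warp parameters. A toy sketch with a pure-translation warp and exhaustive search (real PAM fitting optimizes much richer warp and appearance parameters, typically with Gauss-Newton rather than brute force):

```python
import numpy as np

def align_by_ssd(image, template):
    """Return the (row, col) offset whose window best matches `template`."""
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            # Appearance error between the template and the shifted window.
            err = np.sum((image[r:r+th, c:c+tw] - template) ** 2)
            if err < best:
                best, best_pos = err, (r, c)
    return best_pos

# Synthetic check: crop a template at a known location and recover it.
rng = np.random.default_rng(1)
img = rng.normal(size=(40, 40))
tmpl = img[12:22, 5:15].copy()            # ground-truth location (12, 5)
print(align_by_ssd(img, tmpl))            # (12, 5)
```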

Face Recognition

Recognizing people from images and videos.

Facial feature detection

Detecting facial features in images.

Temporal Segmentation

Temporal segmentation of human motion

Spatio-Temporal Facial Expression Segmentation

A two-step approach to temporally segment facial gestures from video sequences; it registers both the rigid and non-rigid motion of the face.

Multimodal diaries

Summarization of daily activity from multimodal data (audio, video, body sensors, and computer monitoring).

Past projects