Junwei Liang's PhD Thesis Proposal
Joint Analysis and Prediction of Human Actions and Paths in Video
Junwei Liang
Oct. 30, 2020, 2:00 pm EST
Carnegie Mellon University
Thesis Committee
Document
The write-up [.pdf] can be found here. An arXiv version is also available.
Slides
The slides [.pdf] can be found here.
Abstract
With recent advances in deep learning for computer vision, systems can now analyze an unprecedented amount of rich visual information from videos to enable applications such as autonomous driving, socially aware robot assistants, and public safety monitoring. Deciphering human behavior in videos to predict people's future paths/trajectories and actions is central to these applications. However, modern vision systems in self-driving applications usually perform detection (perception) and prediction in separate components, which leads to error propagation and sub-optimal performance. More importantly, these systems do not provide high-level semantic attributes with which to reason about a pedestrian's future. This design hinders prediction performance on video data from diverse domains and in unseen scenarios. To forecast future human behavior well, it is crucial that the system detect and analyze the human activities leading up to the prediction period and pass informative features to the subsequent prediction module for context understanding.
In this thesis, with the goal of improving the performance and generalization ability of future trajectory and action prediction models, we conduct human action analysis and jointly optimize models for action detection, action prediction, and trajectory prediction. The thesis consists of three parts. The first part analyzes human actions: we develop an efficient object detection and tracking system similar to the perception systems used in self-driving, tackle action recognition under weakly supervised learning settings, and propose a method to learn viewpoint-invariant representations for video action recognition and detection with better generalization. In the second part, we tackle trajectory forecasting with semantic context understanding: we study multi-future trajectory prediction using scene semantics and exploit 3D simulation for robust learning. Finally, in the third part, we explore the joint analysis and prediction of human actions and trajectories; a minimal sketch of this joint design follows. Our final goal is to build a robust end-to-end vision system that can jointly detect and forecast future actions and trajectories.
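To make the joint design concrete, the sketch below (in PyTorch) shares a single visual encoder between an action-classification head and a trajectory-forecasting head, so both tasks are trained end-to-end instead of in separate perception and prediction components. All module names, feature dimensions, and losses here are hypothetical stand-ins for illustration, not the architectures proposed in the thesis.

```python
import torch
import torch.nn as nn

class JointActionTrajectoryModel(nn.Module):
    """Illustrative sketch: one shared encoder, two task heads.

    Hypothetical dimensions and layers; not the thesis models.
    """

    def __init__(self, feat_dim=256, num_actions=30, pred_len=12):
        super().__init__()
        self.pred_len = pred_len
        # Shared per-frame encoder (stand-in for a CNN backbone).
        self.encoder = nn.Sequential(
            nn.Linear(512, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        # Temporal model over the observed frames.
        self.temporal = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        # Head 1: classify the current action from the last hidden state.
        self.action_head = nn.Linear(feat_dim, num_actions)
        # Head 2: regress future (x, y) positions for pred_len steps.
        self.traj_head = nn.Linear(feat_dim, pred_len * 2)

    def forward(self, frame_feats):
        # frame_feats: (batch, obs_len, 512) precomputed frame features.
        h = self.encoder(frame_feats)
        _, (last, _) = self.temporal(h)
        ctx = last[-1]  # (batch, feat_dim) shared context
        action_logits = self.action_head(ctx)
        future_xy = self.traj_head(ctx).view(-1, self.pred_len, 2)
        return action_logits, future_xy

model = JointActionTrajectoryModel()
feats = torch.randn(4, 8, 512)  # 4 clips, 8 observed frames each
logits, traj = model(feats)
# Joint loss: both heads update the shared encoder together,
# avoiding the error propagation of a two-stage pipeline.
loss = nn.functional.cross_entropy(logits, torch.randint(0, 30, (4,))) \
     + nn.functional.mse_loss(traj, torch.randn(4, 12, 2))
loss.backward()
```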
Code/Datasets/Models
References (Completed Work)
My thesis is based on the following publications:
  1. The Garden of Forking Paths: Towards Multi-Future Trajectory Prediction
    Junwei Liang, Lu Jiang, Kevin Murphy, Ting Yu, and Alexander Hauptmann
    CVPR 2020.
  2. Peeking into the Future: Predicting Future Person Activities and Locations in Videos
    Junwei Liang, Lu Jiang, Juan Carlos Niebles, Alexander Hauptmann, and Li Fei-Fei
    CVPR 2019. (Covered by multiple Chinese media outlets, including QbitAI (量子位) and Synced (机器之心), 02/13/2019, with 30k+ views within a week.)
    The #1 TensorFlow-based implementation on Papers with Code for the trajectory prediction task.
  3. SimAug: Learning Robust Representations from Simulation for Trajectory Prediction
    Junwei Liang, Lu Jiang, and Alexander Hauptmann
    ECCV 2020.
  4. Focal Visual-Text Attention for Memex Question Answering
    Junwei Liang, Lu Jiang, Liangliang Cao, Yannis Kalantidis, Li-Jia Li, and Alexander Hauptmann
    IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2019.
  5. Focal Visual-Text Attention for Visual Question Answering
    Junwei Liang, Lu Jiang, Liangliang Cao, Li-Jia Li, and Alexander Hauptmann
    CVPR 2018. (Spotlight Paper, 6.8% acceptance rate)
  6. Webly-Supervised Learning of Multimodal Video Detectors
    Junwei Liang, Lu Jiang, and Alexander Hauptmann
    AAAI 2017 Demo.
  7. Leveraging Multi-modal Prior Knowledge for Large-scale Concept Learning in Noisy Web Data
    Junwei Liang, Lu Jiang, Deyu Meng, and Alexander Hauptmann
    ICMR 2017.
  8. Learning to Detect Concepts from Webly-Labeled Video Data
    Junwei Liang, Lu Jiang, Deyu Meng, and Alexander Hauptmann
    IJCAI 2016.