Junwei Liang's PhD Thesis
From Recognition to Prediction: Analysis of Human Action and Trajectory Prediction in Video
Junwei Liang
June 30, 2020
Carnegie Mellon University
Thesis Committee
The write-up [.pdf] can be found here. arXiv version.
The slides [.pdf] can be found here.
With the advancement of deep learning in computer vision, systems are now able to analyze an unprecedented amount of rich visual information from videos, enabling applications such as autonomous driving, socially-aware robot assistants, and public safety monitoring. Deciphering human behaviors from video to predict people's future paths/trajectories and actions is important in these applications. However, human trajectory prediction remains a challenging task, as scene semantics and human intent are difficult to model. Many systems do not provide high-level semantic attributes for reasoning about pedestrians' future behavior. This design hinders prediction performance on video data from diverse domains and unseen scenarios. To enable accurate forecasting of future human behavior, it is crucial for the system to detect and analyze human activities as well as scene semantics, passing informative features to the subsequent prediction module for context understanding.
In this thesis, we conduct human action analysis and develop robust algorithms and models for human trajectory prediction in urban traffic scenes. The thesis consists of three parts. The first part analyzes human actions. We aim to develop an efficient object detection and tracking system, similar to the perception system used in self-driving, and tackle the action recognition problem in a weakly-supervised learning setting. We propose a method to learn viewpoint-invariant representations for video action recognition and detection with better generalization. In the second part, we tackle the problem of trajectory forecasting with scene semantic understanding. We study multi-modal future trajectory prediction using scene semantics and exploit 3D simulation for robust learning. Finally, in the third part, we explore using both scene semantics and action analysis to predict human trajectories. We demonstrate our model's efficacy on a new, challenging long-term trajectory prediction benchmark with multi-view camera data in traffic scenes.
My thesis is based on the following publications:
  1. The Garden of Forking Paths: Towards Multi-Future Trajectory Prediction
    Junwei Liang, Lu Jiang, Kevin Murphy, Ting Yu, Alexander Hauptmann
    CVPR 2020.  
  2. Peeking into the Future: Predicting Future Person Activities and Locations in Videos
    Junwei Liang, Lu Jiang, Juan Carlos Niebles, Alexander Hauptmann, Li Fei-Fei
    CVPR 2019. (Translated and reported by multiple Chinese media outlets (量子位 & 机器之心, 02/13/2019), with 30k+ views in a week.)
    #1 TensorFlow-based code on Papers with Code in the Trajectory Prediction task.
  3. SimAug: Learning Robust Representations from Simulation for Trajectory Prediction
    Junwei Liang, Lu Jiang, Alexander Hauptmann
    ECCV 2020.  
  4. Focal Visual-Text Attention for Memex Question Answering
    Junwei Liang, Lu Jiang, Liangliang Cao, Yannis Kalantidis, Li-Jia Li, and Alexander Hauptmann
    In IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2019.
  5. Focal Visual-Text Attention for Visual Question Answering
    Junwei Liang, Lu Jiang, Liangliang Cao, Li-Jia Li, and Alexander Hauptmann
    CVPR 2018. (Spotlight Paper, 6.8% acceptance rate)
  6. Webly-Supervised Learning of Multimodal Video Detectors
    Junwei Liang, Lu Jiang, and Alexander Hauptmann
    AAAI 2017 Demo.
  7. Leveraging Multi-modal Prior Knowledge for Large-scale Concept Learning in Noisy Web Data
    Junwei Liang, Lu Jiang, Deyu Meng, and Alexander Hauptmann
    ICMR 2017.
  8. Learning to Detect Concepts from Webly-Labeled Video Data
    Junwei Liang, Lu Jiang, Deyu Meng, and Alexander Hauptmann
    IJCAI 2016.