We harness ideas from imitation learning, specifically Maximum Entropy Inverse Reinforcement Learning (MaxEntIRL). The model is continuously informed by a person's behaviors as observed by a first-person camera. We use this model to forward-simulate the person's possible futures, which yields predictions for 1) which goal the person intends to reach and 2) how they will achieve it.
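To make the forward-simulation idea concrete, here is a minimal sketch under MaxEntIRL assumptions: a toy 5-state corridor with a made-up uniform step cost and deterministic transitions (all of which are illustrative choices, not the paper's actual model). Soft value iteration yields a stochastic MaxEnt policy, which we then roll forward to accumulate the expected future state visitation:

```python
import numpy as np

def soft_value_iteration(reward, transitions, goal, n_iters=100):
    """MaxEnt soft value iteration toward an absorbing goal state.
    reward: (S,) per-state reward; transitions: (A, S) deterministic
    next-state indices. Returns a stochastic policy pi of shape (S, A)."""
    V = np.full(reward.shape[0], -1e8)
    V[goal] = 0.0
    for _ in range(n_iters):
        Q = reward[None, :] + V[transitions]       # (A, S)
        V = np.logaddexp.reduce(Q, axis=0)         # soft-max over actions
        V[goal] = 0.0                              # clamp the absorbing goal
    Q = reward[None, :] + V[transitions]
    pi = np.exp(Q - V[None, :]).T                  # pi(a|s) = exp(Q - V)
    return pi / pi.sum(axis=1, keepdims=True)

def expected_visitation(pi, transitions, start, horizon=50):
    """Forward-simulate pi to accumulate expected state visitation D(s)."""
    S = pi.shape[0]
    d = np.zeros(S)
    d[start] = 1.0                                 # start-state distribution
    D = np.zeros(S)
    for _ in range(horizon):
        D += d
        d_next = np.zeros(S)
        for a in range(transitions.shape[0]):
            np.add.at(d_next, transitions[a], pi[:, a] * d)
        d = d_next
    return D

# Toy corridor: action 0 moves left, action 1 moves right.
S = 5
left = [max(s - 1, 0) for s in range(S)]
right = [min(s + 1, S - 1) for s in range(S)]
left[4] = 4                                        # make the goal absorbing
transitions = np.array([left, right])
reward = np.full(S, -1.0)                          # illustrative step cost
pi = soft_value_iteration(reward, transitions, goal=4)
D = expected_visitation(pi, transitions, start=0)
```

In this sketch the visitation mass drifts toward the goal state, so `D` peaks at the goal; the paper's actual state space, features, and reward learning are richer than this toy.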
The person is localized via a monocular SLAM algorithm, which forms the spatial component of the person's (and environment's) state. The semantic component of the state is represented by the relationship between the person and objects of interest; we adopt a straightforward approach to tracking the person's possession of objects as part of this semantic component.
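One way to picture such a state representation is a spatial pose augmented with a set of possessed objects. The sketch below is purely illustrative (the class, function, and threshold names are our assumptions, not the paper's identifiers): possession is toggled when an object of interest is detected within reach of the person's SLAM-estimated position.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentState:
    """Spatial component (a pose from monocular SLAM) plus a semantic
    component (which objects of interest the person carries).
    Names here are illustrative, not the paper's actual identifiers."""
    position: tuple                    # (x, y) in the SLAM frame
    possessions: frozenset = frozenset()

def update_possession(state, detections, pickup_radius=1.0):
    """Mark objects as possessed when detected within reach.
    `detections` maps object name -> (x, y) in the SLAM frame."""
    owned = set(state.possessions)
    for name, (ox, oy) in detections.items():
        dist = ((ox - state.position[0]) ** 2
                + (oy - state.position[1]) ** 2) ** 0.5
        if dist <= pickup_radius:      # the person picks up / holds it
            owned.add(name)
    return AgentState(state.position, frozenset(owned))

s = AgentState((0.0, 0.0))
s = update_possession(s, {"laptop": (0.5, 0.5), "bookbag": (5.0, 5.0)})
```

Here the nearby laptop enters the possession set while the distant bookbag does not; the paper's actual detection-and-possession mechanism may differ.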
The continuous modelling, state estimation, and forecasting loop comprises our main algorithm, Demonstrating Agent Rewards for K-futures Online (DARKO), depicted in Algorithm 1 from the paper, reproduced below:
In our paper, we derive MaxEntIRL extensions for performing inference over state and action subspaces, each of which can carry important semantic meaning. For example, we can forecast visitation to the subspace "has a bookbag and laptop"; see the paper for more examples. We additionally show how this property extends to efficiently predicting the expected length of the person's future trajectory, and we provide empirical results for this forecast. This result is summarized below as an excerpt from our paper, which shows that the expected future trajectory length can be computed efficiently as a summation of the expected future state visitation over the entire state space; it is best understood in the context of the derivation presented in Section 3.5.
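As a toy numerical illustration of these two quantities (the states and visitation values below are entirely made up, not results from the paper): once the expected future state visitation D(s) is available, a subspace forecast is a sum of D over the states satisfying a semantic predicate, and the expected trajectory length is the sum of D over the entire state space.

```python
import numpy as np

# Hypothetical augmented states (cell, has_bookbag, has_laptop) with
# made-up expected visitation counts D(s), for illustration only.
states = [((0, 0), False, False), ((1, 0), True, False),
          ((2, 0), True, True),   ((3, 0), True, True)]
D = np.array([1.0, 0.8, 0.6, 0.4])

# Visitation forecast for the subspace "has a bookbag and laptop":
# marginalize D over every state satisfying the semantic predicate.
subspace_visits = sum(d for s, d in zip(states, D) if s[1] and s[2])

# Expected future trajectory length: the summation of expected
# visitation over the ENTIRE state space.
expected_length = D.sum()
```

With these made-up values, `subspace_visits` is 0.6 + 0.4 = 1.0 and `expected_length` is 2.8; the efficiency claim in the paper comes from the fact that both are simple sums over already-computed visitation counts.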