Current Research

See our Publications, Code, and Media pages for more details on our progress.

Robot Feedback Assisting Learning of Sorting Game Rules - In Progress

We have developed a simple sorting game in which cards belong in one of two bins according to a pre-defined rule. In the example above, the rule is that all diamond shapes belong in the left bin. The Quori robot can give the player feedback as they play, using multiple modalities including movement and text, to convey correctness and encourage the player to learn the rule. We measure the player's objective performance during the game and their subjective experience through a post-game survey to determine which types of robot feedback are most effective. The first study, which we are currently developing, examines whether including robot nonverbal behavior changes the player's objective performance and subjective experience. The second study collects recordings of facial expressions during the game, which will be used to build predictive models of the player's internal state for choosing appropriate robot feedback.
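
Below is a minimal sketch of the game's core loop: checking a sort against the hidden rule and selecting feedback with or without a nonverbal component. The card attributes, bin names, feedback strings, and gesture labels are hypothetical placeholders, not the study's actual implementation.

```python
# Minimal sketch of the sorting game's rule check and feedback selection.
# All names below (Card fields, bins, gestures) are illustrative assumptions.

from dataclasses import dataclass
import random

@dataclass
class Card:
    shape: str   # e.g. "diamond", "circle", "square"
    color: str   # e.g. "red", "blue"

def correct_bin(card: Card) -> str:
    """Hidden rule the player must learn: all diamonds go in the left bin."""
    return "left" if card.shape == "diamond" else "right"

def robot_feedback(card: Card, chosen_bin: str, nonverbal: bool) -> dict:
    """Return feedback cues for one sorting move.

    `nonverbal` toggles the movement modality, mirroring the first study's
    comparison of feedback with and without robot nonverbal behavior.
    """
    is_correct = (chosen_bin == correct_bin(card))
    feedback = {"text": "Nice, that's correct!" if is_correct
                        else "Hmm, try a different bin next time."}
    if nonverbal:
        feedback["gesture"] = "nod" if is_correct else "head_shake"
    return feedback

# Example round: the player sorts a random card and receives feedback.
card = Card(shape=random.choice(["diamond", "circle"]), color="red")
print(robot_feedback(card, chosen_bin="left", nonverbal=True))
```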

Student Researchers: Roshni Kaushik, Bharath Sreenivas, Mark Chen, Priyanshi Garg, Jaehee Kim

Generating Nonverbal Behaviors on Quori (HRI 2021 paper)

For a student to teach the robot effectively, the student must be able to appropriately interpret the robot's internal state. Because nonverbal communication carries a large part of how we broadcast emotion, we must develop nonverbal behaviors for the Quori robot that convey different emotional states. We have developed a simulation using ROS and Gazebo to generate a wide range of Quori's behaviors, and we ran an online user study to determine what emotions people perceive in these behaviors. Knowing the relationship between nonverbal behaviors and perceived emotion will allow us to program Quori to move in ways that convey its internal state.
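
As one illustration of how such behaviors can be driven in a ROS/Gazebo simulation, the sketch below publishes a short joint trajectory to animate a simple gesture. The topic name, joint names, and gesture are assumed placeholders, not Quori's actual control interface.

```python
#!/usr/bin/env python
# Sketch of commanding one nonverbal behavior in a ROS/Gazebo simulation.
# The topic and joint names are illustrative assumptions.

import rospy
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint

def send_behavior():
    rospy.init_node("quori_behavior_demo")
    # Hypothetical controller topic; the real Quori setup may differ.
    pub = rospy.Publisher("/quori/arm_controller/command",
                          JointTrajectory, queue_size=1)
    rospy.sleep(1.0)  # give the publisher time to connect

    traj = JointTrajectory()
    traj.joint_names = ["left_shoulder_pitch", "left_shoulder_roll"]  # assumed names

    # Two waypoints: raise the arm, then lower it (a simple greeting-like gesture).
    up = JointTrajectoryPoint(positions=[1.0, 0.3],
                              time_from_start=rospy.Duration(1.0))
    down = JointTrajectoryPoint(positions=[0.0, 0.0],
                                time_from_start=rospy.Duration(2.0))
    traj.points = [up, down]

    pub.publish(traj)

if __name__ == "__main__":
    send_behavior()
```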

Student Researchers: Roshni Kaushik and Adrian Thinnyun

Early Prediction of Student Engagement from Facial and Contextual Features (ICSR 2021 paper)

An important aspect of this project is understanding when students are engaged and intervening to re-engage them when they are not, improving the educational outcome. We explored this topic using an existing dataset from an educational app, RoboTutor, which contains screen captures of the tablet in addition to video of students completing the educational activities. We extracted a set of facial features (e.g., distance from the camera, gaze direction) and contextual features (e.g., percentage of the activity completed) to predict key events related to engagement. For example, as seen above, given the facial features and the context of the activity, we can track over time the probability of the feedback the student will provide at the end of the activity - whether they are having a positive (green), neutral (yellow), or negative (red) experience. Using a pre-determined threshold, we can detect, before the activity is over, that the student is not having a good experience, which would enable the system to intervene appropriately.
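
As a simple illustration of the thresholding idea, the sketch below trains a toy classifier on made-up data and checks, partway through an activity, whether the predicted probability of a negative experience has crossed a pre-set threshold. The feature values, classifier choice, and threshold are assumptions for illustration; the ICSR 2021 paper describes the actual features and models.

```python
# Illustrative sketch of early prediction with a decision threshold.
# Feature names, the classifier, the data, and the threshold are placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [distance_from_camera, gaze_direction, pct_activity_complete]
# Label: 1 = negative end-of-activity feedback, 0 = otherwise (toy data).
rng = np.random.default_rng(0)
X_train = rng.random((200, 3))
y_train = (X_train[:, 0] + (1 - X_train[:, 2]) > 1.2).astype(int)

model = LogisticRegression().fit(X_train, y_train)

NEGATIVE_THRESHOLD = 0.7  # pre-determined intervention threshold (assumed value)

def should_intervene(window_features):
    """Check, mid-activity, whether the predicted chance of a negative
    experience has crossed the threshold."""
    p_negative = model.predict_proba([window_features])[0, 1]
    return p_negative, p_negative > NEGATIVE_THRESHOLD

# Track the probability over time as the activity progresses.
for pct_complete in (0.25, 0.5, 0.75):
    features = [0.8, 0.4, pct_complete]  # hypothetical per-window features
    p, intervene = should_intervene(features)
    print(f"{pct_complete:.0%} complete: p(negative)={p:.2f}, intervene={intervene}")
```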

Student Researchers: Roshni Kaushik and Steven Qu