Andrea Bajcsy

I am an Assistant Professor in the Robotics Institute and School of Computer Science at Carnegie Mellon University.

I lead the Interactive and Trustworthy Robotics Lab (Intent Lab). We study how to make learning-enabled robots safely and intelligently interact with humans. We draw upon methods from optimal control, dynamic game theory, Bayesian inference, and deep learning.

I obtained my Ph.D. in electrical engineering & computer science at UC Berkeley with Anca Dragan and Claire Tomlin. Before joining CMU, I was also a postdoctoral scholar with Jitendra Malik, and I worked at NVIDIA in the Autonomous Vehicle Research Group.

Prospective students: please see here.

email   |   cv   |   google scholar   |   github   |   bio


Unsure how to pronounce my last name? Bajcsy sounds like BYE-chee.


  • [Oct 2023] Submitted a paper to ICLR! We propose Representation-Aligned Preference-based Learning (RAPL), a tractable video-only method for solving the visual representation alignment problem and learning visual robot rewards via optimal transport.
  • [Sep 2023] Submitted a paper to ICRA! We present Conformal Decision Theory, a new theoretical and algorithmic framework for online calibration of decision risk.
  • [Aug 2023] New arXiv paper on learning vision-based pursuit-evasion robot policies. Check out our project website for videos of human-robot and robot-robot interaction "in the wild".
  • [Aug 2023] Paper accepted to CoRL! This work synthesizes safe control policies that explicitly account for a robot's ability to learn and adapt at runtime.
  • [Aug 2023] Submitted a paper to RA-L on stabilized and robust online learning from humans.
  • [Aug 2023] Extended results on contingency games are now on arXiv.
  • [Aug 2023] I gave a talk at Bosch Research.
  • [Apr 2023] New paper on contingency games: a model of strategic interaction that allows a robot to consider the full distribution of other agents' intents while anticipating how its certainty about those intents will evolve in the near future.