Updates
[Jan '24]  We just released a dataset and method for estimating the importance of objects in driving scenes, with a view towards triaging alerts for driver assistance. The RA-L paper is out now!
[Jun '23]  Presented our work on characterizing human peripheral vision during driving for intelligent driving assistance at IV 2023 in beautiful Anchorage!
[May '23]  Starting a research internship at Toyota Research Institute, working on object-based representations of driver awareness. Excited to be in Cambridge!
[Mar '23]  Proposed my PhD thesis -- officially a PhD candidate! Email me for a recording of my talk "Eye Gaze for Intelligent Driving".
[Dec '22]  Our work on using driver eye gaze as a supervisor for imitation-learned driving won best paper at the Aligning Robot Representations with Humans workshop at CoRL 2022!
[Dec '22]  Organized the Attention Learning Workshop at NeurIPS '22.
[Jun '22]  Starting a research internship at Bosch, exploring the use of human driver eye gaze for supervising imitation-learned driving agents.
[May '22]  Grateful to have won a Modeling, Simulation, and Training Fellowship to support my PhD research -- thank you to the Link Foundation!
[Mar '22]  Presented our VR driving simulator DReyeVR at HRI 2022 -- available on GitHub!
Ongoing work
- Representations of driver situational awareness (SA) based on eye gaze: building real-time estimates of driver SA via a novel interactive data collection method and object-based representations.
- Driver risk perception modeling: with TRI, I am modeling drivers' mental models of other vehicles on the road, and hence their perceived risk, from eye gaze data.
- Gaze-based memory modeling in virtual scenes: we show that cue-free recall of objects in both 2D and 3D virtual scenes can be predicted from gaze and object position data alone. Preprint coming soon!
Research
(*) denotes equal contribution
Mitigating Causal Confusion in Driving Agents via Gaze Supervision
A Biswas, BA Pardhi, C Chuck, J Holtz, S Niekum, H Admoni, and A Allievi
International Conference on Autonomous Agents and Multiagent Systems (AAMAS) 2024
Also appeared at Aligning Robot Representations with Humans (ARRH) workshop at Conference on Robot Learning 2022
[NVIDIA best paper award @ CoRL ARRH workshop]
[Pre-print]
Human drivers naturally produce an easily obtained, continuous signal that is highly correlated with causal elements of the driving state space: eye gaze. How can we use it as a supervisory signal?
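One common recipe for turning gaze into a supervisory signal (a hedged sketch under assumed tensor shapes, not necessarily this paper's exact loss) is to add an auxiliary gaze-prediction term to the behavior-cloning objective, so the policy's attention is pulled toward where the human looked:

```python
import torch
import torch.nn.functional as F

def gaze_supervised_bc_loss(pred_action, expert_action,
                            pred_saliency, gaze_saliency, aux_weight=0.5):
    """Behavior cloning with an auxiliary gaze-prediction loss.

    Illustrative sketch: the policy also predicts a saliency map over the
    input image (flattened to (batch, pixels)), which is matched to the
    human driver's recorded gaze heatmap via KL divergence.
    """
    bc = F.mse_loss(pred_action, expert_action)
    gaze = F.kl_div(
        F.log_softmax(pred_saliency, dim=-1),  # predicted attention (log-probs)
        F.softmax(gaze_saliency, dim=-1),      # human gaze heatmap (probs)
        reduction="batchmean",
    )
    return bc + aux_weight * gaze
```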
Object Importance Estimation using Counterfactual Reasoning for Intelligent Driving
P Gupta, A Biswas, H Admoni, and D Held
IEEE Robotics and Automation Letters (RA-L) 2024
[Project Page]
[Code & Dataset]
[arXiv]
The ability to identify important objects in a complex and dynamic driving environment can help assistive driving systems decide when to alert drivers.
We tackle object importance estimation in a data-driven fashion and introduce HOIST (Human-annotated Object Importance in Simulated Traffic).
HOIST contains driving scenarios with human-annotated importance labels for vehicles and pedestrians.
We additionally propose a novel approach that relies on counterfactual reasoning to estimate an object's importance.
We generate counterfactual scenarios by modifying the motion of objects and ascribe importance based on how the modifications affect the ego vehicle's driving.
Our approach outperforms strong baselines for the task of object importance estimation on HOIST.
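A minimal sketch of the counterfactual scoring idea described above (the `rollout_fn` interface, perturbation functions, and deviation metric are illustrative stand-ins, not the actual HOIST code):

```python
import numpy as np

def trajectory_deviation(base: np.ndarray, counterfactual: np.ndarray) -> float:
    """Mean Euclidean distance between two ego trajectories of shape (T, 2)."""
    return float(np.linalg.norm(base - counterfactual, axis=1).mean())

def object_importance(scenario, obj_id, rollout_fn, perturbations) -> float:
    """Score an object by how strongly perturbing its motion changes the
    ego vehicle's driving, taking the worst case over perturbations."""
    base = rollout_fn(scenario)  # ego trajectory in the unmodified scene
    return max(
        trajectory_deviation(base, rollout_fn(perturb(scenario, obj_id)))
        for perturb in perturbations  # e.g., remove the object, freeze it, speed it up
    )
```

Intuitively, a distant parked car barely changes the ego trajectory under any perturbation, while a pedestrian about to cross changes it substantially, so it scores as important.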
Characterizing Drivers' Peripheral Vision via the Functional Field of View for Intelligent Driving Assistance
A Biswas and H Admoni
IEEE Intelligent Vehicle Symposium (IV) 2023
Oral: 5% acceptance rate
Also appeared as a peer-reviewed talk at CogSci 23
[Pre-print]
We find that drivers' peripheral vision is vertically asymmetric: more peripheral stimuli are missed in the upper portion of the field of view, but only while driving.
We also find that peripheral vision degrades immediately after saccades (rapid eye movements).
DReyeVR: Democratizing Virtual Reality Driving Simulation for Behavioural & Interaction Research
G Silvera*, A Biswas*, and H Admoni
ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2022,
Short Contributions Track
[arXiv]
[Simulator Github]
[Video]
We open-source DReyeVR, our VR-based driving simulator built with human-centric research in mind.
It's based on CARLA -- if CARLA is for algorithmic drivers, DReyeVR is for humans.
The hardware setup is affordable for many academic labs, costing under 5000 USD.
SocNavBench: A Grounded Simulation Testing Framework for Evaluating Social Navigation
A Biswas, A Wang, G Silvera, A Steinfeld, and H Admoni
ACM Transactions on Human-Robot Interaction (THRI) 2021,
Special Issue: Test Methods for Human-Robot Teaming Performance Evaluations
[Paper]
[Pre-print]
[Simulator]
[Baselines]
We introduce SocNavBench, a simulation framework for evaluating social navigation algorithms in a consistent and interpretable manner.
It features a photo-realistic simulator, curated social navigation scenarios grounded in real-world pedestrian data, and a suite of automatically computed metrics.
Try it out to evaluate your own social navigation algorithms!
Examining the Effects of Anticipatory Robot Assistance on Human Decision Making
B Newman*, A Biswas*, S Ahuja, S Girdhar, and H Admoni
International Conference on Social Robotics (ICSR) 2020
[Paper]
[Video]
Does preemptive robot assistance change human decision making?
We show in an experiment (N=99) that people's decision making in a selection task does change in response to anticipatory robot assistance, but predicting the direction of change is difficult.
Human Torso Pose Forecasting in the Real World
A Biswas, H Admoni, and A Steinfeld
Multi-modal Perception and Control Workshop, Robotics: Science and Systems (RSS) 2018
[Paper]
[More results]
SketchParse: Towards Rich Descriptions for Poorly Drawn Sketches using Multi-Task Hierarchical Deep Networks
RK Sarvadevabhatla, I Dwivedi, A Biswas, S Manocha, and RV Babu
ACM Multimedia Conference (ACM MM) 2017
[arXiv]
[Code]
Can we use neural networks to semantically parse freehand sketches?
We show this is possible by "sketchifying" natural images to generate training data and employing a graphical model for generating descriptions.
Development of an Assistive Stereo Vision System
T Shankar, A Biswas, and V Arun
International Convention on Rehabilitation Engineering & Assistive Technology (i-CREATe) 2015
[Paper]
[News]
First-order Meta-Learned Initialization for Faster Adaptation in Deep Reinforcement Learning
A Biswas and S Agrawal
[Report]
First-derivative approximations to meta-learning updates perform just as well as second-derivative ones, demonstrated on RL tasks.
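A minimal sketch of the first-order idea, assuming a generic `grad_loss(params, task)` oracle (all names are illustrative, not the report's code):

```python
import numpy as np

def fomaml_step(theta, tasks, grad_loss, inner_lr=0.01, outer_lr=0.001, inner_steps=5):
    """One first-order MAML (FOMAML) meta-update.

    Adapt a copy of theta per task with a few gradient steps, then use the
    gradient at the adapted parameters as the meta-gradient -- dropping the
    second-derivative terms that full MAML would backpropagate through.
    """
    meta_grad = np.zeros_like(theta)
    for task in tasks:
        phi = theta.copy()
        for _ in range(inner_steps):          # inner-loop adaptation
            phi = phi - inner_lr * grad_loss(phi, task)
        meta_grad += grad_loss(phi, task)     # first-order meta-gradient
    return theta - outer_lr * meta_grad / len(tasks)
```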
Socially compliant path planning
A Biswas, TC Lin, and S Wang
[Report]
[Code]
[Video]
Social navigation using RTAA* path planning with Social-LSTM pedestrian trajectory prediction.
Automatic Extrinsic Calibration of Stereo Camera and 3D LiDAR
A Biswas and A Manglik
[Poster]
We implement a method for automatic extrinsic calibration of a stereo camera and a 3D LiDAR.