|
ShapeMap 3-D: Efficient shape mapping through dense touch and vision
S. Suresh,
Z. Si,
J. Mangelson,
W. Yuan, and
M. Kaess
IEEE Intl. Conf. on Robotics and Automation, ICRA, May 2022
arXiv /
website /
video
Can we efficiently reconstruct household objects with touch and vision? We harness a GelSight sensor and a depth camera for 3-D shape perception, framed as inference on a spatial graph informed by a Gaussian process.
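A minimal sketch of the Gaussian-process implicit surface (GPIS) idea behind this shape representation, assuming numpy; the kernel, offsets, and off-surface training trick below are illustrative, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(A, B, length=0.03, var=1.0):
    # Squared-exponential kernel between two point sets (n,3) and (m,3).
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return var * np.exp(-0.5 * d2 / length**2)

def gpis_fit(surface_pts, normals, offset=0.01, noise=1e-4):
    # Train on contact points (signed distance = 0) plus points pushed
    # outward along the measured normals (signed distance = +offset),
    # a common GPIS construction.
    X = np.vstack([surface_pts, surface_pts + offset * normals])
    y = np.hstack([np.zeros(len(surface_pts)), offset * np.ones(len(surface_pts))])
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    return X, alpha

def gpis_query(X, alpha, queries):
    # Predicted signed distance at query points; the zero level set
    # is the estimated object surface.
    return rbf_kernel(queries, X) @ alpha
```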
|
|
Tactile SLAM: Real-time inference of shape and pose from planar pushing
S. Suresh,
M. Bauza,
K.-T. Yu,
J. Mangelson,
A. Rodriguez, and
M. Kaess
IEEE Intl. Conf. on Robotics and Automation, ICRA, May 2021
[Finalist for the 2021 IEEE ICRA Best Paper Award in Service Robotics]
arXiv /
website /
presentation
ICRA '20 ViTac workshop: Closing the Perception-Action Loop with Vision and Tactile Sensing
pdf /
presentation
Can we estimate object shape and pose in real time through purely tactile sensing? We demonstrate this for planar pushing, combining Gaussian process implicit surfaces with factor-graph-based inference.
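A hedged sketch of factor-graph pose inference in the spirit of this work, assuming the GTSAM Python bindings; the pushing-motion factor here is a made-up stand-in, and the coupling to GP implicit surface (shape) factors is not shown.

```python
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.01, 0.01, 0.01]))
push_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.02, 0.02, 0.05]))

# Prior on the object's starting planar pose.
graph.add(gtsam.PriorFactorPose2(X(0), gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))
initial.insert(X(0), gtsam.Pose2(0.0, 0.0, 0.0))

# One motion factor per push; a real system would predict this relative
# motion from a quasi-static pushing model and tactile measurements.
for k in range(1, 5):
    predicted_motion = gtsam.Pose2(0.01, 0.0, 0.02)
    graph.add(gtsam.BetweenFactorPose2(X(k - 1), X(k), predicted_motion, push_noise))
    initial.insert(X(k), gtsam.Pose2(0.01 * k, 0.0, 0.02 * k))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose2(X(4)))
```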
|
|
Active SLAM using 3D submap saliency for underwater volumetric exploration
S. Suresh,
P. Sodhi,
J. Mangelson,
D. Wettergreen, and
M. Kaess
IEEE Intl. Conf. on Robotics and Automation, ICRA, May 2020
pdf /
video
How do you balance volumetric coverage against pose uncertainty during exploration? We combine a sampling-based planner, a deformable pose graph, and a 3D saliency metric to explore a 3D underwater volume.
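A toy sketch of that exploration-versus-uncertainty trade-off, assuming numpy; the utility weighting and candidate summaries are illustrative, not the paper's planner.

```python
import numpy as np

def score_candidate(expected_new_voxels, predicted_pose_cov, alpha=1.0, beta=0.5):
    # Higher is better: reward expected newly observed volume, penalize
    # predicted localization uncertainty (summarized by the log-determinant
    # of the pose covariance along the candidate path).
    uncertainty = np.log(np.linalg.det(predicted_pose_cov))
    return alpha * expected_new_voxels - beta * uncertainty

candidates = [
    {"new_voxels": 1200, "cov": 0.02 * np.eye(6)},   # far and informative, but uncertain
    {"new_voxels": 400,  "cov": 0.005 * np.eye(6)},  # revisit with good localization
]
best = max(candidates, key=lambda c: score_candidate(c["new_voxels"], c["cov"]))
```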
|
|
Through-water stereo SLAM with refraction correction for AUV localization
S. Suresh,
E. Westman, and
M. Kaess
IEEE Robotics and Automation Letters (RA-L), presented at ICRA 2019, Jan 2019
pdf /
video
How can you incorporate refraction into water-to-air visual SLAM? We present a novel method inspired by multimedia photogrammetry for underwater localization.
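The geometric core is Snell's law: rays bend at the water interface, so back-projection must trace the refracted ray. A self-contained numpy sketch of the vector form (the paper's calibration, stereo front end, and SLAM back end are not shown):

```python
import numpy as np

def refract(d, n, n1=1.0, n2=1.33):
    # Refract unit direction d at a surface with unit normal n pointing into
    # the incident medium; returns None on total internal reflection.
    d, n = d / np.linalg.norm(d), n / np.linalg.norm(n)
    cos_i = -np.dot(d, n)
    eta = n1 / n2
    sin_t2 = eta**2 * (1.0 - cos_i**2)
    if sin_t2 > 1.0:
        return None
    cos_t = np.sqrt(1.0 - sin_t2)
    return eta * d + (eta * cos_i - cos_t) * n

# A camera ray looking down through a flat water surface bends toward the
# normal as it enters the denser medium.
ray_air = np.array([0.3, 0.0, -1.0])
ray_water = refract(ray_air, np.array([0.0, 0.0, 1.0]))
```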
|
|
Localized imaging and mapping for underwater fuel storage basins
J. Hsiung,
A. Tallaksen,
L. Papincak,
S. Suresh,
H. Jones,
W. L. Whittaker, and
M. Kaess
Proceedings of the Symposium on Waste Management, Phoenix, Arizona, Mar 2018
pdf /
slides /
video
What's the ideal sensor suite for underwater dense mapping? We build and demonstrate an inspection solution comprising a stereo camera, an IMU, standard and structured lighting, and a depth sensor.
|
|
Optical kinematic state estimation of planetary rovers using downward-facing monocular fisheye camera
S. Suresh,
E. Fang, and
W. L. Whittaker
Robotics Institute Summer Scholars Working Paper Journal, Nov 2016
Camera-Only Kinematics for Small Lunar Rovers
E. Fang,
S. Suresh, and
W. L. Whittaker
Annual Meeting of the Lunar Exploration Analysis Group, Nov 2016
pdf /
video /
poster
Is it possible to track a lunar rover's kinematic state through self-perception? With a downward-facing fisheye lens, we estimate the Autokrawler's kinematics on rugged terrain.
|
|
Object category understanding via eye fixations on freehand sketches
R. K. Sarvadevabhatla,
S. Suresh, and
R. V. Babu
IEEE Transactions on Image Processing (TIP), May 2017
paper /
website /
dataset
Can we better understand free-hand sketches through human gaze fixations? We collect the SketchFix-160 dataset and investigate visual saliency to reveal multi-level consistency in sketches.
|
|
DeepGeo: photo localization with deep neural network
S. Suresh, N. Chodosh, and M. Abello
arXiv / github
A deep network that beats humans at GeoGuessr, trained on our 50States10K dataset.
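A hedged sketch of a 50-way state classifier in this spirit, assuming PyTorch/torchvision; the backbone and head below are placeholders, not the paper's architecture or the 50States10K training setup.

```python
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, 50)  # one logit per US state

images = torch.randn(8, 3, 224, 224)            # stand-in for a batch of street-view images
logits = model(images)
predicted_state = logits.argmax(dim=1)
```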
|
|
Task and motion planning for robotic food preparation
S. Suresh, T. Rhodes, M. Abello, and H. Yadav
pdf /
video 1 / video 2
Hierarchical task and motion planning for a 6-DOF robot arm to prepare yogurt parfaits!
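A minimal sketch of the hierarchical TAMP loop, with every helper a hypothetical placeholder: a symbolic task plan whose steps are each refined into arm motions, reporting failure when a step has no feasible motion.

```python
def plan_parfait(task_planner, motion_planner, start_state):
    # Symbolic level: an ordered list of steps, e.g. scoop yogurt, add granola.
    task_plan = task_planner(start_state)
    trajectories, state = [], start_state
    for step in task_plan:
        # Geometric level: a 6-DOF arm trajectory realizing this step.
        traj = motion_planner(state, step)
        if traj is None:
            return None  # infeasible step: replan at the task level
        trajectories.append(traj)
        state = step.apply(state)
    return trajectories
```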
|
|
Thin structure reconstruction via 3D lines and points
S. Suresh and M. Abello
poster
Reconstructing thin objects in a scene through an SfM pipeline can be hard!
|
|
Factor graph optimization for dynamic parameter estimation
S. Suresh, E. Dexheimer, and M. Abello
pdf
We estimate MAV poses and dynamic parameters during flight via factor graph optimization.
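A toy sketch of the underlying idea, assuming numpy and scipy: treat an unknown dynamic parameter as one more variable in a nonlinear least-squares problem over the flight data. The 1-D drag model and solver are illustrative stand-ins for the factor-graph formulation.

```python
import numpy as np
from scipy.optimize import least_squares

# Simulated 1-D data: measured acceleration = thrust/mass - c_drag * v + noise.
mass_true, c_drag_true = 1.2, 0.35
v = np.linspace(0.0, 5.0, 50)
thrust = np.full_like(v, 6.0)
accel_meas = thrust / mass_true - c_drag_true * v + 0.05 * np.random.randn(v.size)

def residuals(theta):
    # Residuals between the dynamics model and the measured accelerations.
    m, c = theta
    return thrust / m - c * v - accel_meas

m_est, c_est = least_squares(residuals, x0=[1.0, 0.1]).x
```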
|
|