Sudharshan Suresh | Suddhu
suddhu [at] cmu [dot] edu
I'm a final-year PhD candidate in the Robotics Institute at Carnegie Mellon University, advised by Michael Kaess. I work on spatial AI from touch and vision for robot manipulation. My thesis focuses on learning object-centric representations and harnessing vision-based touch. I'm also a visiting researcher at Meta AI, where I work with Mustafa Mukadam.
I completed my Master's in Robotics at CMU with Michael Kaess, working on underwater exploration and SLAM (thesis).
Prior to that, I worked with Red Whittaker on state estimation for lunar rovers, and at IISc Bangalore on visual understanding.
In my undergrad, I majored in Controls and Instrumentation at NIT Trichy.
I'm on the job market for research positions in industry, starting early 2024. Please reach out if you would like to chat!
CV / Scholar / Github / LinkedIn
Updates
[Aug '23] Our work RotateIt, led by Haozhi, was accepted to CoRL 2023.
[April '23] Spending the summer as a research scientist intern at Meta AI Menlo Park, working on visuo-tactile manipulation!
[Dec '22] MidasTouch was showcased at CoRL 2022 with a live demo.
[Oct '22] Successfully passed my Ph.D. thesis proposal!
[Sep '22] MidasTouch was accepted to CoRL 2022 as an oral.
[Aug '22] We've extended iSDF for neural mapping with the Franka robot; code here.
[May '22] Organized the Debates on the Future of Robotics Research workshop at ICRA '22.
[April '22] Spending the summer at Meta AI Pittsburgh, working on pose tracking from touch.
[Jan '22] ShapeMap 3-D was accepted to ICRA 2022, with an open-source implementation.
[Aug '21] Presented our work on perception for planar pushing at the Tartan SLAM series; video here.
[May '21] Tactile SLAM was a finalist for the ICRA 2021 Best Paper Award in Service Robotics!
A visuotactile transformer gives us general dexterity for multi-axis object rotation in the wild.
Tracking the pose distribution of a robot finger on an object surface over time, using surface geometry captured by a tactile sensor.
Can we efficiently reconstruct household objects with touch and vision? We harness the GelSight sensor and a depth camera for 3-D shape perception, posed as inference on a spatial graph informed by a Gaussian process.
Can we estimate object shape and pose in real-time through purely tactile sensing? We demonstrate this for planar pushing, combining Gaussian process implicit surfaces with factor-graph based inference.
How do you balance volumetric coverage against pose uncertainty during exploration? We combine a sampling-based planner, a deformable pose graph, and a 3D saliency metric to explore a 3D underwater volume.
Through-water stereo SLAM with refraction correction for AUV localization
S. Suresh, E. Westman, and M. Kaess
IEEE Robotics and Automation Letters (RA-L), presented at ICRA 2019, Jan 2019
paper / presentation
How can you incorporate refraction into water-to-air visual SLAM? We present a novel method inspired by multimedia photogrammetry for underwater localization.
Localized imaging and mapping for underwater fuel storage basins
J. Hsiung, A. Tallaksen, L. Papincak, S. Suresh, H. Jones, W. L. Whittaker, and M. Kaess
Proceedings of the Symposium on Waste Management, Phoenix, Arizona, Mar 2018
paper / slides / video
What's the ideal sensor suite for underwater dense mapping? We build and demonstrate an inspection solution comprising a stereo camera, an IMU, standard and structured lighting, and a depth sensor.
Camera-Only Kinematics for Small Lunar Rovers
S. Suresh, E. Fang, and W. L. Whittaker
Robotics Institute Summer Scholars Working Paper Journal, Nov 2016
Annual Meeting of the Lunar Exploration Analysis Group, Nov 2016
paper / video / poster
Is it possible to track a lunar rover's kinematic state through self-perception? With a downward-facing fisheye lens, we estimate the Autokrawler's kinematics on rugged terrain.
Can we better understand free-hand sketches through human gaze fixations? We collect the SketchFix-160 dataset and investigate visual saliency to reveal multi-level consistency in sketches.
Other projects
Franka iSDF: neural mapping for tabletop manipulation
S. Suresh, J. Ortiz, and M. Mukadam
github
Extending iSDF to build real-time neural models of tabletop scenes with the Franka Panda arm.
DeepGeo: photo localization with deep neural network
S. Suresh, N. Chodosh, and M. Abello
arXiv / github
A deep network that beats humans at GeoGuessr, trained on our 50States10K dataset.
Task and motion planning for robotic food preparation
S. Suresh, T. Rhodes, M. Abello, and H. Yadav
pdf / video 1 / video 2
Hierarchical task and motion planning for a 6-DOF robot arm to prepare yogurt parfaits!
Thin structure reconstruction via 3D lines and points
S. Suresh and M. Abello
poster
Reconstructing thin objects in a scene through an SfM pipeline can be hard!
Factor graph optimization for dynamic parameter estimation
S. Suresh, E. Dexheimer, and M. Abello
pdf
We implement a method for estimating MAV poses and dynamic parameters during flight.