In this talk, we discuss a novel approach to integrating inertial sensor data into a pose-graph-free dense mapping algorithm that we call GravityFusion. A range of dense mapping algorithms have recently been proposed, though few integrate inertial sensing. We build on ElasticFusion, a particularly elegant dense mapping approach that fuses sensor information directly into small surface patches called surfels. Traditional inertial integration happens at the level of camera motion; here, however, no pose graph is available. Instead, we present a novel approach that incorporates the gravity measurements directly into the map: each surfel is annotated with a gravity measurement, and that measurement is updated with each new observation of the surfel. We use mesh deformation, the same mechanism used for loop closure in ElasticFusion, to enforce a consistent gravity direction across all surfels. This eliminates drift in two degrees of freedom, avoiding the typical curving of maps that is particularly pronounced in long hallways, as we show qualitatively in the experimental evaluation.
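The per-surfel gravity update described above can be sketched as a running weighted average of unit direction observations. This is a minimal illustration, not the paper's actual implementation; the function name, the weighting scheme, and the assumption that observations arrive already rotated into the map frame are all assumptions for the example:

```python
import numpy as np

def update_surfel_gravity(surfel_g, surfel_w, g_obs, w_obs=1.0):
    """Fuse a new gravity observation into a surfel's stored direction.

    surfel_g : current unit gravity direction stored with the surfel
    surfel_w : accumulated confidence weight for that direction
    g_obs    : new gravity direction observation (map frame), e.g. from the IMU
    w_obs    : weight of the new observation (hypothetical scheme)
    """
    g = surfel_w * np.asarray(surfel_g, dtype=float) + w_obs * np.asarray(g_obs, dtype=float)
    g /= np.linalg.norm(g)            # renormalize: keep a unit direction
    return g, surfel_w + w_obs
```

Enforcing consistency would then amount to deforming the mesh until every surfel's stored direction agrees with a common map-frame gravity vector.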

Puneet Puri is an M.S. student in the Robotics Institute at Carnegie Mellon University, advised by Prof. Michael Kaess. His research focuses primarily on accurate dense mapping using RGB-D and stereo camera systems. He also works on applying machine learning to damage detection on these dense models. He previously received his Bachelor's degree in Computer Engineering from Bangalore University and has industry experience with unmanned aerial systems and their field deployment for inspection.

Committee Members
Prof. Michael Kaess (Advisor)
Prof. Martial Hebert
Daniel Maturana

Unmanned aerial vehicles have many potential applications, such as monitoring crops and inspecting infrastructure. The potential benefits are greater if the UAV is semi- or fully-autonomous, requiring only occasional human oversight or none at all. This would allow the above use cases to be performed at lower cost, during any time of day, or enable new possibilities such as autonomous package delivery.

However, whilst aircraft generally stay out of urban areas, the possibility of UAVs operating in the lowest 50 feet of civilian airspace brings many technical challenges. We want to enable UAVs to operate very close to, and possibly even interact with, obstacles. Our general approach is to create a static map of the world and to plan the entire mission for the UAV. Our world model consists of a 3D octree representation and a 2D height grid of the ground surface. We use a sampling-based planning approach with rejection sampling for constraint satisfaction. We also plan a complex roadmap of alternative paths so that if the vehicle encounters an obstacle during the mission, it only has to choose which obstacle-avoidance maneuver to execute. The map is high-resolution so that the UAV can avoid trees or power lines while achieving its task, such as delivering a package to your front door. The UAV localizes in the static map using a LiDAR- and VO-based odometry method.
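Rejection sampling for constraint satisfaction, as used in the planner above, can be sketched in a few lines: draw candidate states and discard any that violate a constraint. The sampler, the constraint predicates, and the altitude bounds below are illustrative assumptions, not the actual system's parameters:

```python
import random

def sample_valid_state(sample_fn, constraints, max_tries=1000):
    """Rejection sampling: draw candidates until all constraints hold."""
    for _ in range(max_tries):
        state = sample_fn()
        if all(c(state) for c in constraints):
            return state
    return None  # give up after max_tries rejections

# Example: sample (x, y, z) positions in a 100 m x 100 m area,
# constrained to an altitude band (hypothetical numbers).
sample_fn = lambda: (random.uniform(0, 100), random.uniform(0, 100), random.uniform(0, 50))
constraints = [lambda s: s[2] > 2.0,    # stay above ground clutter
               lambda s: s[2] < 15.0]   # altitude-constrained flight
```

Only accepted samples are added to the roadmap, so every vertex satisfies the mission constraints by construction.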

The benefits of our planning method are that it finds a path in less time than previous methods, such as Visibility Graphs ("SPARTAN"), with a comparable total path length. We can also plan in larger, higher-resolution maps than with previous methods, such as 3D Cost Maps.
This allows the UAV to fly through overhanging structures or into windows of derelict buildings. Our method also produces altitude-constrained paths and smooth take-off/landing motions. For collision avoidance, we show faster decision making in cluttered spaces, which leads to safer vehicle behavior. For large, open spaces our method performs similarly to previous work using online planning with motion primitives.

David Butterworth is an M.S. student in the Robotics Institute at Carnegie Mellon University advised by Prof. Sanjiv Singh. He earned a B.Eng. with honors in Mechatronics from the University of Queensland, and a Diploma in Robotics and Mechatronics from Swinburne University in Melbourne, Australia. His current work focuses on motion planning for an autonomous aerial vehicle.

Committee Members:
Sanjiv Singh (Advisor)
Stephen Nuske
Sanjiban Choudhury

Autonomous exploration and mapping is an important capability for robotic systems, both for creating accurate models of unknown environments for human operators and for enabling other functions of robotic systems operating in those environments. Recently, information-theoretic approaches to this problem, which seek to maximize the information gained about the environment from future sensor measurements, have become popular. This talk will describe how this approach can be applied to achieve online exploration with large teams of robots. Toward this end, we propose an algorithm for distributed approximate sequential assignment and conclude with results demonstrating exploration and mapping with a team of aerial robots.
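A common greedy form of sequential assignment can be sketched as follows: robots choose in a fixed order, each maximizing its marginal information gain given the actions already selected by its predecessors. This is a generic sketch of the greedy principle, not the distributed approximate algorithm from the talk; the toy "information gain" (new map cells observed) is an assumption for the example:

```python
def sequential_assignment(robots, actions, marginal_gain):
    """Greedy sequential assignment: each robot in turn picks the action
    with the largest marginal gain given actions already chosen."""
    chosen = {}
    for r in robots:
        best = max(actions[r], key=lambda a: marginal_gain(a, list(chosen.values())))
        chosen[r] = best
    return chosen

# Toy gain: number of new map cells an action would observe.
def marginal_gain(action, prior_actions):
    seen = set().union(*map(set, prior_actions)) if prior_actions else set()
    return len(set(action) - seen)
```

Because each robot conditions on earlier choices, redundant coverage is penalized automatically; the distributed variant approximates this ordering without a central coordinator.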

Micah Corah is a Ph.D. student in the Robotics Institute at Carnegie Mellon University, advised by Prof. Nathan Michael. Before attending Carnegie Mellon University, he received B.S. degrees in Computer Science and Mechanical Engineering from the Rensselaer Polytechnic Institute in 2015. His research focuses on problems related to active sensing with large teams of aerial robots as in exploration and mapping or interactive perception and manipulation.

Committee Members: 
Nathan Michael (Advisor)
Siddhartha Srinivasa
Koushil Sreenath
Jiaji Zhou

Simultaneous localization and mapping (SLAM) has been widely used in autonomous robots and virtual reality. Existing SLAM algorithms can achieve impressive results in feature-rich environments but cannot work robustly in some challenging low-texture scenarios. In addition, the sparse geometric map representation from SLAM is limited for many advanced tasks, including robot obstacle avoidance and interaction, which may require a high-level semantic understanding of the environment layout and 3D object locations. However, current layout estimation and object detection usually only work in Manhattan box rooms and are not robust to varied environment structures, camera views, and object occlusions.

In this work, we propose a novel approach that solves SLAM and scene understanding in a unified framework and demonstrate that the two tasks can benefit each other, with the ability to work in large-scale and diverse environments. We first build a new graphical model for single-image understanding and develop an efficient inference algorithm for it, which can build a complete 3D model to provide constraints for state estimation and mapping. Then, we propose a new bundle adjustment system that jointly optimizes camera poses together with objects and layouts, considering the geometric and contextual relationships between them. We also naturally extend it to cluttered and dynamic environments.

Shichao Yang is a Ph.D. student in the Mechanical Engineering Department at Carnegie Mellon University, advised by Prof. Sebastian Scherer in the Robotics Institute. He received a B.S. in Mechanical Engineering from Shanghai Jiao Tong University in 2013. His research focuses on visual simultaneous localization and mapping (SLAM) combined with semantic scene understanding, to improve robot intelligence in challenging real-life environments.

Thesis Committee:
Sebastian Scherer (Chair)
Michael Kaess
David Wettergreen
Koushil Sreenath
Derek Hoiem (UIUC)

Imaging spectrometers are invaluable instruments for robotic science exploration, enabling quantitative maps of physical and chemical properties at high spatial resolution.  This is particularly valuable in remote missions to other planetary bodies like Mars. The PIXL instrument on the Mars2020 rover will deploy an arm-mounted X-Ray fluorescence spectrometer to map chemical composition at sub-millimeter scales.  Its high resolution places dramatic new demands on instrument placement accuracy and measurement time.  We address these challenges using novel onboard data analysis strategies inspired by FRC science autonomy research.

David R. Thompson is an alumnus of the Field Robotics Center.  He is currently a technical group lead in the Imaging Spectroscopy group at the NASA Jet Propulsion Laboratory, and Investigation Scientist for the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) project.  Other roles include science software lead for the NEAScout mission and autonomy software lead for the PIXL instrument. He is recipient of the NASA Early Career Achievement Medal and the JPL Lew Allen Award.

As self-driving car technology advances, it is important for mobile robots and autonomous vehicles to navigate accurately. Vision-Enhanced Lidar Odometry and Mapping (VELO) is a new algorithm for simultaneous localization and mapping using a set of cameras and a lidar. By tightly coupling sparse visual odometry and lidar scan matching, VELO achieves reduced drift error compared to using either method alone. Moreover, the algorithm remains functional when either the lidar or the camera is blinded. Incremental Smoothing and Mapping is used to refine the pose graph, further improving accuracy. Experimental results obtained using the publicly available KITTI data set reveal that VELO achieves around 1% translation error with respect to distance travelled, indicating performance comparable to state-of-the-art vision- and lidar-based SLAM methods.

Daniel Lu is an M.S. student at the Robotics Institute at Carnegie Mellon University advised by Prof. George Kantor. Daniel received his Bachelor of Applied Science in Engineering Physics from the University of British Columbia in 2014. His research currently focuses on perception and pose estimation using a combination of cameras and lidar for autonomous terrestrial vehicles.

Masters Committee:
George Kantor
Michael Kaess
Ji Zhang

This talk will serve as a Robotics Speaking Qualifier.

A new control method is presented which solves reach-avoid problems by interpolating optimal solutions using convex combinations, while providing formal guarantees for constraint satisfaction and safety. Reach-avoid problems are important control tasks that arise in many modern application areas, including autonomous driving and robotic path planning. By computing optimal input trajectories for finitely many extreme states only, and combining them via convex combinations for all states in a continuous set, we obtain an efficient control policy. Moreover, our approach has very low online computation complexity and is thus applicable to fast dynamical systems. Iterating this approach yields feedback control, and thereby robustness and stability; it thus combines the advantages of optimal open-loop control and robust closed-loop control. We consider this novel control approach for nonlinear systems affected by disturbances. Our approach is formal and provably correct. We demonstrate the new control method on a control problem in automated driving and show its advantages compared to classical control methods.
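The online step of such an interpolating controller can be sketched as: express the current state as a convex (barycentric) combination of the extreme states, then apply the same combination to their precomputed optimal inputs. This is a minimal sketch under the assumption that the state lies inside the set spanned by the extreme states; the function name and least-squares weight computation are illustrative choices, not the talk's actual method:

```python
import numpy as np

def interpolate_input(x, vertices, inputs):
    """Return the input for state x as u = sum_i lam_i * u_i,
    where lam solves x = sum_i lam_i * v_i with sum_i lam_i = 1."""
    V = np.vstack([np.asarray(vertices, dtype=float).T,
                   np.ones(len(vertices))])            # stack affine constraint
    b = np.append(np.asarray(x, dtype=float), 1.0)
    lam, *_ = np.linalg.lstsq(V, b, rcond=None)        # barycentric weights
    return sum(l * np.asarray(u, dtype=float) for l, u in zip(lam, inputs))
```

Only this small interpolation runs online; the expensive optimal control problems for the extreme states are solved offline, which is what keeps the online computation cheap.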

Bastian Schürmann is a PhD candidate at the Institute of Robotics and Embedded Systems at the Technical University of Munich, Germany. There he works in the Cyber-Physical Systems group with Professor Matthias Althoff. His research focuses on obtaining controllers with high performance and formal guarantees for safety-critical systems. This is achieved by combining methods from control and optimization with reachability analysis. Application areas include autonomous driving and human-robot interaction.

Bastian received his B.Sc. in Electrical Engineering from the University of Kaiserslautern in 2012. Parallel to completing a M.Sc. degree in Engineering Cybernetics at the University of Stuttgart, he finished an additional M.Sc. in Electrical Engineering at the University of California, Los Angeles under a Fulbright Fellowship in 2014. During this time he worked in the group of Professor Paulo Tabuada on correct-by-construction controller design.

Autonomous outdoor localization is a challenging but important task for rovers. This is especially true in desert-like environments such as those on Mars, where features can be difficult to distinguish and GPS is not available. This work describes a localization system called MeshSLAM, which requires only stereo images as inputs. MeshSLAM uses the spatial geometry of rocks as landmarks in a GraphSLAM algorithm. These landmarks are termed "constellations," and this work will present and compare methods of generating, describing and matching constellations. Motion is estimated through visual odometry.

This work will also discuss two new methods of detecting rocks in an image: one that uses superpixel clustering and ground-plane fitting, and another that uses a convolutional neural network. The analysis of feature descriptors and descriptor matching that follows will show that accurate landmark matching can be achieved by systematically building convex-hull boundary descriptors in each image and rejecting outliers using RANSAC and motion-invariant rock features.
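RANSAC-based outlier rejection over landmark matches can be sketched as follows: hypothesize a transform from a minimal sample of correspondences and keep the hypothesis with the most agreeing matches. For brevity this sketch estimates only a 2D translation (MeshSLAM's actual model and thresholds are not specified here, so these are assumptions):

```python
import random
import numpy as np

def ransac_translation(src, dst, tol=0.2, iters=100):
    """Keep the landmark correspondences consistent with the translation
    that the largest set of matches agrees on (rotation omitted for brevity)."""
    best_inliers = []
    for _ in range(iters):
        i = random.randrange(len(src))
        t = dst[i] - src[i]                       # hypothesis from one match
        inliers = [j for j in range(len(src))
                   if np.linalg.norm(src[j] + t - dst[j]) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

Mismatched constellations fail the consensus test and are discarded before the landmarks enter the GraphSLAM back end.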

Several hundred images were collected by the rover Zoë in the Atacama Desert in Chile. These images, as well as a set of synthetic data, are used to validate the system.

Samuel Yim is an M.S. student in the Robotics Institute advised by David Wettergreen. He received a B.S. in Engineering from Harvey Mudd College in 2014. His current research focuses on robustly detecting and describing features for SLAM applications.

Committee Members:
David Wettergreen (Advisor)
Michael Kaess
Greydon Foil

This talk will serve as a RI Speaking Qualifier.
Lunch will be served.

The ever-growing applications of Unmanned Aerial Vehicles (UAVs) require UAVs to navigate at low altitude, below 2000 feet. Traditionally, a UAV is equipped with a single GPS receiver. When flying at low altitude, a single GPS receiver may receive signals from fewer than four GPS satellites in the partially visible sky, which is not sufficient for trilateration. In such situations, GPS coordinates become unavailable and the partial GPS information is discarded. A GPS receiver may also suffer from multipath errors, making the navigation solution inaccurate and unreliable.
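The four-satellite requirement comes from the standard pseudorange fix having four unknowns: three position coordinates plus the receiver clock bias. A textbook Gauss-Newton sketch of that solve is below (generic illustration, not the speaker's algorithm; coordinates and units are arbitrary for the example):

```python
import numpy as np

def solve_position(sat_pos, pseudoranges, iters=20):
    """Gauss-Newton pseudorange fix for (x, y, z, clock bias).
    Four unknowns, hence at least four visible satellites are needed
    for a standalone solution."""
    x = np.zeros(4)                                    # position + clock bias
    for _ in range(iters):
        d = np.linalg.norm(sat_pos - x[:3], axis=1)    # geometric ranges
        residual = pseudoranges - (d + x[3])
        # Jacobian rows: unit vectors from satellites toward the receiver,
        # plus a column of ones for the clock-bias term.
        H = np.hstack([(x[:3] - sat_pos) / d[:, None], np.ones((len(d), 1))])
        dx, *_ = np.linalg.lstsq(H, residual, rcond=None)
        x += dx
    return x
```

With fewer than four pseudoranges this system is underdetermined, which is exactly why the partial measurements are normally discarded; the work presented in this talk instead keeps them as constraints and fuses them across receivers and other sensors.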

In this talk, we present our recent work on UAV navigation using not one but multiple GPS receivers, either on the same UAV or across different UAVs, fused with other navigational sensors such as IMUs and vision. We integrate and make use of the partial GPS information from peer GPS receivers and are able to dramatically improve GPS availability. We apply advanced filtering algorithms to multiple GPS measurements on the same UAV to mitigate multipath errors. Furthermore, multiple UAVs equipped with on-board communication capabilities can cooperate by forming a UAV network to further improve navigation accuracy, reliability, and security.

Grace Xingxin Gao is an assistant professor in the Aerospace Engineering Department at University of Illinois at Urbana-Champaign. She obtained her Ph.D. degree in Electrical Engineering from the GPS Laboratory at Stanford University in 2008. Before joining Illinois at Urbana-Champaign as an assistant professor in 2012, Prof. Gao was a research associate at Stanford University.

Prof. Gao has won a number of awards, including RTCA William E. Jackson Award and Institute of Navigation Early Achievement Award. She was named one of 50 GNSS Leaders to Watch by the GPS World Magazine. She has won Best Paper/Presentation of the Session Awards 10 times at ION GNSS+ conferences. For her teaching, Prof. Gao has been on the List of Teachers Ranked as Excellent by Their Students at University of Illinois multiple times. She won the College of Engineering Everitt Award for Teaching Excellence at University of Illinois at Urbana-Champaign in 2015. She was chosen as American Institute of Aeronautics and Astronautics (AIAA) Illinois Chapter’s Teacher of the Year in 2016.

Lava tubes are caves that underlie the surface of the Moon. Until recently, lunar cave exploration was impossible, since there was no known means of entering the closed tubes. Great holes, or "pits," have recently been discovered from orbit, and some of these appear to offer robotic access to caves.

Space agencies and private institutions plan to visit these potential caves and investigate them as potential lunar habitat sites.

My research has investigated rover configuration, mobility, electronics, power and operations for exploring lunar pits and caves with small robots.   I will present some of my PhD research related to these issues.

John Walker completed his aerospace Ph.D. at Tohoku University in 2016. He earned his Mechanical Engineering degree at the University of Alberta in 2005. In 2010 he attended the International Space University in Strasbourg, France. This was followed by an internship at the Space Robotics Lab at Tohoku University in Japan, where he began lunar rover research to support Hakuto, a leading Google Lunar X-Prize team. He joined Hakuto officially as rover development leader and completed his Ph.D. in the Space Exploration Lab with research on lunar cave exploration robots.
