Simultaneous localization and mapping (SLAM) has been widely used in autonomous robots and virtual reality. Existing SLAM algorithms achieve impressive results in feature-rich environments but do not work robustly in challenging low-texture scenarios. In addition, the sparse geometric map representation from SLAM is of limited use for many advanced tasks, including robot obstacle avoidance and interaction, which may require a high-level semantic understanding of environment layout and 3D object locations. However, current layout estimation and object detection methods usually only work in Manhattan box rooms and are not robust to varied environment structures, camera views and object occlusions.
In this work, we propose a novel approach that solves SLAM and scene understanding in a unified framework and demonstrate that the two tasks can benefit each other, with the ability to work in large-scale and diverse environments. We first build a new graphical model for single-image understanding and develop an efficient inference algorithm for it, which builds a complete 3D model to provide constraints for state estimation and mapping. We then propose a new bundle adjustment system that jointly optimizes camera poses, objects and layouts, considering the geometric and contextual relationships among them. We also naturally extend it to cluttered and dynamic environments.
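To give a flavor of what "jointly optimizing camera poses, objects and layouts" means, the toy below sets up a 1-D least-squares problem in which camera positions and an object position are estimated together from odometry and object observations. This is only an illustrative sketch with made-up numbers; the talk's system optimizes full camera poses, 3D objects and layout structure with contextual constraints.

```python
import numpy as np

# Toy 1-D "bundle adjustment": jointly estimate camera positions c0, c1
# and an object position o from odometry and object-observation constraints.
# Measurements are generated from the true values c0=0, c1=1, o=3.
odom = 1.0          # c1 - c0 (relative camera motion)
obs0 = 3.0          # o - c0  (object observed from camera 0)
obs1 = 2.0          # o - c1  (object observed from camera 1)

# Stack the linear constraints A x = b for x = [c0, c1, o],
# anchoring c0 = 0 to fix the gauge freedom.
A = np.array([
    [1.0,  0.0, 0.0],   # prior: c0 = 0
    [-1.0, 1.0, 0.0],   # odometry: c1 - c0 = odom
    [-1.0, 0.0, 1.0],   # observation: o - c0 = obs0
    [0.0, -1.0, 1.0],   # observation: o - c1 = obs1
])
b = np.array([0.0, odom, obs0, obs1])

x, *_ = np.linalg.lstsq(A, b, rcond=None)
c0, c1, o = x
print(c0, c1, o)  # jointly consistent estimates of cameras and object
```

The key point the example captures is that object observations constrain the camera estimates and vice versa, which is why the two tasks can benefit each other.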
Shichao Yang is a Ph.D. student in Mechanical Engineering at Carnegie Mellon University, advised by Prof. Sebastian Scherer in the Robotics Institute. He received a B.S. in Mechanical Engineering from Shanghai Jiao Tong University in 2013. His research focuses on visual simultaneous localization and mapping (SLAM) combined with semantic scene understanding, to improve robot intelligence in challenging real-life environments.
Sebastian Scherer (Chair)
Derek Hoiem (UIUC)
Imaging spectrometers are invaluable instruments for robotic science exploration, enabling quantitative maps of physical and chemical properties at high spatial resolution. This is particularly valuable in remote missions to other planetary bodies like Mars. The PIXL instrument on the Mars2020 rover will deploy an arm-mounted X-Ray fluorescence spectrometer to map chemical composition at sub-millimeter scales. Its high resolution places dramatic new demands on instrument placement accuracy and measurement time. We address these challenges using novel onboard data analysis strategies inspired by FRC science autonomy research.
David R. Thompson is an alumnus of the Field Robotics Center. He is currently a technical group lead in the Imaging Spectroscopy group at the NASA Jet Propulsion Laboratory, and Investigation Scientist for the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) project. Other roles include science software lead for the NEAScout mission and autonomy software lead for the PIXL instrument. He is a recipient of the NASA Early Career Achievement Medal and the JPL Lew Allen Award.
As self-driving car technology advances, it is important for mobile robots and autonomous vehicles to navigate accurately. Vision-Enhanced Lidar Odometry and Mapping (VELO) is a new algorithm for simultaneous localization and mapping using a set of cameras and a lidar. By tightly coupling sparse visual odometry and lidar scan matching, VELO achieves reduced drift error compared to using either method alone. Moreover, the algorithm is capable of functioning when either the lidar or the camera is blinded. Incremental Smoothing and Mapping is used to refine the pose graph, further improving accuracy. Experimental results obtained using the publicly available KITTI data set reveal that VELO achieves around 1% translation error with respect to distance travelled, indicating it has comparable performance to state-of-the-art vision- and lidar-based SLAM methods.
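The "1% translation error with respect to distance travelled" figure is a standard odometry drift metric; a minimal sketch of one common way to compute it is shown below on hypothetical trajectories (the KITTI benchmark itself evaluates over many sub-sequence lengths, which this simplification omits).

```python
import numpy as np

def percent_translation_error(gt, est):
    """Final-position error divided by ground-truth path length, in percent.

    Simplified drift metric: the official KITTI evaluation instead averages
    errors over sub-sequences of many lengths.
    """
    gt, est = np.asarray(gt, float), np.asarray(est, float)
    path_len = np.sum(np.linalg.norm(np.diff(gt, axis=0), axis=1))
    end_err = np.linalg.norm(gt[-1] - est[-1])
    return 100.0 * end_err / path_len

# Hypothetical 2-D trajectories (not VELO output):
gt  = [[0, 0], [100, 0], [100, 100]]      # 200 m ground-truth path
est = [[0, 0], [100, 0], [101, 101.5]]    # estimate with some drift
print(f"{percent_translation_error(gt, est):.2f}%")
```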
Daniel Lu is an MS student at the Robotics Institute at Carnegie Mellon University advised by Prof. George Kantor. Daniel received his Bachelor of Applied Science in Engineering Physics from the University of British Columbia in 2014. His research currently focuses on perception and pose estimation using a combination of cameras and lidar for autonomous terrestrial vehicles.
This talk will serve as a Robotics Speaking Qualifier.
A new control method is presented which solves reach-avoid problems by interpolating optimal solutions using convex combinations, while providing formal guarantees for constraint satisfaction and safety. Reach-avoid problems are important control tasks that arise in many modern application areas, including autonomous driving and robotic path planning. By computing optimal input trajectories for finitely many extreme states only and combining them via convex combinations for all states in a continuous set, we obtain an efficient control policy with very low online computational complexity, making it applicable to fast dynamical systems. Iterating this approach yields feedback control and thereby robustness and stability; it thus combines the advantages of optimal open-loop control and robust closed-loop control. We consider this novel control approach for nonlinear systems affected by disturbances, and the approach is formal and provably correct. We demonstrate the new control method on a control problem in automated driving and show its advantages compared to classical control methods.
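The interpolation idea is easiest to see in the linear case: the trajectory started from a convex combination of extreme initial states, driven by the same convex combination of their precomputed input trajectories, is exactly the convex combination of the extreme trajectories. The sketch below verifies this for a toy 1-D linear system with made-up inputs; the talk's method handles nonlinear disturbed systems, where extra analysis is needed for the formal guarantees.

```python
import numpy as np

def simulate(x0, u_seq, a=-0.5, b=1.0, dt=0.1):
    """Forward-Euler rollout of the scalar linear system x' = a*x + b*u."""
    xs = [x0]
    for u in u_seq:
        xs.append(xs[-1] + dt * (a * xs[-1] + b * u))
    return np.array(xs)

# Precomputed input trajectories for the two extreme initial states.
u_lo = np.full(20, -1.0)   # for extreme state x = -1
u_hi = np.full(20, +1.0)   # for extreme state x = +1
lam = 0.3                  # interpolation coefficient for an interior state

traj_lo = simulate(-1.0, u_lo)
traj_hi = simulate(+1.0, u_hi)
traj_mix = simulate(lam * 1.0 + (1 - lam) * (-1.0),
                    lam * u_hi + (1 - lam) * u_lo)

# By linearity, the interpolated trajectory equals the convex combination:
print(np.allclose(traj_mix, lam * traj_hi + (1 - lam) * traj_lo))
```

Because only the extreme-state trajectories require offline optimization, the online policy reduces to computing interpolation coefficients, which is what makes the approach cheap enough for fast dynamical systems.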
Bastian Schürmann is a PhD candidate at the Institute of Robotics and Embedded Systems at the Technical University of Munich, Germany. There he works in the Cyber-Physical Systems group with Professor Matthias Althoff. His research focuses on obtaining controllers with high performance and formal guarantees for safety-critical systems. This is achieved by combining methods from control and optimization with reachability analysis. Application areas include autonomous driving and human-robot interaction.
Bastian received his B.Sc. in Electrical Engineering from the University of Kaiserslautern in 2012. In parallel to completing an M.Sc. degree in Engineering Cybernetics at the University of Stuttgart, he finished an additional M.Sc. in Electrical Engineering at the University of California, Los Angeles under a Fulbright Fellowship in 2014. During this time he worked in the group of Professor Paulo Tabuada on correct-by-construction controller design.
Autonomous outdoor localization is a challenging but important task for rovers. This is especially true in desert-like environments such as those on Mars, where features can be difficult to distinguish and GPS is not available. This work describes a localization system called MeshSLAM, which requires only stereo images as inputs. MeshSLAM uses the spatial geometry of rocks as landmarks in a GraphSLAM algorithm. These landmarks are termed “constellations,” and this work will present and compare methods of generating, describing and matching constellations. Motion is estimated through visual odometry.
This work will also discuss two new methods of detecting rocks in an image: one that uses superpixel clustering and ground-plane fitting, and another that uses a convolutional neural network. The analysis of feature descriptors and descriptor matching that follows will show that accurate landmark matching can be achieved by systematically building convex hull boundary descriptors in each image and rejecting outliers using RANSAC and motion-invariant rock features.
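To illustrate what a motion-invariant constellation descriptor can look like, the sketch below describes a group of rock centroids by its sorted pairwise distances, which are unchanged by rotation and translation between views. This is a hypothetical simplification for intuition only; the talk's system builds convex hull boundary descriptors and adds RANSAC-based outlier rejection.

```python
import numpy as np

def constellation_descriptor(points):
    """Sorted pairwise distances of a point set: invariant to rigid motion."""
    pts = np.asarray(points, float)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    iu = np.triu_indices(len(pts), k=1)   # upper triangle, no diagonal
    return np.sort(d[iu])

# Rock centroids in the first image (made-up coordinates):
rocks_a = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 2.0]])

# The same constellation seen after the rover moved: rotated 90 degrees
# and translated.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
rocks_b = rocks_a @ R.T + np.array([5.0, 3.0])

da = constellation_descriptor(rocks_a)
db = constellation_descriptor(rocks_b)
print(np.allclose(da, db))  # descriptors match despite the motion
```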
Several hundred images were collected by the rover Zoë in the Atacama Desert in Chile. These images, as well as a set of synthetic data, are used to validate the system.
Samuel Yim is an M.S. student in the Robotics Institute advised by David Wettergreen. He received a B.S. in Engineering from Harvey Mudd College in 2014. His current research focuses on robustly detecting and describing features for SLAM applications.
David Wettergreen (Advisor)
This talk will serve as a RI Speaking Qualifier.
Lunch will be served.
The ever-growing applications of Unmanned Aerial Vehicles (UAVs) require UAVs to navigate at low altitude, below 2000 feet. Traditionally, a UAV is equipped with a single GPS receiver. When flying at low altitude, a single GPS receiver may receive signals from fewer than four GPS satellites in the partially visible sky, which is not sufficient for trilateration. In such a situation, GPS coordinates become unavailable and the partial GPS information is discarded. A GPS receiver may also suffer from multipath errors, causing the navigation solution to be inaccurate and unreliable.
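The four-satellite requirement comes from counting unknowns: position (x, y, z) plus the receiver clock bias. The toy below solves the analogous 2-D problem, where three unknowns (x, y, bias) need three pseudoranges, via a Gauss-Newton iteration. All numbers are invented for illustration, and this is a generic textbook solve, not the filtering system presented in the talk.

```python
import numpy as np

# 2-D toy: three unknowns (x, y, clock bias) => three pseudoranges suffice.
# In 3-D the extra z unknown is why at least four satellites are needed.
sats = np.array([[0.0, 20.0], [15.0, 18.0], [-12.0, 16.0]])  # satellite positions
truth = np.array([1.0, 2.0])                                  # receiver position
bias = 0.5                                                    # clock bias (in range units)
rho = np.linalg.norm(sats - truth, axis=1) + bias             # measured pseudoranges

x = np.zeros(3)                        # initial guess [x, y, bias]
for _ in range(15):                    # Gauss-Newton iterations
    pos, b = x[:2], x[2]
    diff = pos - sats                  # (3, 2) vectors satellite -> receiver
    r = np.linalg.norm(diff, axis=1)   # geometric ranges
    pred = r + b                       # predicted pseudoranges
    # Jacobian of pred w.r.t. [x, y, bias]: unit line-of-sight vectors and 1s.
    J = np.hstack([diff / r[:, None], np.ones((3, 1))])
    dx, *_ = np.linalg.lstsq(J, rho - pred, rcond=None)
    x += dx

print(np.round(x, 3))  # recovers position and clock bias
```

With fewer pseudoranges than unknowns the system is underdetermined, which is exactly the partial information the talk proposes to keep and fuse across receivers rather than discard.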
In this talk, we present our recent work on UAV navigation using not one but multiple GPS receivers, either on the same UAV or across different UAVs, fused with other navigational sensors such as IMUs and vision. We integrate and make use of partial GPS information from peer GPS receivers and are able to dramatically improve GPS availability. We apply advanced filtering algorithms to multiple GPS measurements on the same UAV to mitigate multipath errors. Furthermore, multiple UAVs equipped with on-board communication capabilities can cooperate by forming a UAV network to further improve navigation accuracy, reliability and security.
Grace Xingxin Gao is an assistant professor in the Aerospace Engineering Department at University of Illinois at Urbana-Champaign. She obtained her Ph.D. degree in Electrical Engineering from the GPS Laboratory at Stanford University in 2008. Before joining Illinois at Urbana-Champaign as an assistant professor in 2012, Prof. Gao was a research associate at Stanford University.
Prof. Gao has won a number of awards, including the RTCA William E. Jackson Award and the Institute of Navigation Early Achievement Award. She was named one of 50 GNSS Leaders to Watch by GPS World magazine. She has won Best Paper/Presentation of the Session awards 10 times at ION GNSS+ conferences. For her teaching, Prof. Gao has been on the List of Teachers Ranked as Excellent by Their Students at the University of Illinois multiple times. She won the College of Engineering Everitt Award for Teaching Excellence at the University of Illinois at Urbana-Champaign in 2015, and she was chosen as the American Institute of Aeronautics and Astronautics (AIAA) Illinois Chapter's Teacher of the Year in 2016.
Lava tubes are caves beneath the lunar surface. Until recently, lunar cave exploration was impossible, since there was no known means of entering the closed tubes. Great holes, or "pits," have recently been discovered from orbit, and some of these appear to offer robotic access to caves.
Space agencies and private institutions plan to visit these potential caves and investigate them as potential lunar habitat sites.
My research has investigated rover configuration, mobility, electronics, power and operations for exploring lunar pits and caves with small robots. I will present some of my PhD research related to these issues.
John Walker completed his aerospace PhD at Tohoku University in 2016. He earned his Mechanical Engineering degree at the University of Alberta in 2005. In 2010 he attended the International Space University in Strasbourg, France. This was followed by an internship at the Space Robotics Lab at Tohoku University in Japan, where he began doing lunar rover research to support Hakuto, a leading Google Lunar X-Prize team. He joined Hakuto officially as the rover development leader and completed his PhD in the Space Exploration Lab with research on lunar cave exploration robots.
Autel Robotics was founded in 2014 and is based in Shenzhen, China, with a presence in Seattle and Silicon Valley in the USA and in Munich, Germany. Autel, the parent company, is one of the world's leading manufacturers and suppliers of professional diagnostic tools, equipment and accessories for the automotive aftermarket. In mid-2014, we established Autel Robotics, dedicated to delivering ground-breaking solutions for aerial exploration through our quadcopter and camera drone technology.
We are a team of industry professionals with a genuine passion for technology and years of engineering experience. We focus on transforming complex technology into simple solutions, creating easy-to-use aerial devices for photography, filming and imaging. Our current products include the X-Star, a consumer quadcopter integrated with a 4K stabilized camera and indoor positioning system; it is a combination of complex algorithms and advanced engineering delivered in one concise package. We also offer the Kestrel, a tilt-rotor UAV that combines a fixed-wing airplane's energy-saving features with a quadcopter's VTOL ability.
Please RSVP to ensure an accurate headcount.
About the Speakers:
Xinye Liu: Xinye is currently the Sr. Product Manager at Autel Robotics. She entered the drone industry right after graduating from Carnegie Mellon University in 2014. Since then, Xinye has combined her solid engineering background with business strategy, successfully leading multiple projects including consumer quadcopters and other unmanned aerial vehicles. As an experienced drone industry professional, Xinye will share her experience from her two years in the industry.
Angela Wang: Angela is currently the Global Overseas HR Director of Autel Robotics. Angela has over 20 years of experience at large multinational companies, including Procter & Gamble, Dell and Symantec, working in international capacities in China, Canada and the USA. In recent years she has held HR Business Partner roles, helping business leaders develop and implement people and organization development strategies. Angela joined Autel in February 2016, is based in Silicon Valley, and is responsible for human resources and talent management at all Autel locations outside of China.
We hope to see you there!
Branching structures are ubiquitous in many environments on Earth, from trees found in nature to man-made trusses and power lines. Navigating such environments poses a difficult challenge for robots ill-equipped to handle the task. In nature, apes solve locomotion through such environments with a process called brachiation, in which movement is performed by hand-over-hand swinging.
This thesis outlines the development of a two-link brachiating robot. We will present our work on implementing an energy-based controller that injects energy into or removes energy from the system before assuming the grasping posture. We will show that the controller can solve the ladder problem and swing-up for continuous-contact brachiating gaits, and compare it to other control approaches in simulation. We will also present our work on developing a real-world brachiating robot and show the implementation of our controller on this robot.
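The core idea of energy-based swing-up control can be sketched on a single pendulum: drive the mechanical energy E toward a target level by applying a torque proportional to the energy error and the angular velocity, injecting or removing energy as needed. This toy stand-in (invented parameters, plain pendulum rather than a two-link brachiator) is only meant to illustrate the control principle named in the abstract.

```python
import numpy as np

# Energy-based swing-up on a single pendulum: regulate mechanical energy E
# toward E_des with the pumping law u = k * (E_des - E) * theta_dot, which
# injects energy when E < E_des and removes it when E > E_des.
m, L, g, dt = 1.0, 1.0, 9.81, 0.001
E_des = m * g * L                 # energy level of the upright equilibrium
k = 2.0                           # energy-pumping gain (tuning choice)

def energy(th, thd):
    return 0.5 * m * L**2 * thd**2 - m * g * L * np.cos(th)

th, thd = 0.1, 0.0                # start near the hanging position
for _ in range(20000):            # 20 s of semi-implicit Euler simulation
    u = k * (E_des - energy(th, thd)) * thd
    thdd = (-m * g * L * np.sin(th) + u) / (m * L**2)
    thd += dt * thdd
    th += dt * thd

E = energy(th, thd)
print(abs(E - E_des) < 0.1)       # energy regulated to the target level
```

For the two-link brachiator the same principle applies, but the energy is shaped across both links before the robot assumes the grasping posture.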
Zongyi Yang is an M.S. student in the Robotics Institute advised by David Wettergreen. He received a B.S. in Engineering Science (ECE Option) from the University of Toronto in 2014. His current research focuses on robot brachiation.
David Wettergreen (Advisor)
Globally, horticulture is facing many challenges. The most significant of these include scalability to meet growing food demands, the impacts of constantly increasing labour requirements, produce loss (waste), yield security, hygiene and more. Most people can conceptualise how robotics will assist with many labour-intensive horticultural roles to minimise or replace the direct labour requirement. This will ultimately help with other challenges like scalability, yield security, hygiene and food security, but can the disruptive impact of robotics go beyond this? We think it will!
Each year around 50% of all produce is wasted. Taking an example from the kiwifruit industry, we look at how integrating MARS (mechanisation, automation, robotics and sensors) technologies through the value chain can minimise waste, adding further to the scalability and food-security benefits of robotics. Two case studies are presented of technologies we have developed that will help deliver these robotic benefits: an autonomous orchard robot for tasks like harvesting and pollination, and a robotic apple packer that integrates into current packhouse systems.
Steven Saunders (Ngai Te Ahi) has 30 years’ experience in the Horticultural sector and is the founder, owner and Managing Director of the Plus Group of companies, specialising in horticulture management consultancy, global pollen production, robotics development, soil consultancy, international ventures, applied technology, research and development / innovation and science.
Steven is the co-founder of Newnham Park Innovation Centre in Te Puna, Tauranga, which hosts eight local export-award-winning independent companies, predominantly in the food sector. Steven is also a major stakeholder in the kiwifruit postharvest sector and an active Angel investor (a number of his Angel investments are food based). He is a Seed Co-Investment Fund (SCIF) director and investor representative, a director of a number of privately owned companies, an elected member of the executive board for Priority One (driving economic growth in the Bay of Plenty), a board member of Enterprise Angels Tauranga, a Crown-appointed director of Landcare Research and a member of the Stanford Primary Sector Alumni. Pollen Plus Ltd won the 2010 Bay of Plenty (BOP) Emerging Exporter of the Year award, and Gro Plus won the 2007 Environment BOP Gallagher Innovation award and the 2007 Balance Nutrient Management award.
Steven is a founder of the "WNT Ventures" tech incubator, one of the three NZ Callaghan-awarded tech incubators (a partnership between private-sector investment and the government). Robotics Plus was awarded a $10 million targeted research grant in a collaboration between Auckland University, Waikato University and Plant and Food Research. Robotics Plus features in the NZTE New Zealand Story.
The Plus Group (beneficially owned by the Saunders Family Trust) has a strong history of supporting New Zealand-based research and development activities, including Vision Mātauranga. This support extends well beyond co-funding several government-assisted projects, directly into self-funded research in the commercial and Māori environment. Steven was awarded the Tauranga Chamber of Commerce Westpac "Business Leadership Award" in 2014.