MSUAV@CMU

Micro and Small Unmanned Aerial Vehicle


  Research Topics

  • Robust Real-Time SFM for SMAV

  • 3D Motion Planning for fixed-wing UAV


  Robust Real-Time SFM for SMAV

We currently investigate, implement, and demonstrate core components of MAVIS, a 3D vision system incorporating robust real-time SFM for SMAV application scenarios. Automatic pose determination and extraction of the 3D structure of the environment are critical to autonomous navigation and obstacle avoidance of SMAVs in constrained and crowded environments. Extracting such state information under various constraints (low-quality video because of sensor payload limitations, minimal onboard processing capabilities, sudden and abrupt motion changes, etc.) poses a significant challenge and requires the development of innovative and robust algorithms. We investigate ways to
  • increase the reliability and robustness of SFM algorithms using commodity, COTS-based motion sensors
  • exploit scene regularity by using a layered approach, thus increasing the robustness of feature tracking
  • integrate, test, and demonstrate a complete 3D vision system, consisting of various SFM modules with robustness at each processing stage to ensure reliable performance



The 3D vision system consists of three sub-systems, as shown in the figure above: the feature tracker and layer extractor, SMAV motion estimation, and 3D structure estimation. The Feature Detection module detects feature points of interest in the images from the video camera. These detected feature points are then tracked across frames of the video. We use motion sensors to compensate for the image motion caused by camera rotations. Each tracked feature point forms a 2D track. These 2D tracks, together with the motion sensor information, are used to estimate the state of the SMAV. We also maintain a model of the SMAV dynamics, which stabilizes the motion estimation and deals with the degenerate cases of SFM. Once the SMAV motion is recovered, we can recover the 3D depth for each 2D feature track.

We use a layer-based video representation to exploit scene regularity. The Layer Extraction module segments the reference frame into several layers, each layer being a 2D image region that approximates a plane in the 3D scene. These layers are used in feature point selection and tracking, and also provide scene constraints in addition to the camera geometry constraints for 3D estimation. Traditional SFM uses only feature points to estimate both motion and 3D structure, and 3D points near the epipole in the image carry large uncertainty. Robustness will be built into each of these sub-systems to assure reliable performance under low-quality video, degenerate motions, and strict hard real-time requirements.
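
To make the rotation-compensation step concrete, here is a minimal sketch (Python with NumPy) of how a gyro-derived rotation can be used to predict feature positions before tracking. The intrinsics, rotation, and feature coordinates below are illustrative assumptions, not values from our system.

    # Sketch: gyro-based rotation compensation for feature prediction.
    # Assumes a calibrated camera (intrinsics K) and a gyro-derived rotation R
    # between two frames; both are illustrative, not taken from our hardware.
    import numpy as np

    def rotation_homography(K, R):
        """Infinite homography K R K^-1 mapping pixels of frame t to where they
        would appear in frame t+1 if the camera only rotated."""
        return K @ R @ np.linalg.inv(K)

    def predict_features(points, K, R):
        """Warp 2D feature points (N x 2) by the rotation-only homography,
        giving a search center for the tracker; the residual image motion is
        then dominated by translation (parallax), which carries the depth signal."""
        H = rotation_homography(K, R)
        pts_h = np.hstack([points, np.ones((len(points), 1))])   # homogeneous
        warped = (H @ pts_h.T).T
        return warped[:, :2] / warped[:, 2:3]

    # Example with an illustrative 640x480 camera and a small yaw rotation.
    K = np.array([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])
    yaw = np.deg2rad(2.0)
    R = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(yaw), 0.0, np.cos(yaw)]])
    tracked = np.array([[100.0, 120.0], [400.0, 300.0]])
    print(predict_features(tracked, K, R))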




SFM with multiple cameras

Using a single camera on an SMAV has several limitations. First, if equipped with only a single camera, the SMAV must pose that camera at a largely forward-looking orientation for navigation purposes. The mostly forward-flying motion of the SMAV then leads to a forward-moving camera -- a difficult and degenerate case for vision-based SFM. It is often the case that many points in the field of view (FOV) of the camera are near the epipole and have small image motions, due to the small effective baseline between two views of a forward-moving camera. As a result, the translation and the 3D depths of feature points cannot be recovered accurately. Second, with a single camera, the SMAV has a very limited field of view and little situational awareness of the scene around it. We propose to use multiple cameras on the SMAV. This is possible since the cameras are small and lightweight. The figure below shows an illustration in which a MAV is equipped with three cameras looking in different directions. Such a multi-camera system significantly improves the robustness of the vision SFM algorithm. First, while the forward-looking camera may undergo degenerate motion, the other cameras undergo lateral translational motions that benefit 3D recovery. Second, the effective FOV of the SMAV is significantly increased. This not only increases the situational awareness of the MAV, but also improves the numerical conditioning of the camera motion estimation step of the SFM algorithm. Third, it resolves the rotation vs. translation ambiguity of the SFM algorithm. This ambiguity is present when there is only one camera and the scene is planar or far away from the camera.
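
The following small numerical sketch (with illustrative values only, not measurements from our platform) shows why points near the epipole of a forward-moving camera carry little depth information, while a side-looking camera observes full parallax everywhere.

    # Sketch: why forward motion is ill-conditioned for depth near the epipole.
    # Compares the image displacement (pixels) that a fixed translation induces
    # for a forward-looking vs. a side-looking camera; numbers are illustrative.
    import numpy as np

    f = 500.0          # focal length in pixels (assumed)
    t = 0.5            # camera translation between frames, metres (assumed)
    Z = 20.0           # scene depth, metres (assumed)

    def forward_disparity(X, Z, t, f):
        # Forward translation along the optical axis: depth changes Z -> Z - t.
        return f * X / (Z - t) - f * X / Z

    def lateral_disparity(Z, t, f):
        # Sideways translation: every point shifts by f*t/Z regardless of X.
        return f * t / Z

    for X in [0.1, 1.0, 5.0]:                 # lateral offset from the optical axis
        print(f"X={X:4.1f} m  forward: {forward_disparity(X, Z, t, f):6.3f} px"
              f"   lateral: {lateral_disparity(Z, t, f):6.3f} px")
    # Points near the epipole (small X) barely move under forward motion, so
    # their depth is poorly constrained; a side-looking camera sees the full
    # f*t/Z parallax everywhere.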




Sensor-fused SFM

We have already shown that the camera pose can be reliably estimated in most cases. We plan to use multiple cameras and commodity motion sensors to further improve the quality of 3D reconstruction. We are exploring the possibility of fusing INS/GPS/camera sensors for SMAV guidance, navigation, and control. The objective is to use commercial, off-the-shelf sensors that are very cheap (and of relatively low quality) and still obtain better overall SFM performance. Currently, there are several high-end, high-quality INS systems that are used for guidance and control. These sensors are very expensive and, for various reasons, not suitable for SMAV applications. We want to take advantage of the complementary strengths of commodity, relatively low-quality COTS sensors such as gyros and GPS. These sensors can address each other's weaknesses. For example, a gyro can help with feature tracking in vision-based SFM, and feature tracking can help alleviate drift in the gyro. Given multiple cameras and motion sensors (gyro and accelerometer), we will develop a Kalman filter that can further improve the robustness of camera pose estimation.
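
As a rough illustration of the intended fusion, the sketch below shows a minimal linear Kalman filter that predicts with an accelerometer input and corrects with a vision-derived position. It is a one-axis toy model with assumed noise levels, not the filter we will deploy.

    # Sketch: a minimal linear Kalman filter fusing IMU acceleration (predict)
    # with vision-SFM position fixes (update). State is [position, velocity]
    # along one axis; the real system would be 3D with orientation, and all
    # noise levels here are assumed, not measured.
    import numpy as np

    class PoseFilter:
        def __init__(self, dt):
            self.x = np.zeros(2)                       # [p, v]
            self.P = np.eye(2)
            self.F = np.array([[1.0, dt], [0.0, 1.0]]) # constant-velocity model
            self.B = np.array([0.5 * dt * dt, dt])     # acceleration input
            self.Q = 0.05 * np.eye(2)                  # process noise (assumed)
            self.H = np.array([[1.0, 0.0]])            # vision measures position
            self.R = np.array([[0.2]])                 # vision noise (assumed)

        def predict(self, accel):
            self.x = self.F @ self.x + self.B * accel
            self.P = self.F @ self.P @ self.F.T + self.Q

        def update(self, vision_pos):
            y = vision_pos - self.H @ self.x           # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
            self.x = self.x + (K @ y).ravel()
            self.P = (np.eye(2) - K @ self.H) @ self.P

    kf = PoseFilter(dt=0.02)
    kf.predict(accel=0.3)                      # accelerometer sample
    kf.update(vision_pos=np.array([0.01]))     # SFM position estimate
    print(kf.x)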




  3D Motion Planning for fixed-wing UAV

Following the standard two-phase planning scheme, a collision-free path and a robot workspace are first given as starting points. The final path is then obtained by refining an initial feasible path, optimizing it with respect to a desired criterion. This strategy is very useful for covering a large UAV workspace and for making the overall planning run fast. Brief descriptions of the global planner and the local runtime planner are given here to explain the overall architecture of our planner design, shown in the figure below.




Two-Phase Planner

In our planning architecture, we first plan an obstacle-free path in a discretized workspace. The size of the discretization and the possible connections between nodes in the workspace are carefully designed from the given kinematic constraints of the UAV, so the output of the global planner never violates those constraints. Following the global path is easily achievable with a general waypoint controller because the waypoints are kinematically feasible. Moreover, the use of A* in the global grid planner means that the planned path is never trapped in a local minimum and is always guaranteed to be optimal in terms of its cost. This is good news for the local planner, which is vulnerable to local minima due to its limited planning horizon. We therefore select an appropriate point on the global path as a subgoal for the local planner.
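
A minimal sketch of such a global grid planner is given below: standard A* over a coarse occupancy grid whose cell size is derived from the minimum turning radius. The grid, costs, and turning radius are illustrative assumptions.

    # Sketch: global grid planner. A* over a coarse 2D occupancy grid whose
    # cell size is tied to the UAV's minimum turning radius (two thirds of it,
    # as described in the text); the grid and radius here are illustrative.
    import heapq, math

    def astar(grid, start, goal, cell_size):
        """grid[r][c] == 1 marks an obstacle cell; returns a list of cells."""
        rows, cols = len(grid), len(grid[0])
        moves = [(-1,-1),(-1,0),(-1,1),(0,-1),(0,1),(1,-1),(1,0),(1,1)]
        def h(a, b):                   # straight-line heuristic (admissible)
            return cell_size * math.hypot(a[0]-b[0], a[1]-b[1])
        open_set = [(h(start, goal), 0.0, start)]
        parent, g_cost, closed = {start: None}, {start: 0.0}, set()
        while open_set:
            _, g, node = heapq.heappop(open_set)
            if node in closed:
                continue
            closed.add(node)
            if node == goal:           # reconstruct the path back to start
                path = []
                while node is not None:
                    path.append(node)
                    node = parent[node]
                return path[::-1]
            for dr, dc in moves:
                nxt = (node[0] + dr, node[1] + dc)
                if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                    ng = g + cell_size * math.hypot(dr, dc)
                    if ng < g_cost.get(nxt, float("inf")):
                        g_cost[nxt] = ng
                        parent[nxt] = node
                        heapq.heappush(open_set, (ng + h(nxt, goal), ng, nxt))
        return None                    # no obstacle-free path in the grid

    r_min = 30.0                       # assumed minimum turning radius, metres
    cell = 2.0 * r_min / 3.0           # coarse cell size from the text
    grid = [[0] * 6 for _ in range(6)]
    grid[2][2] = grid[2][3] = 1        # a small obstacle block
    print(astar(grid, (0, 0), (5, 5), cell))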

The local planner compensates for the drawbacks of the global planner: the coarse discretization of the configuration space (C-space) and the inability to avoid small or moving obstacles. With densely sampled motion primitives, it can connect two configurations at a finer level, but its planning horizon is limited by the available computational power. The local planner is invoked when partial planning is needed over short segments of the global path at a time. This happens when the environment changes after the global plan has been made or when small obstacles appear. In the global planning we tend to neglect small obstacles because the cell size is relatively large (it is set to two thirds of the UAV's minimum turning radius).
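
The sketch below illustrates one local planning step of this kind: a set of constant-turn-rate motion primitives is rolled out over a short horizon, colliding primitives are discarded, and the survivor ending closest to the subgoal is selected. The speed, turn-rate limits, and clearance are assumed values for illustration.

    # Sketch: local planner step with densely sampled motion primitives.
    # Speeds, turn rates, and the obstacle test are illustrative assumptions.
    import math

    SPEED = 15.0                 # m/s, assumed cruise speed
    HORIZON = 3.0                # s, local planning horizon
    DT = 0.2                     # s, rollout step

    def rollout(state, turn_rate):
        """Integrate a fixed-speed, fixed-turn-rate primitive; state = (x, y, heading)."""
        x, y, th = state
        path = []
        for _ in range(int(HORIZON / DT)):
            th += turn_rate * DT
            x += SPEED * math.cos(th) * DT
            y += SPEED * math.sin(th) * DT
            path.append((x, y))
        return path

    def collides(path, obstacles, clearance=10.0):
        return any(math.hypot(x - ox, y - oy) < clearance
                   for (x, y) in path for (ox, oy) in obstacles)

    def best_primitive(state, subgoal, obstacles, turn_rates):
        best, best_dist = None, float("inf")
        for w in turn_rates:
            path = rollout(state, w)
            if collides(path, obstacles):
                continue               # discard primitives that clip an obstacle
            d = math.hypot(path[-1][0] - subgoal[0], path[-1][1] - subgoal[1])
            if d < best_dist:
                best, best_dist = (w, path), d
        return best

    # Densely sampled turn rates within an assumed turn-rate limit.
    turn_rates = [i * 0.05 for i in range(-8, 9)]
    result = best_primitive((0.0, 0.0, 0.0), subgoal=(60.0, 20.0),
                            obstacles=[(30.0, -5.0)], turn_rates=turn_rates)
    print(result[0] if result else "no feasible primitive -> emergency plan")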

In case either planner fails to find a path for any reason, emergency stop planning is invoked immediately to escape the critical situation. It is a backup plan that moves the airplane to a safe region with the lowest probability of crashing. This plan runs in the background at all times, whenever the airplane state or the environment is updated.
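
A minimal sketch of such a background monitor is shown below: on every state or environment update it re-scores a set of candidate safe regions and keeps the lowest-risk escape target ready for the moment a planner fails. The regions and the risk score are purely illustrative assumptions.

    # Sketch: emergency-stop backup plan kept up to date in the background.
    # Candidate safe regions and the crash-risk score are illustrative.
    import math

    class EmergencyMonitor:
        def __init__(self, safe_regions):
            self.safe_regions = safe_regions       # list of (x, y) loiter points
            self.best_escape = None

        def on_update(self, uav_pos, obstacles):
            """Called whenever the aircraft state or the environment changes."""
            def risk(region):
                # Crude risk: prefer regions far from known obstacles, then near the UAV.
                nearest_obs = min(math.hypot(region[0] - o[0], region[1] - o[1])
                                  for o in obstacles) if obstacles else float("inf")
                return -nearest_obs + 0.1 * math.hypot(region[0] - uav_pos[0],
                                                       region[1] - uav_pos[1])
            self.best_escape = min(self.safe_regions, key=risk)

        def on_planner_failure(self):
            return self.best_escape                # hand this to the waypoint controller

    monitor = EmergencyMonitor(safe_regions=[(0.0, 200.0), (300.0, 300.0)])
    monitor.on_update(uav_pos=(50.0, 50.0), obstacles=[(40.0, 180.0)])
    print(monitor.on_planner_failure())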


Global planning



Local planning



Two-phase planning (global and local planning are combined)