First Person Vision & Gaze Detection
First Person Vision (FPV) is a transformative system that can monitor, record and assist people in their daily lives at work or at play in a truly symbiotic manner. Our FPV device captures a person's full field of vision and specific gaze-based intent to provide the user with intelligent cues and guidance, personal assistance, training, information or entertainment.
3D Air Slalom: Vision-based Autonomous UAV Navigation
A small 1.2-meter fixed-wing aircraft, a beginner-level model airplane, is transformed into a fully autonomous UAV. The plane is equipped with a camera, an IMU, and GPS, and runs a vision-based autonomous navigation algorithm on board.
The camera detects multiple colored targets on the ground, and the airplane visits them sequentially, passing each one in its assigned direction. Since this task requires the UAV to complete a challenging slalom course with assigned directions, we call it "3D Air Slalom".
Motion Planning of a Small Fixed-Wing UAV
A small fixed-wing airplane inherently suffers from poor agility and from motion uncertainty in windy conditions. We developed a new motion planner that always returns a feasible motion sequence the airplane can execute. Rather than finding the shortest path, the planner maximizes the probability of reaching the target, which proves to be a more effective strategy for an error-prone, small, lightweight drone.
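The idea of preferring arrival probability over path length can be sketched with a simple Monte Carlo planner. Everything below (the constant-speed motion primitives, the Gaussian heading noise standing in for wind, the target radius) is an illustrative assumption, not the planner from the paper.

```python
import math
import random

def success_probability(start, heading, turn_rate, target, n_samples=500,
                        steps=20, dt=0.5, speed=10.0, noise=0.15, radius=5.0):
    """Monte Carlo estimate of the chance of reaching `target` when the
    heading is perturbed by wind-like noise at every step (assumed model)."""
    hits = 0
    for _ in range(n_samples):
        x, y, th = start[0], start[1], heading
        for _ in range(steps):
            th += turn_rate * dt + random.gauss(0.0, noise)  # noisy turn
            x += speed * dt * math.cos(th)
            y += speed * dt * math.sin(th)
            if math.hypot(x - target[0], y - target[1]) < radius:
                hits += 1
                break
    return hits / n_samples

def plan(start, heading, target, turn_rates=(-0.2, -0.1, 0.0, 0.1, 0.2)):
    """Pick the motion primitive that maximizes arrival probability,
    not the one that yields the shortest path."""
    return max(turn_rates,
               key=lambda w: success_probability(start, heading, w, target))
```

Under heavy disturbance the primitive with the highest estimated arrival probability can differ from the geometrically shortest one, which is the behavior the planner exploits.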
Visual-Inertial UAV Attitude Estimation
A new sensor-fusion method that combines camera and inertial sensor data (accelerometers and gyroscopes) is developed for real-time estimation of airplane attitude (roll and pitch). In urban scenes, vertical and horizontal edges appear on many man-made structures, and these line edges serve as visual cues for the vehicle attitude.
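How line edges constrain attitude can be illustrated with a deliberately simplified example: building edges that are vertical in the world should be vertical in the image when camera roll is zero, so the tilt of near-vertical segments approximates the roll angle. This sketch (function name, segment format, and the median-based estimate are all assumptions) ignores the pitch component and the vanishing-point geometry used in the actual method.

```python
import numpy as np

def roll_from_vertical_lines(segments, tol_deg=20.0):
    """Rough roll estimate from line segments (x1, y1, x2, y2) detected in
    an image of a man-made scene: take the median tilt, relative to the
    image's vertical axis, of the segments that are nearly vertical.
    Illustrative simplification, not the VP-based estimator."""
    segs = np.asarray(segments, dtype=float)
    dx = segs[:, 2] - segs[:, 0]
    dy = segs[:, 3] - segs[:, 1]
    tilt = np.degrees(np.arctan2(dx, dy))     # angle from the vertical axis
    tilt = (tilt + 90.0) % 180.0 - 90.0       # wrap into (-90, 90]
    near_vertical = np.abs(tilt) < tol_deg
    if not near_vertical.any():
        return None                           # no usable vertical cues
    return float(np.median(tilt[near_vertical]))
```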
A brand-new IMU calibration method is developed for automatic in-field calibration with no artificial apparatus: natural forces (gravity) or landmarks (distant scene objects) are sufficient as calibration input. The existence of linear solutions for any IMU type is uncovered and illustrated using the factorization method, which is well known in computer vision.
Gyro-assisted KLT Feature Tracking
[open-source code is available]
When a camera rotates rapidly or shakes severely, a conventional KLT (Kanade-Lucas-Tomasi) feature tracker becomes vulnerable to large inter-image appearance changes. Tracking fails in the KLT optimization step, mainly because the initial condition, which is simply the final image warping from the previous frame, is inadequate. Gyroscopes can greatly help the KLT tracker so that tracking remains robust under fast camera-ego rotations. Our GPU implementation tracks 1,000 features at video rate.
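The gyro prediction step can be sketched as follows: for a pure camera rotation R (integrated from gyroscope rates between frames), prior feature locations map into the current frame through the infinite homography H = K R K⁻¹, and these warped points serve as the KLT optimizer's initial condition instead of "same position as last frame". The function name and the pinhole model below are assumptions; the released code differs in detail.

```python
import numpy as np

def predict_features(pts, R, K):
    """Warp (N, 2) feature points from the previous frame into the current
    frame using the inter-frame camera rotation R and intrinsic matrix K.
    For a pure rotation the mapping is the infinite homography
    H = K R K^-1, which gives KLT a much better starting point under
    fast camera rotation."""
    H = K @ R @ np.linalg.inv(K)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    warped = (H @ pts_h.T).T
    return warped[:, :2] / warped[:, 2:3]             # back to pixels
```

In practice the predicted positions are handed to the tracker as the initial guess (e.g., via OpenCV's `OPTFLOW_USE_INITIAL_FLOW` flag in `calcOpticalFlowPyrLK`), and the KLT optimization then only has to refine a small residual motion.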
Visual-Inertial UAV Attitude Estimation [ICRA'11]
This video shows a drift-free attitude estimation result for a small UAV operating in an urban area. A RANSAC-based line classifier partitions the line segments into vertical, horizontal, and outlier groups by detecting multiple vanishing points (VPs). Gyroscopes predict the instantaneous rotation change, and each line associated with either the vertical or a horizontal VP is separately used to update the posterior attitude estimate in an extended Kalman filter.
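The predict/update cycle can be sketched with a minimal filter in that spirit: gyro rates drive the prediction, and each classified line contributes a scalar attitude measurement. The class name and the measurement model (a line directly observing roll or pitch) are simplifying assumptions; the paper's measurement model goes through the vanishing-point geometry instead.

```python
import numpy as np

class AttitudeEKF:
    """Toy roll/pitch filter: gyro-driven prediction, per-line scalar
    updates. A simplification of the VP-based EKF described above."""
    def __init__(self):
        self.x = np.zeros(2)          # state: [roll, pitch] in radians
        self.P = np.eye(2) * 0.1      # state covariance

    def predict(self, gyro_rate, dt, q=1e-4):
        self.x = self.x + gyro_rate * dt   # integrate body rates (small-angle)
        self.P = self.P + np.eye(2) * q    # inflate covariance for gyro drift

    def update_line(self, angle, axis, r=1e-2):
        """axis=0: a vertical line observing roll; axis=1: a horizontal
        line observing pitch. `angle` is the line-derived measurement."""
        H = np.zeros((1, 2)); H[0, axis] = 1.0   # scalar measurement model
        S = H @ self.P @ H.T + r                 # innovation covariance
        K = self.P @ H.T / S                     # Kalman gain
        self.x = self.x + (K * (angle - self.x[axis])).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P
```

Because every line yields its own scalar update, vertical and horizontal lines can correct roll and pitch independently, and the gyro prediction keeps the filter coherent between visual measurements.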
3D Air Slalom: Vision-based Autonomous UAV Navigation [ICRA'13]
Two targets are arbitrarily placed on the ground (each target consists of four squares, and the blue square indicates the goal direction). First, the UAV flies around to search for all the targets and identifies each entry direction from the color arrangement. The motion planner then guides the UAV to reach each target from its correct direction.
Gyro-aided KLT Feature Tracking [IJRR'11]
A low-cost IMU (~$100) is attached to the camera, which we intentionally shake with left-right, up-down, and random rotations. See how much better our gyro-aided KLT works than a conventional KLT. The graph at the bottom shows the number of successfully tracked features for both methods (red for the gyro-aided method, black for the image-only method) as well as the corresponding gyroscope data. The red curve stays nearly constant regardless of the camera motion.
Visual Odometry [CVIU'10]
The camera and IMU are mounted on top of a car, which we drive around a parking lot. Our sequential SfM (structure from motion) reconstructs the car trajectory and the 3D scene structure in real time. Our gyro-aided KLT provides the robust feature tracks used as input to the SfM.