Visual Odometry for an Autonomous Car - BMW Project

This work is part of a Technical University of Munich project, carried out in collaboration with BMW Car IT.

It shows the tracking of a car using visual features obtained from a stereo camera setup mounted on the roof of the car.

The upper part of the video shows the features being tracked. The lines indicate the motion between two consecutive frames, and the colours of the points encode depth (the redder the point, the closer it is).
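The depth behind that colouring comes from stereo triangulation. Below is a minimal sketch of how depth can be recovered from disparity and mapped to a near-red/far-blue colour; the focal length, baseline, and depth range are illustrative values, not the calibration of the actual rig.

```python
# Illustrative stereo parameters (assumed, not the project's calibration).
FOCAL_PX = 718.0    # focal length in pixels
BASELINE_M = 0.54   # stereo baseline in metres

def disparity_to_depth(disparity_px):
    """Depth Z = f * b / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return FOCAL_PX * BASELINE_M / disparity_px

def depth_to_color(depth_m, max_depth_m=50.0):
    """Map depth to an (r, g, b) triple: near points red, far points blue."""
    t = min(depth_m / max_depth_m, 1.0)   # 0 = close, 1 = far
    return (int(255 * (1.0 - t)), 0, int(255 * t))

near = disparity_to_depth(60.0)   # large disparity -> small depth
far = disparity_to_depth(3.0)     # small disparity -> large depth
```

Large disparities therefore map to small depths and saturated red, which is why nearby features appear red in the overlay.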

The bottom video shows the estimated position of the car (the estimate marker) together with the ground-truth position (the kitty_stereo_left marker).
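The video compares the two markers visually; numerically, such a comparison is often summarised as a root-mean-square position error between the estimated and ground-truth paths. The sketch below uses hypothetical positions and assumes the trajectories are already aligned and time-synchronised, which a real evaluation would have to handle first.

```python
import math

def trajectory_rmse(estimated, ground_truth):
    """Root-mean-square position error between two equal-length 3D paths."""
    assert len(estimated) == len(ground_truth)
    sq = 0.0
    for (ex, ey, ez), (gx, gy, gz) in zip(estimated, ground_truth):
        sq += (ex - gx) ** 2 + (ey - gy) ** 2 + (ez - gz) ** 2
    return math.sqrt(sq / len(estimated))

# Hypothetical example: ground truth along the x-axis, estimate slightly off.
gt = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
est = [(0.0, 0.0, 0.0), (1.1, 0.0, 0.0), (2.0, 0.1, 0.0)]
err = trajectory_rmse(est, gt)
```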

The white dots are visual features extracted from the scene.

3D SLAM - Point Cloud Registration

3D SLAM using Kinect with feature matching + RANSAC.

This video shows the video feed from the camera, the 6D pose of the camera, and the point clouds from the Kinect. A point cloud is taken from the Kinect every second and persists in the map for 15 seconds. The tracking algorithm runs at roughly 30 Hz.

The above videos both run the same SLAM pipeline. 2D features are tracked using a combination of FAST/FREAK and matched efficiently by a restricted search (assuming features do not move far between frames). Features are projected into 3D space using either the Kinect depth data or 3D coordinates calculated from stereo disparity. RANSAC is used to filter outliers and to calculate the transformation of the camera between frames. No global optimization has been incorporated into this framework.
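The RANSAC step above can be sketched as follows: given matched 3D points from two consecutive frames, minimal 3-point samples are fitted with the Kabsch/SVD solution for a rigid transform, and the model with the most inliers wins. This is a generic illustration of the technique, not the project's actual code, and it assumes at least one sampled triple is outlier-free.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t with dst ~ R @ src + t (Kabsch algorithm)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def ransac_rigid(src, dst, iters=100, thresh=0.05, rng=None):
    """RANSAC over minimal 3-point samples; refit on the best inlier set."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        R, t = rigid_transform(src[idx], dst[idx])
        resid = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    R, t = rigid_transform(src[best_inliers], dst[best_inliers])
    return R, t, best_inliers

# Synthetic check: rotate/translate 30 points, corrupt 5 as outliers.
rng = np.random.default_rng(1)
src = rng.normal(size=(30, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
dst = src @ R_true.T + t_true
dst[:5] += 5.0                        # gross outliers, as bad matches would be
R_est, t_est, inliers = ransac_rigid(src, dst)
```

In the odometry setting, src and dst would be the 3D feature positions in two consecutive frames, and the recovered (R, t) is the camera motion between them; chaining these transforms gives the trajectory shown in the video.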

people/kidson/visual_odometry.txt · Last modified: 2013/05/30 21:14 by kidson