NetBot Laboratory | Texas A&M University

Vertical Line-based Visual Odometry in Urban Areas

Ji Zhang and Dezhen Song

  • Introduction

    Vertical lines are easy to find in urban areas and easy to extract from images, and they are insensitive to shadows and lighting conditions. In this project, our idea is to use vertical lines, such as building edges and poles, to estimate the robot's movement. Since the image positions of these vertical lines are sensitive to the robot's horizontal movement, they make excellent landmarks that provide accurate estimates of the robot's ego-motion on the road plane.
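
    As a concrete illustration of the extraction step, here is a minimal sketch in Python using OpenCV. The Canny and Hough parameters and the 5-degree verticality tolerance are illustrative assumptions, not the parameters used in our system.

        import math
        import cv2
        import numpy as np

        def extract_vertical_lines(image, angle_tol_deg=5.0):
            """Return line segments within angle_tol_deg of the vertical image axis."""
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(gray, 50, 150)
            segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                       threshold=60, minLineLength=80, maxLineGap=5)
            vertical = []
            if segments is None:
                return vertical
            for x1, y1, x2, y2 in segments[:, 0]:
                # Angle of the segment measured from the vertical image axis.
                angle = math.degrees(math.atan2(abs(x2 - x1), abs(y2 - y1)))
                if angle < angle_tol_deg:
                    vertical.append((x1, y1, x2, y2))
            return vertical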

    In the odometry process, each pair of vertical lines provides an estimation result. Because the vertical line pairs are at different locations, they introduce different amounts of error into the estimation results. We model the error propagation process and formulate the landmark (vertical line pair) selection and error variance minimization problems. We develop real-time algorithms that assign each vertical line pair a weight according to its location in the camera coordinate system, and we investigate how different weighting mechanisms impact the accuracy of ego-motion estimation.
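
    The optimized weights themselves are derived from the error model in [1] and are not reproduced here. As a generic sketch of the underlying idea, assuming each line pair yields a motion estimate with a modeled scalar error variance, the minimum-variance combination weights each estimate by the inverse of its variance:

        import numpy as np

        def fuse_pairwise_estimates(estimates, variances):
            """Combine per-pair motion estimates (e.g., rows of [dx, dy, dtheta])
            with inverse-variance weights; lower-variance pairs count more."""
            var = np.asarray(variances, dtype=float)
            w = 1.0 / var
            w /= w.sum()                         # normalize weights to sum to 1
            fused = w @ np.asarray(estimates)    # weighted average of the estimates
            fused_var = 1.0 / np.sum(1.0 / var)  # variance of the combined estimate
            return fused, fused_var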

  • Experiments

    We use a Sony DSC-F828 camera mounted on a robot in the experiments. The camera's horizontal field of view is set to 50 degrees and its resolution is set to 640×480 pixels. The robot is equipped with a MicroStrain 3DM-GX1 inertial measurement unit (IMU) that provides 3D orientation readings. The robot's movement is controlled by a laptop computer over a local wireless network.
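
    As a worked example of this camera geometry (a sketch from the pinhole model, not our calibration code), the 50-degree field of view and 640-pixel image width fix the focal length in pixels, which in turn gives the bearing of a vertical line at any image column:

        import math

        W = 640                    # image width in pixels (from the setup above)
        HFOV = math.radians(50.0)  # horizontal field of view

        # Pinhole model: half the image width spans half the field of view.
        f = (W / 2) / math.tan(HFOV / 2)  # roughly 686 pixels

        def bearing(u):
            """Horizontal bearing (radians) of a vertical line at image column u,
            measured from the optical axis; assumes the principal point at W/2."""
            return math.atan2(u - W / 2, f)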

    [Figures: experiment site and robot trajectory]

    The experiment site is in front of the Evans Library on the Texas A&M University campus, where there are plenty of vertical edges and a flat ground surface. The robot trajectory is set to be a zigzag poly-line, with each odd step moving in the depth direction and each even step moving to the left, as shown in the figures above. The trajectory includes 31 steps, with the first step given as a reference and the following 30 steps to be estimated.
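
    For concreteness, a small sketch that generates such a zigzag poly-line (the step length is a placeholder; it is not fixed in the description above):

        def zigzag_waypoints(n_steps=31, step=1.0):
            """Generate the zigzag poly-line described above: odd steps move in
            the depth (forward) direction, even steps move to the left."""
            x, y = 0.0, 0.0        # y: depth direction, x: lateral direction
            points = [(x, y)]
            for k in range(1, n_steps + 1):
                if k % 2 == 1:     # odd step: forward
                    y += step
                else:              # even step: left
                    x -= step
                points.append((x, y))
            return points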

    [Figure: estimated trajectories (blue: optimized weights [1]; red: single line pair [2]; green: equal weights) versus the real robot trajectory (black)]

    Our latest visual odometry results [1] (blue trajectory in the figure above) are quite accurate (about 2% relative error) when compared to the robot's real trajectory, indicated by the black curve. Using only a single vertical line pair, the earlier method we proposed in [2], gives a slightly worse result, as illustrated by the red curve. If we use a naive method that weights each vertical line pair equally, the results are much worse than the first two, as illustrated by the green curve. The experimental results show that using the optimized weights yields the most accurate estimate of the robot's movement.
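
    As a sketch of how a relative error figure like the 2% above can be computed (one common definition; the exact metric may differ), the mean position error is normalized by the ground-truth path length:

        import numpy as np

        def relative_error(estimated, ground_truth):
            """Mean position error along the estimated trajectory divided by
            the total length of the ground-truth path."""
            est = np.asarray(estimated, dtype=float)
            gt = np.asarray(ground_truth, dtype=float)
            pos_err = np.linalg.norm(est - gt, axis=1).mean()
            path_len = np.linalg.norm(np.diff(gt, axis=0), axis=1).sum()
            return pos_err / path_len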

  • Papers
    1. Ji Zhang and Dezhen Song, Error Aware Monocular Visual Odometry using Vertical Line Pairs for Small Robots in Urban Areas, Special Track on Physically Grounded AI (PGAI), the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-10), Atlanta, Georgia, USA, July 11-15, 2010.

    2. Ji Zhang and Dezhen Song, On the Error Analysis of Vertical Line Pair-based Monocular Visual Odometry in Urban Area, the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), St. Louis, USA, Oct. 11-15, 2009.