Camera-LIDAR Bimodal Lane Detection, Recognition and Tracking
Motivation
Autonomous vehicles must detect, recognize, and track lane boundaries in order to navigate the road safely. To assist autonomous driving, a robust method is needed to quantify lane marking quality and identify lane boundaries from multimodal sensory and prior inputs such as camera, LIDAR, GPS/IMU, and prior maps. Here we release to the community an initial version of our multi-modal sensor fusion-based lane detection source code, which is part of a research project aimed at measuring infrastructure quality for autonomous vehicles.
Lane Detection and Quality Assessment
We detect and quantify the quality of the immediate left and right lane boundaries with respect to the vehicle because of their importance in guiding the vehicle. We assume that the vehicle is equipped with one front-view camera, one LIDAR, and a GPS/IMU unit, which is a common sensor configuration for autonomous vehicles.

   

Fig.1. Sample sensor input and output
Fig.2. System diagram

The software requires the sensory data to satisfy the following conditions:
  • The camera is pre-calibrated with known intrinsic parameters, and nonlinear distortion has been removed;
  • Sensor readings are already synchronized;
  • The coordinate system transformations between any two sensors are known from calibration (see the projection sketch after this list).
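Under these assumptions, fusing the two modalities reduces to projecting LIDAR points into the rectified image using the calibrated extrinsic transform and the intrinsic matrix. The minimal sketch below illustrates that projection only; the function name and array conventions are ours and are not part of the released package.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project 3D LIDAR points into the (already undistorted) camera image.

    points_lidar : (N, 3) points in the LIDAR frame.
    T_cam_lidar  : (4, 4) homogeneous LIDAR-to-camera transform (from extrinsic calibration).
    K            : (3, 3) camera intrinsic matrix.
    Returns (M, 2) pixel coordinates of the points that lie in front of the camera.
    """
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous coordinates
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]           # transform into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]                 # keep points ahead of the camera
    uv = (K @ pts_cam.T).T                               # pinhole projection
    return uv[:, :2] / uv[:, 2:3]                        # normalize by depth
```
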
The software package is able to provide:
  • Lane marking pixel coordinates in the image;
  • 3D lane marking points;
  • 3D B-spline lane boundary curve models;
  • Lane quality metrics.
The system diagram is shown in Fig. 2. We process camera images to extract road surface pixels, which are fused with LIDAR data to obtain road surface models that facilitate the detection of lane markings in each modality. We intersect the lane marking points from the image and the LIDAR scan in the LIDAR coordinate frame to generate a new set of dual-modal lane markings that is more robust than the markings from either modality alone. We then obtain the left and right lane boundary curves through robust fitting of multiple models to the new lane marking set. We further use the curves to refine the lane marking sets in both modalities and to compute the quality metrics.
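A much-simplified sketch of the boundary fitting step is shown below: a RANSAC-style loop rejects outliers among the fused lane marking points, and a cubic B-spline is then fit to the inliers with SciPy. The released software fits multiple boundary models jointly and derives the quality metrics from the fit, which is omitted here; all names and thresholds below are illustrative assumptions rather than the project's actual code.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def fit_lane_boundary(points, n_iter=200, inlier_tol=0.15, smooth=0.5, seed=0):
    """Robustly fit a 3D B-spline to fused lane marking points.

    points : (N, 3) fused camera/LIDAR lane marking points in the LIDAR frame,
             with x pointing forward along the road.
    Returns the spline representation (tck) fitted to the RANSAC inliers.
    """
    rng = np.random.default_rng(seed)
    x, y = points[:, 0], points[:, 1]
    best_inliers = None
    for _ in range(n_iter):
        idx = rng.choice(len(points), size=3, replace=False)  # minimal sample for a quadratic
        coeffs = np.polyfit(x[idx], y[idx], deg=2)            # candidate lateral model y(x)
        residual = np.abs(np.polyval(coeffs, x) - y)
        inliers = residual < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    pts = points[best_inliers]
    pts = pts[np.argsort(pts[:, 0])]                          # order points along the road
    tck, _ = splprep(pts.T, s=smooth * len(pts))              # cubic B-spline through inliers
    return tck

# The fitted curve can then be sampled, e.g. for tracking or quality metrics:
# u = np.linspace(0.0, 1.0, 50); xs, ys, zs = splev(u, tck)
```
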
 
Source Code
GitHub: [internal] [external] Please use the external link if you are outside the tamu.edu domain.
Binary: [Demo] (Ubuntu 16.04)
Data: [KITTI synced + rectified data]
Virtual Lane Boundary Generation for Human-Compatible Autonomous Driving
As more and more companies develop autonomous vehicles (AVs), it is important to ensure that AV driving behavior is human-compatible, because AVs will have to share roads with human drivers for years to come. We propose a new tightly-coupled perception-planning framework to improve human compatibility. Using GPS-camera-LIDAR multi-modal sensor fusion, we detect actual lane boundaries (ALBs) and propose availability-reasonability-feasibility tests to determine whether we should follow the ALBs or generate virtual lane boundaries (VLBs). When needed, VLBs are generated using a dynamically adjustable multi-objective optimization framework that considers obstacle avoidance, trajectory smoothness (to satisfy vehicle kinodynamic constraints), trajectory continuity (to avoid sudden movements), GPS following quality (to execute the global plan), and lane following or partial direction following (to meet human expectations).
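To make the weighted multi-objective idea concrete, the toy sketch below optimizes the lateral offsets of a VLB at discrete longitudinal stations against obstacle, smoothness, continuity, GPS-following, and lane-following costs. It only illustrates the cost structure under our own simplifying assumptions (fixed weights, no kinodynamic constraints); the actual dynamically adjustable formulation is described in the IROS 2019 paper listed below, and none of the variable names here come from the released code.

```python
import numpy as np
from scipy.optimize import minimize

def generate_vlb(alb_offset, gps_offset, obstacles, prev_offset,
                 w_obs=5.0, w_smooth=1.0, w_cont=1.0, w_gps=0.5, w_lane=0.5):
    """Toy VLB generator: lateral offsets (m) at fixed longitudinal stations.

    alb_offset  : (N,) lateral offsets of the detected actual lane boundary.
    gps_offset  : (N,) lateral offsets implied by the global GPS plan.
    obstacles   : list of (station_index, lateral_offset, radius) tuples.
    prev_offset : (N,) VLB from the previous planning cycle (for continuity).
    """
    def cost(offset):
        c_obs = 0.0
        for i, y_obs, r in obstacles:                  # penalize entering an obstacle radius
            c_obs += max(0.0, r - abs(offset[i] - y_obs)) ** 2
        c_smooth = np.sum(np.diff(offset, n=2) ** 2)   # curvature-like smoothness term
        c_cont = np.sum((offset - prev_offset) ** 2)   # avoid sudden jumps between cycles
        c_gps = np.sum((offset - gps_offset) ** 2)     # follow the global plan
        c_lane = np.sum((offset - alb_offset) ** 2)    # stay close to the actual lane boundary
        return (w_obs * c_obs + w_smooth * c_smooth + w_cont * c_cont
                + w_gps * c_gps + w_lane * c_lane)

    res = minimize(cost, x0=prev_offset, method="Nelder-Mead",
                   options={"maxiter": 2000, "xatol": 1e-3, "fatol": 1e-4})
    return res.x
```
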

Fig.1. Sample output. Green curves are the VLBs generated by our algorithm.

Source Code
GitHub: [internal] [external] Please use the external link if you are outside the tamu.edu domain.
Data: [KITTI synced + rectified data]
Lane Marking Verification for High-definition Map Maintenance using Crowdsourced Videos
We propose to utilize crowdsourced videos captured by multiple vehicles at different times to facilitate lane marking (LM) verification in a high-definition (HD) map, which may not have been generated from camera data. Each vehicle is equipped with a front-facing camera to observe the LMs. We assume that LMs are stationary and rigid and are represented as a set of labeled 3D points. We also assume that the camera is pre-calibrated and that nonlinear image distortion has been removed.
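One way such verification could proceed, sketched below under our own assumptions, is to reproject the mapped 3D LM points into each crowdsourced frame and measure how many projections land on detected lane marking pixels; aggregating that support score over vehicles and times indicates whether a mapped LM is still present on the road. The names and thresholds below are illustrative and are not taken from the released data format.

```python
import numpy as np

def lm_reprojection_support(lm_points_world, T_cam_world, K, lm_mask, radius=3):
    """Fraction of mapped LM points supported by detected LM pixels in one frame.

    lm_points_world : (N, 3) labeled 3D lane marking points from the HD map.
    T_cam_world     : (4, 4) world-to-camera transform for this frame.
    K               : (3, 3) camera intrinsic matrix (images already undistorted).
    lm_mask         : (H, W) boolean mask of lane marking pixels detected in the frame.
    """
    h, w = lm_mask.shape
    pts_h = np.hstack([lm_points_world, np.ones((len(lm_points_world), 1))])
    pts_cam = (T_cam_world @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0]                  # keep points in front of the camera
    uv = (K @ pts_cam.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)
    supported, visible = 0, 0
    for u, v in uv:
        if 0 <= u < w and 0 <= v < h:
            visible += 1
            patch = lm_mask[max(0, v - radius):v + radius + 1,
                            max(0, u - radius):u + radius + 1]
            supported += patch.any()                      # any detected LM pixel nearby?
    return supported / visible if visible else 0.0
```
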
Sample Output
Sample Results: [Download]
Data Format: [ReadMe]
Credits
Dezhen Song: Project Director, Concept Design
Binbin Li: System Design, Primary Software Developer
Aaron Kingery: Software Developer, Tester, and Documentation
Aaron Angert: Software Developer, Tester, and Documentation
Ankit Ramchandani: Software Developer, Tester, and Documentation
Thanks to C. Chou, H. Cheng, S. Yeh, and D. Wang for their inputs and contributions to the NetBot Laboratory at Texas A&M University. We are also grateful for support from the National Science Foundation under NRI-1426752 and NRI-1526200, and in part from TxDOT 0-6869.
Publications
  • Binbin Li, Dezhen Song, Ankit Ramchandani, Hsin-Min Cheng, Di Wang, Yiliang Xu, and Baifan Chen, "Virtual Lane Boundary Generation for Human-Compatible Autonomous Driving: A Tight Coupling between Perception and Planning," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, November 4-8, 2019. [pdf][Video]
  • Binbin Li, Dezhen Song, Haifeng Li, Adam Pike, and Paul Carlson, "Lane Marking Quality Assessment for Autonomous Driving," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 1-5, 2018. [pdf][Video]
Contact
Please contact Dezhen Song (dzsong@cse.tamu.edu) for questions.