Motivation
Autonomous vehicles must detect, recognize, and track lane boundaries in order to navigate roads safely. A robust method is needed to quantify lane-marking quality and identify lane boundaries from multimodal sensory and prior inputs, such as cameras, LIDAR, GPS/IMU, and prior maps, to assist autonomous driving. Here we release an initial version of our multi-modal sensor-fusion-based lane detection source code to the community; it is part of our research project aimed at measuring infrastructure quality for autonomous vehicles.
Lane Detection and Quality Assessment
We detect and quantify the quality of the immediate left and right lane boundaries with respect to the vehicle because of their importance in guiding the vehicle. We assume that the vehicle is equipped with one front-view camera, one LIDAR, and GPS/IMU units, which is a common sensor configuration for autonomous vehicles.

Fig.1. Sample sensor input and output
Fig.2. System diagram

The software requires the sensory data to satisfy the following conditions:
GitHub: [internal] [external]
Please click external if your domain is not within the tamu.edu domain.
Binary: [Demo] (Ubuntu 16.04)
Data: [KITTI synced + rectified data]
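As a rough illustration of how lane-marking quality might be quantified from fused boundary evidence, the sketch below fits a quadratic lane-boundary model to candidate marking points and scores quality by the inlier ratio. The function name, the polynomial model, and the inlier-ratio score are illustrative assumptions; they are not the metric or API of the released code.

```python
# Illustrative sketch only: a toy lane-boundary fit and quality score.
# The model, tolerance, and score below are assumptions for illustration and
# do not reproduce the metric implemented in the released code.
import numpy as np

def fit_boundary_and_score(points_xy, inlier_tol=0.15):
    """Fit x = a*y^2 + b*y + c to candidate lane-marking points given in the
    vehicle frame (x lateral, y longitudinal) and report the fraction of
    points within inlier_tol meters of the fit as a crude quality score."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    coeffs = np.polyfit(y, x, deg=2)                  # least-squares quadratic fit
    residuals = np.abs(np.polyval(coeffs, y) - x)     # lateral fitting error
    quality = float(np.mean(residuals < inlier_tol))  # inlier ratio in [0, 1]
    return coeffs, quality

# Example: noisy candidate points along a gently curving right boundary.
ys = np.linspace(5.0, 40.0, 50)
xs = 1.8 + 0.002 * ys**2 + np.random.normal(0.0, 0.05, ys.size)
coeffs, quality = fit_boundary_and_score(np.column_stack([xs, ys]))
print(f"fit coefficients: {coeffs}, quality score: {quality:.2f}")
```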
As more and more companies develop autonomous vehicles (AVs), it is important to ensure that the driving behavior of AVs is human-compatible, because AVs will have to share roads with human drivers in the years to come.
We propose a new tightly-coupled perception-planning framework to improve human-compatibility.
Using GPS-camera-lidar multi-modal sensor fusion, we detect actual lane boundaries (ALBs) and propose availability-reasonability-feasibility tests to determine whether we should generate virtual lane boundaries (VLBs) or follow the ALBs.
When needed, VLBs are generated using a dynamically adjustable multi-objective optimization framework that considers obstacle avoidance, trajectory smoothness (to satisfy vehicle kinodynamic constraints), trajectory continuity (to avoid sudden movements),
GPS following quality (to execute the global plan), and lane following or partial direction following (to meet human expectations).

Fig.1. Sample output. Green curves are the VLBs generated by our algorithm.
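To make the multi-objective formulation concrete, the sketch below scores a candidate VLB as a weighted sum of simplified cost terms corresponding to the objectives listed above. The term definitions, the fixed weights, and the omission of the lane/partial-direction-following term are assumptions made for brevity; the paper's dynamically adjustable formulation differs.

```python
# Illustrative weighted-sum cost for a candidate virtual lane boundary (VLB).
# Term definitions and weights are placeholders, not the paper's formulation;
# the lane-following / partial-direction-following term is omitted for brevity.
import numpy as np

def vlb_cost(candidate, obstacles, previous_vlb, gps_path,
             weights=(1.0, 1.0, 1.0, 1.0)):
    """candidate, previous_vlb, gps_path: K x 2 polylines; obstacles: M x 2 points."""
    w_obs, w_smooth, w_cont, w_gps = weights

    # Obstacle avoidance: penalize points that come close to any obstacle.
    dists = np.linalg.norm(candidate[:, None, :] - obstacles[None, :, :], axis=2)
    c_obs = float(np.sum(1.0 / (dists.min(axis=1) + 1e-3)))

    # Trajectory smoothness: penalize second differences (a curvature proxy),
    # a stand-in for respecting vehicle kinodynamic constraints.
    c_smooth = float(np.sum(np.diff(candidate, n=2, axis=0) ** 2))

    # Trajectory continuity: stay close to the previous cycle's VLB
    # to avoid sudden movements.
    c_cont = float(np.sum((candidate - previous_vlb) ** 2))

    # GPS following quality: stay close to the global plan.
    c_gps = float(np.sum((candidate - gps_path) ** 2))

    return w_obs * c_obs + w_smooth * c_smooth + w_cont * c_cont + w_gps * c_gps
```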
GitHub: [internal] [external]
Please click external if your domain is not within the tamu.edu domain.
Data: [KITTI synced + rectified data]
We propose to utilize crowdsourced videos from multiple vehicles captured at different times to facilitate LM verification in the HD map, which may not have been generated from camera data. Each vehicle is equipped with a front-facing camera to observe the LMs. We assume that LMs are stationary and rigid and are represented as a set of labeled 3D points. We also assume that the camera is pre-calibrated and that the nonlinear distortion of images has been removed.
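For illustration, the sketch below shows one way the assumed LM representation (a set of labeled, stationary 3D points) could be projected into a pre-calibrated, distortion-free camera for comparison against an observed image. The class and function names and the pinhole projection are assumptions for this sketch, not the project's actual data structures or verification procedure.

```python
# Sketch of the assumed LM representation and its projection into a
# pre-calibrated, distortion-free camera; names here are illustrative only.
from dataclasses import dataclass
import numpy as np

@dataclass
class Landmark:
    label: str           # semantic label attached to the LM
    points: np.ndarray   # N x 3 stationary, rigid 3D points (world frame)

def project_landmark(lm, K, R, t):
    """Project LM points into the image using intrinsics K (3x3) and camera
    pose (R: 3x3, t: length-3). With distortion removed, a pinhole model suffices."""
    cam_pts = R @ lm.points.T + t.reshape(3, 1)   # world -> camera frame
    pix = K @ cam_pts                             # camera frame -> homogeneous pixels
    return (pix[:2] / pix[2]).T                   # N x 2 pixel coordinates
```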
Sample Results: [Download]
Data Format: [ReadMe]
Dezhen Song | Project Director, Concept Design
Binbin Li | System Design, Primary Software Developer
Aaron Kingery | Software Developer, Tester, and Documentation
Aaron Angert | Software Developer, Tester, and Documentation
Ankit Ramchandani | Software Developer, Tester, and Documentation
Thanks to C. Chou, H. Cheng, S. Yeh, and D. Wang for their inputs and contributions to the NetBot Laboratory at Texas A&M University. We also gratefully acknowledge support from the National Science Foundation under NRI-1426752 and NRI-1526200, and in part from TxDOT under project 0-6869.
Please contact Dezhen Song (dzsong@cse.tamu.edu) for questions.