Research Objective
This work investigates fusing vision-based instance segmentation with LIDAR-based segmentation to achieve accurate 2D bird's-eye-view object segmentation for autonomous vehicles.
Research Findings
The proposed semantic-enhanced LIDAR segmentation method, combined with a modified T-linkage RANSAC outlier rejection algorithm, improves vehicle segmentation and heading estimation. It achieves higher 2D IOU and lower average absolute heading error (AAHE) than traditional methods, demonstrating the benefit of fusing camera and LIDAR data for autonomous driving.
Limitations
The method's performance is bounded by the accuracy of the vision-based semantic segmentation and the quality of the LIDAR data. Over-segmentation and under-segmentation can arise from occlusions and from the limited vertical FOV of planar LIDAR sensors.
1. Experimental Design and Method Selection:
The study combines a vision-based instance segmentation algorithm (Mask R-CNN) with a LIDAR-based segmentation algorithm to enhance vehicle segmentation. A modified T-linkage RANSAC is used to remove outlier points; a simplified sketch of RANSAC-style outlier rejection follows.
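To make the outlier-rejection step concrete, here is a minimal single-model RANSAC line fit on a bird's-eye-view vehicle cluster. This is a plain RANSAC sketch, not the paper's modified T-linkage variant, and the threshold and iteration counts are illustrative assumptions:

```python
# Minimal RANSAC sketch for rejecting outlier LIDAR returns in one vehicle
# cluster (bird's-eye view). Plain single-model RANSAC, not the paper's
# modified T-linkage; n_iters and threshold are assumed values.
import numpy as np

def ransac_line_inliers(points, n_iters=200, threshold=0.1, seed=None):
    """points: (N, 2) x-y LIDAR returns for one cluster.
    Returns a boolean mask of inliers to the best-supported line."""
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Hypothesize a line from two distinct sample points.
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        # Perpendicular distance of every point to the hypothesized line.
        normal = np.array([-d[1], d[0]]) / norm
        dist = np.abs((points - p) @ normal)
        mask = dist < threshold
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask
```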
2. Sample Selection and Data Sources:
The experiments are conducted on a reduced-resolution KITTI dataset and a Cadillac SRX dataset to evaluate the proposed method.
3. List of Experimental Equipment and Materials:
The equipment includes a Velodyne HDL-64E LIDAR sensor, a Point Grey Flea camera, and IBEO LUX LIDAR sensors mounted on a Cadillac SRX.
4. Experimental Procedures and Operational Workflow:
LIDAR points are projected onto the corresponding camera images so that each point inherits a per-pixel instance label from the semantic segmentation; the labeled points are then fused with the LIDAR-based segmentation, and the modified T-linkage RANSAC is applied to remove outliers. A sketch of the projection step follows.
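As an illustration of the projection step, the sketch below maps 3D LIDAR points into the image plane and reads off the Mask R-CNN instance label at each projected pixel. The matrix names follow KITTI calibration conventions (P2, R0_rect, Tr_velo_to_cam); the exact label-transfer rule used in the paper may differ:

```python
# Project LIDAR points into the camera image and transfer instance labels.
# Calibration matrices are assumed to be loaded in KITTI convention:
# P2 (3x4 projection), R0_rect (3x3), Tr_velo_to_cam (3x4).
import numpy as np

def project_lidar_to_image(pts_velo, P2, R0_rect, Tr_velo_to_cam):
    """pts_velo: (N, 3) LIDAR points. Returns (N, 2) pixel coordinates
    and a mask of points in front of the camera."""
    N = pts_velo.shape[0]
    pts_h = np.hstack([pts_velo, np.ones((N, 1))])        # homogeneous coords
    cam = R0_rect @ (Tr_velo_to_cam @ pts_h.T)            # rectified camera frame
    in_front = cam[2] > 0.1                               # drop points behind camera
    img = P2 @ np.vstack([cam, np.ones((1, N))])          # 3x4 projection
    uv = (img[:2] / img[2]).T                             # perspective divide
    return uv, in_front

def label_points(uv, in_front, instance_mask):
    """instance_mask: (H, W) integer image of Mask R-CNN instance IDs
    (0 = background). Returns the instance ID of each LIDAR point."""
    H, W = instance_mask.shape
    labels = np.zeros(len(uv), dtype=int)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = in_front & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    labels[valid] = instance_mask[v[valid], u[valid]]
    return labels
```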
5. Data Analysis Methods:
Performance is evaluated using precision, recall, 2D Intersection-over-Union (IOU), and average absolute heading error (AAHE); a sketch of the latter two metrics follows.
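For reference, here is a hedged sketch of the two box-level metrics: 2D IOU computed on bird's-eye-view box polygons (via shapely, an implementation choice rather than the paper's stated one) and AAHE as the mean absolute heading difference, wrapped to account for the 180-degree ambiguity of a box's long axis:

```python
# Sketches of the two box-level metrics; polygon IOU via shapely is an
# implementation choice, not necessarily how the paper computes it.
import numpy as np
from shapely.geometry import Polygon

def bev_iou(corners_a, corners_b):
    """corners_*: (4, 2) arrays of bird's-eye-view box corners in order."""
    pa, pb = Polygon(corners_a), Polygon(corners_b)
    inter = pa.intersection(pb).area
    union = pa.union(pb).area
    return inter / union if union > 0 else 0.0

def aahe(pred_headings, gt_headings):
    """Mean absolute heading error in radians, modulo pi to handle the
    front/back ambiguity of a bounding box's long axis."""
    diff = np.abs(np.asarray(pred_headings) - np.asarray(gt_headings)) % np.pi
    diff = np.minimum(diff, np.pi - diff)
    return diff.mean()
```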