Research Objective
To accelerate pedestrian labeling in far-infrared image sequences for deep learning-based pedestrian detection systems.
Research Findings
Using a weakly trained YOLOv3 detector significantly accelerates annotation (about 11 times faster than manual labeling). Re-training with a small set of far-infrared (FIR) images improves pedestrian detection performance even at low resolution, making the approach effective for automated labeling of FIR video.
Research Limitations
The detector struggled with small or partially occluded pedestrians. The initial training set was small (about 250 annotations), leading to potential misclassifications and false detections. Performance may also vary with environmental conditions and camera resolution.
1:Experimental Design and Method Selection:
The study integrated the YOLOv3 object detector into labeling software to automate pedestrian annotation in far-infrared video. A Kalman filter-based tracker was used to follow pedestrians across frames.
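The paper does not specify the tracker's state model or noise parameters, so the following is a minimal sketch assuming a constant-velocity Kalman filter over a bounding-box centre, with illustrative noise values:

```python
import numpy as np

class BoxCenterKalman:
    """Constant-velocity Kalman filter tracking a bounding-box centre.

    Illustrative sketch only: state [x, y, vx, vy], measurement [x, y];
    dt, q, and r are assumed values, not taken from the paper.
    """

    def __init__(self, x, y, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x, y, 0.0, 0.0], dtype=float)  # state estimate
        self.P = np.eye(4) * 10.0                         # state covariance
        self.F = np.eye(4)                                # transition model
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))                         # measurement model
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * q                            # process noise
        self.R = np.eye(2) * r                            # measurement noise

    def predict(self):
        """Propagate the state one frame ahead; returns predicted (x, y)."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, zx, zy):
        """Fuse a detector measurement; returns corrected (x, y)."""
        z = np.array([zx, zy], dtype=float)
        innov = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)          # Kalman gain
        self.x = self.x + K @ innov
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

In a labeling tool, `predict()` can propose a box position on frames where the detector misses, and `update()` corrects the track whenever a detection is available.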
2:Sample Selection and Data Sources:
Video captured in a suburban area during winter was used; every 10th frame was extracted to reduce redundancy, and frames without pedestrians were discarded.
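The subsampling rule above can be sketched as a small generator; `frames` may be any iterable of decoded frames (e.g. successive reads from `cv2.VideoCapture`):

```python
def subsample_frames(frames, step=10):
    """Yield (index, frame) for every `step`-th frame: 0, step, 2*step, ...

    Sketch of the paper's frame-extraction rule; filtering out frames
    without pedestrians would follow as a separate manual pass.
    """
    for i, frame in enumerate(frames):
        if i % step == 0:
            yield i, frame
```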
3:List of Experimental Equipment and Materials:
Far-infrared camera (specific model not mentioned), YOLOv3 detector, labeling software.
4:Experimental Procedures and Operational Workflow:
Initial data annotation involved labeling 116 images with over 250 annotations. The automated labeling tool extracted frames, passed them to the pre-trained detector to generate regions of interest (ROIs), and saved the coordinates. Manual validation followed.
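The paper does not state the annotation file format, so the following hypothetical helper assumes the common YOLO txt convention (normalized centre coordinates) for saving detector ROIs prior to manual validation:

```python
def rois_to_yolo_lines(rois, img_w, img_h, class_id=0):
    """Convert absolute pixel ROIs (x, y, w, h) into YOLO-format label lines.

    Hypothetical sketch: (x, y) is the top-left corner in pixels; output
    is "class cx cy w h" with all values normalized to [0, 1].
    """
    lines = []
    for x, y, w, h in rois:
        cx = (x + w / 2.0) / img_w   # normalized box centre x
        cy = (y + h / 2.0) / img_h   # normalized box centre y
        lines.append(
            f"{class_id} {cx:.6f} {cy:.6f} {w / img_w:.6f} {h / img_h:.6f}"
        )
    return lines
```

One such txt file per frame is what a human validator would then confirm or correct, which is far faster than drawing every box from scratch.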
5:Data Analysis Methods:
Performance was verified on 50 test images by comparing automatic detections with manual annotations and computing the speed-up over manual labeling.
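The paper does not detail the matching criterion, so this sketch assumes the standard approach: greedy intersection-over-union (IoU) matching of detections to ground-truth boxes at an assumed 0.5 threshold, counting true positives, false positives, and misses:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def match_detections(dets, gts, thr=0.5):
    """Greedily match detections to ground truth; returns (TP, FP, FN)."""
    matched = set()
    tp = 0
    for d in dets:
        best, best_j = 0.0, -1
        for j, g in enumerate(gts):
            if j in matched:
                continue
            v = iou(d, g)
            if v > best:
                best, best_j = v, j
        if best >= thr:
            matched.add(best_j)
            tp += 1
    return tp, len(dets) - tp, len(gts) - tp
```

Precision and recall over the 50-image test set follow directly from these three counts.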