Research Objective
To enable object detection to overcome the degradation caused by haze by proposing a multi-task learning-based method that jointly exploits color and depth features.
Research Findings
By leveraging depth contrast and adaptive fusion of color and depth features, the proposed multi-task learning method detects objects effectively in hazy scenes, remains robust in diverse conditions such as fog and water environments, and outperforms existing methods.
Research Limitations
The method relies on the dark channel prior for depth estimation, which can yield unscaled (relative) depth measurements and blocking artifacts; performance may degrade under inhomogeneous illumination; and no public benchmark for haze environments was available, limiting dataset standardization.
1:Experimental Design and Method Selection:
The method uses a multi-task learning framework with two streams for color and depth features, employing kernel density estimation (KDE) for background modeling and a weighted fusion mechanism for combining results.
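The paper's implementation is in MATLAB and no code accompanies this summary; the following is a minimal Python sketch of the general idea of per-pixel KDE background modeling with a weighted fusion of the color and depth streams. The Gaussian kernel, the bandwidth h, the fusion weights, and the threshold are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def kde_background_density(value, samples, h=0.1):
    """Background likelihood of a pixel value under a Gaussian-kernel KDE
    built from its past samples; bandwidth h is an illustrative choice."""
    diffs = (samples - value) / h
    return np.mean(np.exp(-0.5 * diffs ** 2)) / (h * np.sqrt(2 * np.pi))

def fused_foreground_mask(color_frame, depth_frame,
                          color_samples, depth_samples,
                          w_color=0.5, w_depth=0.5, threshold=0.05):
    """Weighted fusion of color- and depth-stream background likelihoods
    (single-channel frames for brevity); a pixel whose fused likelihood is
    low is marked as foreground. Weights and threshold are assumptions."""
    height, width = color_frame.shape
    mask = np.zeros((height, width), dtype=bool)
    for y in range(height):
        for x in range(width):
            p_color = kde_background_density(color_frame[y, x], color_samples[:, y, x])
            p_depth = kde_background_density(depth_frame[y, x], depth_samples[:, y, x])
            mask[y, x] = (w_color * p_color + w_depth * p_depth) < threshold
    return mask
```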
2:Sample Selection and Data Sources:
300 frames collected from public websites (e.g., YouTube videos) in fog and water environments, labeled by 20 volunteers to provide ground truth.
3:List of Experimental Equipment and Materials:
A PC with MATLAB 2013a software, core 4G processor, and 4G memory.
4:Experimental Procedures and Operational Workflow:
Depth features are extracted using the dark channel prior model with skylight recognition; color and depth background models are built with KDE; multi-task learning optimizes the fusion weights; and the results of the two streams are fused for object detection.
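As an illustration of the dark-channel-prior step, the sketch below derives a relative (unscaled) depth map from a hazy color frame. The patch size, omega, and beta are illustrative values, and the atmospheric-light estimate is a simple brightest-dark-channel heuristic standing in for the paper's skylight recognition.

```python
import numpy as np

def dark_channel(image, patch=15):
    """Per-pixel minimum over color channels followed by a local minimum
    filter over a patch x patch neighborhood (image is an H x W x 3 array)."""
    min_rgb = image.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    height, width = min_rgb.shape
    dark = np.empty_like(min_rgb)
    for y in range(height):
        for x in range(width):
            dark[y, x] = padded[y:y + patch, x:x + patch].min()
    return dark

def relative_depth_from_haze(image, omega=0.95, beta=1.0, patch=15):
    """Estimate the skylight from the brightest dark-channel pixels (a simple
    stand-in for the paper's skylight recognition), compute the transmission
    t = 1 - omega * dark_channel(I / A), and convert it to a relative,
    unscaled depth via d = -ln(t) / beta."""
    dark = dark_channel(image, patch)
    top = max(1, int(0.001 * dark.size))               # brightest 0.1% of pixels
    ys, xs = np.unravel_index(np.argsort(dark.ravel())[-top:], dark.shape)
    skylight = image[ys, xs].mean(axis=0)
    transmission = 1.0 - omega * dark_channel(image / skylight, patch)
    transmission = np.clip(transmission, 1e-3, 1.0)    # avoid log(0)
    return -np.log(transmission) / beta                # relative depth map
```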
5:Data Analysis Methods:
Performance is evaluated using the PASCAL overlap criterion and six metrics (precision, similarity, TPR, F-score, FPR, PWC), with comparisons to existing methods (BF-KDE, ST-MoG, ViBe, DECOLOR).
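For reference, the sketch below computes the six pixel-level metrics and a PASCAL-style overlap test from a detected mask and the volunteer-labeled ground truth; the formulas follow common change-detection conventions and may differ in detail from the paper's exact definitions.

```python
import numpy as np

def detection_metrics(pred_mask, gt_mask):
    """Pixel-level metrics for a binary detection mask against ground truth;
    definitions follow common change-detection conventions (assumed here)."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    tp = np.sum(pred & gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    tn = np.sum(~pred & ~gt)
    precision = tp / max(tp + fp, 1)
    tpr = tp / max(tp + fn, 1)                 # recall / true positive rate
    fpr = fp / max(fp + tn, 1)                 # false positive rate
    similarity = tp / max(tp + fp + fn, 1)     # Jaccard index
    f_score = 2 * precision * tpr / max(precision + tpr, 1e-12)
    pwc = 100.0 * (fp + fn) / max(tp + fp + fn + tn, 1)  # percentage of wrong classifications
    return dict(precision=precision, similarity=similarity, tpr=tpr,
                f_score=f_score, fpr=fpr, pwc=pwc)

def pascal_overlap(pred_mask, gt_mask, threshold=0.5):
    """PASCAL-style criterion: intersection-over-union above a threshold
    (0.5 is the conventional value) counts as a correct detection."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    iou = np.sum(pred & gt) / max(np.sum(pred | gt), 1)
    return iou >= threshold
```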