Research Objective
To develop a method for accurate 3-D reconstruction from RGB-D cameras that is robust to outliers and to significant registration errors in the depth maps.
Research Findings
The proposed method significantly improves reconstruction accuracy and reduces ambiguity by iteratively fusing depth maps and refining camera poses, outperforming previous methods in its handling of outliers and registration errors.
Limitations
The re-registration may not work well if depth maps have insufficient overlap or redundancy, requiring careful scene capture. The method is dependent on the accuracy of initial camera poses and may not handle all types of noise or errors.
1. Experimental Design and Method Selection:
The method is an iterative pipeline: depth-map pre-filtering, fusion weighted by depth-dependent uncertainty, point-cloud post-filtering, and re-registration of the depth maps with the ICP (Iterative Closest Point) algorithm to refine the camera poses.
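The core of the ICP-based re-registration is the closed-form best-fit rigid transform between matched point sets. A minimal sketch of that inner step (the SVD-based Kabsch solution; the point sets and correspondences below are illustrative, not taken from the paper):

```python
import numpy as np

def best_fit_transform(src, dst):
    """Closed-form rigid transform (R, t) aligning src to dst.

    src, dst: (N, 3) arrays of corresponding 3-D points.
    Uses the SVD-based Kabsch solution, the step solved inside each
    ICP iteration once correspondences are fixed.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Toy check: recover a known rotation and translation.
rng = np.random.default_rng(0)
pts = rng.random((50, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
moved = pts @ R_true.T + t_true
R, t = best_fit_transform(pts, moved)
assert np.allclose(R, R_true, atol=1e-8)
assert np.allclose(t, t_true, atol=1e-8)
```

A full ICP loop alternates this solve with nearest-neighbour correspondence search until the pose update falls below a threshold.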
2. Sample Selection and Data Sources:
Three datasets (CCorner, Office1, Office2), captured with a Microsoft Kinect V2, are used.
3. List of Experimental Equipment and Materials:
Microsoft Kinect V2 depth sensor, RGB images, and initial camera poses.
4. Experimental Procedures and Operational Workflow:
Depth maps are pre-filtered to remove outliers, fused into a non-redundant point cloud using uncertainty-based merging, post-filtered to remove erroneous points, and then re-registered; the loop repeats until the camera poses converge.
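The uncertainty-based merging step can be sketched as inverse-variance weighting of overlapping depth samples. The quadratic noise model below is a common assumption for depth sensors (noise growing with depth), not the paper's exact model, and the coefficients are illustrative:

```python
import numpy as np

def depth_sigma(z, a=0.0012, b=0.0019):
    """Assumed depth-dependent noise model: the standard deviation of a
    depth measurement grows quadratically with depth z (metres).
    Coefficients a, b are illustrative placeholders."""
    return a + b * z**2

def fuse_depths(depths):
    """Inverse-variance weighted fusion of overlapping depth samples.

    depths: 1-D array of measurements of the same surface point seen
    from several registered depth maps.
    Returns the fused depth and its fused standard deviation.
    """
    depths = np.asarray(depths, dtype=float)
    w = 1.0 / depth_sigma(depths) ** 2          # nearer samples weigh more
    fused = np.sum(w * depths) / np.sum(w)
    fused_sigma = np.sqrt(1.0 / np.sum(w))      # shrinks as samples accumulate
    return fused, fused_sigma

d, s = fuse_depths([1.02, 0.98, 1.05])
assert 0.98 <= d <= 1.05            # fused depth stays inside the sample range
assert s < depth_sigma(0.98)        # fused uncertainty below every input's
```

Inverse-variance weighting is the minimum-variance unbiased combination under independent Gaussian noise, which is why merging redundant observations tightens the fused estimate.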
5. Data Analysis Methods:
Quantitative evaluation includes point-to-plane error distances against ground-truth planes, plus voxel-based completeness and compactness metrics (Jaccard index and compression ratio).