Research Objective
To develop an automatic method for generating relative depth maps from monocular video sequences by leveraging dynamic occlusion cues and coherent forward/backward optical flow estimation.
Research Results
The proposed method estimates depth ordering from motion occlusion cues derived from a coherent forward/backward optical flow, showing slight performance improvements while simplifying computation by removing costly steps such as parametric motion estimation. It demonstrates applicability to AR and 3D video conversion, with results comparable to existing methods.
Research Limitations
The method relies on motion occlusions, so it may be ineffective in static scenes or in sequences with minimal motion. Computational cost is reduced but remains non-negligible because of the optical flow and segmentation steps. Empirically set parameters (e.g., Cth=0.15) may not generalize to all datasets.
1:Experimental Design and Method Selection:
The method estimates a coherent forward/backward optical flow with a modified, edge-preserving EpicFlow algorithm, detects occlusions from this flow, partitions the image using color and motion information via SLIC superpixels and binary partition trees (BPT), and assigns a relative depth order from the occlusion relations.
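The occlusion-detection step can be illustrated with a minimal forward/backward consistency check (a sketch, not the paper's exact coherent-flow formulation): a pixel is flagged as occluded when following its forward flow and then the backward flow at the landing point does not bring it back close to its starting position. The array layout, the tolerance `tol`, and the magnitude-dependent slack below are illustrative assumptions.

```python
import numpy as np

def occlusions_from_fb_flow(flow_fwd, flow_bwd, tol=0.5):
    """Flag pixels whose forward/backward flow round trip is inconsistent.

    flow_fwd, flow_bwd: dense flow fields of shape (H, W, 2) holding
    (dx, dy) displacements (assumed layout, not specified in the source).
    """
    h, w, _ = flow_fwd.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # Where each pixel lands in the next frame under the forward flow.
    x2 = np.clip(xs + flow_fwd[..., 0], 0, w - 1)
    y2 = np.clip(ys + flow_fwd[..., 1], 0, h - 1)

    # Sample the backward flow at the landing position (nearest neighbour
    # for brevity; bilinear interpolation would be the usual choice).
    xi, yi = np.round(x2).astype(int), np.round(y2).astype(int)
    back = flow_bwd[yi, xi]

    # Round-trip error: forward displacement plus sampled backward
    # displacement should cancel for visible pixels.
    err = np.linalg.norm(flow_fwd + back, axis=-1)

    # Allow a small slack that grows with the flow magnitude (illustrative).
    mag = np.linalg.norm(flow_fwd, axis=-1) + np.linalg.norm(back, axis=-1)
    return err > tol + 0.05 * mag
```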
2:Sample Selection and Data Sources:
The CMU dataset and BVSD dataset were used for evaluation, with ground truth available for optical flow and depth ordering.
3:List of Experimental Equipment and Materials:
No specific hardware is mentioned; the software includes a modified EpicFlow for optical flow, SED for edge detection, DeepMatching for point matching, and SLIC for superpixel generation.
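As an illustration of the superpixel step only, the sketch below uses scikit-image's slic as a stand-in for the SLIC implementation used in the paper; the image path and the parameter values are assumptions, not values from the source.

```python
from skimage import io
from skimage.segmentation import slic, mark_boundaries

# Load one video frame (path is illustrative).
frame = io.imread("frame_0001.png")

# SLIC superpixels as an initial partition; n_segments and compactness
# are illustrative values, not the ones used in the paper.
labels = slic(frame, n_segments=400, compactness=10, start_label=0)

# Overlay the superpixel boundaries for a quick visual check.
overlay = mark_boundaries(frame, labels)
print("number of superpixels:", labels.max() + 1)
```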
4:Experimental Procedures and Operational Workflow:
Steps include optical flow estimation, occlusion detection, initial partitioning, segmentation via BPT pruning, and depth ordering. Parameters like Nn=10 and Nt=8 were set empirically.
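The final depth-ordering step can be sketched as a simple vote-and-peel scheme over pairwise occlusion relations; this is a hypothetical simplification of the BPT-based procedure, and the function name, input format, and cycle-breaking rule are assumptions.

```python
from collections import defaultdict

def depth_order_from_occlusions(occlusion_votes, regions):
    """Assign a relative depth rank per region from pairwise occlusion votes.

    occlusion_votes: dict mapping (occluded_region, occluding_region) to a
    vote count accumulated along occlusion boundaries; the occluding region
    is assumed to be closer to the camera.
    """
    # Keep only the majority direction for each pair to resolve conflicts.
    closer_than = defaultdict(set)  # region -> regions it lies in front of
    for (behind, front), votes in occlusion_votes.items():
        if votes > occlusion_votes.get((front, behind), 0):
            closer_than[front].add(behind)

    remaining, rank, depth = set(regions), {}, 0
    while remaining:
        # Regions not in front of any remaining region form the farthest layer.
        farthest = {r for r in remaining if not (closer_than[r] & remaining)}
        if not farthest:
            farthest = {next(iter(remaining))}  # break a vote cycle arbitrarily
        for r in farthest:
            rank[r] = depth
        remaining -= farthest
        depth += 1
    return rank  # larger rank means closer to the camera
```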
5:Data Analysis Methods:
Performance was evaluated using End Point Error (EPE) and Angular Error (AE) for optical flow, and local consistency of the depth order and the Over Random Index (ORI) for depth ordering.
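For reference, the two optical-flow metrics can be computed as below; the sketch assumes dense (H, W, 2) flow arrays and uses the standard homogeneous (u, v, 1) angular-error formulation of Barron et al., which may differ in detail from the paper's evaluation code.

```python
import numpy as np

def endpoint_error(flow, flow_gt):
    """Mean End Point Error: Euclidean distance between estimated and
    ground-truth flow vectors, averaged over all pixels."""
    return float(np.mean(np.linalg.norm(flow - flow_gt, axis=-1)))

def angular_error(flow, flow_gt):
    """Mean Angular Error in degrees, using homogeneous (u, v, 1) vectors."""
    u, v = flow[..., 0], flow[..., 1]
    ug, vg = flow_gt[..., 0], flow_gt[..., 1]
    num = u * ug + v * vg + 1.0
    den = np.sqrt(u ** 2 + v ** 2 + 1.0) * np.sqrt(ug ** 2 + vg ** 2 + 1.0)
    ang = np.arccos(np.clip(num / den, -1.0, 1.0))
    return float(np.degrees(ang).mean())
```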