Research Purpose
Convert traditional 2D videos into 3D videos to alleviate the shortage of 3D content, by extracting depth information from the 2D video and synthesizing new view images from the existing viewpoints.
Research Results
The proposed methods for depth extraction and hole filling in 2D-to-3D video conversion achieve better quality and shorter running time than state-of-the-art methods. However, they are not suitable for scenes containing rotating objects, which points to a direction for future research.
Research Limitations
The method is not suitable for certain scenes, such as those containing rotating objects. Future work plans to use more depth cues to accommodate a wider range of scenes.
1: Experimental Design and Method Selection:
The study proposes a depth extraction method based on dense edge-preserving optical flow and a hole-filling method that uses Gaussian and Laplacian pyramids across scales.
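As a rough illustration of the pyramid machinery, the C++ sketch below builds Gaussian and Laplacian pyramids with OpenCV (an assumed dependency; the summary only states a C++ implementation). It shows the cross-scale decomposition that hole filling can operate on, not the paper's exact filling rule.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Minimal sketch (not the paper's exact algorithm): build Gaussian and
// Laplacian pyramids so that coarse-scale content can later be propagated
// into hole regions at finer scales before the pyramid is collapsed again.
static void buildPyramids(const cv::Mat& src, int levels,
                          std::vector<cv::Mat>& gauss,
                          std::vector<cv::Mat>& lap) {
    cv::Mat img;
    src.convertTo(img, CV_32F);                   // signed range for Laplacian values
    gauss.assign(1, img);
    lap.clear();
    for (int i = 1; i < levels; ++i) {
        cv::Mat down, up;
        cv::pyrDown(gauss.back(), down);          // next (coarser) Gaussian level
        cv::pyrUp(down, up, gauss.back().size()); // expand back one level
        lap.push_back(gauss.back() - up);         // band-pass (Laplacian) level
        gauss.push_back(down);
    }
    lap.push_back(gauss.back().clone());          // coarsest residual
}
```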
2: Sample Selection and Data Sources:
Several datasets are used for evaluation, including the Middlebury dataset, the MPI-Sintel dataset, the FTV-3DV dataset, and the 'Ballet' dataset from Microsoft Research.
3: List of Experimental Equipment and Materials:
The method is implemented in C++ under Windows and tested on a computer with a Core i7-7700K CPU at 4.20 GHz.
4: Experimental Procedures and Operational Workflow:
The process includes extracting the dense edge-preserving optical flow, converting the optical flow into a depth map, generating new virtual view images, and filling the holes with the pyramid-based method.
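Under stated assumptions, the following C++ sketch strings these steps together: OpenCV's Farnebäck flow stands in for the paper's dense edge-preserving optical flow, the flow-to-depth conversion is a simple magnitude normalization rather than the paper's model, the virtual view is produced by a basic depth-proportional horizontal shift, and the file names and maxDisp value are placeholders.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat prev = cv::imread("frame0.png", cv::IMREAD_GRAYSCALE);
    cv::Mat next = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);
    cv::Mat color = cv::imread("frame0.png");
    if (prev.empty() || next.empty() || color.empty()) return 1;

    // 1) Dense optical flow between consecutive frames (stand-in method).
    cv::Mat flow;
    cv::calcOpticalFlowFarneback(prev, next, flow, 0.5, 3, 15, 3, 5, 1.2, 0);

    // 2) Convert the flow into a depth map: larger motion is treated as closer
    //    (an assumed monotonic relation, not the paper's exact conversion).
    std::vector<cv::Mat> fxy(2);
    cv::split(flow, fxy);
    cv::Mat mag, depth;
    cv::magnitude(fxy[0], fxy[1], mag);
    cv::normalize(mag, depth, 0, 255, cv::NORM_MINMAX, CV_8U);

    // 3) Generate a virtual view by shifting pixels horizontally in proportion
    //    to depth; unmapped pixels remain as holes for the pyramid-based filling.
    cv::Mat virt(color.size(), color.type(), cv::Scalar::all(0));
    const double maxDisp = 8.0;  // placeholder maximum disparity in pixels
    for (int y = 0; y < color.rows; ++y)
        for (int x = 0; x < color.cols; ++x) {
            int d = cv::saturate_cast<int>(maxDisp * depth.at<uchar>(y, x) / 255.0);
            int xv = x - d;
            if (xv >= 0) virt.at<cv::Vec3b>(y, xv) = color.at<cv::Vec3b>(y, x);
        }

    cv::imwrite("virtual_view_with_holes.png", virt);
    return 0;
}
```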
5: Data Analysis Methods:
Evaluation metrics include AVV (average variance value) for the depth extraction results and PSNR (peak signal-to-noise ratio) for the hole-filling quality.
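A minimal sketch of these metrics is shown below; cv::PSNR is OpenCV's built-in implementation, while the block-wise depth-variance score is only an assumed stand-in for AVV, whose exact definition is not given in this summary, and the file names are placeholders.

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat gt     = cv::imread("ground_truth_view.png");
    cv::Mat filled = cv::imread("hole_filled_view.png");
    cv::Mat depth  = cv::imread("depth_map.png", cv::IMREAD_GRAYSCALE);
    if (gt.empty() || filled.empty() || depth.empty()) return 1;

    // PSNR of the hole-filling result against the ground-truth view (higher is better).
    std::cout << "PSNR: " << cv::PSNR(gt, filled) << " dB\n";

    // Block-wise variance of the depth map as a rough AVV-style score.
    const int block = 16;
    double sumVar = 0.0;
    int count = 0;
    for (int y = 0; y + block <= depth.rows; y += block)
        for (int x = 0; x + block <= depth.cols; x += block) {
            cv::Scalar mean, stddev;
            cv::meanStdDev(depth(cv::Rect(x, y, block, block)), mean, stddev);
            sumVar += stddev[0] * stddev[0];
            ++count;
        }
    std::cout << "Average block variance: " << (count ? sumVar / count : 0.0) << "\n";
    return 0;
}
```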