Research Objective
To propose a new framework for image-based 3D model retrieval by reducing it to a Euclidean-to-Riemannian metric-learning problem, enabling efficient matching between 2D query images and 3D models.
Research Findings
The proposed framework effectively reduces the semantic gap between the Euclidean space of query images and the Riemannian space of 3D model representations, enabling high-precision image-based 3D model retrieval. The framework is flexible and can incorporate advanced image descriptors, including deep-learning features. Future work will extend it to multi-image-based matching.
Limitations
The method depends on the quality of the image descriptors and the rendering of 3D model views; set-to-set matching could improve stability and accuracy; performance may be limited by the semantic gap between sketches and rendered views.
1. Experimental Design and Method Selection
The framework models query images as points in Euclidean space and the rendered views of each 3D model as a symmetric positive definite (SPD) matrix on a Riemannian manifold, then maps both representations into a common Hilbert space using kernel methods. An optimization algorithm is then designed to learn the cross-space metric.
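To make the two representations concrete, here is a minimal sketch (not the paper's exact formulation) of how a set of view features might be summarized as an SPD matrix and embedded into a Hilbert space via a log-Euclidean Gaussian kernel; the ridge term `eps` and the bandwidth `gamma` are illustrative assumptions.

```python
import numpy as np

def spd_from_views(view_feats, eps=1e-3):
    """Summarize a 3D model's rendered-view features as an SPD matrix.

    view_feats: (n_views, d) array of per-view feature vectors.
    A small ridge eps keeps the covariance strictly positive definite.
    """
    X = view_feats - view_feats.mean(axis=0)
    cov = X.T @ X / max(len(view_feats) - 1, 1)
    return cov + eps * np.eye(cov.shape[1])

def log_euclidean_kernel(A, B, gamma=1.0):
    """Gaussian kernel on SPD matrices via the matrix logarithm,
    which flattens the manifold so a Hilbert-space embedding applies."""
    def logm(S):
        w, V = np.linalg.eigh(S)          # S is symmetric, eigh suffices
        return (V * np.log(w)) @ V.T      # V diag(log w) V^T
    d = np.linalg.norm(logm(A) - logm(B), 'fro')
    return np.exp(-gamma * d ** 2)
```

The kernel trick on the Euclidean side is the standard Gaussian RBF on raw feature vectors; only the Riemannian side needs the matrix-log flattening shown above.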
2. Sample Selection and Data Sources
The Princeton ModelNet40 dataset (12,311 shapes from 40 categories) and SHREC'13 dataset (1,258 models from 90 categories with sketch images) are used, with standard training-testing splits.
3. Experimental Equipment and Materials
Computational platform: Intel Core i7-2600 CPU, 16 GB memory, NVIDIA GTX TITAN X GPU (12 GB); software implemented in C++ and MATLAB.
4. Experimental Procedures and Operational Workflow
Offline stage: Render 3D models to obtain views, extract features (e.g., raw intensities or deep features from VGG-M/DeepFace), construct SPD matrices, and train the manifold learning algorithm. Online stage: For a query image or sketch, extract features, map to Hilbert space, and compute distances to retrieve similar 3D models.
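The online stage above can be sketched as a simple ranking step. In this hedged illustration, the projections `W` and `V` stand in for whatever mapping the paper's offline metric-learning step produces (they are hypothetical names, not the paper's API), and the gallery models are assumed to be stored as vectorized matrix-logs of their SPD descriptors.

```python
import numpy as np

def retrieve(query_feat, gallery_spd_logs, W, V, top_k=5):
    """Online stage sketch: rank 3D models for one query image.

    query_feat:       (d,) image/sketch descriptor (Euclidean side).
    gallery_spd_logs: (m, d2) vectorized matrix-logs of each model's SPD
                      descriptor, computed in the offline stage.
    W (d, k), V (d2, k): hypothetical projections assumed to come from
                      the offline metric-learning optimization.
    Returns indices of the top_k closest models.
    """
    q = query_feat @ W                     # query mapped to shared space
    G = gallery_spd_logs @ V               # models mapped to shared space
    dists = np.linalg.norm(G - q, axis=1)  # Euclidean distance after mapping
    return np.argsort(dists)[:top_k]
```

Because the gallery projections can be precomputed offline, each online query costs only one projection plus `m` distance evaluations.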
5. Data Analysis Methods
Performance is evaluated using mean average precision (mAP), precision-recall curves, and standard retrieval metrics such as nearest neighbor and first tier, and is compared against state-of-the-art methods.
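As a reference for the headline metric, here is a minimal mAP computation over ranked retrieval lists; this is the standard definition, not code from the paper.

```python
import numpy as np

def average_precision(ranked_labels, query_label):
    """AP for one query: ranked_labels holds the category label of each
    retrieved model, best match first. Precision is accumulated at every
    rank where a relevant (same-category) model appears."""
    hits, precisions = 0, []
    for rank, label in enumerate(ranked_labels, start=1):
        if label == query_label:
            hits += 1
            precisions.append(hits / rank)
    return float(np.mean(precisions)) if precisions else 0.0

def mean_average_precision(rankings, query_labels):
    """mAP: average the per-query AP over all queries."""
    return float(np.mean([average_precision(r, q)
                          for r, q in zip(rankings, query_labels)]))
```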