Research Objective
To tackle multi-modal spectral image super-resolution under the constraint of a small dataset, using different input modalities to improve neural network performance.
Research Findings
The proposed method effectively combines multi-modal inputs for spectral super-resolution, achieving improved performance with economical resource usage. Future work can explore additional modalities.
Research Limitations
The work is limited to the data provided by the PIRM2018 challenge, which contains a small number of images, potentially affecting generalization. The approach could be extended to other modalities, such as different scales, near-infrared, or depth inputs.
1: Experimental Design and Method Selection:
The methodology uses a residual learning framework for multi-modal spectral image super-resolution, comprising image completion for upscaling, a two-stage pipeline of residual networks, and loss functions combining MRAE and SID.
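The MRAE and SID terms named above can be sketched in NumPy. This is a minimal illustration, not the authors' implementation; the combination weight `lam` and the `eps` stabilizers are hypothetical placeholders, since the section does not give the actual values:

```python
import numpy as np

def mrae(gt, pred, eps=1e-8):
    """Mean Relative Absolute Error over all pixels and bands."""
    return float(np.mean(np.abs(gt - pred) / (gt + eps)))

def sid(gt, pred, eps=1e-8):
    """Spectral Information Divergence, averaged over pixels.

    gt, pred: non-negative arrays of shape (H, W, C); each pixel's
    spectrum is normalized to a probability vector before the
    symmetric KL-style divergence is taken along the band axis."""
    p = gt / (gt.sum(axis=-1, keepdims=True) + eps) + eps
    q = pred / (pred.sum(axis=-1, keepdims=True) + eps) + eps
    div = np.sum(p * np.log(p / q) + q * np.log(q / p), axis=-1)
    return float(np.mean(div))

def combined_loss(gt, pred, lam=1.0):
    # lam is a hypothetical weighting; the paper's weighting is unspecified here.
    return mrae(gt, pred) + lam * sid(gt, pred)
```

A perfect prediction drives both terms to (numerically) zero, while a uniform 10% over-prediction yields an MRAE of about 0.1 but almost no SID, since SID only penalizes changes in spectral *shape*, not scale.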
2: Sample Selection and Data Sources:
The dataset from PIRM2018 Spectral Image Challenge is used, with Track1 containing 240 spectral images and Track2 containing 130 image pairs of spectral and color images.
3: List of Experimental Equipment and Materials:
An NVIDIA Titan X GPU is used for computation.
4: Experimental Procedures and Operational Workflow:
The steps are: (1) preprocessing with image completion (the FAN algorithm) to generate HR candidates; (2) training the Stage-I network on multi-scale inputs; and (3) training the Stage-II network on guided color images with transfer learning.
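The data flow of the two-stage pipeline can be sketched as below. This is a wiring diagram only: pixel replication stands in for the FAN completion step, and the two residual networks are stubbed out as zero-residual callables, since their architectures are not detailed in this section:

```python
import numpy as np

def complete_upsample(lr, scale=3):
    """Placeholder for the FAN image-completion step; plain pixel
    replication is used here purely to illustrate the upscaling."""
    return np.repeat(np.repeat(lr, scale, axis=0), scale, axis=1)

def stage1(hr_candidate, residual_net):
    """Stage-I: predict a residual on top of the completed HR candidate."""
    return hr_candidate + residual_net(hr_candidate)

def stage2(stage1_out, color_guide, guided_net):
    """Stage-II: refine the Stage-I output using the registered color image."""
    return stage1_out + guided_net(stage1_out, color_guide)

# Toy inputs and zero-residual stand-in networks, to show shapes and flow.
lr = np.random.rand(8, 8, 14)      # low-res spectral cube (H, W, bands)
rgb = np.random.rand(24, 24, 3)    # registered color guide at HR resolution
cand = complete_upsample(lr)       # (24, 24, 14) HR candidate
out1 = stage1(cand, lambda x: np.zeros_like(x))
out2 = stage2(out1, rgb, lambda x, g: np.zeros_like(x))
```

Residual learning means each stage only has to model the correction to its input, which is what makes training feasible on the challenge's small dataset.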
5: Data Analysis Methods:
Evaluation is conducted on the validation sets using the MRAE, SID, and PSNR metrics.
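Of the three evaluation metrics, PSNR is the standard distortion measure; a minimal NumPy version is shown here for reference (the `data_range` default of 1.0 assumes normalized images, which the section does not state explicitly):

```python
import numpy as np

def psnr(gt, pred, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB; higher is better."""
    mse = np.mean((gt - pred) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(data_range ** 2 / mse))
```

For example, a constant absolute error of 0.1 on a unit-range image gives an MSE of 0.01 and hence a PSNR of 20 dB.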