Research Purpose
To restore underwater images by estimating background light and scene depth using deep networks.
Research Results
The proposed method effectively restores underwater images and outperforms state-of-the-art underwater image restoration methods on both synthetic and real underwater images, achieving higher PSNR and SSIM values and better UCIQE and UIQM scores.
Research Limitations
The study does not address the potential variability in underwater conditions that may affect the performance of the proposed method. Additionally, the training data is synthesized from indoor scenes, which may not fully represent the diversity of underwater environments.
1: Experimental Design and Method Selection:
The study uses a 5-layer ConvNet for background light estimation and a multi-scale deep network architecture for scene depth estimation.
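The summary gives only the depth of the background-light network (5 layers), so the following is a minimal TensorFlow/Keras sketch of what such a network could look like; the filter counts, kernel sizes, and the global-average-pooling head that produces a single RGB estimate are assumptions, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_background_light_net():
    """Sketch of a 5-layer ConvNet that regresses one RGB background-light vector.
    Filter counts, kernel sizes, and the pooling head are illustrative choices,
    not the paper's exact configuration."""
    inputs = tf.keras.Input(shape=(None, None, 3))
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(3, 3, padding="same")(x)      # fifth conv layer, 3 output channels
    outputs = layers.GlobalAveragePooling2D()(x)     # collapse to one (R, G, B) estimate
    return models.Model(inputs, outputs)

model = build_background_light_net()
model.summary()
```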
2: Sample Selection and Data Sources:
Synthetic underwater images are created using the NYU depth dataset v2, which contains 1449 pairs of aligned RGB and depth images of indoor scenes.
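The summary does not spell out how the synthetic images are generated. A common approach, and a plausible reading of the paper, is the simplified underwater image formation model I_c = J_c * t_c + B_c * (1 - t_c) with transmission t_c = exp(-beta_c * d); the sketch below applies that model to an aligned RGB-D pair. The per-channel attenuation coefficients and background light used here are illustrative values, not the paper's sampling ranges.

```python
import numpy as np

def synthesize_underwater(rgb, depth, beta=(0.7, 0.3, 0.1), background_light=(0.3, 0.7, 0.8)):
    """Sketch of underwater image synthesis from an aligned RGB-D pair
    (e.g., from NYU Depth v2) using the simplified formation model
    I_c = J_c * t_c + B_c * (1 - t_c), with t_c = exp(-beta_c * d).
    The beta and background-light values are illustrative, not the paper's ranges."""
    rgb = rgb.astype(np.float32) / 255.0                  # clean scene radiance J in [0, 1]
    beta = np.asarray(beta, dtype=np.float32)             # per-channel attenuation (R, G, B)
    B = np.asarray(background_light, dtype=np.float32)    # per-channel background light
    t = np.exp(-beta[None, None, :] * depth[:, :, None])  # per-channel transmission map
    underwater = rgb * t + B[None, None, :] * (1.0 - t)   # direct signal + backscatter
    return np.clip(underwater, 0.0, 1.0), t

# Example with random data standing in for one NYU Depth v2 RGB-D pair
rgb = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
depth = np.random.uniform(0.5, 10.0, (480, 640)).astype(np.float32)  # metres
img, t = synthesize_underwater(rgb, depth)
```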
3: List of Experimental Equipment and Materials:
The study uses TensorFlow with GPU acceleration for training the networks.
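As a quick sanity check for this setup, the lines below verify that the installed TensorFlow build can actually see a GPU before training starts.

```python
import tensorflow as tf

# List the GPUs visible to TensorFlow; an empty list means training would fall back to CPU.
gpus = tf.config.list_physical_devices("GPU")
print(f"GPUs available: {len(gpus)}")
```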
4: Experimental Procedures and Operational Workflow:
The networks are trained using 12,000 synthetic underwater images, with the Adam optimizer and a learning rate set to 10^-
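The learning-rate exponent is truncated above, so the value in the sketch below (1e-4), together with the MSE loss, batch size, and the tiny stand-in model, are assumptions; the snippet only illustrates how such a training run would be wired up with the Adam optimizer in TensorFlow.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Tiny stand-in model; in practice this would be the background-light or
# depth network described earlier in this section.
model = models.Sequential([
    tf.keras.Input(shape=(None, None, 3)),
    layers.Conv2D(16, 3, padding="same", activation="relu"),
    layers.Conv2D(3, 3, padding="same"),
    layers.GlobalAveragePooling2D(),
])

# The learning-rate value, MSE loss, batch size, and epoch count are assumed
# for illustration; the exact exponent is not given in the summary above.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss="mse")

# synthetic_images: (N, H, W, 3) underwater inputs; targets: (N, 3) labels
# model.fit(synthetic_images, targets, batch_size=16, epochs=20)
```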
5: Data Analysis Methods:
The effectiveness of the proposed method is evaluated using mean squared error (MSE), peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), and the UCIQE and UIQM underwater image quality metrics.
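For the full-reference metrics, TensorFlow already ships PSNR and SSIM implementations; the sketch below shows how a restored result could be scored against its synthetic ground truth. UCIQE and UIQM are no-reference underwater quality measures with no standard TensorFlow implementation, so they are not reproduced here.

```python
import tensorflow as tf

def full_reference_scores(reference, restored, max_val=1.0):
    """PSNR and SSIM between a ground-truth image and a restored image,
    both float tensors of shape (H, W, 3) scaled to [0, max_val]."""
    psnr = tf.image.psnr(reference, restored, max_val=max_val)
    ssim = tf.image.ssim(reference, restored, max_val=max_val)
    return float(psnr.numpy()), float(ssim.numpy())

# Example with random tensors standing in for a ground-truth / restored pair.
reference = tf.random.uniform((480, 640, 3))
restored = tf.clip_by_value(reference + tf.random.normal((480, 640, 3), stddev=0.05), 0.0, 1.0)
print(full_reference_scores(reference, restored))
```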