Research Objective
To propose a deep bilinear model for blind image quality assessment (BIQA) that works for both synthetically and authentically distorted images.
Research Findings
The proposed DB-CNN model demonstrates state-of-the-art performance on both synthetic and authentic IQA databases, which arises from the two-stream architecture for variation modeling, pre-training for better initializations, and bilinear pooling for meaningful feature blending. The model is versatile and extensible, capable of handling more distortion types and levels, and can be improved by considering other variants of bilinear pooling.
Limitations
The current work deals with synthetic and authentic distortions separately by fine-tuning DB-CNN on either synthetic or authentic databases. How to extend DB-CNN toward a more unified BIQA model, especially in the early feature extraction stage, is an interesting direction yet to be explored.
1. Experimental Design and Method Selection
The model consists of two streams of deep convolutional neural networks (CNNs), one tailored to synthetic and one to authentic distortions. For synthetic distortions, a CNN is pre-trained to classify the distortion type and level. For authentic distortions, a CNN pre-trained on image classification (VGG-16) is used. The two feature sets are bilinearly pooled into a single representation for the final quality prediction.
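The bilinear pooling step can be sketched as follows. This is a minimal NumPy illustration of the general bilinear-pooling operation (outer product of two feature maps accumulated over spatial locations, followed by the signed-square-root and L2 normalization common in bilinear CNNs); the function name, feature shapes, and normalization constants are illustrative, not taken from the DB-CNN implementation.

```python
import numpy as np

def bilinear_pool(x, y):
    """Bilinear pooling of two feature maps.

    x: (C1, N) features from the synthetic-distortion stream
    y: (C2, N) features from the authentic-distortion stream,
       where N is the number of spatial locations (H*W).
    Returns a signed-sqrt, L2-normalized vector of length C1*C2.
    """
    # Outer product of channel features, averaged over spatial locations
    b = (x @ y.T) / x.shape[1]            # shape (C1, C2)
    v = b.flatten()
    # Signed square-root and L2 normalization (standard in bilinear CNNs)
    v = np.sign(v) * np.sqrt(np.abs(v))
    return v / (np.linalg.norm(v) + 1e-12)

# Toy example: 8- and 16-channel feature maps over 49 spatial positions
rng = np.random.default_rng(0)
feat_a = rng.standard_normal((8, 49))
feat_b = rng.standard_normal((16, 49))
pooled = bilinear_pool(feat_a, feat_b)
print(pooled.shape)  # (128,)
```

Note that the pooled dimensionality is the product of the two channel counts, which is why bilinear pooling blends the two streams into one joint representation rather than simply concatenating them.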
2. Sample Selection and Data Sources
The pre-training set is constructed based on the Waterloo Exploration Database and PASCAL VOC 2012, where images are synthesized with nine distortion types and two to five distortion levels.
3. Experimental Equipment and Materials
The model is implemented using the MatConvNet toolbox.
4. Experimental Procedures and Operational Workflow
The proposed DB-CNN is fine-tuned on target databases with a variant of the stochastic gradient descent method.
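The "variant of stochastic gradient descent" can be illustrated with classical momentum, one common such variant; the actual optimizer and hyperparameters used in the paper may differ, and the learning rate, momentum coefficient, and toy objective below are assumptions for demonstration only.

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr=0.1, momentum=0.9):
    """One parameter update of SGD with momentum, a common variant
    of stochastic gradient descent used for fine-tuning networks."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Toy example: minimize f(w) = ||w||^2 / 2, whose gradient is w itself
w = np.array([1.0, -2.0])
v = np.zeros_like(w)
for _ in range(200):
    w, v = sgd_momentum_step(w, grad=w, velocity=v)
print(np.linalg.norm(w))  # converges toward 0
```

In actual fine-tuning, `grad` would be the backpropagated gradient of the quality-prediction loss on a mini-batch from the target IQA database.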
5. Data Analysis Methods
The performance is evaluated using Spearman rank order correlation coefficient (SRCC) and Pearson linear correlation coefficient (PLCC).
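Both correlation metrics are available in SciPy. The sketch below computes SRCC and PLCC between hypothetical model predictions and ground-truth mean opinion scores (the numbers are invented for illustration, not results from the paper).

```python
import numpy as np
from scipy import stats

# Hypothetical predicted quality scores and ground-truth MOS values
predicted = np.array([3.1, 4.5, 2.2, 3.8, 4.9, 1.7])
mos       = np.array([3.0, 4.4, 2.5, 3.6, 5.0, 2.0])

srcc, _ = stats.spearmanr(predicted, mos)  # rank-order (monotonicity)
plcc, _ = stats.pearsonr(predicted, mos)   # linear correlation (accuracy)
print(f"SRCC = {srcc:.3f}, PLCC = {plcc:.3f}")
```

SRCC measures prediction monotonicity (only ranks matter), while PLCC measures linear agreement with the subjective scores; IQA studies typically report both.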