Research Objective
To estimate the quality of decoded videos from the original video alone using a convolutional neural network, avoiding the computationally expensive process of fully encoding and decoding the video.
Research Findings
The proposed convolutional neural network accurately predicts decoded-video quality from the original video alone, offering a computationally efficient alternative to conventional full encode-decode measurement. This benefits quality-guarantee tasks in applications such as cloud video hosting and video-on-demand.
Research Limitations
The study is limited by the size of the dataset and the computational resources required to train the CNN, and the method's accuracy may vary across video contents and quality metrics.
1: Experimental Design and Method Selection:
The study employs a convolutional neural network (CNN) to predict decoded-video quality from the original video alone. The network is deliberately shallow to cope with the large size of video inputs and the small training dataset.
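The summary does not specify the network's exact layers, so the following is only a minimal numpy sketch of what a shallow network over a spatio-temporal block could look like: one 3-D convolution, global average pooling, ReLU, and a linear head producing one quality value per QP point. All layer sizes, filter counts, and the five-point output are assumptions for illustration, not the authors' architecture.

```python
import numpy as np

def conv3d(x, kernel):
    """Valid (no-padding) 3-D convolution of a single-channel block (T, H, W)."""
    kt, kh, kw = kernel.shape
    T, H, W = x.shape
    out = np.empty((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[t, i, j] = np.sum(x[t:t+kt, i:i+kh, j:j+kw] * kernel)
    return out

def shallow_forward(block, kernels, w_fc, b_fc):
    """Sketch: conv layer -> global average pool -> ReLU -> linear head."""
    feats = np.array([conv3d(block, k).mean() for k in kernels])  # one pooled value per filter
    feats = np.maximum(feats, 0.0)                                # ReLU
    return w_fc @ feats + b_fc                                    # one predicted value per QP

rng = np.random.default_rng(0)
block = rng.random((8, 16, 16))            # tiny spatio-temporal block (T, H, W), assumed size
kernels = rng.random((4, 3, 3, 3)) - 0.5   # 4 hypothetical 3x3x3 filters
w_fc = rng.random((5, 4)) - 0.5            # head: 4 features -> 5 QP points (assumed)
b_fc = np.zeros(5)
vqa = shallow_forward(block, kernels, w_fc, b_fc)
print(vqa.shape)
```

With random weights the output is meaningless; the point is only the shape of the computation: a small block in, a fixed-length quality vector over QPs out.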
2: Sample Selection and Data Sources:
The proposed method is trained and validated on 220 videos of resolution 640×360 from the VideoSET dataset, which covers a variety of video contents.
3: List of Experimental Equipment and Materials:
The study encodes videos with H.264/AVC over a range of quantization parameters (QPs).
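The summary does not state the QP range, so a sweep like the one below is an assumption. The sketch only builds ffmpeg command lines for constant-QP libx264 encoding (a standard way to produce H.264/AVC bitstreams at fixed QPs); file names and the QP set are hypothetical, and nothing is executed here.

```python
# Assumed QP sweep; the paper's actual QP range is not given in this summary.
qps = range(22, 43, 5)  # 22, 27, 32, 37, 42

def encode_cmd(src, qp):
    """ffmpeg command line for constant-QP H.264/AVC encoding via libx264."""
    stem = src.rsplit(".", 1)[0]
    return ["ffmpeg", "-y", "-i", src,
            "-c:v", "libx264", "-qp", str(qp),
            f"{stem}_qp{qp}.mp4"]

cmds = [encode_cmd("video.y4m", qp) for qp in qps]  # hypothetical input file
print(len(cmds))
```

Running one such command per QP (e.g. via `subprocess.run`) yields the set of decoded videos whose measured quality serves as ground truth for training.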
4: Experimental Procedures and Operational Workflow:
Each video is divided into spatio-temporal blocks, each of which is processed independently by the VQANet to predict a per-block VQA vector. The VQA vector for the whole video is obtained by element-wise averaging the vectors of all blocks.
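The block-then-average workflow can be sketched as follows; the block dimensions are assumptions, and the stand-in model is a placeholder for the trained VQANet.

```python
import numpy as np

def split_blocks(video, bt, bh, bw):
    """Split a (T, H, W) video into non-overlapping spatio-temporal blocks."""
    T, H, W = video.shape
    return [video[t:t+bt, i:i+bh, j:j+bw]
            for t in range(0, T - bt + 1, bt)
            for i in range(0, H - bh + 1, bh)
            for j in range(0, W - bw + 1, bw)]

def predict_video_vqa(video, block_model, bt=8, bh=64, bw=64):
    """Predict per-block VQA vectors, then element-wise average them per video."""
    vectors = [block_model(b) for b in split_blocks(video, bt, bh, bw)]
    return np.mean(vectors, axis=0)

# Stand-in for the trained VQANet: mean block intensity repeated over 5 QP points.
toy_model = lambda b: np.full(5, b.mean())
video = np.zeros((16, 128, 128))  # toy video, assumed dimensions
vqa_video = predict_video_vqa(video, toy_model)
print(vqa_video)
```

Because every block is processed independently, block predictions can be batched or parallelized before the final averaging step.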
5: Data Analysis Methods:
Prediction accuracy is evaluated using the sum of absolute differences (SAD) and the cross-correlation (xcorr) between the ground-truth and predicted VQA values.
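Both metrics are straightforward to compute over the two VQA vectors. The sketch below uses a standard normalized zero-lag cross-correlation, which may differ in detail from the authors' exact definition; the sample curves are hypothetical PSNR-over-QP values, not data from the paper.

```python
import numpy as np

def sad(gt, pred):
    """Sum of absolute differences between ground-truth and predicted VQA vectors."""
    return float(np.abs(np.asarray(gt) - np.asarray(pred)).sum())

def xcorr(gt, pred):
    """Normalized zero-lag cross-correlation (assumed definition) of the two curves."""
    gt = np.asarray(gt, float) - np.mean(gt)
    pred = np.asarray(pred, float) - np.mean(pred)
    return float(gt @ pred / (np.linalg.norm(gt) * np.linalg.norm(pred)))

gt   = np.array([40.1, 37.5, 34.2, 30.8, 27.0])  # hypothetical quality-over-QP curve
pred = np.array([39.8, 37.9, 34.0, 31.1, 27.3])
print(sad(gt, pred), xcorr(gt, pred))
```

Lower SAD indicates smaller pointwise error, while xcorr close to 1 indicates the predicted curve tracks the shape of the ground-truth curve across QPs.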