Research Purpose
To develop a deep learning method for illuminant estimation that does not require ground-truth illuminant annotations for training, instead using an auxiliary task, object recognition, to train the model indirectly.
Research Results
The proposed method, trained without ground-truth illuminants via an auxiliary object recognition task, achieves competitive results in cross-dataset evaluations: it outperforms some learned methods and matches others. This demonstrates that illuminant estimation can be learned indirectly, with potential extensions to more complex scenarios such as multiple illuminants in future work.
Research Limitations
The method may be biased by the illuminant distribution of the training dataset, potentially leading to non-neutral predictions. High illuminant variability during pre-training could make the Object Recognition (OR) module too robust to color casts, weakening the classification-loss signal that drives Illuminant Estimation (IE) training. In in-dataset evaluations, it is outperformed by methods trained on ground-truth illuminants.
1:Experimental Design and Method Selection:
The method couples an Illuminant Estimation (IE) module with an Object Recognition (OR) module, trained end-to-end. The IE module estimates the scene illuminant and corrects the image, which is then fed to the OR module for classification; training uses only object recognition labels, never illuminant ground truth (see the sketch below).
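The structure can be illustrated with a short PyTorch sketch. Everything below is an assumption-laden illustration rather than the paper's implementation: the `IlluminantEstimator` layer sizes and class names are hypothetical, torchvision's `alexnet` stands in for the AlexNet-based OR module, and a diagonal (von Kries) correction is assumed for the image-correction step.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import alexnet

class IlluminantEstimator(nn.Module):
    """Small CNN regressing a 3-channel illuminant; layer sizes are illustrative."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, 3)

    def forward(self, x):
        v = self.fc(self.features(x).flatten(1))
        # Positive, unit-norm output: only the illuminant *direction* matters.
        return F.normalize(torch.abs(v) + 1e-6, dim=1)

class IEORPipeline(nn.Module):
    """IE module -> diagonal (von Kries) correction -> OR classifier.

    Trained end-to-end with only object labels: the cross-entropy loss on the
    OR output back-propagates through the correction into the IE module.
    """
    def __init__(self, num_classes: int = 200):
        super().__init__()
        self.ie = IlluminantEstimator()
        self.or_module = alexnet(num_classes=num_classes)  # AlexNet-based OR stand-in

    def forward(self, x):
        ill = self.ie(x)                       # (B, 3) estimated illuminants
        corrected = x / ill.view(-1, 3, 1, 1)  # per-channel von Kries correction
        return self.or_module(corrected), ill

# End-to-end training step driven only by object labels:
model = IEORPipeline(num_classes=200)
imgs = torch.rand(4, 3, 224, 224)           # stand-in batch
labels = torch.randint(0, 200, (4,))
logits, estimated_ill = model(imgs)
F.cross_entropy(logits, labels).backward()  # gradients reach the IE module
```

Because the correction divides the input by the IE output, the classification loss depends on the estimated illuminant, which is what lets object labels supervise illuminant estimation indirectly.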
2:Sample Selection and Data Sources:
The VegFru dataset, comprising over 90,000 images across 200 vegetable classes, is used for training, since color is a discriminating feature for these classes. Color constancy evaluation is performed on the Shi-Gehler and NUS datasets.
3:List of Experimental Equipment and Materials:
A neural network based on the AlexNet architecture; an NVIDIA Titan X Pascal GPU (donated) for computation.
4:Experimental Procedures and Operational Workflow:
Pre-train the OR module on VegFru without color jittering. Then train IE and OR end-to-end with color jittering (random illuminant augmentation sampled from a Gaussian distribution, sketched below). During inference, only the IE module is used. Images are preprocessed by subtracting the black level.
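A minimal sketch of the color-jittering step, assuming per-channel gains drawn from a Gaussian around neutral; the function name and `sigma` value are hypothetical, since the exact augmentation parameters are not given here:

```python
import torch

def jitter_illuminant(img: torch.Tensor, sigma: float = 0.2) -> torch.Tensor:
    """Apply a random synthetic illuminant to a (3, H, W) image tensor.

    Per-channel gains are drawn from a Gaussian around 1.0, clamped positive,
    and rescaled to roughly preserve brightness; `sigma` is an assumed spread,
    not a value taken from the paper.
    """
    gains = (1.0 + sigma * torch.randn(3)).clamp(min=0.1)
    gains = gains * (3.0 ** 0.5) / gains.norm()  # same norm as neutral (1, 1, 1)
    return (img * gains.view(3, 1, 1)).clamp(0.0, 1.0)

# The random gains act as a known color cast: the OR loss stays low only if
# the IE module learns to undo casts like these.
augmented = jitter_illuminant(torch.rand(3, 224, 224))
```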
5:Data Analysis Methods:
The angular recovery error between estimated and reference illuminants is used as the evaluation metric. Results are compared with state-of-the-art methods, including both parametric and learned approaches.
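The angular recovery error is the angle between the estimated and reference illuminant RGB vectors, err = arccos(⟨e_est, e_ref⟩ / (‖e_est‖ ‖e_ref‖)), reported in degrees. A minimal NumPy implementation:

```python
import numpy as np

def angular_error(estimate, reference):
    """Recovery angular error (degrees) between two illuminant RGB vectors."""
    e = np.asarray(estimate, dtype=float)
    r = np.asarray(reference, dtype=float)
    cos = np.dot(e, r) / (np.linalg.norm(e) * np.linalg.norm(r))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# A slightly warm estimate against a neutral reference:
print(angular_error([1.2, 1.0, 0.8], [1.0, 1.0, 1.0]))  # ~9.3 degrees
```

The metric is invariant to illuminant magnitude, which is why only the direction of the estimate matters.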