Research Objective
To address the difficulty and expense of obtaining large amounts of labeled data for multispectral image change detection, the paper proposes a generative discriminatory classified network (GDCN) that exploits labeled data, unlabeled data, and generated fake data to improve detection performance when labeled samples are scarce.
Research Results
The proposed GDCN effectively addresses the challenge of limited labeled data in multispectral image change detection by incorporating unlabeled and generated data through adversarial training, achieving competitive accuracy and robustness on real datasets. Because the labeled samples are obtained automatically by preclassification rather than manual annotation, the method is unsupervised in practice and thus suitable for real-world applications. Future work could focus on further reducing the dependency on preclassification and enhancing generalization across diverse datasets.
Research Limitations
The method relies on preclassification for initial labeled and unlabeled data, which may introduce errors if the preclassification is inaccurate. The performance is dependent on the quality and quantity of training data, and the network structure may require tuning for different datasets. Computational resources are needed for training deep networks, which could be a constraint in resource-limited settings.
1:Experimental Design and Method Selection:
The methodology involves designing a GDCN consisting of a discriminatory classified network (DCN) and a generator based on generative adversarial networks (GANs). The DCN classifies input data into changed, unchanged, and fake classes, while the generator creates synthetic data from noise to augment training. The approach uses adversarial training to enhance the DCN's ability to learn from limited labeled data and unlabeled data.
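The overall architecture can be pictured as a standard GAN in which the discriminator is replaced by a three-class classifier. The following is a minimal PyTorch-style sketch, assuming flattened bitemporal patches as input and fully connected layers; the framework choice, layer widths, and patch size are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a noise vector to a fake bitemporal patch vector (inputs assumed scaled to [-1, 1])."""
    def __init__(self, noise_dim=100, out_dim=2 * 4 * 9 * 9):  # 2 dates x 4 bands x 9x9 patch (assumed)
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, out_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

class DCN(nn.Module):
    """Discriminatory classified network: three-way output (changed / unchanged / fake)."""
    def __init__(self, in_dim=2 * 4 * 9 * 9, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, n_classes),  # logits for [changed, unchanged, fake]
        )

    def forward(self, x):
        return self.net(x)
```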
2:Sample Selection and Data Sources:
Four real multispectral remote sensing image datasets are used: the Yandu Village, Weihe River, Minfeng, and Hongqi Canal datasets. They are acquired by the WorldView-2 and GF-1 satellites and include bitemporal images with ground-truth reference maps. Labeled and unlabeled samples are selected through a preclassification step based on change vector analysis (CVA) and Otsu thresholding.
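As an illustration of this preclassification step, the sketch below computes a CVA magnitude image and a global Otsu threshold with NumPy; the band layout and this particular Otsu implementation are assumptions made for clarity, not the paper's exact code.

```python
import numpy as np

def cva_magnitude(img_t1, img_t2):
    """Change vector analysis: per-pixel Euclidean norm of the spectral difference.
    img_t1, img_t2: co-registered bitemporal images of shape (H, W, bands)."""
    diff = img_t2.astype(np.float64) - img_t1.astype(np.float64)
    return np.sqrt((diff ** 2).sum(axis=-1))

def otsu_threshold(values, nbins=256):
    """Global Otsu threshold that maximizes between-class variance."""
    hist, edges = np.histogram(values.ravel(), bins=nbins)
    hist = hist.astype(np.float64)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(hist)                  # cumulative weight of class 0 (<= threshold)
    w1 = hist.sum() - w0                  # weight of class 1 (> threshold)
    cum_mean = np.cumsum(hist * centers)
    m0 = cum_mean / np.maximum(w0, 1e-12)
    m1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (m0 - m1) ** 2    # between-class variance for each candidate cut
    return centers[np.argmax(between)]

# Initial pseudo-change map: 1 = changed, 0 = unchanged
# magnitude = cva_magnitude(img_t1, img_t2)
# initial_map = (magnitude > otsu_threshold(magnitude)).astype(np.uint8)
```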
3:List of Experimental Equipment and Materials:
The experiments are run on an AMAX workstation with a Tesla K40c 12-GB GPU, using Python 3.6. No specific brands or models for other equipment are mentioned; the focus is on computational resources and software.
4:Experimental Procedures and Operational Workflow:
The workflow includes: (a) preprocessing the bitemporal images and generating initial change detection results with CVA and Otsu thresholding; (b) selecting reliable labeled and unlabeled samples according to neighborhood criteria; (c) training the GDCN on the labeled data, unlabeled data, and generated fake data using minibatch stochastic gradient descent with the Adam optimizer; and (d) testing the trained DCN on the raw bitemporal images to produce the final change maps. Parameters such as λ (the weight of the unlabeled and fake-data terms), α (the proportion of labeled data), and ω (the neighborhood size) are optimized.
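To make step (c) concrete, the following is a hedged sketch of one training iteration under a common semi-supervised GAN formulation: labeled samples receive the usual cross-entropy loss over the changed/unchanged classes, unlabeled samples are penalized for being assigned to the fake class, generated samples are pushed toward the fake class, and λ weights the unlabeled and fake terms. The class ordering, loss form, and helper names are assumptions; the paper's exact objective may differ.

```python
import torch
import torch.nn.functional as F

FAKE = 2  # assumed class indices: 0 = changed, 1 = unchanged, 2 = fake

def train_step(dcn, gen, opt_d, opt_g, x_lab, y_lab, x_unlab, lam=0.5, noise_dim=100):
    """One minibatch update of the DCN and the generator (Adam optimizers assumed)."""
    device = x_lab.device
    z = torch.randn(x_unlab.size(0), noise_dim, device=device)
    x_fake = gen(z)

    # DCN update: labeled cross-entropy + lam * (unlabeled "not fake" + fake "is fake")
    opt_d.zero_grad()
    loss_lab = F.cross_entropy(dcn(x_lab), y_lab)
    p_unlab = F.softmax(dcn(x_unlab), dim=1)
    loss_unlab = -torch.log(1.0 - p_unlab[:, FAKE] + 1e-8).mean()   # unlabeled data is real
    loss_fake = F.cross_entropy(
        dcn(x_fake.detach()),
        torch.full((x_fake.size(0),), FAKE, dtype=torch.long, device=device))
    d_loss = loss_lab + lam * (loss_unlab + loss_fake)
    d_loss.backward()
    opt_d.step()

    # Generator update: fool the DCN into assigning the real (non-fake) classes
    opt_g.zero_grad()
    p_fake = F.softmax(dcn(x_fake), dim=1)
    g_loss = -torch.log(1.0 - p_fake[:, FAKE] + 1e-8).mean()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```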
5:Data Analysis Methods:
Performance is evaluated using metrics such as false negative (FN), false positive (FP), true positive (TP), true negative (TN), overall error (OE), overall accuracy (OA), and kappa coefficient (KC). Comparative analysis with methods like PCA, IR-MAD, DNN, and GAND is conducted.
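For reference, these metrics can all be derived from the binary confusion matrix of the predicted change map against the ground truth; the sketch below uses the standard definitions of overall error, overall accuracy, and the kappa coefficient.

```python
import numpy as np

def change_detection_metrics(pred, ref):
    """pred, ref: binary maps (1 = changed, 0 = unchanged) of identical shape."""
    pred, ref = pred.astype(bool).ravel(), ref.astype(bool).ravel()
    tp = np.sum(pred & ref)       # changed pixels correctly detected
    tn = np.sum(~pred & ~ref)     # unchanged pixels correctly detected
    fp = np.sum(pred & ~ref)      # false alarms
    fn = np.sum(~pred & ref)      # missed changes
    n = pred.size
    oe = fp + fn                  # overall error
    oa = (tp + tn) / n            # overall accuracy
    # Chance agreement, then kappa coefficient
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / (n * n)
    kc = (oa - pe) / (1.0 - pe)
    return {"TP": tp, "TN": tn, "FP": fp, "FN": fn, "OE": oe, "OA": oa, "KC": kc}
```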