Research Objective
To improve the robustness of neural networks against different adversarial noise models, which can markedly degrade the performance of a network that otherwise performs well on normal (unperturbed) test data.
Research Findings
The proposed K-Support norm based training method shows a significant improvement in robustness to adversarial noise compared with state-of-the-art techniques. However, improved robustness does not necessarily translate into better generalization performance.
Limitations
Training a neural network with a given noise model may not always improve accuracy on both the perturbed and the normal test set. The K-Support method is not particularly robust against uniform random noise.
1:Experimental Design and Method Selection:
The study involves generating adversarial noise based on the K-support norm and using it to train neural networks. The methodology includes multi-layer perceptrons and convolutional neural networks on the MNIST and STL-10 datasets.
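The paper's exact noise generator is not reproduced here, but the idea can be illustrated with a minimal sketch: the steepest-ascent perturbation under a K-support norm budget is supported on the k largest-magnitude entries of the input gradient. The function name, the `eps` budget, and `k` below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def k_support_perturbation(grad, eps=0.1, k=5):
    """Sketch of a K-support-norm-budgeted adversarial step.

    Maximizing <grad, delta> over a K-support norm ball yields a
    perturbation supported on the k largest-magnitude gradient
    coordinates; this is a simplified stand-in for the paper's method.
    """
    flat = grad.ravel()
    idx = np.argsort(np.abs(flat))[-k:]            # top-k coordinates
    delta = np.zeros_like(flat)
    topk = flat[idx]
    delta[idx] = eps * topk / (np.linalg.norm(topk) + 1e-12)
    return delta.reshape(grad.shape)
```

Note the contrast with sign-based noise, which perturbs every pixel: a K-support perturbation concentrates the budget on the few most loss-sensitive coordinates.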
2:Sample Selection and Data Sources:
The MNIST dataset contains 28×28 grey-scale images of handwritten digits, with 50,000 samples for training and 10,000 samples for testing. The STL-10 dataset contains 96×96-pixel RGB images of 10 object classes, cropped to 48×48 pixels, converted to grey-scale, and normalized.
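The STL-10 preprocessing described above can be sketched as follows. The crop location (center) and the normalization scheme (per-image zero mean, unit variance) are assumptions; the paper may use different choices.

```python
import numpy as np

def preprocess_stl10(img):
    """Center-crop a 96x96 RGB STL-10 image to 48x48, convert to
    grey-scale, and normalize per image (assumed zero mean / unit std)."""
    h, w = img.shape[:2]
    top, left = (h - 48) // 2, (w - 48) // 2
    crop = img[top:top + 48, left:left + 48]
    grey = crop @ np.array([0.299, 0.587, 0.114])  # standard luminance weights
    return (grey - grey.mean()) / (grey.std() + 1e-8)
```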
3:List of Experimental Equipment and Materials:
MXNET was used to train all models.
4:Experimental Procedures and Operational Workflow:
The experimental evaluation consisted of three main parts: generation of adversarial samples, training of the network on the perturbed samples, and testing of the network on both a normal and a perturbed test set.
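The three-part workflow can be sketched end to end on a toy problem. The logistic-regression model, the Gaussian blob data, and the sign-based noise below are illustrative placeholders (the paper trains MLPs/CNNs with K-support noise); only the generate–train–test structure mirrors the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image dataset: two separable Gaussian blobs.
X = np.vstack([rng.normal(-1, 1, (100, 5)), rng.normal(1, 1, (100, 5))])
y = np.array([0] * 100 + [1] * 100)

w, b, eps, lr = np.zeros(5), 0.0, 0.1, 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # (1) generate adversarial samples from the input gradient
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w          # d(loss)/d(input)
    X_adv = X + eps * np.sign(grad_x)      # placeholder sign-based noise
    # (2) train on the perturbed samples
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
    b -= lr * (p_adv - y).mean()

# (3) test on the clean set (a perturbed test set is evaluated the same way)
clean_acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
```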
5:Data Analysis Methods:
The performance of the neural network trained with the proposed noise model was compared against several other training methods, including normal training (no noise in the training data), dropout, and Goodfellow's method.