Research Objective
To employ Convolutional Neural Networks to monitor protected areas and natural reserves for illegal activities using drone images, distinguishing classes such as water, deforesting area, forest, and buildings.
Research Findings
A transfer-learning approach with Convolutional Neural Networks is effective for environmental monitoring with drone images, achieving results that domain experts judged reasonable. VGG19 performed slightly better on challenging classes, such as buildings near swimming pools. Future work includes expanding the dataset and adding more classes to improve accuracy.
Research Limitations
Lack of labeled data for training; small dataset (100 samples); qualitative evaluation only, owing to the absence of ground-truth labels for the test images; class confusion (e.g., swimming pools misclassified as water); and dependence on manual patch extraction and expert analysis.
1: Experimental Design and Method Selection:
A transfer-learning approach using VGG16 and VGG19 models pre-trained on ImageNet and fine-tuned on a custom dataset. The methodology involves extracting patches from drone images, resizing them, and training new classification layers.
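The setup above can be sketched in Keras as a frozen VGG16 convolutional base with a new classification head trained on top. This is a minimal sketch under stated assumptions, not the authors' code: the `build_transfer_model` helper, the dense-layer width, and the learning rate are illustrative choices.

```python
# Minimal transfer-learning sketch in Keras (assumed setup, not the paper's code).
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import VGG16

def build_transfer_model(n_classes=4, weights="imagenet"):
    # VGG16 convolutional base; weights="imagenet" loads the pre-trained
    # features (pass weights=None to skip the download for a dry run).
    base = VGG16(weights=weights, include_top=False, input_shape=(150, 150, 3))
    base.trainable = False  # freeze the pre-trained feature extractor

    # New classification layers trained on the custom 4-class dataset.
    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),  # width is an illustrative choice
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=optimizers.RMSprop(learning_rate=1e-4),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Swapping `VGG16` for `VGG19` changes only the imported base model; the head and training configuration stay the same.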
2: Sample Selection and Data Sources:
Manually created and labeled dataset of 100 samples (25 per class) extracted from drone images captured in a countryside area of São Paulo State, Brazil, focusing on challenging scenarios such as shadows, algae, and mud.
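With 25 patches per class, building the label array for such a dataset is mechanical. A minimal NumPy sketch, assuming the samples are grouped by class in a fixed order and the labels are one-hot encoded (the class list mirrors this summary, not identifiers from the paper):

```python
import numpy as np

# Class order is an assumption; the summary lists these four classes.
CLASSES = ["water", "deforesting area", "forest", "buildings"]

def one_hot_labels(samples_per_class=25, n_classes=len(CLASSES)):
    # Integer labels 0,0,...,1,1,... grouped by class, then one-hot encoded.
    y = np.repeat(np.arange(n_classes), samples_per_class)
    return np.eye(n_classes)[y]

labels = one_hot_labels()  # shape (100, 4): 25 ones per class column
```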
3: List of Experimental Equipment and Materials:
Drone with an onboard camera for image capture; a computer running the Keras framework for the neural-network implementation.
4: Experimental Procedures and Operational Workflow:
Images were captured by drone, and patches of different sizes were extracted and resized to 150x150 pixels. The neural models were fine-tuned for 50 epochs using the RMSprop optimizer and a cross-entropy loss function, with 96 samples for training and 4 for validation. Predictions were made on unseen drone images by splitting them into patches.
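Splitting a test image into patches for prediction can be sketched with plain NumPy. This sketch assumes a non-overlapping 150x150 grid and drops partial tiles at the edges; the paper itself extracts patches of different sizes and resizes them, so treat this as one simple realization:

```python
import numpy as np

PATCH = 150  # matches the 150x150 input size used for training

def split_into_patches(image, patch=PATCH):
    """Split an H x W x C image into non-overlapping patch x patch tiles,
    dropping any partial tiles at the right and bottom edges."""
    h, w = image.shape[:2]
    tiles = [image[y:y + patch, x:x + patch]
             for y in range(0, h - patch + 1, patch)
             for x in range(0, w - patch + 1, patch)]
    return np.stack(tiles)

img = np.zeros((450, 600, 3), dtype=np.uint8)  # dummy drone frame
patches = split_into_patches(img)  # 3 rows x 4 columns -> 12 tiles
```

Each tile can then be passed to the trained model's `predict`, and the per-patch labels reassembled into a class map over the original image.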
5: Data Analysis Methods:
Qualitative analysis by domain experts; recognition rates during training were around 95%.