Research Objective
Investigating the performance of experimentally defined, reduced complexity deep convolutional neural network architectures for fire detection.
Research Findings
Reduced-complexity CNN architectures, experimentally derived from leading architectures in the field, can achieve high accuracy on the binary classification task of fire detection. These architectures significantly outperform prior work on non-temporal fire detection at lower complexity than earlier CNN-based fire detection, offering classification accuracy within 1% of their more complex parent architectures while running 3-4× faster.
Limitations
The study focuses on non-temporal fire detection, which may not capture all aspects of fire behavior that temporal information could provide. Additionally, the computational performance gains are achieved at the cost of slightly reduced accuracy compared to more complex architectures.
1:Experimental Design and Method Selection:
The study systematically investigated variations in the architectural configuration of the AlexNet and InceptionV1 networks against overall performance on the fire image classification task. Performance was measured using the evaluation parameters set out in Section 3, with network training performed on 25% of the fire detection training dataset and evaluation performed on a common test dataset throughout.
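A minimal sketch of how a fixed 25% training subset might be drawn. The sampling procedure, function name, and seed here are illustrative assumptions; the text states only that training used 25% of the training dataset.

```python
import random

def sample_training_subset(image_paths, fraction=0.25, seed=42):
    """Draw a fixed random subset of the training images.

    The 25% fraction follows the text; the seed is an illustrative
    choice so the same subset is drawn on every run.
    """
    rng = random.Random(seed)
    k = int(len(image_paths) * fraction)
    return rng.sample(image_paths, k)

# Toy example with placeholder file names:
paths = [f"frame_{i:06d}.png" for i in range(1000)]
subset = sample_training_subset(paths)
print(len(subset))  # 250
```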
2:Sample Selection and Data Sources:
Fire image data were compiled from Chenebert et al. (75,683 images) and the established visual fire detection evaluation dataset of Steffens et al. (20,593 images), in addition to material from public video sources (youtube.com: 269,426 images), giving a wide variety of environments, fires and non-fire examples (total dataset: 365,702 images).
3:List of Experimental Equipment and Materials:
Nvidia Titan X GPU via TensorFlow (1.1 + TFLearn 0.3), Intel Core i5 2.7GHz CPU and 8GB of RAM.
4:Experimental Procedures and Operational Workflow:
Training from random initialisation using stochastic gradient descent with a momentum of 0.9, a learning rate of 0.001, a batch size of 64 and categorical cross-entropy loss.
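The parameter update described above can be sketched as plain SGD with momentum, using the reported hyperparameters (learning rate 0.001, momentum 0.9). This is a toy illustration on a scalar quadratic, not the actual network training loop, and the function name is an assumption.

```python
def sgd_momentum_step(params, grads, velocity, lr=0.001, momentum=0.9):
    """One SGD-with-momentum update using the hyperparameters
    reported in the text. velocity is updated in place."""
    new_params = []
    for i, (p, g) in enumerate(zip(params, grads)):
        velocity[i] = momentum * velocity[i] - lr * g
        new_params.append(p + velocity[i])
    return new_params

# Toy illustration: minimise f(x) = x^2, whose gradient is 2x.
params = [5.0]
velocity = [0.0]
for _ in range(500):
    grads = [2 * p for p in params]
    params = sgd_momentum_step(params, grads, velocity)
# params[0] converges toward the minimum at 0
```

In the study itself this update was applied per mini-batch of 64 images against a categorical cross-entropy loss rather than a toy quadratic.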
5:Data Analysis Methods:
True Positive Rate (TPR), False Positive Rate (FPR), F-score (F), Precision (P) and accuracy (A) statistics.
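The statistics listed above follow the standard binary-classification definitions, computable from confusion-matrix counts. A minimal sketch (the counts in the example are illustrative, not from the study):

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard binary-classification statistics from
    confusion-matrix counts."""
    tpr = tp / (tp + fn)         # True Positive Rate (recall)
    fpr = fp / (fp + tn)         # False Positive Rate
    precision = tp / (tp + fp)   # Precision
    f_score = 2 * precision * tpr / (precision + tpr)  # F-score (F1)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"TPR": tpr, "FPR": fpr, "F": f_score,
            "P": precision, "A": accuracy}

# Illustrative counts only:
m = detection_metrics(tp=90, fp=10, tn=85, fn=15)
print(m["A"])  # 0.875
```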