Research Purpose
To develop an effective solar power forecasting technique using convolutional neural networks (CNNs) and long short-term memory (LSTM) networks that predicts the next day's solar power from time-series data collected by photovoltaic inverters and weather centers, while addressing issues of data availability and preprocessing.
Research Findings
The proposed CNN+LSTM network outperforms traditional regression methods and a state-of-the-art deep learning method (AE+LSTM) in solar power forecasting, demonstrating robustness with minimal preprocessing and the ability to utilize coarsely estimated weather data. This approach is effective for practical applications in energy management and market operations.
Research Limitations
The study relies on data from specific locations in South Korea, which may limit generalizability. The preprocessing is minimal, but outliers could still affect performance. Computational cost is high due to the use of deep learning models, and the method may not perform well with very noisy or incomplete data without additional refinement.
1: Experimental Design and Method Selection:
The study uses a deep neural network combining CNNs and LSTMs for time series analysis. The network is designed to extract features from input sequences of weather and PV data collected every 10 minutes over a day, with the goal of predicting the next day's solar power output.
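The exact layer configuration is not reproduced in this summary, so the following Keras sketch only illustrates the general design: one day of 10-minute samples (144 time steps, an assumption) with a handful of weather/PV attributes is passed through 1-D convolutions for feature extraction and an LSTM for sequence modeling, ending in a regression head for the next day's output. All layer sizes, the number of input features, and the output format are illustrative assumptions rather than the paper's settings.

```python
# Minimal CNN+LSTM sketch in Keras; all layer sizes and shapes are assumptions.
import numpy as np
from tensorflow.keras import layers, models

TIME_STEPS = 144   # one day of 10-minute samples (assumed)
N_FEATURES = 8     # weather + PV attributes per time step (assumed)

def build_cnn_lstm(time_steps=TIME_STEPS, n_features=N_FEATURES):
    model = models.Sequential([
        layers.Input(shape=(time_steps, n_features)),
        # 1-D convolutions extract local temporal features from the raw sequence.
        layers.Conv1D(32, kernel_size=3, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(64, kernel_size=3, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        # The LSTM models the ordering of the extracted feature sequence.
        layers.LSTM(64),
        # Regression head: one power value per 10-minute slot of the next day (assumed).
        layers.Dense(time_steps),
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

if __name__ == "__main__":
    model = build_cnn_lstm()
    # Dummy batch to check shapes: 16 input days -> 16 next-day curves.
    x = np.random.rand(16, TIME_STEPS, N_FEATURES).astype("float32")
    print(model.predict(x, verbose=0).shape)  # (16, 144)
```

The filter counts, kernel sizes, and LSTM hidden-state size shown here are exactly the kind of parameters the study varies in its experiments (see the training procedure below).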
2: Sample Selection and Data Sources:
Data is collected from 71 photovoltaic inverters across 14 sites in South Korea, spanning from February 29, 2012, to January 6. Weather data is obtained from the Korea Meteorological Administration.
3: List of Experimental Equipment and Materials:
A workstation with an Intel(R) Core(TM) i7-6850K CPU, 125 GB of RAM, and three NVIDIA GeForce GTX 1080 Ti graphics cards is used. Software includes Python, Scikit-learn, Keras, and TensorFlow.
4: Experimental Procedures and Operational Workflow:
Data preprocessing involves substituting negative solar power values with zero, one-hot encoding categorical attributes, and normalizing numerical attributes. The dataset is split into training (75%), test (25%), and validation (10% of training) sets. The network is trained with varying parameters (e.g., filter sizes, hidden state sizes) and evaluated using performance metrics like MAPE, RMSE, and MAE.
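As a concrete illustration of these steps, the sketch below clips negative power readings to zero, one-hot encodes a categorical attribute, scales numerical attributes, and produces the 75/25 train/test split with 10% of the training portion held out for validation. The column names, the choice of min-max scaling, and the chronological (non-shuffled) split are assumptions for illustration, not details taken from the paper.

```python
# Illustrative preprocessing sketch; column names and min-max scaling are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # Substitute negative solar power readings with zero.
    df["power"] = df["power"].clip(lower=0)
    # One-hot encode a categorical attribute (hypothetical column name).
    df = pd.get_dummies(df, columns=["weather_condition"])
    # Normalize numerical attributes to [0, 1] (scaler choice is an assumption).
    num_cols = ["power", "temperature", "humidity"]
    df[num_cols] = MinMaxScaler().fit_transform(df[num_cols])
    return df

def split(df: pd.DataFrame):
    # 75% training / 25% test; then 10% of training is held out for validation.
    train, test = train_test_split(df, test_size=0.25, shuffle=False)
    train, val = train_test_split(train, test_size=0.10, shuffle=False)
    return train, val, test
```

In practice the scaler should be fit on the training split only and then applied to the validation and test splits to avoid leakage; the single fit_transform above is kept only to keep the sketch short.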
5: Data Analysis Methods:
Performance is assessed using mean absolute percentage error (MAPE), root mean square error (RMSE), and mean absolute error (MAE). The network's architecture includes D-CNN for feature extraction and LSTM for sequence modeling.
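For reference, the three error measures have their standard definitions; the NumPy snippet below computes them. The small epsilon in MAPE, which guards against zero actual values, is an assumption added for numerical safety and is not specified in the paper.

```python
# Standard definitions of the three reported error metrics.
import numpy as np

def mape(y_true, y_pred, eps=1e-8):
    # Mean absolute percentage error, in percent; eps avoids division by zero (assumption).
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / (y_true + eps)))

def rmse(y_true, y_pred):
    # Root mean square error.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    # Mean absolute error.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(np.abs(y_true - y_pred))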