Research Objective
To build a conditional Generative Adversarial Network (cGAN) capable of establishing the relationship between two loosely linked sets of variables that exhibit a multitude of complex spatial features, such as mapping climate conditions to aerial imagery.
Research Findings
The cGAN model successfully generates satellite imagery that shares many characteristics with real imagery, demonstrating the potential of deep learning for spatial pattern generation in geoscience. It could serve as a multipurpose tool for climate and landscape change analysis, though further validation is needed for specific applications like climate change scenario simulation.
Research Limitations
The model relies on space-for-time and steady-state assumptions and does not account for the temporal dynamics of landscape change. Applications to climate change scenarios should therefore be treated as indicative rather than predictive.
1:Experimental Design and Method Selection
Adapted a deep learning generative model, specifically a conditional Generative Adversarial Network (cGAN), to map environmental conditions to aerial images of landscapes.
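A minimal sketch of how such a conditioning setup could look, assuming a pix2pix-style encoder–decoder generator and a patch discriminator; the tile size, predictor-channel count, band count, and layer widths below are illustrative assumptions, not the authors' reported architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative sizes (assumptions): 64x64-pixel tiles, 8 predictor channels
# (climate + terrain rasters), 4 output spectral bands.
TILE = 64
N_PRED = 8
N_BANDS = 4

def build_generator():
    """Encoder-decoder that maps stacked environmental rasters to an image tile."""
    inp = layers.Input(shape=(TILE, TILE, N_PRED))
    x = layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 4, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(N_BANDS, 4, strides=2, padding="same",
                                 activation="tanh")(x)  # bands scaled to [-1, 1]
    return tf.keras.Model(inp, out, name="generator")

def build_discriminator():
    """Patch discriminator that scores (predictors, image) pairs as real or generated."""
    pred = layers.Input(shape=(TILE, TILE, N_PRED))
    img = layers.Input(shape=(TILE, TILE, N_BANDS))
    x = layers.Concatenate()([pred, img])          # condition by channel-stacking
    x = layers.Conv2D(64, 4, strides=2, padding="same")(x)
    x = layers.LeakyReLU(0.2)(x)
    x = layers.Conv2D(128, 4, strides=2, padding="same")(x)
    x = layers.LeakyReLU(0.2)(x)
    out = layers.Conv2D(1, 4, padding="same")(x)   # per-patch realism logits
    return tf.keras.Model([pred, img], out, name="discriminator")
```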
2:Sample Selection and Data Sources
Gathered a total of 1,857 Sentinel-2 multispectral acquisitions spanning latitudes 56°S to 60°N, divided into a grid of 11×11 km cells for a total of 94,289 samples. Environmental predictors included SRTM v4 elevation, WorldClim climate variables, GLiM lithology, and GlobeLand30 land-cover categories.
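A minimal sketch of how one such gridded sample set could be assembled, assuming the predictor layers and imagery have already been reprojected onto a common pixel grid; the array names, the pixel cell size standing in for 11 km, and the NaN-based masking rule are hypothetical choices for illustration.

```python
import numpy as np

def tile_rasters(predictor_stack, image, cell_px=64):
    """Split co-registered rasters into square grid-cell samples.

    predictor_stack: (H, W, C) array of stacked predictors (e.g. elevation,
                     WorldClim climate layers, lithology and land-cover codes).
    image:           (H, W, B) Sentinel-2 reflectance bands on the same grid.
    cell_px:         cell edge in pixels (hypothetical stand-in for 11 km).
    """
    h, w = image.shape[:2]
    samples = []
    for r in range(0, h - cell_px + 1, cell_px):
        for c in range(0, w - cell_px + 1, cell_px):
            x = predictor_stack[r:r + cell_px, c:c + cell_px]
            y = image[r:r + cell_px, c:c + cell_px]
            if not np.isnan(x).any() and not np.isnan(y).any():  # drop gappy cells
                samples.append((x, y))
    return samples
```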
3:List of Experimental Equipment and Materials
Deep learning model implemented in TensorFlow with a total of ~350 million parameters.
4:Experimental Procedures and Operational Workflow
Trained the cGAN to generate Sentinel-2 multispectral imagery given a set of climatic and terrain predictors. The model was then validated on a separate, held-out dataset.
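A minimal training-step sketch under the standard cGAN objective, with an optional pix2pix-style L1 reconstruction term; the optimizer settings, loss weighting, and the inclusion of the L1 term are assumptions rather than the paper's reported training configuration.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
d_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
L1_WEIGHT = 100.0  # assumed pix2pix-style weighting, not from the paper

@tf.function
def train_step(generator, discriminator, predictors, real_img):
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_img = generator(predictors, training=True)
        d_real = discriminator([predictors, real_img], training=True)
        d_fake = discriminator([predictors, fake_img], training=True)

        # Discriminator: separate real from generated tiles for the same condition.
        d_loss = bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)
        # Generator: fool the discriminator while staying close to the real tile.
        g_loss = bce(tf.ones_like(d_fake), d_fake) + \
                 L1_WEIGHT * tf.reduce_mean(tf.abs(real_img - fake_img))

    g_grads = g_tape.gradient(g_loss, generator.trainable_variables)
    d_grads = d_tape.gradient(d_loss, discriminator.trainable_variables)
    g_opt.apply_gradients(zip(g_grads, generator.trainable_variables))
    d_opt.apply_gradients(zip(d_grads, discriminator.trainable_variables))
    return g_loss, d_loss
```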
5:Data Analysis Methods
Computed the Normalized Difference Vegetation Index (NDVI) and used information-theory-derived metrics for the analysis.
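A minimal sketch of these two analysis ingredients, assuming NDVI is computed from Sentinel-2 bands B8 (near-infrared) and B4 (red) and using Shannon entropy of NDVI histograms as one example of an information-theoretic comparison; the histogram binning and the choice of entropy as the specific metric are assumptions, not the paper's exact procedure.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """NDVI = (NIR - Red) / (NIR + Red), e.g. from Sentinel-2 B8 and B4 reflectance."""
    return (nir - red) / (nir + red + eps)

def shannon_entropy(values, bins=64):
    """Shannon entropy (bits) of the histogram of the given values."""
    hist, _ = np.histogram(values[np.isfinite(values)], bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Example comparison: entropy of NDVI for a real vs. a generated tile
# (band indices below are hypothetical and depend on how bands are stacked).
# real_ndvi = ndvi(real_img[..., 3], real_img[..., 2])
# fake_ndvi = ndvi(fake_img[..., 3], fake_img[..., 2])
# print(shannon_entropy(real_ndvi), shannon_entropy(fake_ndvi))
```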