Integration of deep learning and change detection for flood-damaged greenhouse extraction using UAV multispectral imagery

Abstract: Rapid and accurate extraction of the spatial locations of damaged greenhouses is of great significance for disaster loss assessment and post-disaster reconstruction. This study takes the flood discharge zone of Yangying Village, Wuqing District, Tianjin, affected by the "23·7" catastrophic basin-wide flood of the Haihe River, as the study area. Using multi-temporal multispectral remote sensing data acquired by UAV, we constructed multidimensional spatial features, applied several deep learning networks, and compared different image resolutions to identify greenhouses and detect those damaged by the flood. The results show that, among the multidimensional spatial features, the blue band, green band, and NDVI provided the highest separability for greenhouses. The greenhouse classification model based on the SegUnet network at 0.2 m resolution achieved the highest accuracy, with an overall accuracy of 99.02% and a Kappa coefficient of 0.92. Based on the greenhouse identification results for different periods, dynamic changes in greenhouses were detected and the spatial distribution of damaged greenhouses was extracted, with an overall accuracy of 98.87% and a Kappa coefficient of 0.80. These findings provide a reference for the application of UAV multispectral remote sensing to greenhouse identification, disaster impact assessment, and scientifically informed post-disaster reconstruction.
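To make the post-classification change-detection idea summarized above concrete, the following is a minimal sketch, not the authors' implementation. It assumes two hypothetical binary greenhouse masks, pre_mask and post_mask, obtained by classifying the pre-flood and post-flood imagery, together with an NDVI helper for the spectral feature named in the abstract; damaged greenhouses are flagged where a pixel classified as greenhouse before the flood is no longer classified as greenhouse afterwards, and overall accuracy and the Kappa coefficient are computed against a reference mask.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red); a small epsilon avoids division by zero."""
    return (nir - red) / (nir + red + 1e-10)

def damaged_greenhouses(pre_mask: np.ndarray, post_mask: np.ndarray) -> np.ndarray:
    """Post-classification change detection: flag pixels classified as greenhouse
    before the flood but no longer classified as greenhouse afterwards."""
    return pre_mask.astype(bool) & ~post_mask.astype(bool)

def overall_accuracy_and_kappa(predicted: np.ndarray, reference: np.ndarray):
    """Overall accuracy and Cohen's Kappa for two binary maps of equal shape."""
    p = predicted.astype(bool).ravel()
    r = reference.astype(bool).ravel()
    n = p.size
    tp = np.sum(p & r)      # damaged in both prediction and reference
    tn = np.sum(~p & ~r)    # undamaged in both
    fp = np.sum(p & ~r)     # false alarms
    fn = np.sum(~p & r)     # missed damage
    oa = (tp + tn) / n
    # Expected chance agreement from the marginals of the 2x2 confusion matrix
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (oa - pe) / (1 - pe) if pe < 1 else 1.0
    return oa, kappa
```

Applied to the pre- and post-flood classification maps and a field-verified reference layer, this yields the same two metrics (overall accuracy and Kappa) that the abstract reports for the damaged-greenhouse change map.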

     
