Integration of deep learning and change detection for flood-damaged greenhouse extraction using drone multispectral imagery

  • Abstract: In recent years, flood disasters have occurred frequently worldwide. Greenhouses, as key facilities of agricultural production, are prone to damage during floods, so rapidly and accurately extracting the spatial locations of damaged greenhouses is of great significance for disaster loss assessment and post-disaster reconstruction. Taking the flood-discharge area of Yangying Village, Wuqing District, Tianjin, during the catastrophic “23·7” basin-wide flood of the Haihe River as the study area, this paper uses drone multispectral remote sensing data and deep learning methods to identify flood-damaged greenhouses. First, multidimensional spatial features were constructed, and the optimal classification bands and indices for greenhouses were selected by combining spectral reflectance profiles with separability measures. Several deep learning networks were then compared across image resolutions to identify greenhouses. Finally, differences among multi-temporal greenhouse identification results were used to improve the accuracy and efficiency of damaged-greenhouse identification. The results show that: 1) the blue band, green band, and NDVI are sensitive features for greenhouse identification, providing high spectral separability; 2) among the three deep learning networks Seg-UNet, Seg-UNet++, and DeepLab V3+, the classification model built on Seg-UNet achieved the highest accuracy; within the 0.1–2.0 m range, identification accuracy generally decreased as image resolution coarsened, and the 0.2 m imagery selected for greenhouse detection achieved an overall accuracy of 99.02% and a Kappa coefficient of 0.93; 3) for greenhouse images from different periods, fine-tuning the base model via model transfer yielded period-specific models whose overall accuracy and Kappa coefficient were 1.13 percentage points and 0.10 higher, respectively, than direct identification; 4) comparing greenhouse identification results across periods detected dynamic changes and mapped the spatial distribution of damaged greenhouses with an overall accuracy of 98.87% and a Kappa coefficient of 0.80. These findings provide a reference for applying drone multispectral remote sensing to damaged-greenhouse identification, disaster assessment, and science-based post-disaster reconstruction.
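The band and index screening step summarized above can be sketched in code. The snippet below is a minimal illustration with synthetic reflectance samples: NDVI is computed as (NIR − Red)/(NIR + Red), and a normalized-distance measure M = |μ₁ − μ₂|/(σ₁ + σ₂) stands in for the separability measure, since the abstract does not name the exact one used; the sample values are hypothetical.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from reflectance arrays."""
    return (nir - red) / (nir + red + 1e-9)

def separability(class_a, class_b):
    """Normalized distance M = |mu_a - mu_b| / (sigma_a + sigma_b).
    M > 1 is commonly read as good spectral separability."""
    return abs(class_a.mean() - class_b.mean()) / (class_a.std() + class_b.std() + 1e-9)

# Hypothetical blue-band reflectance samples for greenhouse vs. background pixels
rng = np.random.default_rng(0)
greenhouse_blue = rng.normal(0.30, 0.03, 500)
background_blue = rng.normal(0.08, 0.03, 500)
print(f"blue-band separability M = {separability(greenhouse_blue, background_blue):.2f}")
```

A band or index whose M value ranks highest across the two classes would be kept as a classification feature; here the widely separated blue-band means give M well above 1.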

     

    Abstract: In recent decades, flood disasters have occurred frequently worldwide. Greenhouses, as key facilities of agricultural production, are vulnerable to flood damage; therefore, rapid and accurate extraction of the spatial locations of damaged greenhouses is of great significance for disaster loss assessment and post-disaster reconstruction. With the rapid development of computer technology, deep learning has become widely applied. To assess greenhouses damaged by flooding, we chose Yangying Village, Wuqing District, Tianjin, which was inundated from late July through August 2023 during the catastrophic “23·7” basin-wide flood in the Haihe River Basin. Using drone-borne multispectral remote sensing images, we detected damaged greenhouses efficiently with a deep learning approach. First, multidimensional spatial features were constructed, and the most suitable bands and indices for greenhouse detection were selected based on spectral reflectance profiles and separability measures. Greenhouses were then detected in images of different resolutions using three deep learning networks. Finally, damaged greenhouses were identified more accurately and efficiently from status changes between greenhouses detected in images acquired in different periods. The results show that the blue band, green band, and NDVI were sensitive features with high spectral separability for greenhouse extraction; the blue band exhibited the highest separability, whereas the red, red-edge, and near-infrared bands showed lower separability. All three networks, Seg-UNet, Seg-UNet++, and DeepLab V3+, achieved overall greenhouse recognition accuracies above 97% and Kappa coefficients above 0.8. The model trained with the Seg-UNet network performed best, reaching its highest classification accuracy at an optimal epoch number of 40.
For image resolutions ranging from 0.1 to 2.0 m, the model at 0.2 m resolution achieved the highest accuracy, with an overall accuracy of 99.02% and a Kappa coefficient of 0.93. For greenhouse images from different periods, the base model was fine-tuned through transfer learning, which improved the overall accuracy and Kappa coefficient over direct application of the base model. Using change detection to compare greenhouse detection results between the non-flood and flood periods, damaged greenhouses were identified with an overall accuracy of 98.87% and a Kappa coefficient of 0.80. This study provides valuable insights into the application of drone-borne multispectral imagery for detecting damaged greenhouses, assessing disaster impacts, and supporting science-based post-disaster reconstruction.
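The multi-temporal change-detection step and the reported accuracy metrics can be sketched as follows. This is a minimal illustration on hypothetical 4×4 binary classification masks: a pixel is flagged as a damaged greenhouse if it was classified as greenhouse before the flood but not after, and overall accuracy and Cohen's Kappa are computed with the standard observed-versus-chance agreement formula.

```python
import numpy as np

def detect_damaged(pre_mask, post_mask):
    """Flag pixels classified as greenhouse pre-flood but not post-flood
    (simple multi-temporal differencing of binary masks)."""
    return pre_mask & ~post_mask

def overall_accuracy_kappa(pred, ref):
    """Overall accuracy and Cohen's Kappa for binary maps."""
    pred, ref = pred.ravel(), ref.ravel()
    po = np.mean(pred == ref)                       # observed agreement
    pe = (pred.mean() * ref.mean()                  # chance agreement
          + (1 - pred.mean()) * (1 - ref.mean()))
    return po, (po - pe) / (1 - pe + 1e-12)

# Hypothetical classification masks (True = greenhouse pixel)
pre  = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]], bool)
post = np.array([[1, 0, 0, 0], [1, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]], bool)
damaged = detect_damaged(pre, post)
print(int(damaged.sum()), "pixels flagged as damaged greenhouse")
```

In practice `pred` would be the damaged-greenhouse map derived from the pre/post segmentation results and `ref` a manually delineated reference map, yielding the kind of overall accuracy (98.87%) and Kappa (0.80) figures reported above.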

     
