Integration of deep learning and change detection for flood-damaged greenhouse extraction using drone multispectral imagery

  • Abstract: Flood disasters have occurred frequently worldwide in recent years. Greenhouses, as key agricultural production facilities, are vulnerable to damage during floods, so rapid and accurate extraction of the spatial locations of damaged greenhouses is of great significance for disaster loss assessment and post-disaster reconstruction. Taking the flood-discharge area of Yangying Village, Wuqing District, Tianjin, affected by the catastrophic "23·7" basin-wide flood in the Haihe River Basin, as the study area, this paper uses drone multispectral remote sensing data and deep learning methods to identify flood-damaged greenhouses. First, multidimensional spatial features were constructed, and the optimal bands and indices for greenhouse classification were screened by combining spectral reflectance profiles with a separability measure. Several deep learning networks and image resolutions were then compared for greenhouse identification. Finally, changes between multi-temporal greenhouse identification results were used to improve the accuracy and efficiency of flood-damaged greenhouse identification. The results show that: 1) the blue band, green band, and NDVI are sensitive features for greenhouse identification, yielding high spectral separability; 2) among the three deep learning networks Seg-UNet, Seg-UNet++, and DeepLab V3+, the greenhouse classification model built on Seg-UNet achieved the highest accuracy, and within the 0.1-2 m resolution range, identification accuracy generally decreased as image resolution coarsened; greenhouse extraction was therefore carried out on 0.2 m imagery, with an overall accuracy of 99.02% and a Kappa coefficient of 0.93; 3) for greenhouse images from different periods, the base model was fine-tuned by model transfer, and compared with direct identification, the fine-tuned model improved the overall accuracy and Kappa coefficient by 1.13 percentage points and 0.10, respectively; 4) by comparing greenhouse identification results from different periods, the dynamic changes of greenhouses were detected and the spatial distribution of damaged greenhouses was identified, with an overall accuracy of 98.87% and a Kappa coefficient of 0.80. These findings provide a reference for applying drone multispectral remote sensing to damaged greenhouse identification, disaster impact assessment, and science-based post-disaster reconstruction.
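The band and index screening described in step 1 can be illustrated with a small sketch. The abstract does not name the separability measure used, so the Jeffries-Matusita distance below is only an assumed stand-in, and the band names, reflectance samples, and class labels are hypothetical placeholders rather than the paper's data.

```python
# Minimal sketch of the band/index screening step, assuming a Jeffries-Matusita
# (JM) distance as the separability measure; samples are hypothetical.
import numpy as np


def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, NDVI = (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-12)


def jm_distance(x: np.ndarray, y: np.ndarray) -> float:
    """JM distance (0..2) between two 1-D feature samples, assuming each class
    is approximately Gaussian in this feature."""
    m1, m2 = x.mean(), y.mean()
    v1, v2 = x.var() + 1e-12, y.var() + 1e-12
    # Bhattacharyya distance for two univariate Gaussians
    b = 0.125 * (m1 - m2) ** 2 / ((v1 + v2) / 2) \
        + 0.5 * np.log(((v1 + v2) / 2) / np.sqrt(v1 * v2))
    return float(2.0 * (1.0 - np.exp(-b)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical reflectance samples for greenhouse vs. background pixels.
    greenhouse = {"blue": rng.normal(0.32, 0.03, 500), "red": rng.normal(0.30, 0.05, 500),
                  "nir": rng.normal(0.35, 0.06, 500)}
    background = {"blue": rng.normal(0.18, 0.04, 500), "red": rng.normal(0.25, 0.06, 500),
                  "nir": rng.normal(0.40, 0.08, 500)}

    # Rank candidate features by class separability, as in the screening step.
    features = {
        "blue": (greenhouse["blue"], background["blue"]),
        "red": (greenhouse["red"], background["red"]),
        "NDVI": (ndvi(greenhouse["nir"], greenhouse["red"]),
                 ndvi(background["nir"], background["red"])),
    }
    for name, (g, b) in features.items():
        print(f"{name:>5s}: JM = {jm_distance(g, b):.3f}")
```

In practice the two sample sets would be drawn from labeled greenhouse and background pixels in the drone imagery, and the features with the largest separability would be kept as classification inputs.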

     

    Abstract: Flood disasters have occurred frequently around the world in recent decades, and greenhouses, as key agricultural production facilities, are vulnerable to damage during floods. Therefore, rapid and accurate extraction of the spatial locations of damaged greenhouses is of great significance for disaster loss assessment and post-disaster reconstruction. To assess greenhouses damaged by flooding, we studied Yangying Village, Wuqing District, Tianjin, which was inundated from late July to August 2023 by the catastrophic "23·7" basin-wide flood in the Haihe River Basin. Based on drone-borne multispectral remote sensing images, we detected the damaged greenhouses efficiently with a deep learning approach. First, multidimensional spatial features were constructed, and the most suitable bands and indices for greenhouse detection were selected using spectral reflectance profiles and a separability measure. Then, greenhouses were detected from images of different resolutions with three deep learning networks. Finally, damaged greenhouses were identified more accurately and efficiently from the status changes of greenhouses detected in images acquired in different periods. The results show that the blue band, green band, and NDVI were sensitive features with high spectral separability for greenhouse extraction; the blue band exhibited the highest separability, whereas the red, red-edge, and near-infrared bands showed lower separability. The three networks, Seg-UNet, Seg-UNet++, and DeepLab V3+, all achieved overall accuracies above 97% and Kappa coefficients above 0.8 for greenhouse recognition; the model trained with the Seg-UNet network performed best, with the highest classification accuracy at an optimal number of 40 training epochs. Among image resolutions ranging from 0.1 to 2 m, the model trained on 0.2 m imagery achieved the highest accuracy, with an overall accuracy of 99.02% and a Kappa coefficient of 0.93. For greenhouse images from different periods, the base model was fine-tuned through transfer learning, which improved the overall accuracy and Kappa coefficient relative to applying the base model directly. Using change detection to compare greenhouse detection results between the non-flood and flood periods, damaged greenhouses were identified with an overall accuracy of 98.87% and a Kappa coefficient of 0.80. This study provides valuable insights for the application of drone-borne multispectral imagery to damaged greenhouse detection, disaster impact assessment, and science-based post-disaster reconstruction.
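As a rough illustration of the change-detection and accuracy-assessment steps summarized above, the sketch below flags greenhouses that appear in the non-flood mask but not in the flood-period mask, then computes overall accuracy and Cohen's Kappa against a reference map. The "present before, absent during" rule, the toy masks, and the reference labels are assumptions for illustration only, not the paper's actual procedure or data.

```python
# Minimal sketch of change detection between two binary greenhouse masks and
# accuracy assessment (overall accuracy and Cohen's Kappa). Arrays are toy data.
import numpy as np


def damaged_mask(pre_flood: np.ndarray, flood: np.ndarray) -> np.ndarray:
    """Binary map of damaged greenhouses: mapped before the flood, missing during it
    (an assumed change rule for this sketch)."""
    return (pre_flood == 1) & (flood == 0)


def overall_accuracy_and_kappa(pred: np.ndarray, ref: np.ndarray) -> tuple[float, float]:
    """Overall accuracy and Cohen's Kappa from a 2x2 confusion matrix."""
    pred, ref = pred.astype(int).ravel(), ref.astype(int).ravel()
    n = pred.size
    cm = np.zeros((2, 2), dtype=np.int64)
    np.add.at(cm, (ref, pred), 1)
    po = np.trace(cm) / n                          # observed agreement (OA)
    pe = (cm.sum(0) * cm.sum(1)).sum() / (n * n)   # chance agreement
    kappa = (po - pe) / (1.0 - pe + 1e-12)
    return float(po), float(kappa)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pre = (rng.random((512, 512)) < 0.15).astype(np.uint8)   # toy non-flood greenhouse mask
    post = pre.copy()
    post[100:180, 100:260] = 0                                # toy "damaged" patch
    change = damaged_mask(pre, post)

    ref = change.copy()                                       # stand-in reference labels
    ref ^= rng.random(ref.shape) < 0.002                      # perturb so metrics are non-trivial
    oa, kappa = overall_accuracy_and_kappa(change, ref)
    print(f"damaged pixels: {int(change.sum())}, OA = {oa:.4f}, Kappa = {kappa:.3f}")
```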

     
