Conference Paper, Year: 2017

Patch-based deep learning architectures for sparse annotated very high resolution datasets

Abstract

In this paper, we compare the performance of different deep-learning architectures under a patch-based framework for the semantic labeling of sparsely annotated urban scenes from very high resolution images. In particular, a simple convolutional network (ConvNet), as well as the AlexNet and VGG models, have been trained and tested on the publicly available, multispectral, very high resolution Zurich Summer v1.0 dataset. Experiments with patches of different dimensions have been performed and compared, indicating the optimal patch size for the semantic segmentation of very high resolution satellite data. The overall validation and assessment indicated the robustness of the high-level features computed with the employed deep architectures for the semantic labeling of urban scenes.
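The sketch below is not the authors' code; it is a minimal illustration of the patch-based idea the abstract describes: fixed-size multispectral patches are extracted around annotated pixels and classified by a small CNN, assigning each patch's label to its central pixel. The patch size, channel count, class count, and the names `extract_patches` and `SimpleConvNet` are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's implementation) of patch-based
# semantic labeling from a sparsely annotated very high resolution tile.
import numpy as np
import torch
import torch.nn as nn

def extract_patches(image, labels, patch_size=32, ignore_value=0):
    """Extract patch/label pairs centered on annotated (non-ignored) pixels.

    image:  (C, H, W) multispectral array
    labels: (H, W) sparse annotation map; `ignore_value` marks unlabeled pixels
    """
    half = patch_size // 2
    C, H, W = image.shape
    patches, targets = [], []
    ys, xs = np.nonzero(labels != ignore_value)
    for y, x in zip(ys, xs):
        # Keep only patches that fit entirely inside the tile.
        if half <= y < H - half and half <= x < W - half:
            patches.append(image[:, y - half:y + half, x - half:x + half])
            targets.append(labels[y, x])
    return np.stack(patches), np.array(targets)

class SimpleConvNet(nn.Module):
    """Small CNN mapping a patch to the class of its central pixel."""
    def __init__(self, in_channels=4, n_classes=8, patch_size=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * (patch_size // 4) ** 2, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Toy usage with random data standing in for a VHR tile and sparse labels.
image = np.random.rand(4, 256, 256).astype(np.float32)
labels = np.zeros((256, 256), dtype=np.int64)
labels[100, 100], labels[150, 200] = 3, 5          # two annotated pixels
X, y = extract_patches(image, labels, patch_size=32)
logits = SimpleConvNet()(torch.from_numpy(X))
print(logits.shape)                                # torch.Size([2, 8])
```

Deeper backbones such as AlexNet or VGG would simply replace `SimpleConvNet` in this pipeline, which is how varying patch dimensions can be compared across architectures.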

Dates and versions

hal-02423037, version 1 (23-12-2019)

Identifiers

HAL Id: hal-02423037
DOI: 10.1109/JURSE.2017.7924538

Cite

Maria Papadomanolaki, Maria Vakalopoulou, Konstantinos Karantzalos. Patch-based deep learning architectures for sparse annotated very high resolution datasets. Joint Urban Remote Sensing Event (JURSE), Mar 2017, Dubai, United Arab Emirates. ⟨10.1109/JURSE.2017.7924538⟩. ⟨hal-02423037⟩