%0 Conference Paper
%F Oral
%T Patch-based deep learning architectures for sparse annotated very high resolution datasets
%+ Remote Sensing Laboratory
%+ Centre de vision numérique (CVN)
%+ Organ Modeling through Extraction, Representation and Understanding of Medical Image Content (GALEN)
%A Papadomanolaki, Maria
%A Vakalopoulou, Maria
%A Karantzalos, Konstantinos
%< avec comité de lecture
%B Joint Urban Remote Sensing Event (JURSE)
%C Dubai, United Arab Emirates
%S Joint Urban Remote Sensing Event
%8 2017-03-06
%D 2017
%R 10.1109/JURSE.2017.7924538
%K Labeling
%K Training
%K Computer architecture
%K Machine learning
%K Computational modeling
%K Semantics
%K Remote sensing
%Z Engineering Sciences [physics]
%Z Conference papers
%X In this paper, we compare the performance of different deep-learning architectures under a patch-based framework for the semantic labeling of sparsely annotated urban scenes from very high resolution images. In particular, a simple convolutional network (ConvNet), AlexNet and VGG models have been trained and tested on the publicly available, multispectral, very high resolution Summer Zurich v1.0 dataset. Experiments with patches of different dimensions have been performed and compared, indicating the optimal patch size for the semantic segmentation of very high resolution satellite data. The overall validation and assessment indicated the robustness of the high-level features computed with the employed deep architectures for the semantic labeling of urban scenes.
%G English
%L hal-02423037
%U https://centralesupelec.hal.science/hal-02423037
%~ INRIA
%~ INRIA-SACLAY
%~ INRIA_TEST
%~ CVN
%~ TESTALAIN1
%~ CENTRALESUPELEC
%~ INRIA2
%~ UNIV-PARIS-SACLAY
%~ INRIA-SACLAY-2015
%~ CENTRALESUPELEC-SACLAY
%~ INRIA2017
%~ GS-ENGINEERING
%~ GS-COMPUTER-SCIENCE