Multiclass Labeling of Very High-Resolution Remote Sensing Imagery by Enforcing Nonlocal Shared Constraints in Multilevel Conditional Random Fields Model
Keywords: Labeling, Remote sensing, Image segmentation, Semantics, Computational modeling, Optical imaging, Optical sensors
College of Natural Science and Mathematics, Geography and the Environment
In this study, we investigate the problem of multiclass pixel labeling of very high-resolution (VHR) optical remote sensing images. We propose a novel higher order potential function based on nonlocal shared constraints within the framework of a three-level conditional random field (CRF) model. The proposed approach combines classification knowledge discovered from labeled data with unsupervised segmentation cues derived from the cosegmentation of test data. The cosegmentation of unannotated test data incorporates nonlocal constraints, which are encoded in a novel truncated robust consistency potential function. The class labels are then updated iteratively by alternating between estimating semantic segmentations with the CRF and integrating cosegmentation-derived labels into the higher order potential functions to refine the labeling results. We demonstrate experimentally, through quantitative and qualitative results, that our approach improves labeling accuracy over state-of-the-art multilevel CRF approaches. We also show that our approach can mitigate the scarcity of accurately labeled training data.
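The truncated robust consistency potential mentioned in the abstract can be illustrated with a minimal sketch in the spirit of robust P^n-style higher order potentials: the penalty grows with the number of pixels in a segment that disagree with the segment's dominant label, but is truncated at a ceiling so that a few outliers do not force total label consistency. The function name and parameters below are illustrative assumptions, not the paper's exact formulation.

```python
from collections import Counter

def truncated_robust_potential(segment_labels, per_pixel_cost=1.0, gamma_max=3.0):
    """Toy truncated robust consistency penalty for one segment.

    segment_labels: class labels of the pixels in the segment.
    per_pixel_cost: penalty for each pixel disagreeing with the dominant label
                    (illustrative parameter, not from the paper).
    gamma_max:      truncation ceiling on the total penalty.
    """
    counts = Counter(segment_labels)
    _, n_dominant = counts.most_common(1)[0]
    n_inconsistent = len(segment_labels) - n_dominant
    # Linear cost in the number of disagreeing pixels, truncated at gamma_max.
    return min(per_pixel_cost * n_inconsistent, gamma_max)

# A fully consistent segment incurs no penalty; a heavily mixed
# segment is capped at gamma_max rather than penalized without bound.
print(truncated_robust_potential([1, 1, 1, 1]))   # 0.0
print(truncated_robust_potential([1, 1, 2]))      # 1.0
print(truncated_robust_potential([1, 2, 3, 4]))   # 3.0 (truncated)
```

The truncation is what makes the potential "robust": unlike a hard consistency constraint, it tolerates partial disagreement within a segment, which matters when unsupervised cosegmentation boundaries are imperfect.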
Tong Zhang, et al. “Multiclass Labeling of Very High-Resolution Remote Sensing Imagery by Enforcing Nonlocal Shared Constraints in Multilevel Conditional Random Fields Model.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 9, no. 7, 2016, pp. 2854–2867. doi: 10.1109/jstars.2015.2510367.