HieraRS: A Hierarchical Segmentation Paradigm for Remote Sensing Enabling Multi-Granularity Interpretation and Cross-Domain Transfer
Tianlong Ai, Tianzhu Liu, Haochen Jiang, and Yanfeng Gu
Hierarchical land cover and land use (LCLU) classification aims to assign pixel-wise labels with multiple levels of semantic granularity to remote sensing (RS) imagery. However, existing deep learning-based methods face two major challenges: 1) They predominantly adopt a flat classification paradigm, which limits their ability to generate end-to-end multi-granularity hierarchical predictions aligned with the tree-structured hierarchies used in practice. 2) Most cross-domain studies focus on performance degradation caused by sensor or scene variations, with limited attention to transferring LCLU models to cross-domain tasks with heterogeneous hierarchies (e.g., LCLU to crop classification). These limitations hinder the flexibility and generalization of LCLU models in practical applications. To address these challenges, we propose HieraRS, a novel hierarchical interpretation paradigm that enables multi-granularity predictions and supports the efficient transfer of LCLU models to cross-domain tasks with heterogeneous tree-structured hierarchies. We introduce the Bidirectional Hierarchical Consistency Constraint Mechanism (BHCCM), which can be seamlessly integrated into mainstream flat classification models to generate hierarchical predictions while improving both semantic consistency and classification accuracy. Furthermore, we present TransLU, a dual-branch cross-domain transfer framework comprising two key components: Cross-Domain Knowledge Sharing (CDKS) and Cross-Domain Semantic Alignment (CDSA). TransLU supports dynamic category expansion and facilitates the effective adaptation of LCLU models to heterogeneous hierarchies. In addition, we construct MM-5B, a large-scale multi-modal hierarchical land use dataset featuring pixel-wise annotations. Extensive experiments on MM-5B, Crop10m, and WHDLD validate the effectiveness and adaptability of the proposed HieraRS across diverse scenarios.
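The core idea behind hierarchical consistency can be illustrated with a minimal sketch: in a tree-structured taxonomy, every fine-level class has a unique coarse-level parent, so a fine prediction map can be lifted to the coarse level and checked for agreement with the coarse prediction. The snippet below is an illustration of this idea only, not the actual BHCCM implementation, and the class names and mapping are hypothetical rather than the MM-5B taxonomy.

```python
import numpy as np

# Hypothetical two-level hierarchy: each fine class has one coarse parent.
# (Illustrative only -- not the actual MM-5B label tree.)
FINE_TO_COARSE = {
    0: 0,  # paddy field  -> cropland
    1: 0,  # dry farmland -> cropland
    2: 1,  # forest       -> vegetation
    3: 1,  # grassland    -> vegetation
    4: 2,  # river        -> water
}

def coarse_from_fine(fine_pred: np.ndarray) -> np.ndarray:
    """Lift a fine-level prediction map to the coarse level via the tree."""
    lut = np.array([FINE_TO_COARSE[c] for c in range(len(FINE_TO_COARSE))])
    return lut[fine_pred]

def consistency_rate(fine_pred: np.ndarray, coarse_pred: np.ndarray) -> float:
    """Fraction of pixels whose coarse prediction agrees with the parent
    of their fine prediction (1.0 = fully hierarchy-consistent)."""
    return float(np.mean(coarse_from_fine(fine_pred) == coarse_pred))

# Toy 2x2 prediction maps at both granularities.
fine = np.array([[0, 1], [2, 4]])
coarse = np.array([[0, 0], [1, 2]])
print(consistency_rate(fine, coarse))  # fully consistent -> 1.0
```

In a flat paradigm each level would be predicted independently, so nothing enforces this agreement; a consistency constraint penalizes pixels where the lifted fine prediction and the coarse prediction disagree.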
The code and MM-5B dataset will be released at: https://github.com/AI-Tianlong/HieraRS.
- Release MM-5B dataset
- Release Crop10m dataset
- Release HieraRS code
- Release HieraRS weights
ℹ️ The dataset will be released immediately after further manual verification. The code and weights will be released once the paper is accepted.
MM-5B: Multi-Modal Five-Billion-Pixels is a large-scale, multi-modal, hierarchical Land Cover and Land Use (LCLU) dataset, built upon the Five-Billion-Pixels foundation.
MM-5B download links:
Baidu Netdisk | Google Drive | Zenodo
ℹ️ If you use MM-5B in your research, we kindly request that you also cite the dataset it is based on: Five-Billion-Pixels.
Crop10m: This dataset is used for the crop classification experiments and originates from a cross-domain task presented in the paper. The labels are derived from the annual crop classification product proposed by You et al. The Sentinel-2 remote sensing imagery was collected over Heilongjiang Province in northeastern China, covering cloud-free scenes acquired from July to October 2019.
Crop10m download links:
Baidu Netdisk | Google Drive | Zenodo
ℹ️ If you use Crop10m in your research, we kindly request that you also cite the dataset it is based on: The 10-m crop type maps in Northeast China during 2017–2019.
Experimental Results Visualization on the MM-5B Dataset (GaoFen-2 Satellite Data).
Experimental Results Visualization on the MM-5B Dataset.
Experimental Results Visualization on the Crop10m Dataset.
We thank Tong et al. and You et al. for providing high-quality datasets to the remote sensing community. We are deeply grateful to every contributor of MMSegmentation and OpenMMLab for offering such robust and versatile open-source frameworks.
If you find this work helpful, please cite:
@article{HieraRS_2025,
title={HieraRS: A Hierarchical Segmentation Paradigm for Remote Sensing Enabling Multi-Granularity Interpretation and Cross-Domain Transfer},
author={Tianlong Ai and Tianzhu Liu and Haochen Jiang and Yanfeng Gu},
journal={arXiv preprint arXiv:2507.08741},
year={2025}
}