Segment concealed object with incomplete supervision, TPAMI, 2025

Chunming He, Kai Li, Yachao Zhang, Ziyun Yang, Youwei Pang, Longxiang Tang, Chengyu Fang, Yulun Zhang, Linghe Kong, Xiu Li and Sina Farsiu*

Abstract: Existing concealed object segmentation (COS) methods frequently utilize reversible strategies to address uncertain regions. However, these approaches are typically restricted to the mask domain, leaving the potential of the RGB domain underexplored. To address this, we propose the Reversible Unfolding Network (RUN), which applies reversible strategies across both mask and RGB domains through a theoretically grounded framework, enabling accurate segmentation. RUN first formulates a novel COS model by incorporating an extra residual sparsity constraint to minimize segmentation uncertainties. The iterative optimization steps of the proposed model are then unfolded into a multistage network, with each step corresponding to a stage. Each stage of RUN consists of two reversible modules: the Segmentation-Oriented Foreground Separation (SOFS) module and the Reconstruction-Oriented Background Extraction (ROBE) module. SOFS applies the reversible strategy at the mask level and introduces Reversible State Space to capture non-local information. ROBE extends this to the RGB domain, employing a reconstruction network to address conflicting foreground and background regions identified as distortion-prone areas, which arise from their separate estimation by independent modules. As the stages progress, RUN gradually facilitates reversible modeling of foreground and background in both the mask and RGB domains, directing the network's attention to uncertain regions and mitigating false-positive and false-negative results. Extensive experiments demonstrate the superior performance of RUN and highlight the potential of unfolding-based frameworks for COS and other high-level vision tasks.

🔥 News

  • 2025-06-04: We release the code, the pretrained models, and the results.

  • 2025-06-04: We release this repository.

🔗 Contents

  • Usage
  • Results
  • Citation
  • Acknowledgements

⚙️ Usage

1. Prerequisites

Note that SEE has only been tested on Ubuntu with the following environment; a full setup sketch follows the list.

  • Create a virtual environment in the terminal: conda create -n SEE python=3.8
  • Install the necessary packages: conda env create -f environment.yml
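
A possible setup sequence is sketched below. Note that conda env create -f environment.yml builds an environment directly from the YAML file (named by its name: field), so depending on that field the explicit conda create step may be redundant; conda activate is standard conda usage and is required before running any of the scripts.

conda create -n SEE python=3.8       # create a Python 3.8 environment named SEE
conda env create -f environment.yml  # install the pinned dependencies from the provided YAML
conda activate SEE                   # activate the environment before training or testing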

2. Downloading Training and Testing Datasets

  • Download the training set (COD10K-train) used for training
  • Download the testing sets (COD10K-test + CAMO-test + CHAMELEON + NC4K) used for testing
  • Refer to the COS repository for more datasets.

3. Training Configuration

Training consists of two steps; run them in order:

Step 1: Train the segmenter

python Train.py  --epoch YOUR_EPOCH  --lr YOUR_LEARNING_RATE  --batchsize YOUR_BATCH_SIZE  --trainsize YOUR_TRAINING_SIZE  --train_root YOUR_TRAININGSETPATH  --val_root  YOUR_VALIDATIONSETPATH  --save_path YOUR_CHECKPOINTPATH
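
For illustration only, an invocation with hypothetical values and paths might look like the following; the epoch count, learning rate, batch size, input size, and directory layout are placeholders, not the settings reported in the paper.

python Train.py --epoch 100 --lr 1e-4 --batchsize 16 --trainsize 384 --train_root ./data/train/ --val_root ./data/val/ --save_path ./checkpoints/SEE/   # illustrative values only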

Step 2: Co-training with SAM

The pretrained model is stored in Google Drive. After downloading, please change the file path in the corresponding code.

python segment-anything/train_semi_single_withsam.py

4. Testing Configuration

Our well-trained model is stored in Google Drive. After downloading, please change the file path in the corresponding code.

python Test.py  --testsize YOUR_IMAGESIZE  --pth_path YOUR_CHECKPOINTPATH  --test_dataset_path  YOUR_TESTINGSETPATH
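
Again for illustration, with hypothetical paths and a placeholder image size:

python Test.py --testsize 384 --pth_path ./checkpoints/SEE/SEE_best.pth --test_dataset_path ./data/TestDataset/   # illustrative values only; point --pth_path at the downloaded checkpoint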

5. Evaluation

  • MATLAB code: one-key evaluation is implemented in MATLAB. Please follow the instructions in main.m and run it to generate the evaluation results. (A rough, non-official Python alternative is sketched below.)
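
For a quick sanity check without MATLAB, the minimal Python sketch below computes the mean absolute error (MAE), one of the standard COS metrics. It is not the official evaluation code, and the directory layout it expects (pred/ and gt/ folders holding same-named grayscale masks) is an assumption; use main.m to reproduce the reported metrics.

# Minimal MAE computation between predicted and ground-truth masks.
# NOT the official evaluation; directory layout and paths are assumptions.
import os
import numpy as np
from PIL import Image

def mae(pred_dir, gt_dir):
    scores = []
    for name in sorted(os.listdir(gt_dir)):
        pred_path = os.path.join(pred_dir, name)
        if not os.path.exists(pred_path):
            continue  # skip images without a prediction
        gt = np.asarray(Image.open(os.path.join(gt_dir, name)).convert("L"), dtype=np.float64) / 255.0
        pred = Image.open(pred_path).convert("L").resize((gt.shape[1], gt.shape[0]))  # match GT resolution
        pred = np.asarray(pred, dtype=np.float64) / 255.0
        scores.append(np.abs(pred - gt).mean())
    return float(np.mean(scores))

if __name__ == "__main__":
    print("MAE:", mae("./pred/CAMO", "./gt/CAMO"))  # hypothetical paths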

🔍 Results

We achieved state-of-the-art performance on camouflaged object detection, polyp image segmentation, medical tubular object segmentation, and transparent object detection. More results can be found in the paper.

Quantitative Comparison
  • Results in Table 1 of the main paper

Visual Comparison
  • Results in Figure 4 of the main paper

Related Works

RUN: Reversible Unfolding Network for Concealed Object Segmentation, ICML 2025.

Strategic Preys Make Acute Predators: Enhancing Camouflaged Object Detectors by Generating Camouflaged Objects, ICLR 2024.

Weakly-Supervised Concealed Object Segmentation with SAM-based Pseudo Labeling and Multi-scale Feature Grouping, NeurIPS 2023.

Camouflaged object detection with feature decomposition and edge reconstruction, CVPR 2023.

Concealed Object Detection, TPAMI 2022.

You can see more related papers in awesome-COS.

📎 Citation

If you find the code helpful in your research or work, please cite the following paper(s).

@article{he2025segment,
  title={Segment concealed object with incomplete supervision},
  author={He, Chunming and Li, Kai and Zhang, Yachao and Yang, Ziyun and Tang, Longxiang and Zhang, Yulun and Kong, Linghe and Farsiu, Sina},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2025}
}

Contact

If you have any questions, please feel free to contact me via email at chunminghe19990224@gmail.com or chunming.he@duke.edu.

Acknowledgements

The code is built on WS-SAM and FEDER. Please also follow the corresponding licenses. Thanks for their awesome work.
