Gait Recognition in the Wild: A Large-scale Benchmark and NAS-based Baseline
Xianda Guo, Zheng Zhu, Tian Yang, BeiBei Lin, Junjie Huang, Jiankang Deng, Guan Huang, Jie Zhou, Jiwen Lu.
- [2025/2] This paper has been accepted to T-PAMI.
- [2024/6/24] Training and evaluation code release.
- [2024/1] Paper released on arXiv.
We provide the following tutorials for your reference:
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -u -m torch.distributed.launch --nproc_per_node=8 opengait/main.py --cfgs configs/sposgait/sposgait_large_GREW_supertraining_triplet.yaml --phase train
```
- `python -m torch.distributed.launch`: DDP launch instruction.
- `--nproc_per_node`: the number of GPUs to use; it must equal the length of `CUDA_VISIBLE_DEVICES`.
- `--cfgs`: the path to the config file.
- `--phase`: specified as `train`.
- `--log_to_file`: if specified, the terminal log is also written to disk.
You can run commands in train.sh to train different models.
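As noted above, `--nproc_per_node` must match the number of GPUs listed in `CUDA_VISIBLE_DEVICES`. A minimal sketch of that sanity check before launching (the helper name `check_launch_config` is hypothetical, not part of this repository):

```python
import os

def check_launch_config(nproc_per_node: int) -> bool:
    """Hypothetical pre-launch check: the process count passed to
    --nproc_per_node must equal the number of GPUs made visible
    via CUDA_VISIBLE_DEVICES."""
    devices = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    n_visible = len([d for d in devices.split(",") if d.strip()])
    return nproc_per_node == n_visible

# 8 visible GPUs -> launching 8 processes is consistent.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3,4,5,6,7"
print(check_launch_config(8))  # True
```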
Multi-GPU architecture search:
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -u -m torch.distributed.launch --nproc_per_node=8 opengait/search.py --cfgs ./configs/sposgait/sposgait_large_GREW_supertraining_triplet.yaml --max-epochs 20
```
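SPOSGait follows the single-path one-shot (SPOS) NAS paradigm: the supernet is trained by uniformly sampling one candidate operation per layer at each step, and the search then evaluates sampled paths. A rough illustration of uniform path sampling, with a toy search space (all names here are hypothetical, not the repository's actual API):

```python
import random

# Toy search space: each layer picks one of several candidate blocks.
SEARCH_SPACE = {
    "layer1": ["conv3x3", "conv5x5", "skip"],
    "layer2": ["conv3x3", "conv5x5", "skip"],
    "layer3": ["conv3x3", "conv5x5", "skip"],
}

def sample_path(space, rng=random):
    """Uniformly sample one candidate per layer (single-path one-shot)."""
    return {layer: rng.choice(cands) for layer, cands in space.items()}

# Each call yields one architecture ("path") through the supernet.
path = sample_path(SEARCH_SPACE, random.Random(0))
print(path)
```

During supernet training, only the sampled path's weights are updated each iteration; during search, candidate paths are ranked by validation performance using the shared supernet weights.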
Train a model by
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -u -m torch.distributed.launch --nproc_per_node=8 opengait/main.py --cfgs ./configs/sposgait/retrain/sposgait_large_GREW-train20000id_retrain.yaml --phase train
```
Evaluate the trained model by
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 opengait/main.py --cfgs ./configs/sposgait/retrain/sposgait_large_GREW-train20000id_retrain.yaml --phase test
```
- `--phase`: specified as `test`.
- `--iter`: specify an iteration checkpoint.
You can run commands in test.sh to evaluate different models.
Participants must package `submission.csv` for submission using `zip xxx.zip $CSV_PATH` and then upload it to CodaLab.
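The `zip` step above can also be done programmatically. A minimal sketch using Python's standard `zipfile` module (the function name `package_submission` is hypothetical, not part of this repository):

```python
import zipfile
from pathlib import Path

def package_submission(csv_path: str, zip_path: str = "submission.zip") -> str:
    """Equivalent of `zip xxx.zip $CSV_PATH`: store the CSV inside a
    zip archive under its bare filename, ready for CodaLab upload."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(csv_path, arcname=Path(csv_path).name)
    return zip_path
```

The `arcname` argument strips directory components so the archive contains `submission.csv` at its root rather than a nested path.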
Calculate the FLOPs and parameter count of a model by
```shell
CUDA_VISIBLE_DEVICES=0 python -u -m torch.distributed.launch --nproc_per_node=1 opengait/calculate_flops_and_params.py --cfgs configs/sposgait/retrain/sposgait_large_GREW-train20000id_retrain.yaml
```
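For intuition about what such a script counts, here is the standard per-layer formula for a 2D convolution (a generic textbook estimate, not the repository's exact accounting): each output element costs `c_in * k * k` multiply-accumulates (MACs), and one MAC is conventionally counted as two FLOPs.

```python
def conv2d_flops(c_in: int, c_out: int, k: int, h_out: int, w_out: int) -> int:
    """FLOPs of a standard (non-grouped, bias-free) k x k convolution:
    every one of the c_out * h_out * w_out outputs needs c_in * k * k
    MACs, and each MAC is counted as 2 FLOPs (multiply + add)."""
    macs = c_in * k * k * c_out * h_out * w_out
    return 2 * macs

# Example: 3x3 conv, 64 -> 128 channels, 56x56 output feature map.
print(conv2d_flops(64, 128, 3, 56, 56))  # 462422016, i.e. ~0.46 GFLOPs
```

Tools differ on conventions (some report MACs and call them FLOPs), so compare numbers from different counters with care.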
If this work is helpful for your research, please consider citing the following BibTeX entries.
@inproceedings{zhu2021gait,
title={Gait recognition in the wild: A benchmark},
author={Zhu, Zheng and Guo, Xianda and Yang, Tian and Huang, Junjie and Deng, Jiankang and Huang, Guan and Du, Dalong and Lu, Jiwen and Zhou, Jie},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={14789--14799},
year={2021}
}
@ARTICLE{10906429,
author={Guo, Xianda and Zhu, Zheng and Yang, Tian and Lin, Beibei and Huang, Junjie and Deng, Jiankang and Huang, Guan and Zhou, Jie and Lu, Jiwen},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
title={Gait Recognition in the Wild: A Large-scale Benchmark and NAS-based Baseline},
year={2025},
volume={},
number={},
pages={1-18},
keywords={Gait recognition;Benchmark testing;Training;Three-dimensional displays;Legged locomotion;Cameras;Videos;Streams;Face recognition;Neural architecture search;Large-scale Gait Recognition;Biometric Authentication;Neural Architecture Search},
doi={10.1109/TPAMI.2025.3546482}
}
Note: This code is for academic purposes only and must not be used for anything that might be considered commercial use.