This repository contains the official implementation for the SIGGRAPH'25 (TOG) UniRig framework, a unified solution for automatic 3D model rigging, developed by Tsinghua University and Tripo.
Paper: One Model to Rig Them All: Diverse Skeleton Rigging with UniRig
Rigging 3D models (creating a skeleton and assigning skinning weights) is a crucial but often complex and time-consuming step in 3D animation. UniRig tackles this challenge by introducing a novel, unified framework leveraging large autoregressive models to automate the process for a diverse range of 3D assets.
Combining UniRig with keyframe animation produces the following results:
The full UniRig system consists of two main stages:
- Skeleton Prediction: A GPT-like transformer autoregressively predicts a topologically valid skeleton hierarchy using a novel Skeleton Tree Tokenization scheme.
- Skinning Weight & Attribute Prediction: A Bone-Point Cross Attention mechanism predicts per-vertex skinning weights and relevant bone attributes (e.g., for physics simulation) based on the predicted skeleton and input mesh geometry.
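To make the second stage concrete, here is a minimal, hypothetical PyTorch sketch of a bone-point cross-attention readout: vertex features act as queries, bone features as keys, and each vertex's attention distribution over bones plays the role of skinning weights. The class name, dimensions, and projections are illustrative only, not the repository's actual module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BonePointCrossAttention(nn.Module):
    """Toy single-head sketch: each vertex attends over all bones and the
    normalized attention scores are read out as skinning weights."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)  # project per-vertex features to queries
        self.k_proj = nn.Linear(dim, dim)  # project per-bone features to keys
        self.scale = dim ** -0.5

    def forward(self, vertex_feats: torch.Tensor, bone_feats: torch.Tensor) -> torch.Tensor:
        # vertex_feats: (B, N, dim) geometry features for N mesh vertices
        # bone_feats:   (B, J, dim) features for J bones of the predicted skeleton
        q = self.q_proj(vertex_feats)                       # (B, N, dim)
        k = self.k_proj(bone_feats)                         # (B, J, dim)
        scores = torch.einsum("bnd,bjd->bnj", q, k) * self.scale
        # Softmax over bones: each vertex's weights sum to 1, like skinning weights.
        return F.softmax(scores, dim=-1)                    # (B, N, J)

# Example: one mesh with 1000 vertices and 24 bones, 256-d features.
weights = BonePointCrossAttention()(torch.randn(1, 1000, 256), torch.randn(1, 24, 256))
print(weights.shape)  # torch.Size([1, 1000, 24])
```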
This repository provides the code implementation for the full UniRig framework, with components being released progressively.
- Unified Model: Aims to handle diverse model categories (humans, animals, objects) with a single framework.
- Automated Skeleton Generation: Predicts topologically valid skeleton structures. (Available in current release)
- Automated Skinning Prediction: Predicts per-vertex skinning weights. (Available in current release)
- Bone Attribute Prediction: Predicts attributes like stiffness for physics-based secondary motion. (Coming Soon)
- High Accuracy & Robustness: Achieves state-of-the-art results on challenging datasets (as shown in the paper with Rig-XL/VRoid training).
- Efficient Tokenization: Uses Skeleton Tree Tokenization for compact representation and efficient processing (a toy illustration follows this list).
- Human-in-the-Loop Ready: Designed to potentially support iterative refinement workflows.
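To illustrate the tokenization idea mentioned above, the toy function below serializes a skeleton tree (joint positions plus parent indices) into a flat token sequence by depth-first traversal, emitting discretized joint coordinates and a structural marker whenever the chain branches. This is only a sketch of the general idea; the actual UniRig scheme, its special tokens, and its vocabulary are defined by the tokenizer configs in this repository.

```python
from collections import defaultdict
import numpy as np

BRANCH = "<branch>"  # structural marker: the chain is broken, restate the branch point

def tokenize_skeleton(joints: np.ndarray, parents: list, bins: int = 256) -> list:
    """Toy DFS serialization. joints: (J, 3) in [-1, 1]; parents[0] is None (root)."""
    children = defaultdict(list)
    for i, p in enumerate(parents):
        if p is not None:
            children[p].append(i)

    def coords(i):
        # Discretize xyz into integer bins -> 3 tokens per joint.
        return [int((c + 1.0) / 2.0 * (bins - 1)) for c in joints[i]]

    tokens, prev, stack = [], None, [0]
    while stack:
        i = stack.pop()
        if parents[i] is not None and parents[i] != prev:
            # Not a continuation of the previous joint's chain: mark a branch
            # and restate the branch point so the tree can be rebuilt.
            tokens.append(BRANCH)
            tokens.extend(coords(parents[i]))
        tokens.extend(coords(i))
        prev = i
        stack.extend(reversed(children[i]))
    return tokens

# Tiny example: a two-joint spine plus one extra child off the root.
joints = np.array([[0, 0, 0], [0, 0.5, 0], [0, 1.0, 0], [0.5, 0, 0]], dtype=np.float32)
print(tokenize_skeleton(joints, [None, 0, 1, 0]))
```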
We are open-sourcing UniRig progressively. Please note the current status:
Available Now (Initial Release):
- Code: Implementation for skeleton and skinning prediction.
- Model: Skeleton & Skinning Prediction checkpoint trained on Articulation-XL2.0. Available on Hugging Face.
- Dataset: Release of the Rig-XL and VRoid datasets used in the paper. We also filtered out 31 broken models from the training dataset; removing them does not affect the performance of the final model.
- Training code.
Planned Future Releases:
- Full UniRig model checkpoints (Skeleton + Skinning) trained on Rig-XL/VRoid, replicating the paper's main results.
We appreciate your patience as we prepare these components for release. Follow VAST-AI-Research announcements for updates!
- Prerequisites:
  - Python 3.11
  - PyTorch (tested with version >=2.3.1)
- Clone the repository:

git clone https://github.com/VAST-AI-Research/UniRig
cd UniRig
- Set up a virtual environment (recommended):

conda create -n UniRig python=3.11
conda activate UniRig
- Install dependencies:

python -m pip install torch torchvision
python -m pip install -r requirements.txt
python -m pip install spconv-{your-cuda-version}
python -m pip install torch_scatter torch_cluster -f https://data.pyg.org/whl/torch-{your-torch-version}+{your-cuda-version}.html --no-cache-dir
python -m pip install numpy==1.26.4
`spconv` is installed from this repo, while `torch_scatter` and `torch_cluster` are installed from this site. There is also a good chance you will encounter a `flash_attn` installation error; if so, go to its original repo and follow its installation guide.
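As a convenience (not part of the repository), the small snippet below reads your local PyTorch build and prints the values to substitute for the `{your-cuda-version}` and `{your-torch-version}` placeholders above; double-check the exact `spconv` package name on PyPI for your CUDA version.

```python
import torch

# Derive the placeholder values in the pip commands above from the installed PyTorch build.
torch_version = torch.__version__.split("+")[0]             # e.g. "2.3.1"
cuda = torch.version.cuda                                    # e.g. "12.1" (None for CPU-only builds)
cuda_tag = "cu" + cuda.replace(".", "") if cuda else "cpu"   # e.g. "cu121"

print("spconv package suffix:", cuda_tag)
print("PyG wheel index:", f"https://data.pyg.org/whl/torch-{torch_version}+{cuda_tag}.html")
```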
- Download Model Checkpoint: The currently available skeleton prediction model checkpoint is hosted on Hugging Face and will typically be downloaded automatically by the provided scripts/functions.
- (Optional, for importing/exporting .vrm) Install the Blender addon: The addon is modified from VRM-Addon-for-Blender. Make sure you are in the root directory of the project, then run:
python -c "import bpy, os; bpy.ops.preferences.addon_install(filepath=os.path.abspath('blender/add-on-vrm-v2.20.77_modified.zip'))"
Note that aside from VRoid, all models are selected from Objaverse. Just download `mapping.json` if you already have the Objaverse dataset (or need to download it from the web). The JSON contains the ids of all models, with `type` indicating their category and `url` specifying where to download them; `url` is the same as `fileIdentifier` in Objaverse. The training/validation split is in the `datalist` folder.
Note: All floating-point values are stored in `float16` format for compression.
Put the dataset in `dataset_clean`, go back to the root directory, and run the following command to export an FBX model:
from src.data.raw_data import RawData
raw_data = RawData.load("dataset_clean/rigxl/12345/raw_data.npz")
raw_data.export_fbx("res.fbx")
Dataset Format (click to expand)
All models are converted into world space.
- `vertices`: Positions of the mesh vertices, shape `(N, 3)`.
- `vertex_normals`: Normals of the vertices, processed by `Trimesh`, shape `(N, 3)`.
- `faces`: Indices of mesh faces (triangles), starting from 0, shape `(F, 3)`.
- `face_normals`: Normals of the faces, shape `(F, 3)`.
- `joints`: Positions of the armature joints, shape `(J, 3)`.
- `skin`: Skinning weights for each vertex, shape `(N, J)`.
- `parents`: Parent index of each joint, where `parents[0]` is always `None` (root), shape `(J)`.
- `names`: Name of each joint.
- `matrix_local`: The local axis of each bone, aligned to the Y-up axis, consistent with Blender.
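For a quick look at a processed sample without going through `RawData`, a plain NumPy load works. The sketch below is a convenience, not repository code; the path reuses the export example above, floating-point arrays are stored as `float16`, and `allow_pickle=True` is passed in case object-typed fields (such as `parents` containing `None`) are present.

```python
import numpy as np

# Inspect one processed sample (path from the export example above).
data = np.load("dataset_clean/rigxl/12345/raw_data.npz", allow_pickle=True)

vertices = data["vertices"].astype(np.float32)  # (N, 3), stored as float16
faces    = data["faces"]                        # (F, 3), indices starting from 0
joints   = data["joints"].astype(np.float32)    # (J, 3)
skin     = data["skin"].astype(np.float32)      # (N, J) skinning weights
parents  = data["parents"]                      # (J), parents[0] is None (root)
names    = data["names"]                        # joint names

print(f"{len(vertices)} vertices, {len(faces)} faces, {len(joints)} joints, root = {names[0]}")
print("first vertex's weights sum to:", skin[0].sum())  # sanity check: usually ~1
```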
Generate a skeleton for your 3D model using our pre-trained model. The process automatically analyzes the geometry and predicts an appropriate skeletal structure.
# Process a single file
bash launch/inference/generate_skeleton.sh --input examples/giraffe.glb --output results/giraffe_skeleton.fbx
# Process multiple files in a directory
bash launch/inference/generate_skeleton.sh --input_dir <your_input_directory> --output_dir <your_output_directory>
# Try different skeleton variations by changing the random seed
bash launch/inference/generate_skeleton.sh --input examples/giraffe.glb --output results/giraffe_skeleton.fbx --seed 42
Supported input formats: `.obj`, `.fbx`, `.glb`, and `.vrm`
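If you want to compare several seeds in one go, a small wrapper like the one below (a hypothetical convenience script, not part of the repository) simply re-invokes the skeleton script with different `--seed` values:

```python
import subprocess

# Generate a few alternative skeleton proposals for the same asset by sweeping seeds.
for seed in (0, 42, 123):
    subprocess.run(
        [
            "bash", "launch/inference/generate_skeleton.sh",
            "--input", "examples/giraffe.glb",
            "--output", f"results/giraffe_skeleton_seed{seed}.fbx",
            "--seed", str(seed),
        ],
        check=True,  # stop if any run fails
    )
```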
# Skin a single file
bash launch/inference/generate_skin.sh --input examples/skeleton/giraffe.fbx --output results/giraffe_skin.fbx
# Process multiple files in a directory
bash launch/inference/generate_skin.sh --input_dir <your_input_directory> --output_dir <your_output_directory>
Note that the command above uses an edited version of the skeleton from the skeleton prediction phase. Results may degrade significantly if the skeleton is inaccurate, for example if tail bones or wing bones are missing. It is therefore recommended to refine the skeleton before skinning to achieve better results.
Combine the predicted skeleton with your original 3D model to create a fully rigged asset:
# Merge skeleton from skeleton prediction
bash launch/inference/merge.sh --source results/giraffe_skeleton.fbx --target examples/giraffe.glb --output results/giraffe_rigged.glb
# Or merge skin from skin prediction
bash launch/inference/merge.sh --source results/giraffe_skin.fbx --target examples/giraffe.glb --output results/giraffe_rigged.glb
Note that there will be no skinning if you try to merge a skeleton file (`giraffe_skeleton.fbx`). Use the predicted skinning result (`giraffe_skin.fbx`) instead!
Validate the metrics reported in the paper (intended for academic use).
First, download the processed dataset from Hugging Face and extract it to `dataset_clean`.
Then run the following command:
python run.py --task=configs/task/validate_rignet.yaml
To export the skeleton & mesh, set `record_res` to `True` in the config file `configs/system/ar_validate_rignet.yaml`.
The validation code is currently a bit messy; this will hopefully be addressed in a future update from the VAST team.
Custom Data Preparation (click to expand)
In `configs/data/rignet.yaml`, `input_dataset_dir` specifies the original model folder and `output_dataset_dir` specifies the output folder in which to store npz files. After changing them, run the following command to process the data:

bash launch/inference/preprocess.sh --config configs/data/<yourdata> --num_runs <number of threads to run>
Train Skeleton Model (click to expand)
This section provides the configuration files needed to reproduce the results trained on the RigNet dataset, as described in the paper. Several configuration components are required:
- data: Tells the dataloader where and how to load data. Defined in `configs/data/rignet.yaml`. The program will look for data in `<output_dataset_dir>/<relative path in datalist>/raw_data.npz`, so you need to put the processed dataset under `dataset_clean`.
- transform: Data augmentations. Defined in `configs/transform/train_rignet_ar_transform.yaml`. For details on the augmentation operations, refer to `src/data/augment.py`.
- tokenizer: Tells the model how to encode skeletons. Defined in `configs/tokenizer/tokenizer_rignet.yaml`.
- system: Controls the training process. Defined in `configs/system/ar_train_rignet.yaml`. With this config, training exports generation results every 4 epochs after epoch 70. You can also change the `sampling methods` in it.
- model: Defined in `configs/model/unirig_rignet.yaml`; you can change the base transformer model here. Note: `n_positions` must be greater than the sum of the conditional embedding length and the maximum number of skeleton tokens (see the check after this list).
- task: The final training config. Defined in `configs/task/train_rignet_ar.yaml`. It integrates all the components above and also configures the `loss`, `optimizer`, and `scheduler`; optimizer and scheduler initialization can be found in `src/system/optimizer.py` and `src/system/scheduler.py`. The `trainer` section controls GPU/node usage (multi-node training has not been tested). The `wandb` section enables logging with Weights & Biases, and the `checkpoint` section configures the checkpoint saving strategy. You can comment out `wandb` and `checkpoint` if you do not need logging or final model checkpoints.
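As a quick sanity check for the `n_positions` constraint in the model component above, the snippet below spells out the inequality with made-up example numbers (the real values depend on your tokenizer and dataset):

```python
# Illustrative numbers only: the transformer context must be long enough to hold
# the conditioning prefix plus the longest tokenized skeleton in the dataset.
cond_embedding_len = 256    # length of the conditional (shape) embedding
max_skeleton_tokens = 2048  # longest tokenized skeleton you expect to see
n_positions = 4096          # value set in configs/model/unirig_rignet.yaml

assert n_positions > cond_embedding_len + max_skeleton_tokens, \
    "n_positions must exceed conditional embedding length + max skeleton tokens"
print("context window is large enough")
```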
During training, the checkpoints will be saved to `experiments/<experimentname>`.
To run the training, use the following command:
python run.py --task=configs/task/train_rignet_ar.yaml
The best results typically appear around epoch 120, after approximately 18 hours of training on 4x RTX 4090 GPUs.
Also note that in AR training, a lower validation CE loss does NOT necessarily imply better skeleton generation results.
After training, change `resume_from_checkpoint` to the path of the final model to see the results in the inference task. Create a new inference task named `rignet_ar_inference_scratch.yaml` in `configs/task`:
mode: predict # change it to predict
debug: False
experiment_name: test
resume_from_checkpoint: experiments/train_rignet_ar/last.ckpt # final ckpt path
components:
  data: quick_inference # inference data
  system: ar_inference_articulationxl # any system without `val_interval` or `val_start_from` should be ok
  tokenizer: tokenizer_rignet # must be the same in training
  transform: train_rignet_ar_transform # only need to keep the normalization method
  model: unirig_rignet # must be the same in training
  data_name: raw_data.npz
writer:
  __target__: ar
  output_dir: ~
  add_num: False
  repeat: 1
  export_npz: predict_skeleton
  export_obj: skeleton
  export_fbx: skeleton
trainer:
  max_epochs: 1
  num_nodes: 1
  devices: 1
  precision: bf16-mixed
  accelerator: gpu
  strategy: auto
and run:
bash launch/inference/generate_skeleton.sh --input examples/giraffe.glb --output examples/giraffe_skeleton.fbx --skeleton_task configs/task/rignet_ar_inference_scratch.yaml
Train Skin Model (click to expand)
The process of skinning training is very similar to skeleton training. You can modify `configs/task/train_rignet_skin.yaml` and run:

python run.py --task=configs/task/train_rignet_skin.yaml
If you run into pyrender-related issues, change `vertex_group_confis/kwargs/voxel_skin/backend` to `open3d` in `configs/transform/train_rignet_skin_transform`. Note that you will then need to make the same change in prediction mode.
Note that this task requires at least 60GB of GPU memory on a single GPU, even with `batch_size=2` in the data config. To reduce the memory requirement, you can change `batch_size` to 1, increase `accumulate_grad_batches` in the task config, and decrease `num_train_vertex` in `configs/model/unirig_skin.yaml`.
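For reference, lowering `batch_size` while raising `accumulate_grad_batches` keeps the effective batch size roughly constant while reducing peak memory; the arithmetic (with illustrative numbers, not values from the configs) is simply:

```python
# Illustrative numbers: effective batch size under gradient accumulation.
batch_size = 1               # per-GPU batch size in the data config
accumulate_grad_batches = 2  # set in the task config
devices = 1                  # GPUs used by the trainer
print("effective batch size:", batch_size * accumulate_grad_batches * devices)
```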
After training, change `resume_from_checkpoint` to the path of the final model to see the results in the inference task. Create a new inference task named `rignet_skin_inference_scratch.yaml` in `configs/task`:
mode: predict # change it to predict
debug: False
experiment_name: test
resume_from_checkpoint: experiments/train_rignet_skin/last.ckpt # final ckpt path
components:
  data: quick_inference # inference data
  system: skin
  transform: inference_skin_transform # do not need skin vertex_group
  model: unirig_skin # must be the same in training
  data_name: raw_data.npz
writer:
  __target__: skin
  output_dir: ~
  add_num: False
  repeat: 1
  save_name: predict
  export_npz: predict_skin
  export_fbx: result_fbx
trainer:
  max_epochs: 1
  num_nodes: 1
  devices: 1
  precision: bf16-mixed
  accelerator: gpu
  strategy: auto
  inference_mode: True
and run:
bash launch/inference/generate_skin.sh --input examples/skeleton/giraffe.fbx --output results/giraffe_skin.fbx --skin_task configs/task/rignet_skin_inference_scratch.yaml
Available models are hosted on Hugging Face: https://huggingface.co/VAST-AI/UniRig
- For generation: CUDA-enabled GPU with at least 8GB VRAM
@article{zhang2025unirig,
title={One Model to Rig Them All: Diverse Skeleton Rigging with UniRig},
author={Zhang, Jia-Peng and Pu, Cheng-Feng and Guo, Meng-Hao and Cao, Yan-Pei and Hu, Shi-Min},
journal={arXiv preprint arXiv:2504.12451},
year={2025}
}
We would like to thank the following open-source projects and research works:
- OPT for model architecture
- 3DShape2VecSet for 3D shape representation
- SAMPart3D and Michelangelo for shape encoder implementation
- Articulation-XL2.0 for a curated dataset
We are grateful to the broader research community for their open exploration and contributions to the field of 3D generation.