ICCV 2025
- Checkpoints released on Hugging Face
- Inference code for novel view synthesis
- Inference code for novel pose synthesis
- An NVIDIA GPU with CUDA support is required.
- We have tested on single A4500 and A100 GPUs.
- Inference requires at least 20GB of GPU memory; a quick way to check yours is shown below.
- Operating system: Linux
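If you are unsure whether your GPU qualifies, a minimal check with `nvidia-smi` (assuming the NVIDIA driver is installed) is:

```bash
# List each visible GPU and its total memory; inference needs >= 20GB on one device.
nvidia-smi --query-gpu=name,memory.total --format=csv
```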
Instructions can be found in INSTALL.md.
We support both view and pose synthesis tasks. For each task, we present a quick demo and guidelines for running with custom data.
```bash
bash demo_scripts/nv.sh
```
If you want to use your own image for inference, please create a new folder (e.g., assets/demo_images in our case) and move your image into it. Note that all the intermediate renderings and the final result will be generated inside this folder.
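For instance, a minimal staging step might look like this (the image path below is a placeholder):

```bash
# Hypothetical example: stage your own image for novel view synthesis.
mkdir -p assets/demo_images
cp /path/to/your_image.png assets/demo_images/
```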
To run with custom data, make the following modifications:

- `demo_scripts/nv.sh`: Replace the `IMAGE_PATH` in L3 with your own image path.
- `configs/inference/novel_views.yaml`: Replace `data.obs_img_path` in L10 with your own image path.
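As a sketch, the edited lines might look like the following (the path is a placeholder):

```bash
# demo_scripts/nv.sh, L3 -- placeholder path:
IMAGE_PATH=assets/demo_images/your_image.png

# configs/inference/novel_views.yaml, L10 -- point data.obs_img_path
# at the same image:
#   data:
#     obs_img_path: assets/demo_images/your_image.png
```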
```bash
bash demo_scripts/np.sh
```
In this quick demo, we perform self-reenactment. If you want to do cross-reenactment, you need to transfer the SMPL shape of the source image to the target SMPL parameters, i.e., keep the driving sequence's pose parameters but replace its shape parameters (the betas) with those estimated from the source image.
If you want to use your own source image / driving video for inference, please create a new folder (e.g., assets/demo_videos in our case) and move your video into it. Note that all the intermediate renderings and the final result will be generated inside this folder.
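For instance (the video path below is a placeholder):

```bash
# Hypothetical example: stage your own driving video for novel pose synthesis.
mkdir -p assets/demo_videos
cp /path/to/driving_video.mp4 assets/demo_videos/
```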
For the configs, you need to make the following modifications:

- `demo_scripts/np.sh`: Set `VIDEO_PATH` in L4 to your driving video path, and `IMAGE_PATH` in L9 to your source image path.
- `configs/inference/novel_pose.yaml`: Set `data.obs_img_path` in L9 to your source image path, and `data.driving_video_dir` in L10 to the parent folder that contains the driving video.
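As a sketch, the edited lines might look like the following (all paths are placeholders):

```bash
# demo_scripts/np.sh -- placeholder paths:
VIDEO_PATH=assets/demo_videos/driving_video.mp4   # L4
IMAGE_PATH=assets/demo_images/source_image.png    # L9

# configs/inference/novel_pose.yaml -- matching keys:
#   data:
#     obs_img_path: assets/demo_images/source_image.png   # L9
#     driving_video_dir: assets/demo_videos                # L10
```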
If you find this code useful for your research, please cite it using the following BibTeX entry.
```bibtex
@article{lu2025gas,
  title={{GAS}: Generative Avatar Synthesis from a Single Image},
  author={Lu, Yixing and Dong, Junting and Kwon, Youngjoong and Zhao, Qin and Dai, Bo and De la Torre, Fernando},
  journal={arXiv preprint arXiv:2502.06957},
  year={2025}
}
```
This project builds on components implemented and released by SHERF, Champ, and MimicMotion. We thank the authors for releasing their code. Please also consider citing these works if you find them helpful.
This work is intended for research purposes only. The copyrights of the demo images and videos belong to their respective community creators. If there is any infringement or offense, please get in touch with us (yixing@cs.unc.edu), and we will remove the content promptly.
