
DiffusionVL: Translating Any Autoregressive Models into
Diffusion Vision Language Models

SOTA dVLM Performance with <5% Data & 2.0× Inference Speedup!

Lunbin Zeng1,*, Jingfeng Yao1,*, Bencheng Liao1, Hongyuan Tao1, Wenyu Liu1, Xinggang Wang1, 📧

1Huazhong University of Science and Technology

*equal contribution, 📧corresponding author, xgwang@hust.edu.cn

arXiv Hugging Face

📰 News

  • [2025.12.25] 🎄 We have completed our release plan ahead of schedule. DiffusionVL is now fully open-sourced. Merry Christmas to the community!
  • [2025.12.18] 🎉 Our paper DiffusionVL is released on arXiv! We also release the DiffusionVL models translated from Qwen2.5VL on Hugging Face.

🚀 Release Plan

  • Release paper
  • Release DiffusionVL model weights (translated from AR-VLMs)
  • Release DiffusionVL model weights (translated from AR-LMs)
  • Release evaluation code
  • Release training code

📄 Introduction

The diffusion paradigm has emerged as a promising alternative to autoregressive (AR) models, offering the potential for efficient parallel decoding. However, existing diffusion vision language models (dVLMs) still lag well behind mainstream autoregressive vision language models in performance, primarily due to the limited capability of their base diffusion language models.

DiffusionVL bridges this gap by answering a fundamental question: can we directly translate any existing autoregressive model into a powerful diffusion vision language model? We propose a diffusion finetuning framework that "translates" any pretrained AR model into a diffusion vision language model through a simple paradigm shift and modality shift. Unlike prior dVLMs restricted to fixed generation lengths, DiffusionVL introduces a novel block decoding strategy that allows arbitrary-length generation and KV-cache reuse. With this integrated design, and despite training on less than 5% of the data required by previous methods, DiffusionVL translated from AR-VLMs achieves state-of-the-art performance among existing dVLMs and delivers a 2.0× inference speedup.
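As a rough illustration of the paradigm shift described above, the sketch below shows a generic masked-token denoising objective of the kind used by discrete-diffusion language models: the causal next-token loss is replaced by recovering randomly masked tokens with full bidirectional attention. The mask-token id, masking schedule, and 1/t reweighting here are illustrative assumptions, not DiffusionVL's exact recipe.

```python
# Generic masked-token denoising loss (illustrative sketch only; the exact
# objective, masking schedule, and mask id used by DiffusionVL may differ).
import torch
import torch.nn.functional as F

MASK_ID = 999  # hypothetical [MASK] token id


def diffusion_finetune_loss(model, input_ids: torch.Tensor) -> torch.Tensor:
    """One step of the 'paradigm shift': instead of next-token prediction
    under a causal mask, a random fraction of tokens is replaced by [MASK]
    and the model recovers them with bidirectional attention."""
    # Sample a masking ratio t per sequence and mask tokens independently.
    t = torch.rand(input_ids.shape[0], 1, device=input_ids.device).clamp(min=0.1)
    is_masked = torch.rand_like(input_ids, dtype=torch.float) < t
    noisy_ids = torch.where(is_masked, torch.full_like(input_ids, MASK_ID), input_ids)

    # Full-attention forward pass; `model` is any module returning (B, L, V) logits.
    logits = model(noisy_ids)

    # Cross-entropy on masked positions only, reweighted by 1/t as in
    # standard masked-diffusion objectives.
    ce = F.cross_entropy(logits[is_masked], input_ids[is_masked], reduction="none")
    return (ce / t.expand_as(input_ids)[is_masked]).mean()


if __name__ == "__main__":
    dummy = lambda ids: torch.randn(*ids.shape, 1000)  # stand-in for the VLM backbone
    tokens = torch.randint(0, 998, (2, 64))
    print(diffusion_finetune_loss(dummy, tokens))
```

For the modality shift, visual tokens produced by the original AR-VLM's vision encoder would typically be interleaved into input_ids before this loss is applied; they are omitted here for brevity.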

✨ Highlights

  • Universal Translation Framework: Translate any AR model into a dVLM with a simple yet effective approach.

  • Superior Performance: Achieve SOTA dVLM performance using <5% of the training data (738K vs. 16.5M samples).

  • 2.0× Faster Inference: The block decoding strategy enables KV-cache reuse and a 2.0× speedup over previous dVLMs (a minimal decoding sketch follows this list).
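To make the block decoding highlight concrete, here is a minimal, self-contained sketch: tokens are produced one fixed-size block at a time, each block is iteratively denoised from all-[MASK] tokens, and finalized blocks are never revisited. The dummy model, block size, mask id, and confidence-based commit rule are assumptions for illustration, not the repository's implementation.

```python
# Conceptual sketch of block decoding, not the repository's implementation.
# A dummy model stands in for the dVLM so the loop runs as-is; in the real
# model, finalized blocks stay in the KV cache and are never recomputed.
import torch

VOCAB_SIZE = 1000
MASK_ID = VOCAB_SIZE - 1   # hypothetical [MASK] token id
BLOCK_SIZE = 8             # assumed block length
STEPS_PER_BLOCK = 4        # denoising iterations per block


def dummy_model(input_ids: torch.Tensor) -> torch.Tensor:
    """Stand-in for the dVLM: returns random logits of shape (B, L, V)."""
    return torch.randn(*input_ids.shape, VOCAB_SIZE)


@torch.no_grad()
def block_decode(prompt_ids: torch.Tensor, num_blocks: int = 3) -> torch.Tensor:
    """Generate num_blocks * BLOCK_SIZE tokens, one block at a time, so the
    output length is not fixed in advance."""
    sequence = prompt_ids.clone()
    for _ in range(num_blocks):
        # 1) Append a fresh block filled with mask tokens.
        block = torch.full((1, BLOCK_SIZE), MASK_ID, dtype=torch.long)
        sequence = torch.cat([sequence, block], dim=1)

        # 2) Iteratively denoise: each step commits the most confident
        #    still-masked positions and leaves the rest masked.
        for _ in range(STEPS_PER_BLOCK):
            logits = dummy_model(sequence)[:, -BLOCK_SIZE:, :]
            conf, pred = logits.softmax(dim=-1).max(dim=-1)     # (1, BLOCK_SIZE)
            still_masked = sequence[:, -BLOCK_SIZE:] == MASK_ID
            conf = conf.masked_fill(~still_masked, -1.0)        # never overwrite committed tokens
            top = conf.topk(BLOCK_SIZE // STEPS_PER_BLOCK, dim=-1).indices
            sequence[:, -BLOCK_SIZE:].scatter_(1, top, pred.gather(1, top))
    return sequence


if __name__ == "__main__":
    out = block_decode(torch.tensor([[1, 2, 3]]))
    print(out.shape)  # torch.Size([1, 3 + 3 * BLOCK_SIZE])
```

Because blocks are finalized strictly left to right, earlier blocks can remain in the KV cache and the total output length is not fixed in advance, which is what removes the fixed-length constraint of earlier dVLMs and enables the reported speedup.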

[Figures: benchmark comparison and framework overview]

🚀 Get Started

  • Installation: environment setup, data and model preparation
  • Training & Evaluation: train and evaluate DiffusionVL models
  • Inference: quick inference with pre-trained models
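The Inference document above is the authoritative guide. Purely as a rough sketch of what a quick start might look like, the snippet below loads a released checkpoint via transformers' trust_remote_code path; the repo id is a placeholder and the processor/generate interface is an assumption, so defer to the docs and the Hugging Face model card for the real entry point.

```python
# Hypothetical quick-inference sketch; the repo id, chat-template call, and
# generation arguments are placeholders -- consult the Inference doc for the real API.
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "hustvl/DiffusionVL-7B"  # placeholder; use the id from the Hugging Face release
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map="auto"
)

messages = [{"role": "user", "content": [
    {"type": "image", "image": "example.jpg"},
    {"type": "text", "text": "Describe this image."},
]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```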

❤️ Acknowledgements

This repo is mainly built on Qwen2.5-VL, LLaDA-V, BD3LMs, SDAR, and lmms-eval. We thank the authors for their open-source contributions.

📝 Citation

If you find our work useful, please cite our paper:

@misc{zeng2025diffusionvltranslatingautoregressivemodels,
      title={DiffusionVL: Translating Any Autoregressive Models into Diffusion Vision Language Models},
      author={Lunbin Zeng and Jingfeng Yao and Bencheng Liao and Hongyuan Tao and Wenyu Liu and Xinggang Wang},
      year={2025},
      eprint={2512.15713},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2512.15713},
}
