
FreDFT: Frequency Domain Fusion Transformer for Visible-Infrared Object Detection

This is the official PyTorch implementation of FreDFT. The paper can be downloaded here: FreDFT

1. Dependencies

Create a conda virtual environment, activate it, and install the requirements.

  1. conda create --name MOD python=3.9
  2. conda activate MOD
  3. pip install -r requirements.txt

2. Dataset download

Download the following datasets and create a dataset folder to hold them.

  1. FLIR dataset: FLIR
  2. LLVIP dataset: LLVIP
  3. M3FD dataset: M3FD
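The README does not document the exact directory layout the code expects, so as a minimal sketch you could prepare one subfolder per benchmark like this (the `datasets/` root and the `visible`/`infrared` split names are assumptions — match them to your downloads and to the paths used in the code):

```python
from pathlib import Path

# Benchmark names from the list above; split names are assumptions.
DATASETS = ["FLIR", "LLVIP", "M3FD"]

def prepare_dataset_dirs(root: str = "datasets") -> list[Path]:
    """Create an empty folder skeleton for the three benchmarks."""
    created = []
    for name in DATASETS:
        for split in ("visible", "infrared"):
            d = Path(root) / name / split
            d.mkdir(parents=True, exist_ok=True)
            created.append(d)
    return created
```

After creating the skeleton, copy or symlink the extracted datasets into the matching subfolders.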

3. Pretrained weights

Download our FreDFT weights and create a weights folder to hold them.

  1. FLIR dataset: FreDFT_FLIR.pt
  2. LLVIP dataset: FreDFT_LLVIP.pt
  3. M3FD dataset: FreDFT_M3FD.pt
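Before running training or testing, it can help to sanity-check that the downloaded checkpoints are in place. A small sketch, assuming the `weights/` folder suggested above (the file names are taken from the list; the folder name is an assumption):

```python
from pathlib import Path

# Checkpoint file names from the README's pretrained-weights list.
CHECKPOINTS = ["FreDFT_FLIR.pt", "FreDFT_LLVIP.pt", "FreDFT_M3FD.pt"]

def missing_checkpoints(root: str = "weights") -> list[str]:
    """Return the checkpoint files not yet present under `root`."""
    return [name for name in CHECKPOINTS
            if not (Path(root) / name).is_file()]
```

An empty return value means all three weight files were found.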

4. Training our FreDFT

The dataset path, GPU, batch size, and other settings should be adjusted to your environment before training.

python train.py
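The actual options of `train.py` are not documented here; as a hedged illustration of the kind of settings the note above refers to (dataset path, GPU, batch size), an argparse setup might look like the following. The flag names are assumptions, not the real interface — if the script uses argparse, `python train.py --help` will show the actual options:

```python
import argparse

# Illustrative only: these flags are assumptions mirroring the settings
# the README says to adjust, not the actual train.py interface.
def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(description="FreDFT training (sketch)")
    p.add_argument("--data", default="datasets/FLIR", help="dataset root")
    p.add_argument("--device", default="0", help="GPU id, e.g. 0 or 0,1")
    p.add_argument("--batch-size", type=int, default=8)
    p.add_argument("--epochs", type=int, default=100)
    return p
```

If the script instead hard-codes these values, edit them directly in `train.py`.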

5. Testing our FreDFT

python test.py

6. Citation

If you find FreDFT helpful for your research, please consider citing our work.

@article{Wu2025,
  title={FreDFT: Frequency Domain Fusion Transformer for Visible-Infrared Object Detection}, 
  author={Wencong Wu and Xiuwei Zhang and Hanlin Yin and Shun Dai and Hongxi Zhang and Yanning Zhang},
  journal={arXiv preprint arXiv:2511.10046},
  year={2025}
}
