DefaultBoxGenerator device mismatch: torch.cat fails when model is on CUDA (CPU vs cuda:0) #9414

@Chethan-Babu-stack

Description

🐛 Describe the bug

When an SSD model that uses DefaultBoxGenerator runs on CUDA, the forward pass can raise:

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument tensors in method wrapper_cat)

The failure happens in anchor_utils.py inside DefaultBoxGenerator._grid_default_boxes, at torch.cat((shifts, wh_pairs), dim=1).

To Reproduce

  • Script an ssd300_vgg16 model, which uses DefaultBoxGenerator, with torch.jit.script (see the sketch after this list).
  • Put the model and inputs on CUDA: model.cuda(), input tensors on cuda:0.
  • Call model(images) (or run a detection forward pass).
  • The error occurs during anchor/default-box generation.
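A minimal sketch of these steps (random weights here, since the weights choice shouldn't matter for the device issue; whether the error fires can depend on how the scripted module ends up with CUDA-resident shifts, per the root cause below):

```python
# Minimal repro sketch, assuming a CUDA build of PyTorch/torchvision.
import torch
import torchvision

model = torchvision.models.detection.ssd300_vgg16(weights=None).eval()
scripted = torch.jit.script(model)
scripted = scripted.cuda()  # model on cuda:0

images = [torch.rand(3, 300, 300, device="cuda")]  # inputs on cuda:0
with torch.no_grad():
    detections = scripted(images)  # raises the RuntimeError described above
```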

Root cause

  • _wh_pairs is always built on CPU
    In DefaultBoxGenerator.__init__, _generate_wh_pairs() is called with the default device=torch.device("cpu"), so every tensor in self._wh_pairs stays on CPU.

  • _grid_default_boxes does not take a device
    It only receives dtype. It builds shifts_x / shifts_y with torch.arange(...).to(dtype=dtype) and no device= argument, so they end up on the current/default device (e.g. CUDA when the model is on the GPU).

So when building default_box = torch.cat((shifts, wh_pairs), dim=1):

  1. shifts is on the feature-map device (e.g. cuda:0),
  2. wh_pairs comes from self._wh_pairs[k], which is on CPU,

hence the CPU-vs-CUDA mismatch; the standalone snippet below reproduces the same error.
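
The mismatch can be shown in isolation: torch.cat refuses to mix devices, so CPU-resident wh_pairs meeting CUDA-resident shifts fails the same way (toy shapes below, not the real anchor layout):

```python
# Toy demonstration of the failure mode: torch.cat refuses to mix devices.
import torch

shifts = torch.rand(4, 2, device="cuda:0")  # like shifts: on the feature-map device
wh_pairs = torch.rand(4, 2)                 # like self._wh_pairs[k]: left on CPU
torch.cat((shifts, wh_pairs), dim=1)
# RuntimeError: Expected all tensors to be on the same device,
# but found at least two devices, cpu and cuda:0!
```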

Expected behavior
Default boxes should be computed entirely on the same device as the feature maps (e.g. CUDA when the model is on GPU), so torch.cat and the rest of the forward pass succeed.

Suggested fix

  • Thread the feature-map device from forward into _grid_default_boxes (e.g. add a device argument).
  • In _grid_default_boxes (sketched below):
    1. Create shifts_x / shifts_y on that device (e.g. torch.arange(..., device=device), then .to(dtype=dtype)).
    2. Use self._wh_pairs[k].to(dtype=dtype, device=device) (with the same clamp logic as now) before the repeat and torch.cat.
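
A sketch of the proposed change, assuming the method body otherwise matches the current torchvision source; the device parameter and the forward-side call are part of this proposal, not the existing API:

```python
# Proposed change, sketched against torchvision's DefaultBoxGenerator.
# The `device` parameter is new; everything else mirrors the existing method.
import torch

def _grid_default_boxes(self, grid_sizes, image_size, dtype=torch.float32,
                        device=torch.device("cpu")):
    default_boxes = []
    for k, f_k in enumerate(grid_sizes):
        if self.steps is not None:
            x_f_k = image_size[1] / self.steps[k]
            y_f_k = image_size[0] / self.steps[k]
        else:
            y_f_k, x_f_k = f_k

        # 1. Create the shifts directly on the feature-map device.
        shifts_x = ((torch.arange(0, f_k[1], device=device) + 0.5) / x_f_k).to(dtype=dtype)
        shifts_y = ((torch.arange(0, f_k[0], device=device) + 0.5) / y_f_k).to(dtype=dtype)
        shift_y, shift_x = torch.meshgrid(shifts_y, shifts_x, indexing="ij")
        shifts = torch.stack(
            (shift_x.reshape(-1), shift_y.reshape(-1)) * len(self.scales[k]), dim=-1
        ).reshape(-1, 2)

        # 2. Move the cached wh pairs to the same device/dtype, keeping the clamp.
        _wh_pair = self._wh_pairs[k].to(dtype=dtype, device=device)
        if self.clip:
            _wh_pair = _wh_pair.clamp(min=0, max=1)
        wh_pairs = _wh_pair.repeat((f_k[0] * f_k[1]), 1)

        default_boxes.append(torch.cat((shifts, wh_pairs), dim=1))
    return torch.cat(default_boxes, dim=0)

# In DefaultBoxGenerator.forward, thread the device through instead of only
# moving the concatenated result afterwards:
#   dtype, device = feature_maps[0].dtype, feature_maps[0].device
#   default_boxes = self._grid_default_boxes(grid_sizes, image_size,
#                                            dtype=dtype, device=device)
```

With this change every tensor involved in torch.cat is created on (or moved to) the feature-map device, so the scripted and eager paths behave the same on CUDA.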

Traceback (TorchScript / original)

Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__/torchvision/models/detection/anchor_utils.py", line 73, in forward
        _wh_pair = _wh_pairs1[k]
      wh_pairs = torch.repeat(_wh_pair, [torch.mul(f_k[0], f_k[1]), 1])
      default_box = torch.cat([shifts, wh_pairs], 1)
                    ~~~~~~~~~ <--- HERE
      _22 = torch.append(_12, default_box)
    return torch.cat(_12)

Traceback of TorchScript, original code (most recent call last):
  File ".../torchvision/models/detection/anchor_utils.py", line 232, in forward
            wh_pairs = _wh_pair.repeat((f_k[0] * f_k[1]), 1)
            default_box = torch.cat((shifts, wh_pairs), dim=1)
                          ~~~~~~~~~ <--- HERE
            default_boxes.append(default_box)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument tensors in method wrapper_cat)

Versions

Collecting environment information...
PyTorch version: 2.9.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A

OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39

Python version: 3.13.9 | packaged by conda-forge | (main, Oct 22 2025, 23:33:35) [GCC 14.3.0] (64-bit runtime)
Python platform: Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 566.26
cuDNN version: Could not collect
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Caching allocator config: N/A

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-14650HX
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 1
BogoMIPS: 4838.39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni vnmi umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 576 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 24 MiB (12 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] flake8==7.3.0
[pip3] mypy_extensions==1.1.0
[pip3] numpy==2.3.4
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-curand-cu12==10.3.9.90
[pip3] nvidia-cusolver-cu12==11.7.3.90
[pip3] nvidia-cusparse-cu12==12.5.8.93
[pip3] nvidia-cusparselt-cu12==0.7.1
[pip3] nvidia-nccl-cu12==2.27.5
[pip3] nvidia-nvjitlink-cu12==12.8.93
[pip3] nvidia-nvtx-cu12==12.8.90
[pip3] pytorch_classification_hema==1.0
[pip3] pytorch_detection_hema==1.0
[pip3] torch==2.9.0
[pip3] torchmetrics==1.8.2
[pip3] torchvision==0.24.0
[pip3] triton==3.5.0
[conda] Could not collect
