Official Baseline Implementation for Track 3
Based on Place3D -- "Is Your LiDAR Placement Optimized for 3D Scene Understanding?"
(https://github.com/ywyeli/Place3D)
🏆 Prize Pool: $2,000 USD for Track 3 Winners
Track 3: Sensor Placement challenges participants to design LiDAR-based 3D perception models, including those for 3D object detection, that can adapt to diverse sensor placements in autonomous systems.
Participants will be tasked with developing novel algorithms that adapt to and optimize LiDAR sensor placements, ensuring high-quality 3D scene understanding across a wide range of environmental conditions, such as weather variations, motion disturbances, and sensor failures.
To be added.
- Venue: IROS 2025, Hangzhou (Oct 19-25, 2025)
- Registration: Google Form (Open until Aug 15)
- Contact: robosense2025@gmail.com
| Prize | Award |
|---|---|
| 🥇 1st Place | $1000 + Certificate |
| 🥈 2nd Place | $600 + Certificate |
| 🥉 3rd Place | $400 + Certificate |
| 🏅 Innovation Award | Cash Award + Certificate |
| Participation | Certificate |
The Track 3 dataset is prepared using data and settings from the Place3D framework.
In Phase 1, we provide a trainval dataset consisting of camera images from six views and LiDAR data from four different placements. You may choose to train your model on one placement and validate it on others, or design methods that leverage all placements to achieve better generalization.
The dataset contains 200 scenes (8,000 frames) in total, split into 125 scenes (5,000 frames) for training and 75 scenes (3,000 frames) for validation.
All LiDAR data from different placements are collected simultaneously, and they share the same metadata and annotation files. You can create symbolic links pointing to the desired data path, in order to conveniently switch between LiDAR placements.
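Because all placements share the same metadata and annotation files, switching placements can be a single symlink swap. A minimal sketch (the `nuscenes/lidar` link name and the directory layout here are illustrative, not the official ones):

```python
import os
from pathlib import Path

def use_placement(data_root: str, placement: int) -> str:
    """Re-point a single symlink at the chosen placement's LiDAR blobs,
    so metadata/annotation paths stay fixed while the data swaps out."""
    root = Path(data_root)
    target = root / f"track3_phase1_lidar_blobs_only_placement_{placement}"
    link = root / "nuscenes" / "lidar"
    link.parent.mkdir(parents=True, exist_ok=True)
    if link.is_symlink():
        link.unlink()  # replace a previous placement's link
    link.symlink_to(target, target_is_directory=True)
    return os.readlink(link)
```

Re-running the function with a different placement index re-points the link, so no data is copied when switching.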
| File name | Description | Split |
|---|---|---|
| track3_phase1_lidar_blobs_only_placement_0 | LiDAR data, placement Center | train, val |
| track3_phase1_lidar_blobs_only_placement_1 | LiDAR data, placement Line | train, val |
| track3_phase1_lidar_blobs_only_placement_2 | LiDAR data, placement Trapezoid | train, val |
| track3_phase1_lidar_blobs_only_placement_3 | LiDAR data, placement Pyramid | train, val |
| track3_phase1_camera_blobs_only | Camera images of 6 views | train, val |
| track3_phase1_sensor_file_blobs | LiDAR (Center) + Camera data | train, val |
| track3_phase1_metadata | metadata including annotations (gt) | train, val |
| track3_phase1and2_map | maps | - |
In Phase 2, we provide a test dataset that includes camera images from six views and LiDAR data from six different placements. All LiDAR data are organized in a single folder, so you do not need to switch paths when running tests.
Please use the Phase 1 trainval dataset to train and validate your model, and then evaluate its performance on the Phase 2 test set. See the following sections for detailed instructions on how to submit your results.
| File name | Description | Split |
|---|---|---|
| track3_phase2_lidar_blobs_only_test_part | LiDAR data | test |
| track3_phase2_camera_blobs_only_test_part | Camera images of 6 views | test |
| track3_phase2_sensor_file_blobs_test_part | LiDAR + Camera data | test |
| track3_phase2_metadata | metadata without annotations (gt) | test |
| track3_phase1and2_map | maps | - |
To be added.
We provide a simple demo to run the baseline model.
The code is built with the following libraries:
- Python >= 3.8, <3.9
- OpenMPI = 4.0.4 and mpi4py = 3.0.3 (Needed for torchpack)
- Pillow = 8.4.0
- PyTorch >= 1.9, <= 1.10.2
- tqdm
- torchpack
- mmcv = 1.4.0
- mmdetection = 2.20.0
- numpy = 1.23
After installing these dependencies, please run this command to install the codebase:

```bash
cd projects/bevfusion
python setup.py develop
```

We also provide a Dockerfile to ease environment setup. To get started with docker, please make sure that docker is installed on your machine. After that, please execute the following command to build the docker image:

```bash
cd projects/bevfusion/docker && docker build . -t bevfusion
```

We can then run the docker with the following command:

```bash
docker run --gpus all -it -v `pwd`/../data:/dataset --shm-size 16g bevfusion /bin/bash
```

We recommend running data preparation (instructions are available in the next section) outside the docker if possible. Note that the dataset directory should be an absolute path. Within the docker, please run the following commands to clone our repo and install custom CUDA extensions:

```bash
git clone https://github.com/robosense2025/track3 && cd track3/projects/bevfusion
python setup.py develop
```

You can then create a symbolic link `data` to the `/dataset` directory in the docker.
Add the `dataset_utils` path to your PYTHONPATH by editing ~/.bashrc or ~/.zshrc:

```bash
export PYTHONPATH="$PYTHONPATH:/[YOUR_PARENT_FOLDER]/track3/projects/dataset_utils"
```

Apply the change:

```bash
source ~/.bashrc  # or ~/.zshrc
```

Please download the dataset from the HuggingFace dataset page linked below. Our dataset is in nuScenes format but requires our customized tools.
We typically need to organize the useful data information with a .pkl or .json file in a specific style for organizing annotations. To prepare these files, run the following command:

```bash
python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes
```

After data preparation, you will be able to see the following directory structure (as indicated in mmdetection3d):
```
mmdetection3d
├── mmdet3d
├── tools
├── configs
├── data
│   ├── nuscenes
│   │   ├── maps
│   │   ├── samples
│   │   ├── v1.0-test
│   │   ├── v1.0-trainval
│   │   ├── nuscenes_database
│   │   ├── nuscenes_infos_train.pkl
│   │   ├── nuscenes_infos_val.pkl
│   │   ├── nuscenes_infos_test.pkl
│   │   └── nuscenes_dbinfos_train.pkl
```
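A quick sanity check after data preparation is to count the frames indexed by an info file. A short sketch, assuming the standard mmdetection3d convention in which the pickle holds a dict whose `infos` entry is a list with one record per frame:

```python
import pickle

def count_frames(info_path: str) -> int:
    """Count annotated frames in an mmdetection3d-style info file:
    a dict whose 'infos' entry is a list with one record per frame."""
    with open(info_path, "rb") as f:
        data = pickle.load(f)
    return len(data["infos"])
```

For the Phase 1 split described above, `count_frames` on the train/val info files should report 5,000 and 3,000 frames respectively.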
To be added.
For the LiDAR detector, please run:

```bash
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/det/transfusion/secfpn/lidar/voxelnet_0p075.yaml
```

You will be able to run:

```bash
torchpack dist-run -np 8 python tools/test.py [config file path] pretrained/[checkpoint name].pth --eval [evaluation type]
```

For example:

```bash
torchpack dist-run -np 8 python tools/test.py configs/nuscenes/det/transfusion/secfpn/lidar/voxelnet_0p075.yaml pretrained/track3-baseline.pth --eval bbox
```

For Phase 1 submission, run the following command:

```bash
torchpack dist-run -np 8 python tools/test.py [config file path] pretrained/[checkpoint name].pth --eval bbox --eval-options 'jsonfile_prefix=./results'
```

For example:

```bash
torchpack dist-run -np 8 python tools/test.py configs/nuscenes/det/transfusion/secfpn/lidar/voxelnet_0p075.yaml pretrained/track3-baseline.pth --eval bbox --eval-options 'jsonfile_prefix=./results'
```

After running, you will find a file named `results_nusc.json` in `./results`. Zip this file and submit `results_nusc.zip` to Codabench.
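The zipping step can be sketched in Python (a stand-in `results_nusc.json` is created here so the snippet runs on its own; in practice the file comes from `tools/test.py`):

```python
import zipfile
from pathlib import Path

results = Path("results")
results.mkdir(exist_ok=True)
# Stand-in for the prediction file produced by tools/test.py.
(results / "results_nusc.json").write_text("{}")

# Package the predictions with the JSON at the top level of the archive.
with zipfile.ZipFile(results / "results_nusc.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write(results / "results_nusc.json", arcname="results_nusc.json")
```

Using `arcname` keeps the archive free of the `results/` directory prefix, so the JSON sits at the root of the zip.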
For Phase 2 submission, in `/track3/projects/bevfusion/configs/nuscenes/default.yaml`:

- Update line 289 to `ann_file: ${dataset_root + "nuscenes_infos_test.pkl"}`.
- Comment out lines 207-211, which load annotations (`LoadAnnotations3D`) in the test pipeline.
- Comment out lines 247-249, which include the keys of gt data (`Collect3D`).
Then, run the same command from Phase 1 after making the above changes.
The evaluation metrics are the same as nuScenes'. (Do not install the official nuscenes-devkit, as it may cause compatibility issues.)
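For reference, the headline nuScenes metric (NDS) combines mAP with five mean true-positive errors. A minimal sketch of the formula, with illustrative numbers that are not from the baseline:

```python
def nds(map_score: float, tp_errors: list) -> float:
    """nuScenes Detection Score: half the weight on mAP, half on the five
    mean TP errors (mATE, mASE, mAOE, mAVE, mAAE), each mapped into a
    [0, 1] score via 1 - min(1, err)."""
    assert len(tp_errors) == 5
    tp_scores = [1.0 - min(1.0, e) for e in tp_errors]
    return (5.0 * map_score + sum(tp_scores)) / 10.0

# Illustrative values only:
# nds(0.60, [0.30, 0.25, 0.40, 0.30, 0.20]) -> 0.655
```

Because each error is clipped at 1, a detector with very large TP errors still cannot drive its NDS below half of its mAP contribution.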
- Registration: Google Form
- Phase 1 Deadline: August 15th
- Phase 2 Deadline: September 15th
- Awards Announcement: IROS 2025
- Challenge Website: robosense2025.github.io
- Track Details: Track 3 Page
- Track Dataset: HuggingFace Page
- Baseline Model: HuggingFace Page
- Related Paper: arXiv:2403.17009
- Email: robosense2025@gmail.com
- Official Website: https://robosense2025.github.io
- Issues: Please use GitHub Issues for technical questions
If you use the code and dataset in your research, please cite:
```bibtex
@inproceedings{li2024place3d,
  title     = {Is Your LiDAR Placement Optimized for 3D Scene Understanding?},
  author    = {Ye Li and Lingdong Kong and Hanjiang Hu and Xiaohao Xu and Xiaonan Huang},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2024}
}
```

🤖 Ready to sense the world robustly? Register now and compete for $2,000!
Register Here | Challenge Website | 📧 Contact Us
Made with ❤️ by the RoboSense 2025 Team



