DERD-Net: Learning Depth from Event-based Ray Densities (NeurIPS 2025 Spotlight)

Official repository for DERD-Net: Learning Depth from Event-based Ray Densities, by Diego Hitzges*, Suman Ghosh*, and Guillermo Gallego.

*Equal contribution.

Citation

If you use this work in your research, please cite it as follows:

@InProceedings{Hitzges25neurips,
  title     = {{DERD-Net}: Learning Depth from Event-based Ray Densities},
  author    = {Hitzges, Diego and Ghosh, Suman and Gallego, Guillermo},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year      = {2025}
}

Framework

[Figure: overview of the DERD-Net framework]

Data-Preprocessing

  • Create Disparity Space Images (DSIs) from events and camera pose
  • In case of stereo event vision, fuse DSIs from two or more cameras
  • Select pixels with sufficient ray counts in the DSI
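
The selection step above can be sketched as follows. This is a minimal illustration, assuming the DSI is a NumPy array of ray counts with shape (depth, height, width) and using a simple fixed threshold; the actual selection criterion and axis order used in the notebooks may differ.

import numpy as np

def select_pixels(dsi, min_ray_count=5):
    # dsi: ray densities with shape (D, H, W); the threshold value is an assumption
    max_counts = dsi.max(axis=0)        # strongest ray count per pixel over all depth planes
    return max_counts >= min_ray_count  # boolean mask of pixels with sufficient support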

Input

  • Local subregion of the DSI around each pixel (Sub-DSI)
  • Each Sub-DSI is one data point and is processed independently and in parallel by the network
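
As a rough sketch, extracting a Sub-DSI amounts to cropping a fixed spatial window around each selected pixel across all depth planes. The window size and axis order below are assumptions, not necessarily the exact values used in the notebooks.

import numpy as np

def extract_sub_dsi(dsi, y, x, half_window=3):
    # dsi: ray densities with shape (D, H, W); returns the local (D, 7, 7) patch
    # around pixel (y, x), smaller at image borders
    d, h, w = dsi.shape
    y0, y1 = max(0, y - half_window), min(h, y + half_window + 1)
    x0, x1 = max(0, x - half_window), min(w, x + half_window + 1)
    return dsi[:, y0:y1, x0:x1]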

Neural Network

[Figure: network architecture]

Output

  • Pixel-wise depth estimation for each Sub-DSI:
    • Single value at the selected pixel for the single-pixel network version
    • 3×3 grid at the selected pixel and its 8 neighboring pixels for the multi-pixel network version
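
The two output modes can be illustrated with a small helper that writes a prediction back into the depth map. The function name and the in-place update are illustrative only, not part of the released code.

import numpy as np

def write_prediction(depth_map, y, x, pred, multi_pixel=True):
    # Multi-pixel version: pred is a 3x3 grid covering (y, x) and its 8 neighbors.
    # Single-pixel version: pred is one depth value for pixel (y, x).
    if multi_pixel:
        depth_map[y - 1:y + 2, x - 1:x + 2] = np.asarray(pred).reshape(3, 3)
    else:
        depth_map[y, x] = float(pred)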

Results

Drones (MVSEC)

[Figure: depth estimation results on MVSEC indoor_flying]

Using three-fold cross-validation on the MVSEC indoor_flying sequences, our approach drastically outperforms comparable methods:

  • Using purely monocular data, our method achieves comparable results to existing stereo methods.
  • When applied to stereo data, it strongly outperforms all state-of-the-art (SOTA) approaches, reducing the mean absolute error by at least 42%.
  • Our method also increases depth completion more than 3-fold while still reducing the median absolute error by at least 30%.

Driving (DSEC)

[Figure: depth estimation results on DSEC]

Installation & Usage

The code for our approach is provided in Jupyter notebooks located in the notebooks folder; each notebook contains detailed usage instructions.

To use these notebooks, follow the installation guide below:

1. Clone the Repository

git clone https://github.com/tub-rip/DERD-Net.git
cd DERD-Net

2. Set Up the Environment

We provide an environment.yml file to ensure compatibility with all dependencies. Create and activate the environment with Conda:

conda env create -f environment.yml
conda activate derdnet_env

3. Launch Jupyter Notebook

The following command opens the Jupyter interface in your browser. You can then open and run the notebooks from the notebooks folder.

jupyter notebook

If you are new to Jupyter, see this quick beginner’s guide to help you get started.

Models

Pretrained models are available in the models folder. These include weights for both the single-pixel and multi-pixel versions of DERD-Net. They can be used directly within the provided Jupyter notebooks:

  • Simply place the desired .pth file from the models directory in the location expected by the notebook.
  • The model will then be loaded automatically as specified in the corresponding notebook.
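
For reference, loading a checkpoint typically looks like the sketch below. The filename is a placeholder and the model class is defined in the notebooks; follow the notebook for the exact names.

import torch

# Load a checkpoint and inspect the weights (filename is a placeholder).
state_dict = torch.load("models/derdnet_multi_pixel.pth", map_location="cpu")
print(list(state_dict.keys())[:5])   # peek at the parameter names
# model.load_state_dict(state_dict)  # `model` is the DERD-Net instance built in the notebook
# model.eval()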

Generating Input DSIs

Disparity Space Images (DSIs) can be obtained by running dvs_mcemvs with the parameter save_dsi=true in the config file, as in this example. Please note that saved DSIs occupy significant disk space.

For a quick start, sample DSIs from the MVSEC flying1 sequence are provided here.

License

This work is released under the MIT License.

Related Works

Additional Resources on Event-based Vision