This is the PyTorch code for the course project in 263-0600-00L Research in Computer Science, conducted at ETH Zürich and supervised by Dr. Sergey Prokudin. It extends NeRF with synthetic depth information to reduce the number of input images needed.
Virtual reality (VR) and augmented reality (AR) immerse the user in a new digital world. However, representing real-world scenes and objects digitally is very challenging: realistic lighting and fine detail are hard to model. An approach that addresses some of these shortcomings was introduced with NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. NeRF can produce photorealistic novel views but needs many RGB input images to train. In this work, we explore how NeRF can be extended with synthetic depth information to reduce the number of input images needed.
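To illustrate the core idea, here is a minimal sketch of a depth-supervision term that can be added to the NeRF photometric loss. It assumes the rendered depth of a ray is the weighted average of the sampled depths along it (the standard NeRF formulation); the function names, the masking of rays without depth, and the use of NumPy instead of PyTorch are illustrative simplifications, not this repository's actual implementation.

```python
import numpy as np

def rendered_depth(weights, z_vals):
    """Expected ray termination depth: sum_i w_i * z_i.

    weights: (n_rays, n_samples) volume-rendering weights per sample.
    z_vals:  (n_rays, n_samples) depth of each sample along the ray.
    """
    return (weights * z_vals).sum(axis=-1)

def depth_loss(weights, z_vals, target_depth, mask=None):
    """Mean squared error between rendered and (synthetic) target depth.

    mask: optional boolean array selecting rays that have depth supervision.
    """
    d = rendered_depth(weights, z_vals)
    err = (d - target_depth) ** 2
    if mask is not None:
        err = err[mask]
    return err.mean()
```

In training, this term would be added to the usual RGB reconstruction loss with a weighting factor, so that rays with depth supervision pull the density field toward the correct surface even when few views are available.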
NeRF trained with 2 views:
RGB-D NeRF trained with 2 views:
Install requirements:
pip install -r requirements.txt
Download the Fern dataset here and put it into the ./data folder.
The pre-trained models and evaluations on the test set can be found here: link
For example, our pre-trained model for Fern trained on 2 views can be found under:
├── results.zip
│   ├── Fern
│   │   ├── DS-RGB-D-Fern-2
│   │   │   └── 050000.tar
And the corresponding RGB-only NeRF baseline:
├── results.zip
│   ├── Fern
│   │   ├── DS-RGB-Fern-2
│   │   │   └── 050000.tar
To train a DS-NeRF on the example fern dataset:
python run_nerf.py --config configs/fern_d.txt
It will create an experiment directory in ./logs, and store the checkpoints and rendering examples there.
There is also a config file for the ship scene: ship_d.txt. Make sure you have downloaded the dataset first.
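The config files use the plain key = value format inherited from nerf-pytorch. The sketch below shows what such a file may look like; the keys and values are illustrative assumptions only, so consult configs/fern_d.txt for the actual options used in this project.

```
expname = DS-RGB-D-Fern-2        # experiment name, used for the ./logs subdirectory
basedir = ./logs                 # where checkpoints and renderings are written
datadir = ./data/fern            # illustrative path; point this at the downloaded dataset
dataset_type = llff

factor = 8                       # image downsampling factor
N_rand = 1024                    # rays per gradient step
N_samples = 64                   # coarse samples per ray
N_importance = 64                # fine samples per ray
use_viewdirs = True
```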
This code borrows heavily from DS-NeRF and nerf-pytorch. Special thanks go to the supervisor of this work, Dr. Sergey Prokudin, for proposing this interesting topic and for his kind guidance throughout the project.

