# Factored Neural Representation for Scene Understanding

[Project Website] [arXiv] [Dataset (6GB)]
## Setup

`cd` into the unzipped repository directory, then build our Docker image:

```shell
docker build -t factnerf -f Dockerfile .
```
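A minimal sketch for launching the image interactively. The mount point, `--gpus` flag, and shell are our assumptions and are not documented by the repo; `DRY_RUN=1` prints the command instead of executing it so it can be inspected first.

```shell
# Hypothetical container launch: mount the repo at /workspace and expose GPUs.
run_factnerf_container() {
  local cmd=(docker run --rm -it --gpus all \
    -v "${FACTNERF_ROOT:-$PWD}:/workspace" -w /workspace \
    factnerf bash)
  if [ "${DRY_RUN:-0}" = "1" ]; then
    # print the command instead of executing it
    echo "${cmd[@]}"
  else
    "${cmd[@]}"
  fi
}

# sanity-check the invocation without starting a container
DRY_RUN=1 run_factnerf_container
```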
## Dataset

Download our dataset and put it at `$FACTNERF_ROOT/data`:

```
$FACTNERF_ROOT/data/SYN
$FACTNERF_ROOT/data/SYN/sce_a_train
...
```

```shell
export FACTNERF_ROOT=$(pwd)
# check that the input data exists
ls $FACTNERF_ROOT/data
# select a GPU
export CUDA_VISIBLE_DEVICES=0
```
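A small sanity check (the `data/SYN` path follows the layout shown above; the helper name is ours) that the dataset was unpacked where the configs expect it:

```shell
# Verify that $FACTNERF_ROOT/data/SYN exists before launching training.
check_dataset() {
  local root="${1:-$FACTNERF_ROOT}"
  if [ -d "$root/data/SYN" ]; then
    echo "dataset found: $root/data/SYN"
  else
    echo "dataset missing under $root/data" >&2
    return 1
  fi
}

# e.g. check_dataset "$FACTNERF_ROOT"
```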
## Training

```shell
cd $FACTNERF_ROOT
python framework/run_main.py -f configs/SYN/factorednerf/sce_a.yaml --mode train
```
## Rendering

```shell
# faster rendering using a smaller resolution
python framework/run_main.py -f configs/SYN/factorednerf/sce_a.yaml --mode render_valid_q -c map__final --dw 4 --fnum 4
# full-resolution rendering (no downsampling)
python framework/run_main.py -f configs/SYN/factorednerf/sce_a.yaml --mode render_valid_q -c map__final --dw 1
```
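The two invocations above differ only in the `--dw` downsampling factor. A hypothetical convenience loop (the helper name and intermediate factor `2` are ours) sweeps several factors; `DRY_RUN=1` prints each command instead of running it:

```shell
# Render validation views at several downsampling factors.
render_sweep() {
  local dw
  for dw in 4 2 1; do
    local cmd=(python framework/run_main.py \
      -f configs/SYN/factorednerf/sce_a.yaml \
      --mode render_valid_q -c map__final --dw "$dw")
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "${cmd[@]}"
    else
      "${cmd[@]}"
    fi
  done
}

# inspect the commands without rendering anything
DRY_RUN=1 render_sweep
```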
## Pretrained Checkpoints

Please download the checkpoint file `output-syn.zip` and unzip it to `$FACTNERF_ROOT`.
## Acknowledgements

Some code is adapted from the excellent NICE-SLAM and Neural Scene Graphs repositories; we appreciate their effort in open-sourcing their implementations. We also thank the authors of DeformingThings4D for allowing us to upload our synthetic dataset. Please be aware of all corresponding licenses.
## Citation

```bibtex
@misc{wong2023factored,
      title={Factored Neural Representation for Scene Understanding},
      author={Yu-Shiang Wong and Niloy J. Mitra},
      year={2023},
      eprint={2304.10950},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```