NeRFs are Mirror Detectors: Using Structural Similarity for Multi-View Mirror Scene Reconstruction with 3D Surface Primitives

Leif Van Holland1  ·  Michael Weinmann2  ·  Jan Müller1  ·  Patrick Stotko1  ·  Reinhard Klein1

1University of Bonn     2Delft University of Technology

WACV 2025

teaser

Abstract

While neural radiance fields (NeRF) led to a breakthrough in photorealistic novel view synthesis, handling mirroring surfaces still poses a particular challenge as they introduce severe inconsistencies into the scene representation. Previous attempts either focus on reconstructing single reflective objects or rely on strong supervision in the form of additional user-provided annotations of the visible mirror regions in the images, thereby limiting practical usability. In contrast, in this paper we present NeRF-MD, a method which shows that NeRFs can be considered mirror detectors and which is capable of reconstructing neural radiance fields of scenes containing mirroring surfaces without the need for prior annotations. To this end, we first compute an initial estimate of the scene geometry by training a standard NeRF with a depth reprojection loss. Our key insight is that parts of the scene corresponding to a mirroring surface still exhibit significant photometric inconsistency, whereas the remaining parts are already reconstructed in a plausible manner. This allows us to detect mirror surfaces by fitting geometric primitives to these inconsistent regions in the initial stage of the training. Using this information, we then jointly optimize the radiance field and mirror geometry in a second training stage to refine their quality. We demonstrate that our method faithfully detects mirrors in the scene and reconstructs a single consistent scene representation, and we show its potential in comparison to baseline and mirror-aware approaches.

Installation

Using Python 3.12, run the following commands to install the dependencies. Adjust the torch and CUDA versions if necessary.

pip install torch==2.8.0 torchvision==0.23.0 torchaudio==2.8.0 --index-url https://download.pytorch.org/whl/cu129
pip install -r requirements.txt
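
To check that the installation picked up a working CUDA build of PyTorch, you can optionally run:

python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"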

Training

First, you need to download the TraM-NeRF dataset from the official repo and extract it into the data directory.

unzip -d data tramnerf_scenes.zip

Training is split into three stages: initial optimization, mirror detection, and joint optimization of the scene and mirrors. These steps have to be executed manually.

Initial optimization

Launch run.py with the no_mirror_repr version of the config for the scene you want to run. These configs can be found in the experiments directory. For example, for scene 2:

python run.py -config scene_2_no_mirror_repr.gin

This will create a TensorBoard log in the logs folder, which you can use to monitor training progress. Additionally, the config and model checkpoints are saved periodically.
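
To follow the training in your browser, you can, for example, start TensorBoard with:

tensorboard --logdir logs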

Mirror detection

To estimate the positions of the mirrors, the result of the initial optimization first has to be rendered with the render_depths.py script. Pass the path to the log directory from the first step, e.g.

python render_depths.py --run logs/nerfmd/scene_2_no_mirror_repr-YOUR_RUN_ID

Next, compute the photo-consistency (PC) scores with:

python compute_photo_consistency.py --run logs/nerfmd/scene_2_no_mirror_repr-YOUR_RUN_ID --use_mipmapping --use_culling

Now, you can estimate the mirror positions:

python estimate_mirrors.py --run logs/nerfmd/scene_2_no_mirror_repr-YOUR_RUN_ID --num_clusters 3 --mirror_type rectangle
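
Since all three detection steps take the same run directory, they can also be chained in a small shell snippet, e.g.:

RUN=logs/nerfmd/scene_2_no_mirror_repr-YOUR_RUN_ID
python render_depths.py --run "$RUN"
python compute_photo_consistency.py --run "$RUN" --use_mipmapping --use_culling
python estimate_mirrors.py --run "$RUN" --num_clusters 3 --mirror_type rectangle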

Joint optimization of mirrors and scene

After the script is done, the estimated mirrors can be found in results/scene_2/nerfmd/scene_2_no_mirror_repr-YOUR_RUN_ID/mirrors.json. Now create a copy of the base_est_mirrors.gin config, give it a suitable name, e.g. scene_2_est_mirrors.gin, and add the mirror description from mirrors.json to it.
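To create the copy, you could for example run (assuming base_est_mirrors.gin also lives in the experiments directory):

cp experiments/base_est_mirrors.gin experiments/scene_2_est_mirrors.gin

Then, run the new config with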

python run.py -config scene_2_est_mirrors.gin

Note: For your convenience, the experiments directory already contains the mirror detections for all scenes evaluated in the paper.

Evaluation

After training is done, you can render the test images using

python render_rgb.py --run logs/nerfmd/scene_2_est_mirrors-YOUR_RUN_ID/

and run the evaluation with

python eval.py --run logs/nerfmd/scene_2_est_mirrors-YOUR_RUN_ID/

The metrics will be saved in results/scene_2/nerfmd/scene_2_est_mirrors-YOUR_RUN_ID/metrics.json.
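
To inspect them, you can pretty-print the JSON file, e.g. with:

python -m json.tool results/scene_2/nerfmd/scene_2_est_mirrors-YOUR_RUN_ID/metrics.json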

Citation

If you use the code for your own research, please cite our work as

@inproceedings{holland2025nerfs,
  title={NeRFs are Mirror Detectors: Using Structural Similarity for Multi-View Mirror Scene Reconstruction with 3D Surface Primitives},
  author={Holland, Leif Van and Weinmann, Michael and M{\"u}ller, Jan U and Stotko, Patrick and Klein, Reinhard},
  booktitle={2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  pages={1795--1807},
  year={2025},
  organization={IEEE}
}

Acknowledgements

This work has been funded by the DFG project KL 1142/11-2 (DFG Research Unit FOR 2535 Anticipating Human Behaviour), by the Federal Ministry of Education and Research of Germany and the state of North Rhine-Westphalia as part of the Lamarr Institute for Machine Learning and Artificial Intelligence, and by the Federal Ministry of Education and Research under Grant No. 01IS22094E WEST-AI.
