ReTR: Modeling Rendering via Transformer for
Generalizable Neural Surface Reconstruction
NeurIPS 2023

Note:

We changed our title following suggestions from the reviewers/ACs; the camera-ready paper and code will be coming soon.
Original title: Rethinking Rendering in Generalizable Neural Surface Reconstruction: A Learning-based Solution

Abstract

Generalizable neural surface reconstruction techniques have attracted great attention in recent years. However, they suffer from low-confidence depth distributions and inaccurate surface reasoning due to the oversimplified volume rendering process they employ. In this paper, we present Reconstruction TRansformer (ReTR), a novel framework that leverages the transformer architecture to redesign the rendering process, enabling complex photon-particle interaction modeling. It introduces a learnable meta-ray token and utilizes the cross-attention mechanism to simulate the interaction of photons with sampled points and to render the observed color. Meanwhile, by operating within a high-dimensional feature space rather than the color space, ReTR mitigates sensitivity to projected colors in source views. These improvements yield accurate surface assessment with high confidence. We demonstrate the effectiveness of our approach on various datasets, showing that our method outperforms current state-of-the-art approaches in terms of reconstruction quality and generalization ability.
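To make the idea above concrete, the PyTorch sketch below illustrates how a learnable meta-ray token can cross-attend over per-sample features along a ray, with the attention weights taking the role of the compositing weights in classical volume rendering. This is only a minimal sketch under our own assumptions; names such as `LearnedRenderer`, `feat_dim`, and `n_samples` are hypothetical and this is not the released ReTR implementation.

```python
import torch
import torch.nn as nn

class LearnedRenderer(nn.Module):
    """Minimal sketch (hypothetical, not the official ReTR code):
    render a ray by cross-attending a learnable meta-ray token
    over the features of points sampled along that ray."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Learnable meta-ray token that queries the sampled points.
        self.meta_ray_token = nn.Parameter(torch.randn(1, 1, feat_dim))
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.to_rgb = nn.Linear(feat_dim, 3)  # decode aggregated feature -> color

    def forward(self, sample_feats: torch.Tensor):
        """sample_feats: (n_rays, n_samples, feat_dim) features of points
        sampled along each ray (e.g. aggregated from source views)."""
        n_rays = sample_feats.shape[0]
        query = self.meta_ray_token.expand(n_rays, -1, -1)
        # Cross-attention in feature space: the attention weights replace
        # the alpha-compositing weights of standard volume rendering.
        ray_feat, weights = self.attn(query, sample_feats, sample_feats)
        rgb = self.to_rgb(ray_feat.squeeze(1))  # (n_rays, 3)
        # weights: (n_rays, 1, n_samples); they can be read as a depth
        # distribution along the ray (expected depth = sum(w_i * t_i)).
        return rgb, weights.squeeze(1)

# Usage: render 1024 rays with 64 samples each.
renderer = LearnedRenderer(feat_dim=64)
feats = torch.randn(1024, 64, 64)
rgb, w = renderer(feats)
print(rgb.shape, w.shape)  # torch.Size([1024, 3]) torch.Size([1024, 64])
```

Because the weights are produced by attention over high-dimensional features rather than fixed density-to-alpha conversion, the model can learn sharper, higher-confidence weight distributions around the true surface.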

Overview

Proposed Method

Reconstruction Results

Citation

Acknowledgements


The website template was borrowed from Jon Barron.