SAMURAI: Shape And Material from Unconstrained Real-world Arbitrary Image collections

Mark Boss1, Andreas Engelhardt1, Abhishek Kar2, Yuanzhen Li2, Deqing Sun2, Jonathan T. Barron2, Hendrik P. A. Lensch1 and Varun Jampani2
University of Tübingen1, Google2
NeurIPS 2022

Abstract

Inverse rendering of an object under entirely unknown capture conditions is a fundamental challenge in computer vision and graphics. Neural approaches such as NeRF have achieved photorealistic results on novel view synthesis, but they require known camera poses. Solving this problem with unknown camera poses is highly challenging as it requires joint optimization over shape, radiance, and pose. The problem is exacerbated when the input images are captured in the wild with varying backgrounds and illuminations. On such in-the-wild image collections, standard pose estimation techniques fail because very few correspondences can be estimated across images. Furthermore, NeRF cannot relight a scene under arbitrary illumination, as it operates on radiance (the product of reflectance and illumination). We propose a joint optimization framework to estimate the shape, BRDF, and per-image camera pose and illumination. Our method works on in-the-wild online image collections of an object and produces relightable 3D assets for several use-cases such as AR/VR. To our knowledge, our method is the first to tackle this severely unconstrained task with minimal user interaction.
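To make the idea of joint optimization concrete, the sketch below shows one way such a setup could be wired in PyTorch: a shared network stands in for the scene (shape/appearance), while per-image camera poses and illumination codes are free parameters updated together with the network weights by gradient descent on a photometric loss. This is a minimal illustrative sketch, not the SAMURAI implementation; the toy SceneModel, the 6-vector pose parameterization, the illumination embedding size, and the dummy data are all assumptions made for the example.

import torch
import torch.nn as nn

num_images, rays_per_image = 8, 64

class SceneModel(nn.Module):
    """Toy stand-in for a neural field that predicts a color from a ray and a
    per-image illumination code (hypothetical architecture, for illustration)."""
    def __init__(self, illum_dim=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6 + illum_dim, 64), nn.ReLU(),
            nn.Linear(64, 3), nn.Sigmoid())

    def forward(self, rays, illum_code):
        # Concatenate the ray parameters with the image's illumination code.
        x = torch.cat([rays, illum_code.expand(rays.shape[0], -1)], dim=-1)
        return self.mlp(x)

model = SceneModel()
# Per-image unknowns: a 6-DoF pose (rotation + translation) and an
# illumination embedding, optimized jointly with the network weights.
poses = nn.Parameter(torch.zeros(num_images, 6))
illums = nn.Parameter(torch.zeros(num_images, 16))
optimizer = torch.optim.Adam([*model.parameters(), poses, illums], lr=1e-3)

# Dummy pixel directions and colors in place of real image data.
pixel_dirs = torch.randn(num_images, rays_per_image, 3)
gt_colors = torch.rand(num_images, rays_per_image, 3)

for step in range(100):
    optimizer.zero_grad()
    loss = 0.0
    for i in range(num_images):
        # Rays depend on the current pose estimate, so pose gradients flow
        # through the photometric loss together with shape/appearance and
        # illumination gradients.
        rays = torch.cat([poses[i, 3:].expand(rays_per_image, 3),
                          pixel_dirs[i]], dim=-1)
        pred = model(rays, illums[i])
        loss = loss + ((pred - gt_colors[i]) ** 2).mean()
    loss.backward()
    optimizer.step()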

Bibtex

@inproceedings{boss2022samurai,
  title={{SAMURAI}: {S}hape {A}nd {M}aterial from {U}nconstrained {R}eal-world {A}rbitrary {I}mage collections},
  author={Boss, Mark and Engelhardt, Andreas and Kar, Abhishek and Li, Yuanzhen and Sun, Deqing and Barron, Jonathan T. and Lensch, Hendrik P.A. and Jampani, Varun},
  year={2022},
  booktitle = {NeurIPS}
}