Reconstructing the shape and appearance of real-world objects using measured 2D images has been a long-standing problem in computer vision.
In this paper, we introduce a new analysis-by-synthesis technique capable of producing high-quality reconstructions through robust coarse-to-fine optimization and physics-based differentiable rendering.
Unlike most previous methods that handle geometry and reflectance largely separately, our method unifies the optimization of both by leveraging image gradients with respect to both object reflectance and geometry.
To obtain physically accurate gradient estimates, we develop a new GPU-based Monte Carlo differentiable renderer that builds on recent advances in differentiable rendering theory to provide unbiased gradients while offering better performance than existing tools such as PyTorch3D and redner.
To further improve robustness, we utilize several shape and material priors as well as a coarse-to-fine optimization strategy to reconstruct geometry.
We demonstrate that our technique can produce reconstructions of higher quality than previous methods such as COLMAP and KinectFusion.
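The core idea of analysis-by-synthesis with differentiable rendering can be illustrated with a deliberately tiny sketch: render an image from current shape and reflectance estimates, compare against the measurement, and descend on both parameter sets using image gradients. The `render` function below is a hypothetical one-pixel stand-in (a single "albedo" and a single "vertex offset" parameter), not the paper's Monte Carlo renderer; the gradients are written out analytically in place of automatic differentiation.

```python
import math

# Toy stand-in for a differentiable renderer: "renders" one pixel from a
# geometry parameter g (e.g., a vertex offset) and a reflectance
# parameter a (e.g., diffuse albedo). Purely illustrative.
def render(g, a):
    return a * (1.0 / (1.0 + math.exp(-g)))

# "Measured" target image, produced from hidden ground-truth parameters.
target = render(0.7, 0.4)

# Analysis-by-synthesis: jointly optimize geometry and reflectance from a
# coarse initial guess by following the image-loss gradients w.r.t. BOTH
# parameters (the unified optimization the abstract describes).
g, a, lr = 0.0, 0.9, 0.5
for _ in range(2000):
    s = 1.0 / (1.0 + math.exp(-g))          # shading term
    residual = a * s - target               # rendered minus measured
    g -= lr * 2.0 * residual * a * s * (1.0 - s)  # chain rule through geometry
    a -= lr * 2.0 * residual * s                  # chain rule through reflectance

loss = (render(g, a) - target) ** 2
```

In the actual method, `render` is a physics-based Monte Carlo estimator and the gradients come from differentiable rendering rather than hand-derived chain rules, but the outer optimization loop has this same shape.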
We thank Chenglei Wu, Yujia Chen, Christoph Lassner, Sai Bi, Zhengqin Li, Giljoo Nam, Yue Dong, Hongzhi Wu, Zhongshi Jiang, as well as the anonymous reviewers, for their valuable discussions. We thank the digital artist James Warren from Facebook Reality Labs for modeling and rendering the two table scenes, and Inseung Hwang from KAIST for making comparisons with [Nam et al. 2018]. This work was supported in part by NSF grants 1900783 and 1900927.
@article{luan2021unified,
  author  = {Luan, Fujun and Zhao, Shuang and Bala, Kavita and Dong, Zhao},
  title   = {Unified Shape and {SVBRDF} Recovery using Differentiable {Monte Carlo} Rendering},
  journal = {ArXiv},
  year    = {2021},
}