Vid2Curve: Simultaneous Camera Motion Estimation and Thin Structure Reconstruction from an RGB Video

Peng Wang, The University of Hong Kong
Lingjie Liu, Max Planck Institute for Informatics
Nenglun Chen, The University of Hong Kong
Hung-Kuo Chu, National Tsing Hua University
Christian Theobalt, Max Planck Institute for Informatics
Wenping Wang, The University of Hong Kong

SIGGRAPH 2020
Paper
Code
Data

Overview Video

Abstract

Thin structures, such as wire-frame sculptures, fences, cables, power lines, and tree branches, are common in the real world. It is extremely challenging to acquire their 3D digital models using traditional image-based or depth-based reconstruction methods because thin structures often lack distinct point features and have severe self-occlusion. We propose the first approach that simultaneously estimates camera motion and reconstructs the geometry of complex 3D thin structures at high quality from a color video captured by a handheld camera. Specifically, we present a new curve-based approach to estimate accurate camera poses by establishing correspondences between featureless thin objects in the foreground in consecutive video frames, without requiring visual texture in the background scene to lock onto. Enabled by this effective curve-based camera pose estimation strategy, we develop an iterative optimization method with tailored measures on geometry, topology, and self-occlusion handling for reconstructing 3D thin structures. Extensive validations on a variety of thin structures show that our method achieves accurate camera pose estimation and faithful reconstruction of 3D thin structures with complex shape and topology at a level that has not been attained by other existing reconstruction methods.
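To make the curve-based pose estimation idea concrete, below is a minimal, illustrative Python sketch (not the paper's implementation): a frame's one-pixel-wide 2D curve mask is turned into a distance field, and a 6-DoF camera pose is refined so that projected 3D curve samples land on that curve. The pinhole model, the function names, and the use of SciPy's least_squares are assumptions made for illustration only.

# Illustrative sketch of curve-to-image pose alignment; all names are assumptions.
import numpy as np
from scipy.ndimage import distance_transform_edt, map_coordinates
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def curve_distance_field(curve_mask):
    """Distance (in pixels) from every pixel to the nearest curve pixel."""
    return distance_transform_edt(~curve_mask.astype(bool))


def project(points_3d, rvec, tvec, fx, fy, cx, cy):
    """Pinhole projection of Nx3 world points under pose (rvec, tvec)."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    cam = points_3d @ R.T + tvec
    u = fx * cam[:, 0] / cam[:, 2] + cx
    v = fy * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1)


def refine_pose(points_3d, dist_field, pose0, intrinsics):
    """Refine a 6-vector pose [rvec, tvec] so projected curve samples
    fall onto the frame's 2D curve (small distance-field values)."""
    def residuals(pose):
        uv = project(points_3d, pose[:3], pose[3:], *intrinsics)
        # Bilinearly sample the distance field at the projected sub-pixel locations.
        return map_coordinates(dist_field, [uv[:, 1], uv[:, 0]],
                               order=1, mode="nearest")

    return least_squares(residuals, pose0).x

In practice such a refinement would be embedded in the paper's alternating optimization; the sketch only shows how a featureless curve can still constrain the camera pose through a distance field.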

Method

Method overview. Given a sequence of RGB images of a 3D thin structure, we first segment out the structure in the foreground to obtain a sequence of binary masks and corresponding one-pixel-wide 2D curves in the preprocessing step (a). To solve the optimization problem formulated in Section 4.2 of our paper, we first initialize the camera poses and a 3D curve network using two properly selected image frames from the input video (Section 4.2.1). (b) Then we adopt an iterative structure optimization strategy that adds and processes the remaining image frames progressively, updating the camera poses of the views observed so far and refining the estimated curve network in an alternating manner (Section 4.2). (c) The surface of the thin structure is modeled as a sweep surface along the reconstructed 3D curve with a circular cross-section whose radius is estimated from image observations (Section 4.3). The final output is a clean, smooth thin structure with high geometric and topological fidelity to the original wire model.
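As a rough illustration of the sweep-surface step described above, the following Python sketch builds a tube mesh by sweeping a circular cross-section with a per-sample radius along a reconstructed 3D polyline. The frame-propagation heuristic and all function names are assumptions for illustration; the procedure in Section 4.3 of the paper may differ.

# Illustrative sketch of a circular sweep surface along a 3D curve; not the paper's code.
import numpy as np


def sweep_tube(curve, radii, sides=16):
    """Build (vertices, faces) of a tube around an Nx3 polyline `curve`
    with per-point radii `radii` and `sides` vertices per circular ring."""
    curve = np.asarray(curve, dtype=float)
    radii = np.asarray(radii, dtype=float)
    n = len(curve)

    # Unit tangents along the polyline (central differences at interior points).
    tangents = np.gradient(curve, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)

    # Initial normal: any direction orthogonal to the first tangent.
    ref = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(ref, tangents[0])) > 0.9:
        ref = np.array([1.0, 0.0, 0.0])
    normal = np.cross(tangents[0], ref)
    normal /= np.linalg.norm(normal)

    angles = np.linspace(0.0, 2.0 * np.pi, sides, endpoint=False)
    rings = []
    for i in range(n):
        # Re-orthogonalize the normal against the current tangent so the
        # frame does not twist abruptly between consecutive rings.
        normal = normal - np.dot(normal, tangents[i]) * tangents[i]
        normal /= np.linalg.norm(normal)
        binormal = np.cross(tangents[i], normal)
        rings.append(curve[i]
                     + radii[i] * np.outer(np.cos(angles), normal)
                     + radii[i] * np.outer(np.sin(angles), binormal))
    vertices = np.concatenate(rings, axis=0)

    # Two triangles per quad between consecutive rings.
    faces = []
    for i in range(n - 1):
        for j in range(sides):
            a = i * sides + j
            b = i * sides + (j + 1) % sides
            c = (i + 1) * sides + j
            d = (i + 1) * sides + (j + 1) % sides
            faces.append([a, b, d])
            faces.append([a, d, c])
    return vertices, np.array(faces)

The per-sample radius plays the role of the thickness estimated from image observations; junctions and varying thickness would need additional handling beyond this simple sweep.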

Results

Reconstruction results. A gallery of real-world 3D thin structures reconstructed by our method. Our method reconstructs a wide variety of wire objects at high quality.

Reconstruction of synthetic models. The synthetic dataset contains nine wire models. Four of them, shown in the bottom row, have varying thickness.

Acknowledgments

We thank Shiwei Li, Amy Tabb, and Rhaleb Zayer for their help with the experiments. This work was partially funded by the Research Grants Council of Hong Kong (GRF 17210718), ERC Consolidator Grant 770784, a Lise Meitner Postdoctoral Fellowship, and the Ministry of Science and Technology of Taiwan (108-2218-E-007-050 and 107-2221-E-007-088-MY3).

Citation

@article{wang2020vid2curve,
  title={Vid2Curve: Simultaneous Camera Motion Estimation and Thin Structure Reconstruction from an RGB Video},
  author={Wang, Peng and Liu, Lingjie and Chen, Nenglun and Chu, Hung-Kuo and Theobalt, Christian and Wang, Wenping},
  journal={ACM Trans. Graph. (SIGGRAPH)},
  year={2020},
  volume={39},
  number={4},
  doi={10.1145/3386569.3392476},
  publisher={ACM}
}