Visual Hull Rendering with Multi-view Stereo

Yang Liu1
yngliu@ucdavis.edu

George Chen2
george-qian.chen@st.com

Nelson Max1
max2@llnl.gov

Christian Hofsetz1
chofsetz@ucdavis.edu

Peter McGuinness2
peter.mcguinness@st.com


1Center for Image Processing and Integrated Computing
2343 Academic Surge
University of California, Davis
1 Shields Ave.
Davis, CA 95616
United States

2AST La Jolla Lab, STMicroelectronics
4690 Executive Dr.
San Diego, CA 92121
United States

Abstract

We present a system for rendering novel viewpoints from a set of calibrated, silhouette-segmented images using the visual hull together with multi-view stereo. The visual hull derived from the object silhouettes restricts the search range of the multi-view stereo matching, which reduces redundant computation and the likelihood of incorrect matches. Unlike previous visual hull approaches, we do not need to recover an explicit polyhedral model. Instead, the visual hull is described implicitly by the silhouette images and synthesized from their projections onto a set of planes. This representation admits an efficient implementation on current pixel-shader graphics cards, yielding novel views at interactive frame rates. We also introduce a library of image filters that improves rendering quality along edges and silhouette profiles.
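To make the implicit representation concrete, the following is a minimal CPU-side sketch, assuming NumPy, of the point-in-visual-hull test the abstract describes: a world point belongs to the visual hull exactly when it projects into the foreground of every calibrated silhouette image. The function name, the 3x4 projection-matrix layout, and the boolean-mask silhouettes are illustrative assumptions, not the authors' pixel-shader implementation, which evaluates an equivalent test per fragment over a set of slicing planes.

    import numpy as np

    def inside_visual_hull(X, projections, silhouettes):
        """Return True if world point X lies inside the visual hull.

        X           : 3-vector, a world-space sample (e.g. on a slicing plane)
        projections : list of 3x4 camera projection matrices (calibrated views)
        silhouettes : list of 2-D boolean masks, True where the object is seen

        The visual hull is never meshed: a point belongs to it iff it
        projects inside the silhouette of every reference view.
        """
        Xh = np.append(X, 1.0)                    # homogeneous coordinates
        for P, mask in zip(projections, silhouettes):
            u, v, w = P @ Xh                      # project into the image
            if w <= 0:                            # behind this camera
                return False
            col, row = int(u / w), int(v / w)     # dehomogenize to pixels
            h, wd = mask.shape
            if not (0 <= row < h and 0 <= col < wd) or not mask[row, col]:
                return False                      # outside some silhouette
        return True

A stereo matcher can call such a test to skip depth hypotheses outside the hull, which is the restriction of the search range described above.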