Visual Hull Rendering with Multi-view Stereo

Yang Liu¹

George Chen²

Nelson Max¹

Christian Hofsetz¹

Peter McGuinness²

¹Center for Image Processing and Integrated Computing
2343 Academic Surge
University of California, Davis
1 Shields Ave.
Davis, CA 95616
United States

²AST La Jolla Lab, STMicroelectronics
4690 Executive Dr.
San Diego, CA 92121
United States


We present a system for rendering novel views from a set of calibrated, silhouette-segmented images using the visual hull together with multi-view stereo. The visual hull computed from the object silhouettes restricts the search range of the multi-view stereo, reducing both redundant computation and the likelihood of incorrect matches. Unlike previous visual hull approaches, we do not need to recover a polyhedral model; instead, the visual hull is described implicitly by the silhouette images and synthesized from their projections onto a set of planes. This representation permits an efficient implementation on current pixel-shader graphics cards, yielding novel views at interactive rates. We also introduce a library of image filters that improves rendering quality along edges and silhouette profiles.
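The core constraint behind the approach above can be sketched in a few lines: a 3-D point lies inside the visual hull exactly when it projects into the foreground of every silhouette image, so a multi-view stereo search can skip any depth sample failing this test. The following is a minimal illustrative sketch, not the authors' GPU implementation; the camera matrices, masks, and function names are hypothetical.

```python
import numpy as np

def project(P, X):
    """Project 3-D point X with a 3x4 camera matrix P; return pixel (u, v)."""
    x = P @ np.append(X, 1.0)          # homogeneous image coordinates
    return x[:2] / x[2]

def in_visual_hull(X, cameras, silhouettes):
    """True iff X projects into the foreground of every silhouette mask."""
    for P, mask in zip(cameras, silhouettes):
        u, v = project(P, X)
        h, w = mask.shape
        col, row = int(round(u)), int(round(v))
        if not (0 <= row < h and 0 <= col < w) or not mask[row, col]:
            return False               # outside some silhouette: not in hull
    return True

# Two toy affine cameras (hypothetical calibration) viewing along the
# z- and y-axes; each silhouette marks a small square foreground region.
P1 = np.array([[1., 0., 0., 2.],      # u = x + 2
               [0., 1., 0., 2.],      # v = y + 2
               [0., 0., 0., 1.]])
P2 = np.array([[1., 0., 0., 2.],      # u = x + 2
               [0., 0., 1., 2.],      # v = z + 2
               [0., 0., 0., 1.]])
mask = np.zeros((8, 8), dtype=bool)
mask[1:5, 1:5] = True                 # foreground square

print(in_visual_hull(np.array([0., 0., 0.]), [P1, P2], [mask, mask]))  # inside hull
print(in_visual_hull(np.array([6., 0., 0.]), [P1, P2], [mask, mask]))  # outside hull
```

In the paper's setting this test is not evaluated point by point on the CPU; the silhouettes are projectively textured onto a set of planes by the graphics hardware, so the intersection of the reprojected foregrounds on each plane yields the hull slice directly.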