David Guinnip
University of Kentucky
Department of Computer Science
Lexington, KY 40502, United States
e-mail: dtguin0@pop.uky.edu
http://www.metaverselab.org/research/mastertexture/
Keywords: Image-based rendering, texture mapping, compression, image processing
Abstract
We introduce an encoding technique that supports efficient view-dependent image-based rendering applications that combine photorealistic images with an underlying surface mesh. The representation increases rendering efficiency, reduces the space required to store large numbers of object views, and supports direct image-based editing for realistic object manipulation. The Master Texture Space encoding transforms an original set of exemplar images into a set of Master Textures that share a globally consistent set of texture coordinates based on the underlying object geometry and independent of the camera positions used to create the exemplar views. An important property of the Master Texture Space is that an arbitrary but fixed pixel position in all the Master Textures corresponds to the same point on the object surface. This property increases rendering efficiency for real-time dynamic image-based rendering applications because new texture coordinates do not have to be loaded as a function of viewpoint. In addition, changes made in a Master Texture image can be rapidly propagated to all views to add or remove features, generate new viewpoints, and remove artifacts from a pre-existing image-based scene. Results presented here demonstrate that the technique reduces real-time rendering times by 1.6 milliseconds for reasonably complex models on a commodity graphics card. Two example scenarios demonstrate how the Master Texture encoding supports efficient update of an image-based model.
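The following is a minimal sketch, not the authors' implementation, illustrating the property summarized above: because every Master Texture shares one set of texture coordinates, a fixed pixel position refers to the same surface point in all views, so a view-dependent texture can be formed by a per-pixel blend and an edit made at one pixel location can be copied to every view. The array layout, function names, and blend-weight scheme are illustrative assumptions.

```python
import numpy as np

def blend_master_textures(master_textures, weights):
    """Synthesize a view-dependent texture as a per-pixel weighted blend.

    master_textures : (N, H, W, 3) array, one pixel-aligned Master Texture
                      per exemplar view.
    weights         : (N,) view-dependent blend weights (e.g. derived from
                      the angle between the novel view and each exemplar camera).
    """
    w = np.asarray(weights, dtype=np.float32)
    w /= w.sum()  # normalize the blend weights
    # Because the textures are pixel-aligned, no per-view texture coordinates
    # or warps are needed; a straight weighted sum over views suffices.
    return np.tensordot(w, master_textures.astype(np.float32), axes=1)

def propagate_edit(master_textures, mask, patch):
    """Copy an edit (e.g. a removed artifact) into every Master Texture.

    mask  : (H, W) boolean array marking the edited pixels.
    patch : (H, W, 3) replacement colors for the masked pixels.
    """
    # The same pixel positions correspond to the same surface point in all
    # views, so a single mask applies to every Master Texture.
    edited = master_textures.copy()
    edited[:, mask] = patch[mask]
    return edited
```

In a real-time renderer the same property means the texture coordinate array is bound once for the mesh, and only the texture images (or blend weights) change with viewpoint.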