GPU-based Appearance Preserving Trimmed NURBS Rendering

Guthe,M., Balázs,Á., Klein,R.

Abstract:
Trimmed NURBS are the standard surface representation used in CAD/CAM systems, and accurate visualization of trimmed NURBS models at interactive frame rates is of great interest to industry. To support modification and/or animation of such surfaces, a GPU-based trimming and tessellation algorithm has been developed recently. First, the NURBS surface is approximated with a bi-cubic hierarchy of Bézier patches on the CPU, and then these are tessellated on the GPU. Since this approach took only the geometric error of the approximation into account, the various illumination artifacts introduced by the chosen bi-cubic approximation and the subsequent tessellation were neglected. Although this problem could be partially solved by calculating exact per-pixel normals on the GPU, the shading error introduced by the bi-cubic approximation would remain. Furthermore, the long fragment shader required for per-pixel normals would lead to unacceptably low performance.
In this paper we present a novel bi-cubic approximation algorithm that takes the normal approximation error into account. In addition, we define a new error measure to calculate the required grid resolution for the bi-linear approximation. In combination, this allows GPU-based NURBS tessellation with guaranteed visual fidelity. With little or no modification, our new method is also capable of high-quality visualization of additional surface attributes such as curvature, temperature, etc.
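As a rough illustration of such a combined criterion (our own notation, not necessarily the paper's exact measure), a screen-space tolerance for an approximating patch could bound the projected geometric deviation and the normal deviation simultaneously:

    \varepsilon_{screen} = \max\left( s \cdot \| p(u,v) - \tilde{p}(u,v) \|,\; \lambda \cdot \angle\big( n(u,v), \tilde{n}(u,v) \big) \right) \le \tau

where p and \tilde{p} are points on the exact and approximated surface, n and \tilde{n} the corresponding normals, s a projection factor mapping world-space distances to pixels, and \lambda a weight that translates normal deviation into a shading tolerance.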



Crowd Self Organization, Streaming and Short Path Smoothing

Stylianou,S., Chrysanthou,Y.

Abstract:
Pedestrians cooperate by forming lanes inside dense crowds in order to facilitate flow and prevent complete blocking of a passage. Our aim is to reinforce this self-organization phenomenon in dense crowds for the purpose of virtual crowd animation and navigation simplification. The mechanism of a Flow Grid is introduced to measure flow over an area. The Flow Grid is a perception mechanism for the surrounding area and favors dynamic lane formation (streams). It provides feedback to the navigation algorithm of the avatars, enabling them to choose a route that both meets their goal (desired direction) and follows a trajectory that assists the self-organization of the crowd.
A greatly simplified yet fairly effective navigation method suitable for dense crowds is also presented. It demonstrates that self-organization of the avatars can help simplify local navigation. The method produces short-distance intermediate positions ahead in time and, as a post-processing step, smooths them out before the avatar needs to use them.
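As a rough sketch of the kind of bookkeeping such a Flow Grid might perform (class and function names below are our own invention, not taken from the paper), each cell can accumulate recently observed movement directions and score a candidate walking direction by how well it agrees with both the avatar's goal and the local flow:

    #include <vector>

    // Hypothetical 2D flow-grid cell: stores an averaged observed movement direction.
    struct FlowCell {
        float dirX = 0.0f, dirY = 0.0f;
    };

    class FlowGrid {
    public:
        FlowGrid(int w, int h) : width(w), height(h), cells(w * h) {}

        // Blend an avatar's current velocity into the cell it occupies.
        void recordMotion(int cx, int cy, float vx, float vy, float blend = 0.1f) {
            FlowCell& c = cells[cy * width + cx];
            c.dirX = (1.0f - blend) * c.dirX + blend * vx;
            c.dirY = (1.0f - blend) * c.dirY + blend * vy;
        }

        // Score a candidate direction: combine agreement with the avatar's goal
        // and agreement with the locally observed flow, so that avatars prefer
        // routes that follow existing streams.
        float score(int cx, int cy, float candX, float candY,
                    float goalX, float goalY, float flowWeight = 0.5f) const {
            const FlowCell& c = cells[cy * width + cx];
            float goalDot = candX * goalX + candY * goalY;
            float flowDot = candX * c.dirX + candY * c.dirY;
            return (1.0f - flowWeight) * goalDot + flowWeight * flowDot;
        }

    private:
        int width, height;
        std::vector<FlowCell> cells;
    };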



Wheelie - Using a Scroll-Wheel Pen in Complex Virtual Environment Applications

Wögerbauer,M., Fuhrmann,A.L.

Abstract:
Input devices and system control techniques for complex virtual environment (VE) applications are still an open field of research. We propose using a scroll wheel on a tracked stylus as an extra, dedicated input stream for system control. We demonstrate how this enhanced stylus can be used together with an appropriate user interface to quickly select commands, change tools and adjust parameters.
This user interface consists of two different styles: a toolbar and a graphical menu system, both accessible by the same hand that holds the stylus. The scroll-wheel extension in no way impairs the conventional use of the stylus.



Real Time Rendering of Atmospheric Lighting and Volumetric Shadows

Biri,V., Arques,D., Michelin,S.

Abstract:
Real-time rendering of atmospheric light scattering is one of the most difficult lighting effects to achieve with computer graphics hardware and software. Most techniques are based on the accumulation of slices or virtual planes, which may be too expensive in terms of fill rate and texture memory. This paper presents a new real-time rendering method that provides a great performance improvement over previous methods.
The method is based on an analytical expression of the light transport equation and on shadow planes combined with a spatial coherence technique. Making intensive use of graphics hardware, it provides real-time rendering of light scattering effects for directional and point light sources in homogeneous media, with volumetric shadows. Realistic images can be produced in real time for typical graphics scenes, and at high frame rates for complex scenes, allowing animation of lights, objects or even the participating medium. The method proposed in this paper has a low impact on fill rate and does not produce aliasing artifacts.
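For reference, the kind of single-scattering (airlight) integral that such analytical formulations start from, for a point light of intensity I in a homogeneous medium with scattering coefficient \sigma_s and extinction coefficient \sigma_t, is

    L_{in}(d) = \int_0^d \sigma_s \, p(\theta(x)) \, \frac{I}{D(x)^2} \, e^{-\sigma_t D(x)} \, e^{-\sigma_t x} \, V(x) \, dx

where x parameterizes the view ray up to the first surface hit at distance d, D(x) is the distance from the light source to the point on the ray, p is the phase function, and V(x) is a visibility term; the shadow planes restrict the integral to the lit segments of the ray.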



Visual Exploration of Seismic Volume Datasets

Ropinski,T., Steinicke,F., Hinrichs,K.

Abstract:
This paper introduces a novel method supporting the interactive exploration of volumetric subsurface data. To facilitate better insights into the datasets we propose the application of focus and context visualization metaphors. Using these metaphors users can emphasize arbitrary parts of a dataset or remove occluding information interactively to focus on the region of interest. In addition to these visualization issues we will explain how the focus and context metaphors can be combined with VR-based interaction techniques to allow the efficient exploration within more immersive VR environments. In particular, we will discuss how to control the focus and context metaphor to highlight the region of interest in combination with the usage of visual bookmarks to track potentially interesting regions within large volumetric subsurface datasets.



GPU-Friendly High-Quality Terrain Rendering

Schneider,J., Westermann,R.

Abstract:
In this paper, we present a LOD rendering technique for large, textured terrain which is well suited for use on programmable graphics hardware (GPUs). In a pre-process, we tile the domain and compute for each tile a discrete set of LODs using a nested mesh hierarchy. This hierarchy is progressively encoded. At run time, continuous LODs can be generated simply by interpolating per-vertex height values on the GPU. Any re-triangulation of the tiles is avoided at run time. Because the number of triangles in the mesh hierarchy is substantially reduced and vertices are transmitted progressively, our approach reduces bandwidth requirements significantly. During a typical fly-over we can guarantee extremely small pixel errors at very high frame rates.
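The continuous LOD interpolation is essentially a per-vertex geomorph; in its simplest form (our notation, not necessarily the paper's exact scheme) the rendered height of a vertex is

    h(\alpha) = (1 - \alpha) \, h_{coarse} + \alpha \, h_{fine}, \qquad \alpha \in [0, 1]

where h_coarse and h_fine are the vertex heights in two consecutive discrete LODs of a tile and \alpha is a blend factor derived from the current view (e.g. from the distance to the viewer), evaluated in the vertex shader.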



Real-time Plane-sweep with Local Strategy

Nozick,V., Michelin,S., Arques,D.

Abstract:
Recent research in computer vision has made significant progress in reconstructing depth information from two-dimensional images. A new challenge is to extend these techniques to video images. Given a small set of calibrated video cameras, our goal is to render dynamic scenes on-line, in real time and from new viewpoints. This paper presents an image-based rendering system using photogrammetric constraints without any knowledge of the geometry of the scene. Our approach follows a plane-sweep algorithm extended by a local dynamic scoring that handles occlusions. In addition, we present an optimization of our method for stereoscopic rendering which computes the second image at low cost. Our method achieves real-time frame rates on consumer graphics hardware thanks to fragment shaders.
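The basic plane-sweep scoring can be summarized by the following CPU-style sketch; the GPU version evaluates the inner loop in a fragment shader, all names are invented for illustration, and the paper's local, occlusion-aware scoring is replaced here by a simple color-variance score:

    #include <cfloat>
    #include <functional>
    #include <vector>

    struct Color { float r, g, b; };

    // Placeholder for "project (x, y, depth) into camera 'cam' and fetch its color";
    // camera calibration and image access are assumed to exist elsewhere.
    using CameraSampler = std::function<Color(int cam, float x, float y, float depth)>;

    // Variance of the sampled colors: low variance means the cameras agree,
    // i.e. the sweep plane is likely close to the true surface at this pixel.
    static float colorVariance(const std::vector<Color>& s) {
        float mr = 0, mg = 0, mb = 0;
        for (const Color& c : s) { mr += c.r; mg += c.g; mb += c.b; }
        mr /= s.size(); mg /= s.size(); mb /= s.size();
        float v = 0;
        for (const Color& c : s)
            v += (c.r - mr) * (c.r - mr) + (c.g - mg) * (c.g - mg) + (c.b - mb) * (c.b - mb);
        return v / s.size();
    }

    // For one virtual-view pixel: sweep the depth planes and keep the depth
    // with the best photo-consistency score.
    float bestDepthForPixel(float x, float y, int numCams,
                            const std::vector<float>& planeDepths,
                            const CameraSampler& sample) {
        float bestDepth = planeDepths.front(), bestScore = FLT_MAX;
        for (float depth : planeDepths) {
            std::vector<Color> samples;
            for (int cam = 0; cam < numCams; ++cam)
                samples.push_back(sample(cam, x, y, depth));
            float score = colorVariance(samples);          // lower is better
            if (score < bestScore) { bestScore = score; bestDepth = depth; }
        }
        return bestDepth;
    }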



Optimizing GPU Volume Rendering

Ruijters,D., Vilanova,A.

Abstract:
Volume rendering approaches that exploit the capabilities of the GPU offer high performance on off-the-shelf hardware. In this article, we discuss the various bottlenecks encountered in graphics hardware when performing GPU-based volume rendering. The specific properties of each bottleneck and the trade-offs between them are described.

Further, we present a novel strategy to balance the load on the identified bottlenecks without compromising image quality. Our strategy introduces a two-stage space leaping, whereby the first stage applies bricking on a semi-regular grid and the second stage uses octrees to reach a finer granularity. Additionally, we apply early ray termination to the bricks. We demonstrate how the two stages address the individual bottlenecks and how they can be tuned for a specific hardware pipeline. The described method takes into account that the rendered volume may exceed the available texture memory. Our approach also allows fast run-time changes of the transfer function.

Tests show that, depending on the graphics hardware used, for a sparse vascular 512^3 dataset with only 3% of the voxels containing visible data, 87% to 99% of the rendering time per frame can be saved compared to non-optimized GPU volume rendering.
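The interplay between the two space-leaping stages and early ray termination can be outlined as follows. This is a CPU-style sketch with hypothetical helper functions (the real implementation runs on the GPU, applies early ray termination per brick and handles volumes exceeding texture memory), shown here only to illustrate the control flow:

    struct Vec3 { float x, y, z; };
    struct Rgba { float r, g, b, a; };

    // Hypothetical helpers standing in for the brick grid, the per-brick octrees
    // and the transfer-function lookup of a real renderer.
    bool  brickIsEmpty(const Vec3& p);                      // stage 1: semi-regular bricks
    bool  octreeNodeIsEmpty(const Vec3& p);                 // stage 2: finer octree nodes
    float brickExitDistance(const Vec3& p, const Vec3& d);  // distance until the brick ends
    float octreeNodeExitDistance(const Vec3& p, const Vec3& d);
    Rgba  classifyAndShade(const Vec3& p);                  // sample -> color and opacity

    Rgba castRay(Vec3 origin, Vec3 dir, float tEntry, float tExit, float dt) {
        Rgba dst = {0, 0, 0, 0};
        float t = tEntry;
        while (t < tExit && dst.a < 0.95f) {                // 0.95: early ray termination
            Vec3 p = { origin.x + t * dir.x, origin.y + t * dir.y, origin.z + t * dir.z };
            if (brickIsEmpty(p))      { t += brickExitDistance(p, dir);      continue; }
            if (octreeNodeIsEmpty(p)) { t += octreeNodeExitDistance(p, dir); continue; }
            Rgba src = classifyAndShade(p);
            dst.r += (1.0f - dst.a) * src.a * src.r;        // front-to-back compositing
            dst.g += (1.0f - dst.a) * src.a * src.g;
            dst.b += (1.0f - dst.a) * src.a * src.b;
            dst.a += (1.0f - dst.a) * src.a;
            t += dt;
        }
        return dst;
    }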



3D Reconstruction and Visualization of Spiral Galaxies

Hildebrand,K., Magnor,M., Froehlich,B.

Abstract:
Spiral galaxies are among the most stunning objects in the night sky. However, reconstructing a 3D volumetric model of these astronomical objects from conventional 2D images is a hard problem, since we are restricted to our terrestrial point of view.
This work consists of two contributions. First, we employ a physically motivated, GPU-based volume rendering algorithm which models the complex interplay of scattering and extinction of light in interstellar space. Second, making use of general galactic shape information and far-infrared data, we present a new approach to recover 3D volumes of spiral galaxies from conventional 2D images. We achieve this by an analysis-by-synthesis optimization that uses our rendering algorithm to minimize the difference between the rendition of the reconstructed volume and the input galaxy image. The presented approach yields a plausible volumetric structure of spiral galaxies which is suitable for creating 3D visualizations, e.g., for planetarium shows or other educational purposes.
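Stated generically (in our notation), the analysis-by-synthesis step solves an optimization of the form

    V^* = \arg\min_V \; \| R(V) - I \|^2

where V is the volumetric galaxy model, R the volume renderer producing an image from the terrestrial viewpoint, and I the observed 2D galaxy image; the general galactic shape information and the far-infrared data constrain the admissible volumes V.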



Volume Wires: A Framework for Empirical Nonlinear Deformation of Volumetric Datasets

Walton,S.J., Jones,M.W.

Abstract:
We introduce a new framework for non-linear, non-reconstructive deformation of volumetric datasets. Traditional techniques for deforming volumetric datasets non-linearly usually involve a reconstruction stage, where a new deformed volume is reconstructed and then sent to the renderer. Our intuitive sweep-based technique avoids the drawbacks of reconstruction by creating a small attribute field which defines the deformation, and then sending it with the original volume dataset to the rendering stage. This paper also introduces acceleration techniques aimed at giving interactive control of deformation in future implementations.



View-dependent Tetrahedral Meshing and Rendering using Arbitrary Segments

Sondershaus,R., Strasser,W.

Abstract:
We present a meshing and rendering framework for tetrahedral meshes that constructs a multiresolution representation and uses this representation to adapt the mesh to rendering parameters.
The mesh is partitioned into several segments which are simplified independently. A multiresolution representation is constructed by merging simplified segments and simplifying the merged segments again. We end up with a (binary) hierarchy of segments whose parent nodes are the simplified versions of their child nodes. We show how segments of arbitrary levels can be connected efficiently, such that the mesh can be adapted quickly to rendering parameters at run time.
This hierarchy is stored on disk, and segments are swapped into main memory as needed. Our algorithm ensures that the adapted mesh can always be treated from the outside like a non-segmented mesh and can thus be used by any renderer. We demonstrate a segmentation technique that is based on an octree, although the multiresolution representation itself does not rely on any particular segmentation technique.



Detecting Holes in Point Set Surfaces

Bendels,G.H., Schnabel,R., Klein,R.

Abstract:
Models of non-trivial objects resulting from a 3D data acquisition process (e.g., laser range scanning) often contain holes due to occlusion, reflectance or transparency. As point set surfaces are unstructured surface representations with no adjacency or connectivity information, defining and detecting holes is a non-trivial task. In this paper we investigate properties of point sets to derive criteria for automatic hole detection. For each point, we combine several criteria into an integrated boundary probability. A final boundary loop extraction step uses this probability and exploits additional coherence properties of the boundary to derive a robust and automatic hole detection algorithm.
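One simple way to form such an integrated boundary probability (illustrative only; the paper's actual combination may differ) is a weighted combination of normalized per-point criteria:

    P_{boundary}(p) = \sum_i w_i \, c_i(p), \qquad \sum_i w_i = 1, \quad c_i(p) \in [0, 1]

where each c_i(p) is one boundary criterion evaluated in the local neighborhood of p (for instance an angular gap or an asymmetry measure of the neighboring points) and the weights w_i reflect the relative reliability of the criteria.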



Making Grass and Fur Move

Banisch,S., Wüthrich,C.A.

Abstract:
This paper introduces physical laws into the real-time animation of fur and grass. The main idea to achieve this is to combine shell-based rendering with mass-spring systems. In a preprocessing step, a volume array is filled with the structure of fur and grass by a method based on exponential functions.
The volumetric data is used to generate a series of two-dimensional, semi-transparent textures that encode the presence of hairs or blades. In order to render the fur volume in real time, these shell textures are applied to a series of layers extruded above the initial surface. Moving fur is achieved by horizontally displacing these shell layers at runtime through a mass-spring mesh. Four different mass-spring topologies - different arrangements of masses and springs over the grass-covered surface - are introduced and used for animation. Two of them allow the shell layers to separate laterally, so that the "parting" of grass can be simulated. Performance measurements show mass-spring systems to be well suited for the real-time simulation of fur and grass dynamics.
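A minimal sketch of how a mass-spring node can drive the shell layers above it (our own simplification; all names are invented and the four topologies of the paper are not modeled here):

    struct Vec2 { float x, y; };

    // One mass point of the mass-spring mesh lying on the fur/grass-covered surface.
    struct MassPoint {
        Vec2 displacement = {0, 0};   // horizontal offset from the rest position
        Vec2 velocity     = {0, 0};
    };

    // Hooke spring pulling the displacement back towards the rest state,
    // plus damping and an external force (e.g. wind); explicit Euler step.
    void integrate(MassPoint& m, Vec2 external, float stiffness, float damping,
                   float mass, float dt) {
        Vec2 force = { -stiffness * m.displacement.x - damping * m.velocity.x + external.x,
                       -stiffness * m.displacement.y - damping * m.velocity.y + external.y };
        m.velocity.x += (force.x / mass) * dt;
        m.velocity.y += (force.y / mass) * dt;
        m.displacement.x += m.velocity.x * dt;
        m.displacement.y += m.velocity.y * dt;
    }

    // The i-th of n shell layers is offset proportionally to its height above the
    // surface, so the tips of the hairs or blades move further than their roots.
    Vec2 shellOffset(const MassPoint& m, int layer, int numLayers) {
        float h = float(layer + 1) / float(numLayers);
        return { m.displacement.x * h, m.displacement.y * h };
    }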



Similarity Brushing for Exploring Multidimensional Relations

Novotny,M., Hauser,H.

Abstract:
Displaying multidimensional information has always been a challenge. Projecting multiple dimensions onto a two-dimensional display is one of the core tasks of information visualization. The human visual system is limited to a low number of dimensions, and therefore such a human-oriented projection cannot easily convey the whole information contained in the original space.

This paper introduces a new interaction tool that implants the n-dimensional information into a low-dimensional view and bridges the projection space with the original space in an intuitive and simple way. In one direction, the tool performs n-dimensional data-driven brushing based on screen-space interaction. In the opposite direction, it allows for interactive visual exploration of the original multidimensional space in an infovis display. The implementation is presented using a standard scatterplot, but it can be extended to many other infovis techniques, as the concept does not depend on the screen-space configuration.
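The core of such a tool can be sketched as a two-way query (a simplified sketch with invented names; the paper's similarity definition may differ): a screen-space brush selects projected points, and their neighbors in the original n-dimensional space are then brushed as well:

    #include <cmath>
    #include <vector>

    // One record with its n-dimensional attributes and its 2D projected position.
    struct Item {
        std::vector<float> attrs;   // original n-dimensional data
        float sx, sy;               // screen-space position in the scatterplot
    };

    // Euclidean distance in the original n-dimensional attribute space.
    static float attrDistance(const Item& a, const Item& b) {
        float d2 = 0;
        for (size_t i = 0; i < a.attrs.size(); ++i) {
            float d = a.attrs[i] - b.attrs[i];
            d2 += d * d;
        }
        return std::sqrt(d2);
    }

    // Screen-space brush (circle of radius r around the cursor) extended to all
    // items that are similar, in data space, to any directly brushed item.
    std::vector<int> similarityBrush(const std::vector<Item>& items,
                                     float cx, float cy, float r, float simRadius) {
        std::vector<int> directlyBrushed, result;
        for (size_t i = 0; i < items.size(); ++i) {
            float dx = items[i].sx - cx, dy = items[i].sy - cy;
            if (dx * dx + dy * dy <= r * r) directlyBrushed.push_back((int)i);
        }
        for (size_t i = 0; i < items.size(); ++i)
            for (int j : directlyBrushed)
                if (attrDistance(items[i], items[(size_t)j]) <= simRadius) {
                    result.push_back((int)i);
                    break;
                }
        return result;
    }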



Hardware-Accelerated Collision Detection using Bounded-Error Fixed-Point Arithmetic

Raabe,A., Hochgürtel,S., Zachmann,G., Anlauf,J.K.

Abstract:
A novel approach for highly space-efficient, hardware-accelerated collision detection is presented. This paper focuses on the architecture to traverse bounding volume hierarchies in hardware. It is based on a novel algorithm for testing discretely oriented polytopes (DOPs) for overlap, utilizing only fixed-point (i.e., integer) arithmetic. We derive a bound on the deviation from the mathematically correct result and give a formal proof that no false negatives are produced.
Simulation results show that real-time collision detection of complex objects at rates required by force-feedback and physically-based simulations can be achieved. In addition, synthesis results show the architecture to be highly space-efficient. We compare our FPGA-optimized design with a fully parallelized ASIC-targeted architecture and a software implementation.
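The principle behind a conservative fixed-point overlap test can be illustrated as follows (this illustrates only the principle; the paper's hardware architecture and, in particular, the re-alignment of DOPs under rigid motions, where the derived error bound actually matters, are omitted). Slab extents are rounded outward when quantized, so rounding can only enlarge the tested volumes and therefore never produces a false negative:

    #include <cmath>
    #include <cstdint>

    // Number of slab directions of the DOP and the fixed-point scale;
    // both values are illustrative.
    constexpr int   kSlabs = 24;
    constexpr float kScale = 1024.0f;   // 10 fractional bits

    struct DopFixed {
        int32_t lo[kSlabs];   // quantized lower slab extents (rounded down)
        int32_t hi[kSlabs];   // quantized upper slab extents (rounded up)
    };

    // Conservative quantization: the fixed-point DOP always encloses the
    // floating-point one, so the deviation can only cause false positives.
    DopFixed quantize(const float lo[kSlabs], const float hi[kSlabs]) {
        DopFixed d;
        for (int i = 0; i < kSlabs; ++i) {
            d.lo[i] = (int32_t)std::floor(lo[i] * kScale);
            d.hi[i] = (int32_t)std::ceil(hi[i] * kScale);
        }
        return d;
    }

    // Two DOPs overlap only if their intervals overlap along every slab
    // direction (assuming both are given in the same, aligned set of directions).
    bool overlap(const DopFixed& a, const DopFixed& b) {
        for (int i = 0; i < kSlabs; ++i)
            if (a.hi[i] < b.lo[i] || b.hi[i] < a.lo[i])
                return false;
        return true;
    }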



Improving Quality of Free-Viewpoint Image by Mesh Based 3D Shape Deformation

Yaguchi,S., Saito,H.

Abstract:
In this paper, we present a method to synthesize high-quality virtual viewpoint images of objects with detailed textures. About 30 images are taken by multiple uncalibrated cameras around the object, and a visual hull model is reconstructed with the shape-from-silhouette method. To deform the 3D surface model converted from the visual hull model using information such as image texture and object silhouettes, the difference between the real object and the reconstructed model is evaluated as the cost function of an optimization problem.
Our model deformation algorithm is based on iterative shifting of single vertices. Each vertex of the surface triangle mesh is moved to the candidate point that maximizes the cost function. The cost function consists of four constraint criteria: texture correlation, smoothness, object silhouette, and mesh shape regularity. In addition to the cost function, operations such as checking mesh orientation and merging/dividing meshes are applied to the refined 3D model to avoid mesh folding and uneven mesh sizes. The refined model provides quite accurate dense correspondences between the input images, so that high-quality images can be synthesized at the virtual viewpoint.
We also demonstrate the proposed method by synthesizing virtual viewpoint images from real images taken by multiple uncalibrated cameras.
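Written generically (our notation; the exact weighting is not specified in the abstract), the per-vertex cost that is maximized combines the four criteria, for example as a weighted sum:

    C(v) = w_t \, C_{tex}(v) + w_s \, C_{smooth}(v) + w_{sil} \, C_{sil}(v) + w_r \, C_{reg}(v)

where C_tex measures texture correlation between the input images, C_smooth local surface smoothness, C_sil agreement with the object silhouettes, C_reg mesh shape regularity, and the weights w_* balance the criteria.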



Reusing Frames in Camera Animation

Méndez-Feliu,A., Sbert,M., Szirmay-Kalos,L.

Abstract:
Rendering an animation in a global illumination framework is a very costly process. Each frame has to be computed with high accuracy to avoid both noise within a single frame and flickering from frame to frame. Recently, an efficient solution was presented for camera animation, which reuses the results computed in one frame for other frames via reprojection of the first hits of primary rays. This solution, however, is biased, since it does not take into account the different probability densities that generated the different contributions to a pixel. In this paper we present a correct, unbiased solution for frame reuse. We show how the different contributions can be combined into an unbiased estimator using multiple importance sampling. The validity of our solution is tested with an animation rendered using path tracing, and the results are compared with both the classic independent approach and the previous unweighted, biased solution.
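The key ingredient is the standard multiple importance sampling combination: if a pixel receives contributions generated with different probability densities p_i (here, because path hits are reused across frames with different cameras), the balance heuristic, for example, weights a sample x generated by technique i as

    w_i(x) = \frac{p_i(x)}{\sum_j p_j(x)}

so that the weighted sum of all contributions remains an unbiased estimate of the pixel value.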