vIsage - A visualization and debugging framework for distributed system applications

Lipski,C., Berger,K., Magnor,M.

Abstract:
We present a Visualization, Simulation, And Graphical debugging Environment (vIsage) for distributed systems. Time-varying spatial data as well as other information from different sources can be displayed and superimposed in a single view at run-time. The main contribution of our framework is that it is not just a tool for visualizing data, but also a graphical interface to a simulation environment. Real-world data can be recorded, played back or even synthesized. This enables testing and debugging of single components of complex distributed systems. As the missing link between development, simulation and testing, e.g., in robotics applications, it was designed to significantly increase the efficiency of the software development process.



Estimation of Joint Types and Joint Limits from Motion Capture Data

Engell-Norregard,M., Erleben,K.

Abstract:
It is time-consuming for an animator to explicitly model joint types and joint limits of articulated figures. In this paper we describe a simple and fast approach to automated joint estimation from motion capture data of articulated figures. Our method makes joint modeling more efficient and less time-consuming for the animator by providing a good starting estimate that can be fine-tuned or extended by the animator if she wishes, without restricting her artistic freedom. Our method is simple, easy to implement and specific to the types of articulated figures used in interactive animation such as computer games. Other work on joint limit modeling considers more complex and general-purpose models; however, these are not immediately suitable for the inverse kinematics skeletons used in interactive applications.



Interactively Refining Object-Recognition System

Eissele,M., Sanftmann,H., Ertl, T.

Abstract:
Existing techniques for object identification often make use of a combination of multiple algorithms and sensors to achieve adequate results. In this paper we propose a real-time system that efficiently combines multiple object-recognition techniques and is appropriate for mobile Augmented Reality applications. We focus on the challenge of differentiating objects with only marginal distinguishing features that can often only be identified from specific points of view, and solve this problem by interactively guiding the user during the recognition process. The system is based on a hierarchy to organize model data and control the corresponding feature-detection techniques, as shown in a prototypical implementation. Furthermore, recognition techniques are chosen based on context information, e.g. feature type, reliability of sensor data, etc.



Statistical Reconstruction of Indoor Scenes

Jenke,P., Huhle,B., Straßer,W.

Abstract:
In this paper we consider the problem of processing scanned datasets of man-made scenes such as building interiors and office environments. Such datasets are produced in huge quantities and often share a simple structure with sharp crease lines. However, their usual acquisition with mobile devices often leads to poor data quality, and established reconstruction methods fail -- at least at reconstructing sharp features. We propose to overcome the lack of reliable information by using a strong shape prior in the reconstruction method: we assume that the scene can be represented as a collection of cuboid shapes, each covering a subset of the data. The optimal configuration of cuboids is found by formulating the reconstruction problem as a discrete maximum a posteriori (MAP) optimization in a statistical sense. We propose a greedy algorithm which iteratively extracts new shape candidates and optimizes over the shape of the cuboids. A new candidate is selected by scoring its ability to reconstruct previously uncovered data points. The iteration converges at the first significant drop in the score of new candidates. Our method is fast and extremely robust to noisy and incomplete data, which we show by applying it to scanned datasets acquired with different devices.
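For illustration only, a minimal Python sketch of a greedy selection loop of the kind described above, using hypothetical axis-aligned boxes as cuboid candidates and a plain coverage count as the score; the paper's candidate extraction, MAP formulation and shape optimization are not reproduced here.

    import numpy as np

    def points_in_box(points, box):
        """Boolean mask of points inside an axis-aligned box ((xmin,ymin,zmin), (xmax,ymax,zmax))."""
        lo, hi = box
        return np.all((points >= lo) & (points <= hi), axis=1)

    def greedy_cuboid_cover(points, candidates, drop_ratio=0.2):
        """Greedily pick candidate boxes that cover the most still-uncovered points.
        Stops at the first significant drop of the score relative to the first pick."""
        uncovered = np.ones(len(points), dtype=bool)
        selected, best_score = [], None
        while True:
            scores = [np.count_nonzero(points_in_box(points[uncovered], box)) for box in candidates]
            if not scores:
                break
            i = int(np.argmax(scores))
            score = scores[i]
            if best_score is None:
                best_score = score
            if score == 0 or score < drop_ratio * best_score:   # first significant drop -> stop
                break
            selected.append(candidates[i])
            uncovered &= ~points_in_box(points, candidates[i])
        return selected

    # toy usage: points sampled from two boxes, plus one candidate that covers nothing
    rng = np.random.default_rng(0)
    pts = np.vstack([rng.uniform(0, 1, (500, 3)), rng.uniform([2, 0, 0], [3, 1, 1], (500, 3))])
    cands = [(np.array([0, 0, 0]), np.array([1, 1, 1])),
             (np.array([2, 0, 0]), np.array([3, 1, 1])),
             (np.array([5, 5, 5]), np.array([6, 6, 6]))]
    print(len(greedy_cuboid_cover(pts, cands)))   # -> 2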



Adaptive Streaming and Rendering of Large Terrains: A Generic Solution

Lerbour,R., Marvie,J.-E., Gautron,P.

Abstract:
We describe a generic solution for remote adaptive streaming and rendering of large terrains. The challenge is to ensure fast rendering and rapidly improving quality under any user interaction, network capacity and rendering system performance. We adapt to these constraints so that loading and rendering speeds do not depend on the size of the database. We can thus use any database with any client device. Our solution relies on a generic data structure to adaptively handle data from the server hard disk to the client rendering system. The same methods apply whatever is done with these data: only the data themselves and the rendering system vary. We base our data structure on existing solutions with good properties and add new methods to handle it more efficiently. In particular, we avoid loading irrelevant or redundant data and we request the most important data first. We also avoid costly data structure operations as much as possible, in favor of "in-place" data updates and selection using sample masks.



An Improved Technique for Full Spectral Rendering

Radziszewski,M., Boryczko,K., Alda,W.

Abstract:
In this paper we present an improved approach to full spectral rendering. The technique is optimized for quasi-Monte Carlo ray tracing; however, the underlying physical theory can be applied to any global illumination scheme. We start with an explanation of the necessity of full spectral rendering in any correct global illumination system. Then we present, step by step, a rendering scheme using full spectrum simulation. First, we give details on random point sampling as a method of representing spectra, then we introduce an improved spectral sampling technique designed to reduce the variance of images of wavelength-dependent phenomena, and finally we show how to integrate the novel sampling technique with selected ray tracing algorithms.



Locally adaptive marching cubes through iso-value variation

Glanznig,M., Malik,M.M., Gröller,M.E.

Abstract:
We present a locally adaptive marching cubes algorithm. It is a modification of the marching cubes algorithm where instead of a global iso-value each grid point has its own iso-value. This defines an iso-value field, which modifies the case identification process in the algorithm. The marching cubes algorithm uses linear interpolation to compute intersections of the surface with the cell edges. Our modification computes the intersection of two general line segments, because there is no longer a constant iso-value at each cube vertex. An iso-value field enables the algorithm to correct biases within the dataset like low frequency noise, contrast drifts, local density variations and other artefacts introduced by the measurement process. It can also be used for blending between different isosurfaces (e.g., skin, veins and bone in a medical dataset).
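For illustration, a minimal Python sketch of the modified edge-intersection computation described above: instead of intersecting the interpolated field with a constant iso-value, the field segment is intersected with the linearly interpolated iso-value segment. Variable names are illustrative, not taken from the paper.

    def edge_intersection(p0, p1, f0, f1, iso0, iso1):
        """Intersection of the field segment (f0 -> f1) with the iso-value segment (iso0 -> iso1)
        along the cell edge p0 -> p1.  Returns the intersection point, or None if the two
        segments do not cross within the edge."""
        denom = (f1 - f0) - (iso1 - iso0)
        if denom == 0.0:                       # segments are parallel: no unique crossing
            return None
        t = (iso0 - f0) / denom
        if not 0.0 <= t <= 1.0:                # crossing lies outside this edge
            return None
        return tuple(a + t * (b - a) for a, b in zip(p0, p1))

    # classic marching cubes is the special case iso0 == iso1:
    print(edge_intersection((0, 0, 0), (1, 0, 0), f0=0.2, f1=0.8, iso0=0.5, iso1=0.5))  # (0.5, 0.0, 0.0)
    print(edge_intersection((0, 0, 0), (1, 0, 0), f0=0.2, f1=0.8, iso0=0.3, iso1=0.7))  # (0.5, 0.0, 0.0)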



Computation and Visualization of Fabrication Artifacts

Malik,M.M., Heinzl,Ch., Gröller,M.E.

Abstract:
This paper proposes a novel technique to measure fabrication artifacts through direct comparison of a reference surface model with the corresponding industrial CT volume. Our technique uses the information from the surface model to locate corresponding points in the CT dataset. We then compute various comparison metrics to measure differences (fabrication artifacts) between the two datasets. The differences are presented to the user both visually and quantitatively. Our comparison techniques are divided into two groups, namely geometry-driven comparison techniques and visual-driven comparison techniques. The geometry-driven techniques provide an overview, while the visual-driven techniques can be used for a localized examination and for determining precise information about the differences between the datasets.



Dynamic Virtual Textures

Taibo,J., Seoane,A., Hernández,A.

Abstract:
The real-time rendering of arbitrarily large textures is a problem that has long been studied in terrain visualization. For years, different approaches have been published that have either expensive hardware requirements or other severe limitations in quality, performance, or versatility. The biggest problem is usually a strong coupling between geometry and texture, regarding both database structure and LOD management.
This paper presents a new approach to high resolution real-time texturing of dynamic data that avoids the drawbacks of previous techniques and offers additional possibilities. The most important benefits are: out-of-core texture visualization from dynamic data, efficient per-fragment texture LOD computation, total independence from the geometry engine, high quality filtering and ease of integration with user custom shaders and multitexturing. Because of its versatility and independence from geometry, the proposed technique can be easily and efficiently applied to any existing terrain geometry engine in a transparent way.



Selective Deblocking Method Using a Transform Table of Different Dimension DCT

Lim,T., Ryu,J., Jeong,J.

Abstract:
In this paper, we propose a selective deblocking algorithm that reduces block discontinuities in the DCT domain. Our algorithm applies a deblocking procedure to each line of three adjacent blocks, so each block is divided into several line vectors. Three low-pass filters are applied to the 1×24 DCT values, chosen according to the condition of the three adjacent vectors in order to preserve image details, and we use a transform table between DCTs of different dimensions (1×8 and 1×24 DCT) to reduce the computational cost. The experimental results show that the proposed algorithm yields good results in terms of subjective image quality and computational efficiency.
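For illustration only, a rough Python sketch of the relationship the abstract exploits: three adjacent 1×8 DCT line vectors are mapped to one 1×24 DCT vector, low-pass filtered there, and mapped back. The paper replaces the explicit round trip below with a precomputed transform table and condition-dependent filters, which are not reproduced; the cut-off used here is an arbitrary placeholder.

    import numpy as np
    from scipy.fft import dct, idct

    def deblock_line(dct8_blocks, keep=12):
        """dct8_blocks: array of shape (3, 8) holding the 1x8 DCT-II coefficients of three
        adjacent blocks along one line.  Returns smoothed 1x8 DCT blocks of the same shape."""
        spatial = idct(dct8_blocks, norm='ortho', axis=1).ravel()   # back to 24 spatial samples
        coeffs24 = dct(spatial, norm='ortho')                       # one 1x24 DCT
        coeffs24[keep:] = 0.0                                       # crude low-pass across the block boundaries
        smoothed = idct(coeffs24, norm='ortho')
        return dct(smoothed.reshape(3, 8), norm='ortho', axis=1)    # back to per-block 1x8 DCT

    # toy usage: a line with a blocking discontinuity at the boundary
    line = np.concatenate([np.zeros(8), np.ones(8) * 4.0, np.ones(8) * 4.5])
    blocks = dct(line.reshape(3, 8), norm='ortho', axis=1)
    print(idct(deblock_line(blocks), norm='ortho', axis=1).round(2))   # smoothed spatial samples of the three blocks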



Flexible Configurable Stream Processing Of Point Data

Boesch,J., Pajarola,R.

Abstract:
To efficiently handle the continuously increasing raw point data-set sizes from high-resolution laser-range scanning devices or baseline stereo and multi-view 3D object reconstruction systems, powerful geometry processing solutions are required. We present a flexible and run-time configurable system for efficient out-of-core geometry processing of point cloud data that significantly extends and greatly improves the stream-based point processing framework introduced in XX. In this system paper we introduce an optimized and run-time extensible implementation, a number of algorithmic improvements, as well as new stream-processing functionality. As a consequence of the novel and improved system architecture, implementation and algorithms, our experimental results demonstrate a dramatically increased performance.



Algebraic 3D Reconstruction of Planetary Nebulae

Wenger,S., Fernandez,A., Morisset,J.C., Magnor,M.

Abstract:
Distant astrophysical objects like planetary nebulae can normally only be observed from a single point of view. Assuming a cylindrically symmetric geometry, one can nevertheless create 3D models of those objects using tomographic methods. We solve the resulting algebraic equations efficiently on graphics hardware. Small deviations from axial symmetry are then corrected using heuristic methods, because the resulting 3D models are, in general, no longer unambiguously defined. We visualize the models using real-time volume rendering. Models of actual planetary nebulae created by this approach match the observational data acquired from the earth's viewpoint, while also looking plausible from other viewpoints for which no observational data is available.
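For illustration, a minimal CPU sketch of one standard way to solve such algebraic reconstruction equations, a plain Kaczmarz (ART) iteration on a toy system; the paper's GPU solver and symmetry handling are not reproduced here.

    import numpy as np

    def kaczmarz(A, b, n_sweeps=50, relax=1.0):
        """Kaczmarz / ART iteration for A x = b: project the estimate onto one row's
        hyperplane at a time.  A classical solver for tomographic reconstruction problems."""
        x = np.zeros(A.shape[1])
        row_norms = np.einsum('ij,ij->i', A, A)
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):
                if row_norms[i] > 0:
                    x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
        return x

    # toy usage: a tiny consistent "projection" system
    A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 1.0]])
    x_true = np.array([0.5, 1.0, 2.0])
    print(kaczmarz(A, A @ x_true).round(3))   # approaches [0.5, 1.0, 2.0]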



Interactive Exploration of Large Event Datasets in High Energy Physics

Hermann,M., Greß,A., Klein,R.

Abstract:
In high energy physics the structure of matter is investigated through particle accelerator experiments where particle collisions (events) occur at such high energies that new particles are produced. Providing tools for intuitive, interactive visual inspection of the billions of such events occurring in an experiment is a challenging task. In order to solve this problem we build on previous approaches for visual browsing through image databases and extend them in several ways to allow efficient navigation through collision event datasets. The key features of our novel browsing technique are its applicability to very large event datasets, a more intuitive selection method for specifying a region of interest, and finally a clustering-based technique that further simplifies and improves the navigation process. We demonstrate the potential of our novel visual inspection system by integrating it into an event display application for the COMPASS experiment at CERN.



Pick-by-Vision: An Augmented Reality supported Picking System

Reif,R., Guenther,W.A.

Abstract:
Order picking is one of the most important process steps in logistics. Because of their flexibility, human beings cannot be replaced by machines. But if workers in order picking systems are equipped with a head-mounted display, Augmented Reality can improve the information visualization.
In this paper the development of such a system - called Pick-by-Vision - is presented. The system is evaluated in a user study performed in a real storage environment. Important logistics figures as well as subjective measures were recorded. The results show that Pick-by-Vision can considerably improve order picking processes.



Prefiltered Gradient Reconstruction for Volume Rendering

Csébfalvi,B., Domonkos,B.

Abstract:
The quality of images generated by volume rendering strongly depends on the applied continuous reconstruction method. Recently, it has been shown that the reconstruction of the underlying function can be improved by discrete prefiltering. In volume rendering, however, accurate gradient reconstruction also plays an important role, as it provides the surface normals for the shading computations. Therefore, in this paper, we propose prefiltering schemes in order to increase the accuracy of the estimated gradients, yielding higher image quality. We search for discrete prefilters of minimal support which can be efficiently used in a preprocessing step as well as on the fly.
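For illustration only, a minimal Python sketch of the general idea under illustrative assumptions: the sampled volume is convolved with a short discrete prefilter along each axis, and gradients are then taken by central differences on the prefiltered samples. The filter taps below are placeholders, not the minimal-support prefilters derived in the paper.

    import numpy as np

    def prefilter(volume, taps=(-1/6, 8/6, -1/6)):
        """Separable discrete prefiltering: convolve the volume with a short 1D kernel
        along every axis (the taps here are placeholders, not the paper's filters)."""
        out = volume.astype(float)
        kernel = np.asarray(taps, dtype=float)
        for axis in range(out.ndim):
            out = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode='same'), axis, out)
        return out

    def central_gradient(volume, spacing=1.0):
        """Central-difference gradient of a 3D scalar field; returns (gz, gy, gx) components."""
        return np.gradient(volume, spacing)

    # usage: sample a smooth field on a grid, prefilter, then estimate gradients for shading
    z, y, x = np.mgrid[0:32, 0:32, 0:32]
    field = np.sin(0.2 * x) * np.cos(0.2 * y) + 0.1 * z
    gz, gy, gx = central_gradient(prefilter(field))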



Interactive Editing of Upholstered Furniture

Schwartz,Ch., Degener,P., Klein,R.

Abstract:
Fast visualization of industrial parts for rapid prototyping is nowadays eased by the fact that CAD construction data is readily available in most cases. Upholstery constitutes an important exception as its shape is not given a priori but the result of complex physical interactions between hard bodies, soft cushioning and elastic sheets. In this paper we propose an interactive visualization and editing method for upholstery that infers physically plausible surfaces from a sewing pattern. Our method supports fast design decisions by allowing easy and intuitive modifications of the inferred surface at any time.

We also propose a reconstruction method for point clouds that is specifically targeted at upholstery. We argue that the sewing pattern encodes important information about shape and material deformations of the final surface and consequently use it as a prior in our reconstruction algorithm. The practicability of our method is demonstrated on two real-world data sets.



Hybrid sort-first/sort-last rendering for dense material particle systems

Latapie,S.

Abstract:
This paper describes a solution designed for efficient visualization of large and dense sets of particles, typically generated by molecular dynamics simulations in materials science. This solution is based on a hybrid distributed sort-first/sort-last architecture, and is meant to work on a generic commodity cluster feeding a tiled display. The package relies on the VTK framework with various extensions to achieve statistical occlusion culling, smart data partitioning and GPU-accelerated rendering.



User Motion Prediction in Large Virtual Environments

Pribyl,J., Zemcik,P.

Abstract:
Motion prediction of various objects is important in many applications. In some cases the prediction needs to be accurate for near-time queries, and in other cases it needs to be accurate for distant-time queries. For near-time queries, the techniques can assume that the trajectory of an object can be represented by mathematical functions. These formulas are often called motion functions and they use recent movements to predict future locations of the objects. For distant-time queries it is impossible to use simple mathematical formulas, because the movement trajectory between the current time and the distant future time is too complicated. One suitable method to describe such movement is prediction based on the object's trajectory pattern. For this purpose, the movement history of the object is needed. Data mining methods can then mine trajectory patterns from the historic movements, and these patterns can be used to predict the objects' future movements. The best contemporary methods use a combination of trajectory patterns and motion functions: if no trajectory pattern is found, the motion function is used to determine the object's near-term location. Using the trajectory-pattern prediction principle, a new approach to optimizing communication between client and server in large virtual environments is introduced. Short-time and long-time prediction queries are used to minimize the overall amount of downloaded data and to deliver the most probably requested parts of the scene with priority.
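For illustration, a minimal Python sketch of the motion-function side of such a scheme: a low-order polynomial is fitted to the most recent positions and extrapolated for near-time queries. The trajectory-pattern mining used for distant-time queries is not sketched here.

    import numpy as np

    def fit_motion_function(times, positions, degree=2):
        """Fit one polynomial per coordinate to recent samples.
        times: (n,), positions: (n, d).  Returns a callable t -> predicted position."""
        coeffs = [np.polyfit(times, positions[:, k], degree) for k in range(positions.shape[1])]
        return lambda t: np.array([np.polyval(c, t) for c in coeffs])

    # usage: recent 2D positions of an avatar, then a near-time query half a second ahead
    t = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
    p = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.4], [1.5, 0.9], [2.0, 1.6]])
    predict = fit_motion_function(t, p)
    print(predict(0.9))   # extrapolated location for a near-time query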



Low cost finger tracking for a virtual blackboard

Rustico,E.

Abstract:
This paper presents a complete and inexpensive system to track the movements of a physical pointer on a flat surface. Any opaque object can be used as a pointer (fingers, pens, etc.) and it is possible to discriminate whether the surface is being touched or just pointed at. The system relies on two entry-level webcams and uses a fast scanline-based algorithm. An automatic wizard helps the user during the initial setup of the two webcams. No markers, gloves or other hand-held devices are required. Since the system is independent of the nature of the pointing surface, it is possible to use a screen or a projected wall as a virtual touchscreen. The complexity of the algorithms used by the system grows less than linearly with resolution, making the software layer very lightweight and also suitable for low-powered devices like embedded controllers.



Repairing Heavily Damaged CAD-models

Emelyanov,A., Astakhov,Y.

Abstract:
The presented work addresses the problem of repairing incompletely reconstructed (damaged) CAD-models. To solve this problem, a general concept of repairing using various types of mathematical fields is proposed. One method developed within the framework of this concept is described in detail. As its basis, this method uses interpolation of a given, successfully reconstructed surface to estimate the behavior of the corresponding missing one. The ability of the method to repair heavily damaged CAD-models has been proven. The concept has great potential for further development, because its main advantage is that it creates a framework for effectively combining various methods of missing-surface estimation.



Quadrilateral mesh generation from point cloud by Monte Carlo method

Roth,A., Juhasz,I.

Abstract:
We present a Monte Carlo method that generates a quadrilateral mesh from a point cloud. The proposed algorithm evolves an initial quadrilateral mesh, constructed by means of the skeleton of the input points, towards the point cloud. The proposed technique proves to be useful in the case of relatively complex point clouds that describe smooth and non-self-intersecting surfaces with junctions/branches and loops. The resulting quadrilateral mesh may be used to reconstruct the surfaces by means of tensor product patches such as B-spline or NURBS.



Real-Time Dense and Accurate Parallel Optical Flow using CUDA

Marzat,J., Dumortier,Y., Ducrot,A.

Abstract:
A large number of processes in computer vision are based on measuring image motion, the projection of the real displacement onto the focal plane. Such motion is commonly approximated by the visual displacement field, called optical flow. Nowadays, a lot of different methods are used to estimate it, but a good trade-off between execution time and accuracy is hard to achieve with standard implementations. This paper tackles the problem by proposing a parallel implementation of the well-known pyramidal algorithm of Lucas & Kanade on a Graphics Processing Unit (GPU). It is programmed using the Compute Unified Device Architecture (CUDA) from NVIDIA Corporation, and produces a dense and accurate velocity field at about 15 Hz at a 640x480 image resolution.
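For illustration, a minimal single-level CPU/numpy sketch of the classical Lucas & Kanade step that the paper parallelizes (the pyramidal refinement and the CUDA kernels themselves are not reproduced): per pixel, a 2x2 system built from image gradients accumulated over a window is solved for the displacement.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def lucas_kanade_dense(img0, img1, window=7, eps=1e-6):
        """Dense single-level Lucas-Kanade: solve, per pixel, the 2x2 normal equations
        built from spatial/temporal gradients averaged over a square window."""
        img0 = img0.astype(float)
        img1 = img1.astype(float)
        iy, ix = np.gradient(img0)           # spatial gradients (rows, cols)
        it = img1 - img0                     # temporal gradient
        sxx = uniform_filter(ix * ix, window)
        syy = uniform_filter(iy * iy, window)
        sxy = uniform_filter(ix * iy, window)
        sxt = uniform_filter(ix * it, window)
        syt = uniform_filter(iy * it, window)
        det = sxx * syy - sxy * sxy
        det = np.where(np.abs(det) < eps, np.inf, det)   # degenerate windows get zero flow
        u = (-syy * sxt + sxy * syt) / det               # horizontal flow
        v = ( sxy * sxt - sxx * syt) / det               # vertical flow
        return u, v

    # toy usage: smoothed random texture shifted one pixel to the right
    rng = np.random.default_rng(0)
    img0 = uniform_filter(rng.normal(size=(128, 128)), 5)
    img1 = np.roll(img0, 1, axis=1)
    u, v = lucas_kanade_dense(img0, img1)
    print(np.median(u[20:-20, 20:-20]).round(2), np.median(v[20:-20, 20:-20]).round(2))  # close to 1.0, 0.0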



Responsive Grass

Orthmann,J., Salama,Ch.R., Kolb,A.

Abstract:
Large natural environments are often essential for today's computer games. Interaction with the environment is widely implemented in order to satisfy the player's expectations of a living scenery and to help increase the player's immersion. Within this context, our work describes an efficient way to simulate a responsive grass layer on today's graphics cards in real-time. Clumps of grass are approximated by two billboard representations. GPU-based distance maps of scene objects are employed to test for penetrations and to resolve them. Adaptive refinement is necessary to preserve the shape of deformed billboards. A recovery process is applied after the deformation, restoring the original, that is to say the undeformed and more efficient, shape. The primitives of each billboard are assembled during the rendering process. Their vertices are dynamically lit within an ambient-occlusion-based irradiance volume. Alpha-to-Coverage completes the illusion, as it is used to simulate the semitransparent nature of grass.



k-Nearest Moving Least Square Approximation using Gaussian Function

Jeon,Y.M., Lee,B.G., Lee,M.B., Yoon,J.H.

Abstract:
In this paper, we present a moving least squares method based on Gaussian functions. Compared to the classical method, our algorithm uses a different approach: we use shifts of a Gaussian basis function instead of polynomial functions, based on the k nearest neighboring data points. Experimental results are presented for real-world rangefinder 3D scatter data.
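For illustration only, a Python sketch of one plausible reading of such a scheme: for a query point, the k nearest samples are gathered and a weighted least-squares fit of Gaussian basis functions centered at those samples is evaluated. Weight function, bandwidth and regularization are illustrative assumptions, not the paper's exact formulation.

    import numpy as np
    from scipy.spatial import cKDTree

    def knn_mls_gaussian(points_xy, values, query_xy, k=12, sigma=0.3):
        """Approximate scattered data at query_xy: take the k nearest samples and fit,
        in a weighted least-squares sense, a combination of Gaussian basis functions
        centered at those samples."""
        tree = cKDTree(points_xy)
        dists, idx = tree.query(query_xy, k=k)
        nbrs, vals = points_xy[idx], values[idx]
        w = np.exp(-(dists / sigma) ** 2)                       # closer neighbors weigh more
        diff = nbrs[:, None, :] - nbrs[None, :, :]
        A = np.exp(-np.sum(diff ** 2, axis=2) / (2.0 * sigma ** 2))   # Gaussian basis design matrix
        W = np.diag(w)
        coeff = np.linalg.solve(A.T @ W @ A + 1e-6 * np.eye(k), A.T @ W @ vals)
        phi = np.exp(-np.sum((query_xy - nbrs) ** 2, axis=1) / (2.0 * sigma ** 2))
        return phi @ coeff

    # toy usage: noisy samples of z = sin(x) * cos(y), evaluated at one query point
    rng = np.random.default_rng(1)
    pts = rng.uniform(-2, 2, (400, 2))
    z = np.sin(pts[:, 0]) * np.cos(pts[:, 1]) + rng.normal(0, 0.02, 400)
    print(knn_mls_gaussian(pts, z, np.array([0.5, 0.5])))   # close to sin(0.5)*cos(0.5) ~ 0.42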



Geometric Diversity for Crowds on the GPU

Lister,W., Laycock,R.G., Day,A.M.

Abstract:
Pure geometric techniques have emerged as viable real-time alternatives to those traditionally used for rendering crowds. However, although capable of drawing many thousands of individually animated characters, the potential for injecting intra-crowd diversity within this framework remains to be fully explored. For urban crowds, a prominent source of diversity is that of clothing and this work presents a technique to render a crowd of clothed, virtual humans whilst minimising redundant vertex processing, overdraw and memory consumption. By adopting a piecewise representation, given an assigned outfit and pre-computed visibility metadata, characters can be constructed dynamically from a set of sub-meshes and rendered using skinned instancing. Using this technique, a geometric crowd of 1,000 independently clothed, animated and textured characters can be rendered at 40 fps.



View-Dependent Multiresolution Modeling on the GPU

Gumbau,J., Chover,M., Ramos,F., Puig-Centelles,A.

Abstract:
Throughout more than a decade, researchers on level-of-detail techniques have oriented their efforts towards developing better techniques and adapting their solutions to new hardware. Nevertheless, we consider that there is still a gap for efficient yet simple multiresolution models which fully exploit the possibilities offered by current GPUs. In this paper we present a new level-of-detail framework which moves the extraction process from updating indices to updating vertices. This feature allows us to perform culling and geomorphing on a per-vertex basis. Furthermore, it simplifies the update of indices to eliminate degenerate information. The model is capable of offering both uniform and variable resolution; in the latter sense, a silhouette-based criterion has been included. Finally, it is important to note that the model is completely integrated in the GPU and needs no CPU/GPU communication once all the information is correctly loaded in hardware memory.



Giga-Voxel Rendering from Compressed Data on a Display Wall

Parys,R., Knittel,G.

Abstract:
We present a parallel system capable of rendering multi-gigabyte data sets on a multi-megapixel display wall at interactive rates. The system is based on Residual Vector Quantization (RVQ), which allows us to render extremely large data sets directly out of graphics memory. At 0.75 bits per voxel, such large data sets can even be kept on a consumer-level graphics card. As an example we compress the whole full-color "Visible Human Female" data set, approximately 21 GByte in size, down to 700 MByte. Taking advantage of the fixed code length and the extremely simple decompression scheme of RVQ, all decompression is done on the GPU at very high rates. For each frame the data set is decompressed into small subvolumes which are rendered front to back. Classification and shading can be moved into the decompression step, speeding up the rendering pass.
We present the performance of the system running on a cluster of 16 PCs, each equipped with a modern graphics card including 1GByte of video memory. Each PC drives one display of a 4x4 display wall with a total resolution of 10240x6400 (65M) pixels.
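For illustration, a minimal numpy sketch of what fixed-length residual vector quantization decoding amounts to: each block of voxels is reconstructed by summing one codebook vector per quantization stage. Codebook layout and sizes are illustrative assumptions, not the paper's.

    import numpy as np

    def rvq_decode(indices, codebooks):
        """indices:   (n_blocks, n_stages) integer codes, one per RVQ stage.
        codebooks: (n_stages, codebook_size, block_size) float entries.
        Each block is the sum of the selected codebook vectors of all stages."""
        n_blocks, n_stages = indices.shape
        out = np.zeros((n_blocks, codebooks.shape[2]))
        for s in range(n_stages):
            out += codebooks[s][indices[:, s]]     # gather + accumulate, trivially parallel per block
        return out

    # toy usage: 2 stages, 256-entry codebooks, 4x4x4 voxel blocks (64 voxels per block)
    rng = np.random.default_rng(0)
    codebooks = rng.normal(size=(2, 256, 64))
    indices = rng.integers(0, 256, size=(1000, 2))
    blocks = rvq_decode(indices, codebooks)        # (1000, 64) decompressed voxel blocks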



3D Interaction Techniques for 6 DOF Markerless Hand-Tracking

Schlattmann,M., Na Nakorn,T., Klein,R.

Abstract:
Recently, stable markerless 6 DOF video-based hand-tracking devices have become available. These devices track the position and orientation of the user's hand in different postures at a rate of at least 25 frames per second. Such hand-tracking allows the human hand to be used as a natural input device. However, the absence of physical buttons for performing click actions and state changes poses severe challenges in designing an efficient and easy-to-use 3D interface on top of such a device. In particular, solutions have to be found for clicking menu items, selecting objects, and coupling and decoupling the object's movements to the user's hand (i.e. grabbing and releasing). In this paper, we introduce a novel technique for grabbing and releasing objects, an efficient clicking operation for selection purposes and, last but not least, a novel visual feedback to support the ease of use of this device. All techniques are integrated in a novel 3D interface for immersive virtual manipulations. Several user experiments were performed, which show the superior applicability of this new 3D interface.



Extracting CAD Features from Point Cloud Cross-sections

Kyriazis,I., Fudos,I., Palios,L.

Abstract:
We present a new method for extracting features of a 3D object, targeted to CAD modeling, directly from the point cloud of its surface scan. The objective is to obtain an editable CAD model that is manufacturable and accurately describes the structure and topology of the point cloud. The entire process is carried out with the least human intervention possible. First, the point cloud is sliced interactively into cross sections. Each cross section consists of a 2D point cloud. Then, a collection of segments represented by a set of feature points is derived for each slice, describing the cross section accurately and providing the basis for an editable feature-based CAD model. For the extraction of the feature points, we exploit properties of the convex hull and the Voronoi diagram of the point cloud.



Integrating Tensile Parameters in Hexahedral Mass-Spring System for Simulation

Baudet,V., Beuve,M., Jaillet,F., Shariat,B., Zara,F.

Abstract:
Besides the finite element method, mass-spring systems are widely used in Computer Graphics. They are indubitably the simplest and most intuitive deformable models. This discrete model allows interactive deformations to be performed with ease and complex interactions to be handled. Thus, it is perfectly adapted to generating visually plausible animations. However, a drawback of this simple formulation is the relative difficulty of efficiently controlling physically realistic behaviours. Indeed, none of the existing models has succeeded in dealing with this satisfactorily. We demonstrate that this restriction cannot be overcome with the classical mass-spring model, and we propose a new general 3D formulation that reconstructs the geometrical model as an assembly of elementary hexahedral "bricks". Each brick (or element) is then transformed into a mass-spring system. Edges are replaced by springs that connect masses representing the vertices. The key point of our approach is the determination of the spring stiffnesses that reproduce the correct mechanical properties (Young's modulus, Poisson's ratio) of the reconstructed object. We validate our methodology by performing numerical experiments. Finally, we evaluate the accuracy of our approach by comparing our results with the deformation obtained by the finite element method.



CUDA based Level Set Method for 3D Reconstruction of Fishes from Large Acoustic Data

Sharma,O., Anton,F.

Abstract:
Acoustic images present views of underwater dynamics, even at great depths. With multi-beam echo sounders (SONARs), it is possible to capture series of 2D high resolution acoustic images. 3D reconstruction of the water column and subsequent estimation of fish abundance and fish species identification is highly desirable for planning sustainable fisheries. The main hurdles in analysing acoustic images are the presence of speckle noise and the vast amount of acoustic data. This paper presents a level set formulation for simultaneous fish reconstruction and noise suppression from raw acoustic images. Despite the presence of speckle noise blobs, actual fish can be distinguished by their extremely high intensity values, varying exponentially from the background. Edge detection generally gives excessive false edges that are not reliable. Our approach to reconstruction is based on level set evolution using the Mumford-Shah segmentation functional, which does not depend on edges in an image. We use the implicit function in conjunction with the image to robustly estimate a threshold for suppressing noise in the image by solving a second differential equation. We provide details of our estimation of the suppression threshold and show its convergence as the evolution proceeds. We also present a GPU-based streaming computation of the method using NVIDIA's CUDA framework to handle large volume data-sets. Our implementation is optimised for memory usage to handle large volumes.



Memoryless Simplification using Normal Deviation

Hussain,M.

Abstract:
A greedy algorithm based on a new, simple and robust measure of geometric fidelity is proposed for efficient decimation of polygonal models. The main distinguishing feature among simplification algorithms existing in the literature is how they measure the local geometric distortion caused by a local decimation operation. For computing local geometric distortion, a new measure is proposed that exploits the normal field deviation occurring locally as a result of a decimation operation. The resulting algorithm has a good tradeoff between quality, speed, and memory usage. It is robust in the sense that it can simplify, with almost the same accuracy and running time, various kinds of polygonal models with different levels of complexity; it automatically prevents fold-overs and preserves visually important features. Subjective and objective comparisons are presented to validate these assertions.



GPU-only Terrain Rendering for Walk-through

Sunyong,P., Kyoungsu,O.

Abstract:
Terrain plays a very important role in making a scene more realistic. Many efforts to accurately represent terrain, however, have been confined to flight simulation, and most of them have relied on the CPU. In this paper, we present a fully GPU-based real-time terrain rendering algorithm using ray-casting. Since it requires no geometrical structure like a polygonal mesh, it does not need any LOD (level-of-detail) policies, most of which place a heavy burden on the CPU. As a result, it enhances the overall performance of the system. Our method grants complete freedom to the viewpoint and view direction, so objects can move freely in the air or on the surface; it can therefore be directly applied to any computer game or virtual reality system. To improve the rendering quality, we apply curved patches to the height field, and we suggest a simple and useful method to evaluate the ray-patch intersection. We implemented all the processes on the GPU and obtained frame rates of tens to hundreds of frames per second for a variety of height map resolutions from 256x256 to 8192x8192 texels.
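For illustration, a minimal CPU sketch in Python of the core height-field ray-casting loop (uniform stepping against a bilinearly sampled height map); the GPU implementation, the curved patches and the ray-patch intersection method of the paper are not reproduced.

    import numpy as np

    def sample_height(H, x, y):
        """Bilinear height lookup; (x, y) in texel coordinates, clamped to the map."""
        h, w = H.shape
        x = np.clip(x, 0, w - 1.001); y = np.clip(y, 0, h - 1.001)
        x0, y0 = int(x), int(y)
        fx, fy = x - x0, y - y0
        top = H[y0, x0] * (1 - fx) + H[y0, x0 + 1] * fx
        bot = H[y0 + 1, x0] * (1 - fx) + H[y0 + 1, x0 + 1] * fx
        return top * (1 - fy) + bot * fy

    def raycast_heightfield(H, origin, direction, step=0.25, max_dist=2000.0):
        """March along the ray until it drops below the terrain; return the hit point or None."""
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)
        t = 0.0
        while t < max_dist:
            p = origin + t * d
            if p[2] <= sample_height(H, p[0], p[1]):
                return p                    # first sample at or below the surface
            t += step
        return None

    # toy usage: a sine-bump terrain viewed from above at an angle
    y, x = np.mgrid[0:256, 0:256]
    H = 20.0 * np.sin(x * 0.05) * np.cos(y * 0.05) + 30.0
    print(raycast_heightfield(H, origin=np.array([10.0, 10.0, 200.0]), direction=[1.0, 1.0, -1.0]))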



Reconstructing Indoor Scenes with Omni-Cameras

Bauer,F., Stamminger,M., Meister,M.

Abstract:
We present a system, similar to Debevec's Facade, that improves the reconstruction of indoor scenes from photographs.
In confined spaces it is often impractical to use regular photos as the basis of the reconstruction. Combining pinhole cameras with fisheye shots or photographs of any kind of reflective, parametrisable body such as light probes eases this problem. We call the latter camera setup an omni-camera, because it enables us to acquire as much information as possible from a given viewpoint. Omni-cameras make it possible to reconstruct the geometry of an entire room from just one view.
Removing the pinhole camera constraint invalidates some key assumptions made in Facade. This paper shows how to work around the problems arising from this approach by adding scene-specific knowledge to the solver as well as using a genetic approach.



Development and evaluation of a virtual reality patient simulation (VRPS)

Nestler,S., Huber,M., Echtler,F., Dollinger,A., Klinker,G.

Abstract:
In disasters and mass casualty incidents (MCIs), paramedics initially determine the severity of all patients' injuries during the so-called triage. In order to enhance disaster preparedness, continuous training of all paramedics is indispensable. Since large disaster control exercises are laborious and expensive, additional training on a small scale makes sense. We therefore designed and developed a virtual reality patient simulation (VRPS) to train paramedics in this disaster triage. The presented approach includes gesture-based interactions with the virtual patients in order to simulate the triage process as realistically as possible.

The evaluated approach focuses on the training of paramedics in disaster triage according to the mSTaRT triage algorithm on a multi-touch tabletop device. At the Munich fire department, fully qualified paramedics performed 160 triage processes with the triage simulation. The accuracy of the triage processes was compared to previous disaster control exercises with real mimes. The presented results of this explorative evaluation will be the basis for future, larger evaluations.



3D Skeleton Extraction from Volume Data Based on Normalized Gradient Vector Flow

Yoon,S.M., Malerczyk,C., Graf,H.

Abstract:
Markerless 3D skeleton visualization of deformable bodies continues to be a major challenge in terms of providing intuitive and uncluttered renderings that allow the user to better understand their data. This paper presents a three-dimensional skeleton extraction method for deformable objects based on a normalized gradient vector flow, used to analyze and visualize their characteristics. First, target objects are reconstructed by image-based visual hulls from known intrinsic and extrinsic camera parameters and silhouettes which are extracted by kernel density estimation based on an efficient background subtraction. Our 3D skeleton extraction methodology deploys a normalized gradient vector flow technique, a vector diffusion approach based on partial differential equations. The markerless 3D skeletonization of deformable objects from multiple images may be applied to analyze the 3D motion of target objects, enabling an in-depth study from an arbitrary viewpoint.



Static and Dynamic Methods for Facial Expression Recognition in Color Image Sequences

Al-Hamadi,A., Niese,R., Michaelis,B.

Abstract:
In human-machine interaction systems, emotion recognition is becoming an imperative feature as it gives the machine more human-like capabilities. This work proposes new static and dynamic methods, and their comparison, for facial expression recognition in color image sequences. 3D computer vision techniques are applied to determine real-world geometric measures and to build a static geometric feature vector. Optical-flow-based motion detection is additionally carried out, which delivers the dynamic flow feature vector. Support vector machine classification is used to recognize the expression from the geometric feature vector, while k-nearest neighbor classification is used for analyzing the flow feature vector. The use of 3D and color information in the proposed methods achieves robust feature detection and expression classification, besides covering in-plane and out-of-plane head rotations and back-and-forth movements. Further, a wide range of human skin colors is dealt with in the training and test samples.



A Novel System for Automatic Hand Gesture Spotting and Recognition in Stereo Color Image Sequences

Elmezain,M., Al-Hamadi,A., Michaelis,B.

Abstract:
Automatic gesture spotting and recognition is a challenging task of locating the start and end points that correspond to a gesture of interest in Human-Computer Interaction. This paper proposes a novel gesture spotting system that is suitable for real-time implementation. The system executes gesture segmentation and recognition simultaneously, without any time delay, based on Hidden Markov Models. In the segmentation module, the hand of the user is tracked using the mean-shift algorithm, a non-parametric density estimator that optimizes a smooth similarity function to find the direction of the hand gesture path. In order to spot key gestures accurately, a sophisticated method for designing a non-gesture model is proposed, which is constructed by collecting the states of all gesture models in the system. The non-gesture model is a weak model compared to all trained gesture models and therefore provides a good confirmation for rejecting non-gesture patterns. To reduce the number of states of the non-gesture model, states with similar probability distributions are merged based on a relative entropy measure. Experimental results show that the proposed system can automatically recognize isolated gestures with 97.78% and key gestures with 93.31% reliability for Arabic numbers from 0 to 9.



Simple emphatic user interface

Stanek,S.

Abstract:
We present a simple user interface combining H-Anim with Perlin's face. The main application is exploring virtual environments, especially those representing real environments with places, buildings or objects that belong to cultural heritage, or that have a historical past, a famous story or something else of interest. Therefore, information about them should be delivered to the user. This kind of information is usually full of emotions, and that is why the most suitable way (from a user interface point of view) is to deliver it with emphatic storytelling. We introduce our simple emphatic system (implemented using ActiveX objects, VRML, ECMA Script and Java Script) that uses a simple hardware configuration with webcams for capturing the user's presence and his/her head movements and, if possible, the positions of some facial features defined in the MPEG-4 standard, which are used to recognize the user's simple emotions. User presence, head movements and simple emotions are used to create a simple emphatic user interface. In this paper we present our results, which are already used in some application projects for virtual museums.



The Elucidation of Planar Aesthetic Curves

Gobithaasan,R.U., Jamaludin,M.A., Miura,K.T.

Abstract:
A compact formula for the Logarithmic Curvature Histogram (LCH) and its gradient for planar curves is proposed. Using these entities and the analysis of the Generalized Cornu Spiral (GCS), a mathematical definition for a curve to be aesthetic is introduced, to overcome the ambiguity that occurs in measuring the beauty of a curve. In the last section, detailed examples show how the LCH and its gradient, represented as a straight-line equation, can be used to measure the aesthetic value of planar curves.



Efficient Medial Voxel Extraction for Large Volumetric Models

Michikawa,T., Nakazaki,S., Suzuki,H.

Abstract:
Here we propose a method for medial voxel extraction from large volumetric models based on an out-of-core framework. The method improves upon geodesic-based approaches to enable the handling of large objects. First, distance fields are constructed from input volumes using an out-of-core algorithm. Second, medial voxels are extracted from these distance fields through multi-phase evaluation processes. Trivial medial or non-medial voxels are evaluated by the low-cost pseudo-geodesic distance method first, and the more expensive geodesic distance computation is run last. Using this strategy allows most of the voxels to be extracted in the low-cost process. This paper outlines a number of results regarding the extraction of medial voxels from large volumetric models. Our method also works in parallel, and we demonstrate that computation time becomes even shorter in multi-core environments.



Feature-supported Multi-hypothesis Framework for Multi-object Tracking using Kalman Filter

Pathan,S.S., Al-Hamadi,A., Michaelis,B.

Abstract:
A Kalman filter is a recursive estimator and has been widely used for tracking objects. However, unsatisfactory tracking of moving objects is observed in complex situations (i.e. split, merge, occlusion, and shadow), which are challenging for a classical Kalman tracker. This paper describes a feature-assisted multi-hypothesis framework for tracking moving objects in complex situations using a Kalman tracker. In this framework, a hypothesis (i.e. merge, split, new) is generated on the basis of a contextual association probability which identifies the status of the moving objects in the respective occurrences. The association among moving objects is measured by multi-feature similarity criteria which include spatial size, color and trajectory. Color similarity probability is measured by the proposed correlation-weighted histogram intersection (CWHI). The probabilities of size and trajectory similarity are computed and combined with the fused normalized color correlation. The accumulated association probability results in online hypothesis generation. This hypothesis assists the Kalman tracker when complex situations appear in real-time tracking (e.g. traffic surveillance, pedestrian tracking). Our algorithm achieves robust tracking with 97.3% accuracy and 0.07% covariance error in different real-time scenarios.
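For illustration, a minimal constant-velocity Kalman filter in Python (predict/update on 2D positions) of the kind such a framework assists; matrices and noise levels are illustrative, and the multi-hypothesis and feature-association logic is not sketched here.

    import numpy as np

    class KalmanTrack2D:
        """Constant-velocity Kalman filter with state [x, y, vx, vy] and position measurements."""
        def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
            self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)
            self.P = np.eye(4) * 10.0
            self.F = np.array([[1, 0, dt, 0],
                               [0, 1, 0, dt],
                               [0, 0, 1, 0],
                               [0, 0, 0, 1]], dtype=float)
            self.H = np.array([[1, 0, 0, 0],
                               [0, 1, 0, 0]], dtype=float)
            self.Q = np.eye(4) * q        # process noise
            self.R = np.eye(2) * r        # measurement noise

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:2]

        def update(self, z):
            y = np.asarray(z, dtype=float) - self.H @ self.x          # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)                  # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P
            return self.x[:2]

    # usage: track an object moving diagonally with noisy detections
    track = KalmanTrack2D(0.0, 0.0)
    for t in range(1, 10):
        track.predict()
        print(track.update([t * 2.0 + 0.1, t * 1.0 - 0.1]).round(2))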



GPU-Based Adaptive-Subdivision for View-Dependent Rendering

Bauman,G., Livny,Y., El-Sana,J.

Abstract:
In this paper, we present a novel view-dependent rendering approach for large polygonal models. In an offline stage, the input model is simplified to a lightweight coarse representation. Each simplified face is then assigned a displacement map, which captures the geometry of the corresponding patch on the input model. At runtime, the coarse representation is transmitted to the graphics hardware at each frame. Within the graphics hardware, the GPU subdivides each face with respect to the view parameters and adds fine details using the assigned displacement map. Initial results show that our implementation achieves quality images at high frame rates.



Flocking Boids with Geometric Vision, Perception and Recognition

Holland,J., Semwal,S.K.

Abstract:
In the natural world, we see endless examples of the behavior known as flocking. From the perspective of graphics simulation, the mechanics of flocking has been reduced to a few basic behavioral rules. Reynolds coined the term "boid" to refer to any simulated flocking creature, and simulated flocks using collision avoidance, velocity matching, and flock centering. Though these rules have been given other names by various researchers, implementing them in a simulation generally yields good flocking behavior. Most implementations of flocking use a forward-looking visual model in which each boid sees everything around it. Our work creates a more realistic model of avian vision by including a variety of geometric vision ranges and simple visual recognition based on what the boids can see. In addition, a perception algorithm has been implemented which can determine the similarity between any two boids, making it possible to simulate different kinds of boids simultaneously. Results of our simulations are summarized.
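For illustration, a minimal 2D Python sketch of the three classic rules plus a limited geometric field of view; angles, radii and weights are illustrative choices, not taken from the paper.

    import numpy as np

    def visible(pos, vel, others, radius=2.0, fov_deg=270.0):
        """Indices of neighbours inside the vision radius and the field-of-view cone."""
        offsets = others - pos
        dist = np.linalg.norm(offsets, axis=1)
        heading = vel / (np.linalg.norm(vel) + 1e-9)
        cos_angle = offsets @ heading / (dist + 1e-9)
        return np.where((dist > 0) & (dist < radius) & (cos_angle > np.cos(np.radians(fov_deg / 2))))[0]

    def boids_step(pos, vel, dt=0.05, w_sep=1.5, w_align=1.0, w_coh=1.0, max_speed=2.0):
        """One update of Reynolds' rules (separation, alignment, cohesion) over visible neighbours."""
        new_vel = vel.copy()
        for i in range(len(pos)):
            nbr = visible(pos[i], vel[i], pos)
            if len(nbr) == 0:
                continue
            sep = np.sum(pos[i] - pos[nbr], axis=0)              # steer away from close neighbours
            align = np.mean(vel[nbr], axis=0) - vel[i]           # match neighbours' velocity
            coh = np.mean(pos[nbr], axis=0) - pos[i]             # steer toward local centre
            new_vel[i] += dt * (w_sep * sep + w_align * align + w_coh * coh)
            speed = np.linalg.norm(new_vel[i])
            if speed > max_speed:                                # clamp speed for stability
                new_vel[i] *= max_speed / speed
        return pos + dt * new_vel, new_vel

    # usage: 50 boids with random start, simulated for 100 steps
    rng = np.random.default_rng(0)
    pos = rng.uniform(0, 10, (50, 2))
    vel = rng.uniform(-1, 1, (50, 2))
    for _ in range(100):
        pos, vel = boids_step(pos, vel)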