Textiles usually exhibit much larger resistance to
in-plane deformation than to bending deformation. However, the latter essentially determines the formation of folds and wrinkles which in turn govern the overall appearance of the cloth. The resulting numerical problem is inherently stiff and hence
susceptible to instability. This overview is devoted to a closer investigation of bending deformation. Approaches known from the field of engineering can describe the problem of bending in a physically accurate way. However, the nature of the governing equations is such that they cannot be discretised with the standard methods currently used in cloth
simulation. Since curvature is a central variable, we
introduce related concepts from differential geometry and describe the transition to the discrete setting. Different approaches are discussed and demands on an approach for correctly modelling the bending behaviour
of cloth are formulated.
Janney, P., Amur, H., Sridhar, G., Sridhar, V.
Recent research has placed more emphasis on optimizing the indexing structure; however, there is a great need for research in the area of image representation for efficient retrieval. In this paper we present a method for reducing the dimensionality of multi-dimensional multimedia data while preserving similarities between different images. We propose to reduce the multi-dimensional feature vectors to a single unique key called the number. Using this number as an effective key to represent an image, we can achieve better efficiency in image matching and retrieval.
Ilmonen, T., Takala, T., Laitinen, J.
This paper describes two methods that can be used to enhance the looks of particle systems. These methods fit applications that use modern graphics hardware for rendering. The first method removes clipping artifacts. These artifacts often appear when a fuzzy particle texture intersects solid geometry, resulting in a visible, undesirable edge in the rendered graphics. This problem can be overcome by softening the edges with proper shading algorithms.
The second method of this paper presents the use of a five-component color model. The parameters of the model are: red, green, blue, alpha and burn. The first four color components have their usual meaning while the "burn" parameter is used to control the additivity of the color. This color model allows particles' alpha blending to range from pure additive (useful for rendering flames) to normal smoke-style rendering. This method can be implemented on most graphics hardware, even without shader support.
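The abstract does not give the blend equation itself; as a hypothetical sketch, one way to realise a burn-controlled blend is to interpolate the destination blend factor between the conventional (1 − alpha) of normal blending and the 1 of additive blending (names and the exact formula below are assumptions, not the paper's definition):

```python
import numpy as np

def blend_burn(dst, src, alpha, burn):
    """Composite a particle colour over the framebuffer.

    burn = 0.0 -> conventional alpha blending (smoke-style),
    burn = 1.0 -> purely additive blending (useful for flames);
    intermediate values interpolate between the two modes.
    All colours are float RGB triples in [0, 1].
    """
    dst = np.asarray(dst, dtype=float)
    src = np.asarray(src, dtype=float)
    # Destination factor fades from (1 - alpha) up to 1 as burn rises.
    dst_factor = 1.0 - alpha * (1.0 - burn)
    return np.clip(src * alpha + dst * dst_factor, 0.0, 1.0)
```

Because the result only changes the destination blend factor, such a model maps naturally onto fixed-function blending, consistent with the abstract's claim that no shader support is required.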
Chambelland, J.-Ch., Daniel, M., Brun, J.-M.
This paper addresses the problem of least-squares fitting with rational pole curves. The issue is to minimize a sum of squared Euclidean norms with respect to three types of unknowns: the control points, the node values, and the weights. An iterative algorithm is proposed to solve this problem. The method alternates between three steps to converge towards a solution. One step uses the projection of the data points onto the approximant to improve the node values; the other two use a gradient-based technique to update the control point positions and the weight values. Experimental results are presented with rational Bézier and NURBS curves.
Bhattarai, D., Karki, B.
We propose a scheme to support interactive space-time multiresolution visualization of atomistic simulation data. In this scheme we adopt two perspectives that differ in their purposes and in the way they process and render the data. First, the complete or nearly complete dataset is rendered using animation, particle-pathline and color-mapped-dimension techniques to gain an overall idea of the spatio-temporal behavior of the atomic system under consideration. Second, additional data are generated on the fly and analyzed/visualized using a combined graph-theoretic and statistical approach to gain better and more detailed insight into the desired spatio-temporal information. It is also shown that the proposed approach can greatly assist in better understanding various important atomic (molecular) properties and processes, including bond-breaking/reconstruction, radial distribution, atomic coordination, clustering, structural stability, defects and diffusion.
García, R.J., Ureña, C., Revelles, J., Lastra, M., Montes, R.
This article compares, empirically and theoretically, different optimizations of Density Estimation on the Tangent Plane, a density estimation technique for global illumination. This technique is based on Photon Maps and provides increased accuracy when the surfaces are not locally smooth or continuous. The first optimization works by creating a set of candidate rays for each radiance calculation. The second optimization uses spatial indexing of the discs around the radiance calculation points. An analytical study of the order of complexity of the algorithms, as well as a heuristic study of the calculation time for the different values of the parameters involved, has been performed. Some rules are given to identify the most suitable optimization for a given radiance calculation.
Heurtebise, X., Thon, S., Gesquiere, G.
In a virtual sculpture project, we would like to sculpt, in real time, 3D objects sampled in volume elements (voxels). The drawback of this kind of representation is that a very large number of voxels is required to represent large and detailed objects. As a consequence, the memory cost is very high and user/object interaction is slowed down. To allow real-time performance, we propose in this paper a multiresolution model that represents the object at more or less detailed levels thanks to a 3D wavelet transform. We use the marching cubes algorithm to display a triangular surface of the 3D object at various resolutions. To update this surface quickly during the sculpting process, we propose a storage method for all the triangles that allows rebuilding only the modified parts of the 3D object. Moreover, to speed up processing and user/object interaction, we also propose a cache system to keep the most frequently used levels of detail of the 3D object in memory.
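The 3D wavelet transform used above is separable, i.e. it is applied along each axis of the voxel grid in turn. As a hedged illustration (the abstract does not specify which wavelet basis the authors use), one level of a Haar transform along a single axis:

```python
import numpy as np

def haar_1d(signal):
    """One level of the 1D Haar wavelet transform.

    Returns (averages, details): the averages form a half-resolution
    version of the signal and the details allow exact reconstruction.
    A separable 3D transform applies this along each grid axis.
    """
    s = np.asarray(signal, dtype=float).reshape(-1, 2)
    avg = (s[:, 0] + s[:, 1]) / 2.0
    det = (s[:, 0] - s[:, 1]) / 2.0
    return avg, det

def haar_1d_inverse(avg, det):
    """Exactly rebuild the full-resolution signal from one level."""
    out = np.empty(2 * len(avg))
    out[0::2] = avg + det
    out[1::2] = avg - det
    return out
```

Storing the detail coefficients per level is what lets a sculpting system rebuild only the modified parts of the object at the resolution currently displayed.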
Bleser, G., Wohlleber, C., Becker, M., Stricker, D.
Accurate acquisition of camera position and orientation is crucial for realistic augmentations of camera images. Computer vision based tracking algorithms, using the camera itself as sensor, are known to be very accurate but also time-consuming. The integration of inertial sensor data provides a camera pose update at 100 Hz and therefore stability and robustness against rapid motion and occlusion. Using inertial measurements we obtain a precise real time augmentation with reduced camera sample rate, which makes it usable for mobile AR and See-Through applications.
This paper presents a flexible run-time system that benefits from sensor fusion using Kalman filtering for pose estimation. The camera, as the main sensor, is aided by an inertial measurement unit (IMU).
The system presented here provides an autonomous initialisation as well as a predictive tracking procedure, and switches between the two after successful (re-)initialisation and tracking failure, respectively. The computer vision part performs 3D model-based tracking of natural features, using different approaches to yield both high accuracy and robustness. Results on real and synthetic sequences show how inertial measurements improve the tracking.
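As a minimal illustration of the Kalman fusion step only (the actual system estimates a full 6-DoF camera pose from vision and IMU data; this scalar toy is an assumption-laden sketch, not the paper's filter), one predict/update cycle looks like:

```python
def kalman_step(x, p, z, q, r):
    """One predict/update cycle of a scalar Kalman filter.

    x, p : previous state estimate and its variance
    z    : new measurement (e.g. one pose component from vision)
    q, r : process and measurement noise variances
    Returns the fused estimate and its reduced variance.
    """
    # Predict: constant-state model; uncertainty grows by q.
    x_pred, p_pred = x, p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```

Running the predict step at the IMU rate (100 Hz) and the update step only when a camera frame is processed is the standard way such a filter bridges a reduced camera sample rate.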
A novel, simple but effective method is proposed to estimate human skeleton ratios from 2D uncalibrated monocular data. Unlike existing skeleton estimation methods, where pre-defined models are used or identification of body segments with certain attributes is necessary, the proposed method uses only the 2D joint locations as input, without posture estimation. In addition, the proposed method uses a real perspective camera model instead of the commonly used scaled orthographic camera model. The proposed method is tested on monocular data from different camera motions and the estimation results are satisfactory. The reconstructed human skeleton model can be used in further research on full-body reconstruction or motion reconstruction from monocular data.
Fonseca, F., Feijo, B., Dreux, M., Clua, E.
With the continuous increase of processing power, the graphics hardware – also called the Graphics Processing Unit (GPU) – is naturally taking over most of the rendering pipeline, leaving the Central Processing Unit (CPU) with more idle time. To take advantage of this when rendering relief textures, the present work proposes two approaches for the mapping of relief textures. Both methods are fully implemented on the CPU, leaving the GPU responsible for the per-pixel shading effects. These approaches allow the use of CPU idle time and/or multi-processor systems to increase real-time rendering quality and to include image-based representations.
In this paper we present a new approach to generate a mixed mesh with elements aligned to boundaries/interfaces wherever required. A valid element is: (a) any convex co-spherical element that fulfils the requirements of the underlying numerical method and (b) any element that satisfies domain-specific geometric features of the model. The algorithm is based on the normal offsetting approach to generate coarse elements aligned to the boundaries/interfaces. Those elements are later refined to meet layer density requirements. The main steps of the algorithm are described in detail and examples are given to illustrate the parts already implemented. As far as possible, we contrast this algorithm with previous approaches.
A model-based method is proposed in this paper for 3-dimensional human motion recovery, taking uncalibrated monocular data as input. The proposed method is able to generate smooth human motions that resemble the original motion from the same viewpoint from which the sequence was taken, and look continuous from any other viewpoint. The core of the proposed system is motion trend prediction for reconstruction. To focus the research effort on motion reconstruction, “synthesized” input is first employed to ensure that the reconstruction algorithm is developed and evaluated accurately. Experimental results on real video data indicate that the proposed method is able to recover human motion from uncalibrated 2D monocular images with very high accuracy.
Generating subdivision surfaces from polygonal meshes requires the complete topological information of the original mesh, in order to find the neighbouring faces and vertices used in the subdivision computations. Normally, winged-edge type data structures are used to maintain such information about a mesh. For rendering meshes, most of the topological information is irrelevant, and winged-edge type data structures are inefficient due to their extensive use of dynamic data structures. A standard approach is the extraction of a rendering mesh from the winged-edge type data structure, thereby increasing the memory footprint significantly.
We introduce a mesh data-structure that is efficient for both tasks: creating subdivision surfaces as well as fast rendering. The new data structure maintains full topological information in an efficient and easily accessible manner, with all information necessary for rendering optimally suited for current graphics hardware. This is possible by disallowing modifications of the mesh, once the topological information has been created. In order to avoid any inconveniences due to this limitation, we provide an API that makes it possible to stitch multiple meshes and access the topology of the resulting combined mesh as if it were a single mesh. This API makes the new mesh data structure also ideally suited for generating complex geometry using mesh-based L-systems.
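A hypothetical sketch of the idea (names and layout are illustrative, not the paper's API): keep flat, rendering-ready vertex and index arrays, and precompute a directed-edge-to-face table that is frozen once the mesh is built, so neighbour queries are plain lookups with no dynamic data structures.

```python
class FrozenMesh:
    """Immutable triangle mesh: flat arrays for rendering plus a
    precomputed edge->face table for topology queries.

    Once constructed, the connectivity cannot change; this is what
    allows both the rendering data and the topology to stay in
    simple, densely packed, easily accessible structures.
    """

    def __init__(self, vertices, faces):
        self.vertices = tuple(map(tuple, vertices))   # rendering-ready
        self.faces = tuple(map(tuple, faces))         # index buffer
        edge_face = {}
        for fi, (a, b, c) in enumerate(self.faces):
            for u, v in ((a, b), (b, c), (c, a)):
                edge_face[(u, v)] = fi                # directed edge
        self._edge_face = edge_face                   # frozen topology

    def neighbour(self, face_index, edge):
        """Face sharing `edge` with `face_index`, or None on a border."""
        u, v = edge
        return self._edge_face.get((v, u))            # opposite direction
```

Stitching multiple meshes, as the abstract's API does, amounts to building one such frozen table over the combined face list.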
In this paper, a new representational model is introduced for the rational family of ruled surfaces in computer graphics. The surface parameterization is constructed using the NURBS basis functions and line geometry. The ruled surface is defined by directly interpolating dual unit vectors representing lines; it is a single parametric surface whose shape depends on the control lines. All the advantages of the NURBS basis, such as shape control and the local modification property, also apply and carry over to the dual NURBS ruled surface. The problem of drawing the lines defined by dual unit vectors is also resolved. To this end, we propose a simple technique to calculate the surface's striction curve in order to draw the rulings of the surface within the striction curve's neighbourhood. The on-screen 3D plot of the surface is realized in a pre-defined region close to the striction curve. With the proposed technique a natural representation of the ruled surface is derived. The shape of the surface can be intrinsically manipulated via the control lines, which possess one more degree of freedom than control points. Our method can find application not only in CAD but also in the areas of NC milling and EDM.
Manresa-Yee, C., Varona, J., Perales, F.J.
Nowadays, making new technologies accessible to everyone is an important goal. One way to contribute to this aim is to create new interfaces based on computer vision using low-cost devices such as webcams. In this paper a face-based perceptual user interface is presented. Our approach is divided into four steps: automatic face detection, best face-feature detection, feature tracking and face gesture recognition. Using facial feature tracking and face gesture recognition it is possible to replace the mouse's motion and its events. This goal implies the restriction of real-time response and the use of unconstrained environments. Finally, this application has been tested successfully on disabled users with hand or arm impairments who cannot use traditional interfaces such as a mouse or a keyboard.
When visiting an aquarium, people may be disturbed or, at least, disappointed by the amount and diversity of available information. Moreover, one can find it very difficult to match the information on wall notices to the reality of the fish. Therefore, we propose a virtual guide, an autonomous teaching assistant embodied in the real world using augmented reality techniques, to help people during their visit to an aquarium. This virtual guide will interact with the real world using multiple modalities (e.g. speech, facial expression, etc.). Thus, it should be aware of the aquarium's state and content, and use perceived information and prior knowledge to inform the visitor in a structured fashion. Due to the high mobility and unpredictable behaviour of the fish, our guide requires an adequate perception system. This camera-based system has to keep track of the fish and their behaviour. It is based on the focusing of visual attention, which allows the selection of interesting information in the field of view. This is achieved by extracting a number of focuses of attention (FOA) using a saliency map and a multi-level memory system, which is filled (or updated) with the extracted information. This allows our system to detect and track targets in the aquarium. This article describes how we use the saliency map and the memory system, along with their interactions, to set up the first part of our perception system.
Jaume-Capó, A., Varona, J., González-Hidalgo, M., Mas, R., Perales, F.
In this paper we present a computer vision algorithm for building a human body model skeleton automatically. The algorithm is based on analyzing the human shape. We decompose the body into its main parts by computing the curvature of a B-Spline parameterization of the human contour. This algorithm has been applied in a context where the user is standing in front of a stereo camera pair. The process is completed after the user performs a predefined initial posture, which is used for identifying the principal joints and building the human model. Using this model, we solve the initialization problem of a vision-based markerless motion capture system for the human body.
Le Garrec, J., Andriot, C., Merlhiot, X., Bidaud, P.
This paper describes a physically based simulation of grasping tasks in an interactive environment. Fingertips and interacting objects are based on a quasi-rigid model, which combines a rigid model for dynamic simulation with a deformable model for resolving local contact with friction and surface deformations. We simulate deformation by adding compliance to control points in the contact area, based on point primitives. To add friction phenomena to the virtual environment, we use a description based on Coulomb's law. A linearized formulation of this model in the contact space and an iterative Gauss-Seidel-like algorithm are able to resolve complex multi-contact problems between deformable objects in real time. Our method computes stable, consistent and realistic contact surfaces. This allows coupling with an optical motion capture system for Virtual Reality applications.
Villard, P.F., Beuve, M., Shariat, B.
We present an approach to convert the geometrical data produced by a physical simulation of soft-organ motion into a 3D+time CT scan. The simulated geometry consists of a time-dependent mesh of positions in both the bulk and the surface. We assume that density is a continuous function well represented by interpolation functions of the mesh-point density values. We then describe how we calculate density at the mesh points and how we produce the CT scan using the convolution parameters associated with clinical scanner devices. The aim of this work is to provide physicians with standard images useful for assessing organ motion and for incorporation into a treatment planning platform.
This paper compares the performance of WT-PCA, WT-ICA and WT-LDA (based on the Wavelet Transform (WT), Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Linear Discriminant Analysis (LDA), respectively) for recognizing human faces in image databases.
WT is applied to the face image, decomposing it into a sub-image, followed by PCA to obtain a subspace of reduced dimension. The other two techniques (ICA and LDA) are then applied to the subspace produced by PCA. The experimental results indicate that the performance of these techniques depends on the image database used, on the subspace dimension, on the image resolution, on the number of images and on the choice of images used in the training set.
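As a sketch of the PCA stage only (the wavelet decomposition and the subsequent ICA/LDA steps are omitted, and the function name is illustrative), the reduced-dimension subspace can be obtained from an SVD of the centred training images:

```python
import numpy as np

def pca_subspace(images, k):
    """Project flattened face images onto a k-dimensional PCA subspace.

    images : (n_samples, n_pixels) array, one flattened image per row
    Returns (coefficients, basis, mean); matching is then done on the
    k coefficients instead of the raw pixels.
    """
    x = np.asarray(images, dtype=float)
    mean = x.mean(axis=0)
    # SVD of the centred data: rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(x - mean, full_matrices=False)
    basis = vt[:k]                     # (k, n_pixels)
    coeffs = (x - mean) @ basis.T      # (n_samples, k)
    return coeffs, basis, mean
```

A probe image is projected the same way (`(probe - mean) @ basis.T`) and compared to the stored coefficients, which is where the subspace dimension k influences the results reported above.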
Inspired by the colour mixing performed by painters, this work proposes a new methodology for colour quantization in digital images. A pixel represented in the RGB space with 24 bits is classified as a mixture of the colours black, blue, green, cyan, red, pink, yellow and white. Since a colour-mixture value can be specified for each image pixel, the Mixturegram of an image can be defined: a quantized histogram of the colours in a digital image based on the colour-mixture value. The application proposed in this work demonstrates the Mixturegram's potential for content-based image retrieval. Moreover, the Mixturegram allows image regions to be segmented based only on the classification of pixel colours. Because it uses a binary colour representation, one of the main characteristics of the Mixturegram is that it can be implemented in hardware, which would make it faster and more efficient.
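The abstract does not define the mixture computation precisely; one hedged reading, binning each pixel to the nearest of the eight named colours (the corners of the RGB cube) by thresholding each channel, might look like:

```python
from collections import Counter

# The eight reference colours are the corners of the RGB cube.
CORNERS = {
    (0, 0, 0): "black", (0, 0, 1): "blue",   (0, 1, 0): "green",
    (0, 1, 1): "cyan",  (1, 0, 0): "red",    (1, 0, 1): "pink",
    (1, 1, 0): "yellow", (1, 1, 1): "white",
}

def mixturegram(pixels):
    """Sketch of a colour-mixture histogram: each 24-bit RGB pixel is
    binned to the nearest RGB-cube corner by thresholding each channel
    at 128, and the per-corner counts form the histogram.  This is an
    assumed simplification, not the paper's exact mixture model."""
    hist = Counter()
    for r, g, b in pixels:
        key = (int(r >= 128), int(g >= 128), int(b >= 128))
        hist[CORNERS[key]] += 1
    return dict(hist)
```

The binary channel test is also what makes a hardware implementation plausible: it reduces to three comparators per pixel plus a counter per colour.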
Wundrak, S., Henn, T., Stork, A.
We present an extension of the original progressive mesh algorithm for large dynamic meshes that contain a mix of triangle and quadrilateral elements. The demand for this extension comes from the visualisation of dynamic finite element simulations, such as car crashes or metal-sheet punch operations. These methods use meshes consisting mainly of quadrilaterals, due to their increased numerical stability during the simulation. Furthermore, these meshes have a dynamic geometry with about 25 to 100 animation steps. Accordingly, we extend the original progressive mesh algorithm in two aspects: first, the edge collapse operation is extended to meshes with a mixture of triangle and quadrilateral elements; second, we present an algorithm that extends quadric error metrics to the simplification of large dynamic meshes with many animation steps. The result is a dynamic progressive triangle-quadrilateral mesh, a progressive multi-resolution mesh structure that has two interactive degrees of freedom: simulation time and mesh resolution. We show that our method works on meshes with up to one million vertices and 25 animation steps. We measure the approximation error and compare the results to other algorithms.
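The quadric error metric underlying such simplification, in its standard Garland-Heckbert form (the paper's extension to dynamic meshes and mixed elements is not reproduced here), can be sketched as:

```python
import numpy as np

def plane_quadric(a, b, c, d):
    """4x4 quadric K = p p^T for the plane ax + by + cz + d = 0,
    with the normal (a, b, c) normalized to unit length."""
    p = np.array([a, b, c, d], dtype=float)
    return np.outer(p, p)

def quadric_error(q, v):
    """Sum of squared distances from point v to all planes folded
    into quadric q: error = v^T Q v, with v in homogeneous form."""
    vh = np.array([v[0], v[1], v[2], 1.0])
    return float(vh @ q @ vh)
```

Each vertex accumulates the quadrics of its incident faces by simple matrix addition; an edge collapse is then ranked by the error of the merged vertex under the summed quadric, which is what makes the metric cheap enough for meshes with a million vertices.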
Loke, R.E., Jansen, F.W., du Buf, H.
Discrete boundary triangulation methods generate triangular meshes through the centers of the boundary voxels of a volumetric object. For some voxel configurations it may be arbitrary whether a part of the volume should be included in the object or classified as background. Consequently, important details such as concave and convex edges and corners are not consistently preserved in the describing geometry. We present a "background priority" version of an existing "object priority" algorithm [Kenmochi98]. We show that the ad hoc configurations of the well-known Discretized Marching Cubes algorithm [Montani] can be derived from our method, and that a combined triangulation with "object priority" and "background priority" better preserves object details.
Xizhe, Z., Zhengxuan, W., Tianyang, L.
The iteration of a complex function can generate beautiful fractal images. This paper presents a novel method based on iterating the distance ratio of two points, which generates a generalized Mandelbrot set according to the number of iterations needed for the distance ratio to converge. The paper states the definition of the distance ratio and its iteration. Then, taking the complex function f(z) = z^a + c as an example, it discusses the visual structure of the generalized Mandelbrot set for various exponents and compares it with the Mandelbrot set generated by the escape-time algorithm. When the exponent a > 1, the outer border of the DRM is the same as that of the Mandelbrot set but has a complex inner structure; when a < 0, the inner border of the DRM is the same as that of the Mandelbrot set, the DRM being the "outer" region and complement of the Mandelbrot set, so that the two sets together cover the whole complex plane.
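The escape-time baseline that the distance-ratio method is compared against can be sketched as follows (the distance-ratio iteration itself is not specified in the abstract, so only the classic algorithm is shown):

```python
def escape_time(c, a=2, max_iter=100, bailout=2.0):
    """Classic escape-time count for the iteration f(z) = z**a + c.

    Returns the number of iterations before |z| exceeds the bailout
    radius, or max_iter if the orbit stays bounded (the point is then
    treated as belonging to the generalized Mandelbrot set).
    """
    z = 0j
    for n in range(max_iter):
        z = z**a + c
        if abs(z) > bailout:
            return n
    return max_iter
```

Colouring each pixel of the parameter plane by this count produces the familiar Mandelbrot image; the paper's method instead colours by the convergence time of the distance ratio, which is what reveals the inner structure the escape-time picture lacks.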
Agrawal, A., Radhakrishna, M., and Joshi, R.C.
Interactive three-dimensional (3D) visualization of very large-scale grid digital elevation models coupled with corresponding high-resolution remote-sensing phototexture images is a hard problem. The graphics load must be controlled by an adaptive view-dependent surface triangulation and by taking advantage of different levels of detail (LODs) using multiresolution modeling of terrain geometry. Furthermore, the display of vector data over level-of-detail terrain models is a challenging task: rendering artifacts are likely to occur unless vector data is mapped consistently and exactly to the current level of detail of the terrain geometry. In our prior work, we developed a view-dependent dynamic block-based LOD mesh simplification scheme and out-of-core management of large terrain data for real-time rendering on desktop PCs. In this paper, we propose a new rendering algorithm for the combined display of multiresolution 3D terrain and polyline vector data representing geographical entities such as roads, state or country boundaries, etc. Our algorithm for multiresolution modeling of vector data allows the system to adapt the visual mapping, without rendering artifacts, to the context and the user's needs while maintaining interactive frame rates. The algorithms have been implemented using Visual C++ and the OpenGL 3D API and successfully tested on different real-world terrain raster and vector data sets.
Ferreira, A., Vala, M., Pereira, J.A.M., Jorge, J.A., Paiva, A.
Despite the considerable work on agent frameworks, user interfaces to manage them are mostly script-based. Even though some solutions provide graphical interfaces to build agent worlds, these are quite limited and overly dependent on textual input. Recently, calligraphic systems using sketch-based pen input have emerged as a viable alternative to conventional direct-manipulation interfaces in a wide range of areas, such as user interface design or mechanical systems simulation. In this paper, we present a preliminary approach to a calligraphic interface for managing agents. It recognizes gestures drawn by users, allowing them to create and manage agent worlds flexibly and efficiently using a concise language.
This paper discusses the problem of visualizing inherently positive data sets using the Shepard interpolation family. There are a number of scientific and business domains where data are inherently positive. When presenting such data in visual form, the visualization must not contradict this known property of the data set; otherwise, neither the visualization nor any conclusions drawn from it can be trusted. The Modified Quadratic Shepard method, a commonly used member of this family, does not preserve positivity for inherently positive data sets. We present two algorithms, which constrain the basis functions to be positive, to produce a non-negative graph through scattered positive data using this method.
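To illustrate why positivity can be preserved at all: the basic Shepard interpolant is a convex combination of the data values, so it can never undershoot them, unlike the quadratic nodal functions of the Modified Quadratic Shepard method. A minimal 1D sketch (not the paper's constrained algorithms):

```python
def shepard(x, nodes, values, power=2):
    """Basic Shepard inverse-distance-weighted interpolant in 1D.

    Because the result is a convex combination of the data values,
    it stays within [min(values), max(values)] -- in particular it
    remains positive for positive data.
    """
    num = den = 0.0
    for xi, fi in zip(nodes, values):
        d = abs(x - xi)
        if d == 0.0:
            return fi          # interpolation condition at a node
        w = 1.0 / d**power
        num += w * fi
        den += w
    return num / den
```

The price of this guarantee is the well-known flat-spot behaviour at the nodes, which is exactly what the quadratic nodal functions of MQS were introduced to fix, at the cost of possible negative undershoot.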
One of the most important tasks of a traditional 3D rendering engine is the projection onto the image plane of geometrical structures (such as triangles or lines). This operation takes place in the middle of the rendering pipeline, between the vertex shader and the fragment shader: its aim is simply to create fragment data from vertex data. The solution of the projection problem is necessarily bound to the solution of a great number of systems of equations, where the complexity of the equations is in general related to the properties of the geometrical structures. To make this process fast, the most widely adopted solution is to use linear models, so that the systems become linear and the module gets the simplest implementation. Unfortunately, linear models have limitations: the remedy is to use approximation, but to get good models we have to use many linear structures, in particular many triangles. Modern 3D rendering engines may automate the process of converting non-linear models into triangles, but this neither reduces the memory footprint nor eliminates the linear approximation.
In this article I consider a non-linear model (the Lembo model) for geometrical structures in a 3D rendering engine: first I show the properties of the model; then I present an efficient algorithm to solve the projection problem directly on the model equations.