WSCG 2014

 

 

 

22nd WSCG International Conference on

Computer Graphics, Visualization and Computer Vision

2014

http://www.wscg.cz

 

 

Primavera Hotel & Congress Center

Plzen, Czech Republic

 

 

June 2 – 5, 2014

 

 

Conference Program

 

and

ABSTRACTS

 

 

 

Keynote speakers

Manuel M. Oliveira, Universidade Federal do Rio Grande do Sul
Porto Alegre, Brazil

Tino Weinkauf, Max-Planck-Institut für Informatik
Saarbrücken, Germany

Brian A. Barsky, University of California, Berkeley, USA

 

 

 

Conference Chair
Vaclav Skala, University of West Bohemia
Plzen, Czech Republic


 

WSCG 2014 Conference Program

 

Some links might not be valid, especially in the case of additional files that were not submitted correctly or not submitted at all.

The time schedule of sessions is available on the last page.                                                 Update 2014-05-27

 

Session: A1

Session: A2

Session: B1

Session: B2

Session: C1

Session: D1

Session: D2

Session: F1

Session: F2

Session: G1

Session: G2

Session: H1

Session: K1

Session: K2

Session: L1

Session: L2

Session: M1

Session: M2

Session: N1

 

POSTERS

Session: Posters A

Session: Posters B

Session: Posters C

 

 


 

Keynote talks

 
J03: Flow maps - Benefits, Problems, Future Research

Weinkauf,T.

Abstract:
The flow map has become a standard tool for the analysis and visualization of unsteady flows. In simple terms, it maps the start point of a particle integration to its end point. Flow maps are used to compute Finite Time Lyapunov Exponents (FTLE), Streak Line Vector Fields, or to speed up other methods in flow visualization. However, they are very costly in terms of both computation time and storage. In this talk, I will give an overview of the latest developments in flow visualization, review the theoretical and practical benefits of flow maps, discuss issues of accuracy and complexity, and pose open questions for future research in this area.

 

J05: Performing High-Dimensional Filtering in Low-Dimensional Spaces

Oliveira,M.M.

Abstract:
High-dimensional filtering is a key component for many graphics, image, and video processing applications. Edge-preserving filters (an important class of high-dimensional ones), for instance, are essential for tasks like global-illumination filtering, tone mapping, denoising, detail enhancement, and non-photorealistic effects, among many others. Edge-preserving filtering can be implemented as a convolution with a spatially-varying kernel in image space, or with a spatially-invariant kernel in high-dimensional space. Performing the operation either way is computationally expensive, preventing its use in interactive and real-time scenarios. The talk will present two recent techniques we have developed for efficiently performing edge-aware filtering. The first one is based on a domain transform that allows high-dimensional geodesic filtering to be performed in linear time as a sequence of 1-D filtering steps using a spatially-invariant kernel. The second technique works by sampling and filtering the input signal using a set of 2-D manifolds adapted to the original data. Its cost is linear in the number of pixels and in the dimensionality of the space in which the filter operates. These techniques are significantly faster than previous approaches, supporting high-dimensional filtering of images, videos, and global illumination effects in real time. In the talk, I will present several examples illustrating their use in graphics, image, and video processing applications.

 

N59: Simulating Human Vision and Correcting Visual Aberrations with Computational Light Field Displays

Barsky,B.

Abstract:
This talk will present research on simulating human vision and on correcting visual aberrations with computational light field displays. The simulation is not an abstract model but incorporates real measurements of a particular individual's entire optical system. Using these measurements, synthetic images are generated. This process modifies input images to simulate the appearance of the scene for the individual. Recent work on vision-correcting displays will also be briefly introduced. Given the measurements of the optical aberrations of a user's eye, a vision-correcting display will present a transformed image that, when viewed by this individual, will appear in sharp focus. This could impact computer monitors, laptops, tablets, and mobile phones. Vision correction could be provided in some cases where spectacles are ineffective.
 

 


 

Full papers – candidates for the Journal of WSCG

I11: City Sketching

Gain,J., Marais,P., Neeser,R.

Abstract:
Procedural methods offer an automated means of generating complex cityscapes, incorporating the placement of park areas and the layout of roads, plots and buildings. Unfortunately, existing interfaces to procedural city systems tend to either focus on a single aspect of city layout (such as the road network) ignoring interaction with other elements (such as building dimensions) or expect numeric input with little visual feedback, short of the completed city, which may take up to several minutes to generate. In this paper we present an interface to procedural city generation, which, through a combination of sketching and gestural input, enables users to specify different land usage (parkland, commercial, residential and industrial), and control the geometric attributes of roads, plots and buildings. Importantly, the inter-relationship of these elements is pre-visualized so that their impact on the final city layout can be predicted. Once generated, further editing, for instance shaping the city skyline or redrawing individual roads, is supported. In general, City Sketching provides a powerful and intuitive interface for designing complex urban layouts.

 

I41: Intelligent Prioritization and Filtering of Labels in Navigation Maps

Vaaraniemi,M., Görlich,M., Au,A.

Abstract:
The description of objects in navigation maps by textual annotations provides a powerful means for orientation and visual data exploration. However, displaying labels for all features leads to a cluttered map with unreadable labels and occluded information. Therefore, the overall goal is to display the most important labels and filter out the less important ones. In this paper, we present a general approach for filtering labels, using navigation in automotive maps as an application to test our approach. This involves the creation of a priority metric for ranking labels in maps. Our flexible system allows runtime configuration of the priority. Moreover, we keep the temporal coherency of label filtering; hence, jittering of labels does not occur. The system is predictable, modular, and can easily be adapted to new applications. On medium-class hardware, our real-time system is capable of filtering on average 1000 labels within 12 ms. A concluding expert study validates our approach for navigation purposes. All participants approved the resulting clear labeling layout.

 

I47: Detecting Dominant Motion Flows and People Counting in High Density Crowds

Khan,S.D., Vizzari,G., Bandini,S., Basalamah,S.

Abstract:
Urbanisation increasingly generates crowding situations that pose potential issues for planning and public safety. This paper proposes new crowd analysis techniques, specifically a crowd flow segmentation and crowd counting framework for estimating the number of people in each flow segment. We use two foreground masks: one generated by Horn-Schunck optical flow, used for crowd flow segmentation, and another by Gaussian background subtraction, used for crowd counting. For crowd flow segmentation, we adopt the K-means clustering algorithm, which segments the crowd into different flows. After clustering, some small blobs can appear, which are removed by a blob absorption method. After blob absorption, the crowd flow is segmented into different dominant flows. Finally, we estimate the number of people in each flow segment using blob analysis and blob size optimization methods. Our experimental results demonstrate the effectiveness of the proposed method compared to other state-of-the-art approaches, and our proposed crowd counting framework estimates the number of people with about 90% accuracy.
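Editor's note: as a rough illustration of the flow-segmentation step described above, the sketch below clusters per-pixel optical flow vectors with K-means. The flow field, the choice of K, the magnitude threshold and the feature layout are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: segment a dense optical flow field into dominant
# motion flows with K-means (assumed inputs; not the paper's exact pipeline).
import numpy as np
from sklearn.cluster import KMeans

def segment_flows(flow, k=4, mag_threshold=0.5):
    """flow: H x W x 2 array of (dx, dy) optical flow vectors."""
    h, w, _ = flow.shape
    ys, xs = np.mgrid[0:h, 0:w]
    mag = np.linalg.norm(flow, axis=2)
    moving = mag > mag_threshold            # ignore near-static pixels
    # Feature per moving pixel: image position plus flow direction (unit vector).
    dirs = flow[moving] / mag[moving, None]
    feats = np.column_stack([xs[moving], ys[moving], dirs * 50.0])
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
    seg = np.full((h, w), -1, dtype=int)    # -1 marks background
    seg[moving] = labels
    return seg
```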

 

I59: Efficient Procedural Generation of Forests

Kenwood,J., Gain,J., Marais,P.

Abstract:
Forested landscapes are an important component of many large virtual environments in games and film. In order to reduce modelling time, procedural methods are often used. Unfortunately, procedural tree generation tends to be slow and resource-intensive for large forests. The main contribution of this paper is the development of an efficient procedural generation system for the creation of large forests. Our system uses L-systems, a grammar-based procedural technique, to generate each tree. We algorithmically modify L-system tree grammars to intelligently use an instance cache for tree branches. Our instancing approach not only makes efficient use of memory but also reduces the visual repetition artifacts which can arise due to the granularity of the instances. Instances can represent a range of structures, from a single branch to multiple branches or even an entire tree. Our system improves the speed and memory requirements for forest generation by 3–4 orders of magnitude over naive methods: we generate over 1,000,000 trees in 4.5 seconds, while using only 350MB of memory.
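Editor's note: a minimal sketch of the generic L-system string rewriting that such grammar-based tree generators build on; the rule set and iteration depth are made-up examples and this does not reproduce the paper's grammar modification or instance cache.

```python
# Minimal bracketed L-system expansion (generic illustration only).
def expand(axiom, rules, iterations):
    s = axiom
    for _ in range(iterations):
        # Rewrite every symbol in parallel; symbols without a rule are kept.
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# A classic branching rule set; string length grows rapidly with depth,
# which is why naive per-tree expansion becomes expensive for large forests.
rules = {"F": "FF+[+F-F-F]-[-F+F+F]"}
print(len(expand("F", rules, 3)))
```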

 

K47: Detecting Topologically Relevant Structures in Flows by Surface Integrals

Reich,W., Kasten,J., Scheuermann,G.

Abstract:
Gauss' theorem, which relates the flow through a surface to the vector field inside the surface, is an important tool in flow visualization. We exploit the fact that the theorem can be further refined on polygonal cells and construct a process that encodes the particle movement through the boundary facets of these cells using transition matrices. By pure power iteration of the transition matrices, various topological features, such as separation and invariant sets, can be extracted without having to rely on the classical techniques, e.g., interpolation, differentiation and numerical streamline integration. We apply our method to steady vector fields with a focus on three dimensions.
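Editor's note: power iteration of a transition matrix converges to its dominant eigenvector; the generic numpy sketch below shows that step only. The example matrix is made up and is not derived from the boundary-facet surface integrals used in the paper.

```python
# Generic power iteration on a column-stochastic transition matrix
# (illustrative only; building such matrices from the flow is not shown).
import numpy as np

def power_iteration(P, iters=200, tol=1e-12):
    n = P.shape[0]
    v = np.full(n, 1.0 / n)                  # start from a uniform distribution
    for _ in range(iters):
        v_next = P @ v
        v_next /= v_next.sum()               # keep it a probability vector
        if np.linalg.norm(v_next - v, 1) < tol:
            break
        v = v_next
    return v                                 # approximates the dominant eigenvector

P = np.array([[0.9, 0.2, 0.0],
              [0.1, 0.7, 0.3],
              [0.0, 0.1, 0.7]])              # columns sum to 1
print(power_iteration(P))
```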

 

K67: A Visuomotor Coordination Model for Obstacle Recognition

Iwao,T., Kubo,H., Maejima,A., Morishima,S.

Abstract:
In this paper, we propose a novel method for animating CG characters that pay heed to obstacles while walking or running. Here, our primary contribution is to formulate a generic visuomotor coordination model for obstacle recognition with whole-body movements. In addition, our model easily generates gaze shifts, which express the individuality of characters. Based on experimental evidence, we also incorporate the coordination of eye movements in response to obstacle recognition behavior via simple parameters related to the target position and the individuality of the characters' gaze shifts. Our overall model can generate plausible visuomotor coordinated movements in various scenes by manipulating the parameters of our proposed functions.

 

K73: Improved particle-based Ice Melting Simulation with SPH Air Model

Domaradzki,J., Martyn,T.

Abstract:
This paper presents an improved method for simulating the melting of ice. The melting process is implemented as a result of the heat transfer between ice objects and fluids (water and air). Both the solids and the fluids, including air, are modeled as a set of particles with specified temperatures, which can vary locally during simulation. The proposed new particle-based air model allows the simulation to account for the influence of natural air convection on the ice melting process. Moreover, the model makes it possible to melt the ice object in a controllable way by means of external heat sources. The motion of air and water, originally described by the Navier-Stokes equations for incompressible fluids, is computed using the Smoothed Particle Hydrodynamics (SPH) algorithm, which we modify to properly handle our particle-based air and its interactions with ice and water. Thanks to a GPU-based implementation, the proposed method allows us to run the simulation of ice melting at interactive speed on an average PC.

 

M07: Cross-domain image matching improved by visual attention

Melo,E., de Amo,S., Guliato,D.

Abstract:
Good accuracy in image retrieval across different visual domains, such as photos taken over different seasons or lighting conditions, paintings, drawings and hand-drawn sketches, is still a big challenge. This paper proposes the use of visual attention to estimate the relative importance of some regions in a given query image. Recently, researchers have used different databases in specific domains to validate their hypotheses. In this paper, we also propose a database with multiple image domains, called UFU-DDD. We used the UFU-DDD database to demonstrate the performance and accuracy gains from the association of visual attention with orientation-based feature descriptors. The analysis of the results showed that our approach outperforms all the standard descriptors used in the experiments. We hope the UFU-DDD database constitutes a valuable benchmark for future research in cross-domain similarity searching.

 

M71: Acquiring Bidirectional Texture Functions for Large-Scale Material Samples

Steinhausen,H.C., den Brok,D., Hullin,M.B., Klein,R.

Abstract:
Most current acquisition setups for bidirectional texture functions (BTFs) are incapable of capturing large-scale material samples. We propose a method based on controlled texture synthesis to produce BTFs of appealing visual quality for such materials. Our approach uses as input data a complete measurement of a small fraction of the sample, together with a few images of the large-scale structure controlling the synthesis process. We evaluate the applicability of our approach by reconstructing sparsified ground truth data and investigate the consequences of choosing different kinds and numbers of constraint images.

 

M73: Patch-based sparse reconstruction of material BTFs

den Brok,D., Steinhausen,H. C., Hullin,M., Klein,R.

Abstract:
We propose a simple and efficient method to reconstruct material bidirectional texture functions (BTFs) from angularly sparse measurements. The key observation is that materials of similar types exhibit both similar surface structure and reflectance properties. We exploit this by manually clustering an existing database of fully measured material BTFs and fitting a linear model to each of the clusters. The models are computed not on per-texel data but on small spatial BTF patches we call apparent BTFs. Sparse reconstruction can then be performed by solving a linear least-squares problem without any regularization, using a per-cluster sampling strategy derived from the models. We demonstrate that our method is capable of faithfully reconstructing fully resolved BTFs from sparse measurements for a wide range of materials.


 

Communication and short papers

G73: The Shortest Path Finding between two points on a Polyhedral Surface

Popov,E.V., Popova,T.P., Rotkov,S.I.

Abstract:
The paper describes an approximate method for finding the shortest path between two points on a surface. This problem occurs when generating a cutting pattern after the form of the fabric tensile surface has been found. Shortest path finding is reduced to the problem of finding the geodesic line on the surface. However, the numerical solution of the form-finding problem for fabric tensile structures means that the final surface is represented by an arbitrary polyhedron, and there is no analytical solution for finding shortest paths in this case. The described method allows the shortest path to be found on a surface of any regular polyhedral form.

 

G89: Wavelet Representation of Optical System Distortion

Zahradka,J., Barina,D., Zemcik,P.

Abstract:
A novel method for representing optical system distortion using the discrete wavelet transform is proposed in this paper. Using the presented approach, virtually any complex distortion can be represented with only a small number of wavelet coefficients. Moreover, one can represent the distortion down to the resolution of one pixel or even finer. The experiments shown in the paper suggest that the introduced wavelet interpolation reconstructs distorted data very realistically. The proposed method was evaluated on two scenes comprising a projector and irregular surfaces, using a dataset of images of various types.

 

G97: Distributed Triangle Mesh Processing

Cabiddu,D., Attene,M.

Abstract:
We propose a web-based system for processing triangle meshes remotely and in a distributed manner. Users can implement complex geometric procedures by composing simpler processing tools that, in turn, can be provided by researchers who publish them as appropriate Web services. We defined an efficient geometric data transfer protocol to resolve the potential mesh delivery bottleneck caused by the transfer of large models to the various servers over typical long-distance connections with limited bandwidth. We have experimented with our system on several large models and in diverse processing scenarios, and we conclude that our transfer protocol significantly reduces the overall time needed to produce the result.

 

H03: Local Monte Carlo estimation methods in the solution of global illumination equation

Budak,V., Zheltov,V.

Abstract:
In this article we consider local estimations of the Monte Carlo method for solving the global illumination equation. The local estimations allow the luminance to be calculated directly at a predetermined point, in a given direction, for an arbitrary law of reflection. Thus, there is no need to construct an illumination map, which makes the approach much more effective than direct modeling or the finite element method. The use of objects described by spherical harmonics in lighting calculations is also discussed in the article.

 

H13: Is Augmented Reality the Future Middleware for Improving Human Robot Interactions?

Lakshantha,E., Egerton,S.

Abstract:
As robots appear more frequently in our society, there will be more cases where people with little or no practical experience in robotics have to supervise robots. Future interfaces should make Human Robot Interaction (HRI) intuitive for such less-experienced users. A key requirement for an intuitive interface is to improve the level of HRI performance. In this study we try to improve HRI performance by developing a system, SHRIMP (Spatial Human Robot Interaction Marker Platform), based on Augmented Reality (AR) technology. We present SHRIMP as a new type of middleware for HRI that can mediate high-level user intentions with robot-related action tasks. SHRIMP enables users to embed their intentions in the form of AR diagrams inside the robot's environment (as seen through the robot's camera view). These AR diagrams translate into action tasks for robots to follow in that environment. Furthermore, we report on the HRI performance induced by our SHRIMP framework when compared to an alternative, more common robot control interface, a joystick controller.

 


 

H29: Visual Exploration of Patterns in Multi-run Time-varying Multi-field Simulation Data Using Projected Views

Molchanov,V., Linsen,L.

Abstract:
Numerical simulations in the fields of science and engineering generate spatial data of time-varying phenomena that frequently depend on various simulation and input parameters. It is common to run hundreds of simulations to investigate the dependence of the modeled process on the choice of the parameters. We propose a comprehensive approach for the visual analysis of such multi-run data to detect patterns and outliers. We use dimensionality reduction algorithms to generate a visual representation that exhibits the distribution of the simulation results under varying parameter settings. Each field (or even multi-field) of every time step and every simulation run is represented as a point in a 2D space, where the 2D layout conveys similarity of the scalar fields. Points corresponding to consecutive time steps of one run are connected by line segments, such that each simulation run is represented as a polyline. Consequently, the multi-run data are visually encoded as a set of polylines. Variations of hue, saturation, opacity, and shape allow for distinguishing groups of simulations and depicting various characteristics of runs. The user can interactively change these settings, while further interaction mechanisms allow for selection, refinement, zooming, requesting textual information, and brushing and linking to coordinated (or embedded) views of physical and attribute space visualizations. We apply our approach to two common applications with significantly different data structure. The first application is a multi-run climate modeling simulation based on an Eulerian method over a 2D regular grid. The second application is a multi-run binary star evolution simulation based on a Lagrangian method (Smoothed Particle Hydrodynamics) with unstructured particles changing 3D positions over time. We demonstrate the contribution and impact of our visualization method for the interactive visual analysis of the multi-run data by identifying meaningful groups of simulations, detecting global patterns, and finding interesting outliers.

 

H37: Invariant Interest Point Detection Based on Variations of the Spinor Tensor

Hast,A., Marchetti,A.

Abstract:
Image features are obtained using some kind of interest point detector, which is often based on a symmetric matrix such as the structure tensor or the Hessian matrix. These features need to be invariant to rotation, and to some degree also to scaling, in order to be useful for feature matching in applications such as image registration. Recently, the spinor tensor has been proposed for edge detection. Herein, we investigate how it can also be used for feature matching and show that some simplifications, leading to variations of the response function based on the tensor, improve its characteristics. The result is a set of different approaches that are compared to the well-known methods using the Hessian and the structure tensor. Most importantly, the invariance with respect to rotation and scaling is compared.

 

H41: Correspondence Chaining for Enhanced Dense 3D Reconstruction

Wasenmüller,O., Krolla,B., Michielin,F., Stricker,D.

Abstract:
Within the computer vision community, the reconstruction of rigid 3D objects is a well-known task in current research. Many existing algorithms provide a dense 3D reconstruction of a rigid object from sequences of 2D images. Commonly, an iterative registration approach is applied to these images, relying on pairwise dense matches between images, which are then triangulated. To minimize redundant and imprecisely reconstructed 3D points, we present and evaluate a new approach, called Correspondence Chaining, to fuse existing dense two-view 3D reconstruction algorithms into a multi-view reconstruction, where each 3D point is estimated from multiple images. This leads to enhanced precision and reduced redundancy. The algorithm is evaluated with three different representative datasets. With Correspondence Chaining, the mean error of the reconstructed point clouds relative to ground truth data, acquired with a laser scanner, can be reduced by up to 40%, whereas the root mean square error is reduced by up to 56%. The reconstructed 3D models contain far fewer 3D points while keeping details such as fine structures; the file size is reduced by up to 78% and the computation time of the involved parts is decreased by up to 42%.

 

H43: Interactive Guidance and Navigation for Facilitating Image-Based 3D Modeling

Liu, Damon Shing-Min, Chang, Te-Li

Abstract:
Here we present an interactive guidance and navigation system that assists the user in acquiring pictures for image-based 3D modeling. To reconstruct an object's 3D model, the user follows our instructions to take a set of images of the object from different angles; we calculate their relative viewing positions and sparse point cloud data using the structure-from-motion technique. After we obtain a sufficient number of images, we use the Patch-based Multi-View Stereo (PMVS) [1] software to generate dense point cloud data. When displaying the dense point cloud, we provide the user with an interface to eliminate noisy data points caused by background reconstruction or re-projection errors. Afterwards we reconstruct the surface mesh as output. Our system provides informative messages for failures while calculating camera poses and helps the user resolve those problems. Furthermore, we assess the quality of the camera pose reconstruction and the generated point cloud to reveal missing viewing angles in the captured images and guide the user to remedy them.

 


 

H47: Kinect based 3D Scene Reconstruction

Zeller,N., Quint,F., Guan,L.

Abstract:
This paper presents a novel system for 3D scene reconstruction and obstacle detection for visually impaired people, which is based on the Microsoft Kinect. From the Kinect depth image, a 3D point cloud is calculated. Using both the depth image and the point cloud, a gradient- and RANSAC-based plane segmentation algorithm is applied. After the segmentation, the planes are combined into objects based on their intersecting edges. For each object a cuboid-shaped bounding box is calculated. The accuracy of the presented system is evaluated in experiments. The achieved accuracy is in the range of a few centimeters and thus sufficient for obstacle detection. The paper also gives an overview of existing navigation aids for visually impaired people, and the presented system is compared to a state-of-the-art system.

 

H59: On Maximum Geometric Finger-Tip Recognition Distance Using Depth Sensors

Shekow,M., Oppermann,L.

Abstract:
Depth sensor data is commonly used as the basis for Natural User Interfaces (NUI). The recent availability of different camera systems at affordable prices has caused a significant uptake in the research community, e.g. for building hand-pose or gesture-based controls in various scenarios and with different algorithms. The limited resolution and noise of the utilized cameras naturally put a constraint on the distance between camera and user for which a meaningful interaction can still be designed. We therefore conducted extensive accuracy experiments to explore the maximum distance that allows for recognizing the finger-tips of an average-sized hand using three popular depth cameras (SwissRanger SR4000, Microsoft Kinect for Windows and the Alpha Development Kit of the Kinect for Windows 2), two geometric algorithms and a manual image analysis. In our experiment, the palm faces the sensors with all five fingers extended. It is moved at distances of 0.5 to 3.5 meters from the sensor. Quantitative data is collected regarding the number of recognized finger-tips for each sensor, using two algorithms. For qualitative analysis, samples of the hand outline are also collected. The quantitative results proved to be inconclusive due to false positives or negatives caused by noise. In turn, our qualitative analysis, achieved by inspecting the hand outline images manually, provides a conclusive understanding of the depth data quality. We find that recognition works reliably up to 1.5 m (SR4000, Kinect) and 2.4 m (Kinect 2). These insights are generally applicable for designing NUIs that rely on depth sensor data.

 

H61: 3D Morphing for Triangle Meshes using Deformation Matrix

Xing,T., Yang,Y., Yu,Y., Zhou,Y., Xing,X., Du,S.

Abstract:
We introduce a 3D morphing method which generates a merged model given a series of triangle meshes. Our morphing, based on a set of parameters between the source and target shapes, can show the process of the transformation from the source to the target smoothly. We choose a model as our reference mesh and obtain corresponding unified models from other models which may have different numbers of vertices or facets. Given these unified models, parameters between any two meshes can be computed integrally, or separately for each rigid part. Different combinations of the parameters can generate different merged models. To address the collapse that occasionally occurs, shape and pose morphing are separated for some parts in our work. By merging different parts of different models, we can obtain a merged shape, e.g. an animal with a horse head and a cat body. As an application of our 3D morphing method, quantifying the difference between any two models can be done efficiently, represented by the distance between two sets of low-dimensional parameters reduced from the initial parameters using Principal Component Analysis (PCA). Character replacement and model driving are two further applications: characters in two-dimensional images are used to guide our morphing, and a depth image sequence is used to drive our merged model to show the same pose as the character in the sequence.

 

H67: Visualization of Traffic Flow Simulations Based on

Vergeest,J.S.M.

Abstract:
Microscopic car-following models are widely used to generate traffic flow simulations. In some cases, visualization on macroscopic scales (both spatial and temporal) is required to provide relevant feedback to researchers. Recent studies have indicated that the lifetime of highway traffic jams is severely influenced by temporary deviations in the driving style of individual motorists. The most influential parameters are the longitudinal acceleration and the headway distance produced by car drivers soon after leaving a congestion situation. To study the effects of acceleration profiles on the lifetimes of congestions, it is crucial to generate traffic flow simulations which can be viewed at highly varying spatial and temporal scales. The influence of adaptive cruise control (ACC), for example, is one of the factors that can be forecast using traffic flow simulations. In this paper we present typical results of such simulations and suitable graphical presentations, which support traffic flow research.

 

H71: Evaluation of Fuzzy Rough Set Feature Selection for Content Based Image Retrieval System with Noisy Images

Shahabi Lotfabadi,M., Shiratuddin,M.F., Wong,K.W.

Abstract:
In this paper, Fuzzy Rough Set feature selection is used in a Content Based Image Retrieval system. Noisy query images are fed to this Content Based Image Retrieval system and the results are compared with four other feature selection methods: Genetic Algorithm, Information Gain, OneR and Principal Component Analysis. The main objective of this paper is to evaluate the rules extracted from the fuzzy rough set and to determine whether these rules, which are used for training the Support Vector Machine, can deal with noisy query images as well as with the original query images. To evaluate the Fuzzy Rough Set feature selection, we use 10 semantic groups of images from the COREL database to which we have purposely added Gaussian, Poisson and salt-and-pepper noise of different magnitudes. As a result, the proposed method performed better in terms of accuracy for most of the noise types when compared to the other four feature selection methods.

 

I02: Navigation Parameters Correction Technique Using Multiple View Geometry Methods

Sablina,V.A., Novikov,A.I., Nikiforov,M.B., Loginov,A.A.

Abstract:
The problem of precisely determining the current location coordinates and orientation of an aircraft in space is formulated and considered in this paper. Existing approaches to aviation navigation tasks are reviewed. The advantages of applying contour analysis and the mathematical apparatus of multiple view geometry are revealed. A general navigation parameter correction technique based on the analysis of images from a multispectral computer vision system is suggested. For the main stages of the suggested technique, the approaches used are described and experimental results are obtained, viz. object detection and comparison, and finding the geometric interconnection between the images for the subsequent navigation parameter correction. The obtained results can also be used to solve the problem of superimposing real and synthesized images in multispectral computer vision systems. The experiments show that using multiple view geometry methods in combination with contour analysis methods is promising for real-time correction of aircraft navigation parameters.

 

I03: Texture Classification with the PQ Kernel

Ionescu,R.T., Popescu,A.L., Popescu,M.

Abstract:
Computer vision researchers have developed various learning methods based on the bag of words model for image related tasks, including image categorization, image retrieval and texture classification. In this model, images are represented as histograms of visual words (or textons) from a vocabulary that is obtained by clustering local image descriptors. Next, a classifier is trained on the data. Most often, the learning method is a kernel-based one. Various kernels can be plugged into the kernel method. Popular choices, besides the linear kernel, are the intersection, the Hellinger's, the χ2 and the Jensen-Shannon kernels. Recent object recognition results indicate that the novel PQ kernel seems to improve accuracy over most of the state-of-the-art kernels. The PQ kernel is inspired by a set of rank correlation statistics specific to ordinal data, which are based on counting concordant and discordant pairs between two variables. In this paper, the PQ kernel is used for the first time for the task of texture classification. The PQ kernel is computed in O(n log n) time using an efficient algorithm based on merge sort. The algorithm enables the use of the PQ kernel for large vocabularies. Texture classification experiments are conducted to compare the PQ kernel with other state-of-the-art kernels on two benchmark data sets of texture images. The PQ kernel has the best accuracy on both data sets. In terms of time, the PQ kernel becomes comparable with the state-of-the-art Jensen-Shannon kernel. In conclusion, the PQ kernel can be used to obtain a better pairwise similarity between histograms, which, in turn, improves texture classification accuracy.
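Editor's note: the PQ kernel is built from counts of concordant and discordant pairs between two histograms; a naive O(n²) illustration of those counts is sketched below. The paper's contribution is an O(n log n) merge-sort based algorithm, which this sketch does not reproduce, and the example histograms are arbitrary.

```python
# Naive O(n^2) count of concordant (P) and discordant (Q) pairs between two
# histograms x and y over the same vocabulary (illustrative only).
def concordant_discordant(x, y):
    P = Q = 0
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = x[i] - x[j], y[i] - y[j]
            if dx * dy > 0:
                P += 1      # pair ordered the same way in both histograms
            elif dx * dy < 0:
                Q += 1      # pair ordered in opposite ways
    return P, Q

x = [3, 0, 5, 2]
y = [1, 0, 4, 4]
print(concordant_discordant(x, y))   # (4, 1) for these toy histograms
```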

 

I19: Parking Spaces Modelling for Inter Spaces Occlusion Handling

Masmoudi,I., Wali,A., Alimi,A.

Abstract:
Intelligent systems for vacant parking space detection present an important solution to help drivers find an available place. Many real-world challenges face these vision-based systems, such as weather conditions and luminance variation. In this paper, we are interested in the problem of inter-space occlusion, where one or more parking spaces can be hidden by another parked vehicle. To overcome this problem we propose a new on-street, surface-based model for parking model extraction, and we perform vehicle tracking in order to detect the events of a car "Entering" or "Leaving" a parking space.

 

I23: Mesh Partitioning for Parallel Garment Simulation

Hutter,M., Knuth,M., Kuijper,A.

Abstract:
We present a method for partitioning meshes that allows a simple and efficient parallel implementation of different simulation methods. It is based on a generalization of the concept of independent sets from graph theory to sets of simulation elements. The general description makes it versatile and flexibly applicable in existing simulation systems. Every simulation method that formerly worked by sequentially processing a set of simulation elements can now be parallelized by partitioning the underlying set, without affecting the behavior of the simulated model.

 

I29: Calibration of RGB Camera with Velodyne LiDAR

Velas,M., Spanel,M., Materna,Z., Herout,A.

Abstract:
Calibration of a LiDAR sensor with an RGB camera finds use in many application fields, from enhancing image classification to environment perception and mapping. This paper presents a pipeline for mutual pose and orientation estimation of the mentioned sensors using a coarse-to-fine approach. Previously published methods use multiple views of a known chessboard marker for computing the calibration parameters, or they are limited to the calibration of sensors with only a small mutual displacement. Our approach presents a novel 3D marker for coarse calibration which can be robustly detected in both the camera image and the LiDAR scan. It also requires only a single pair of camera-LiDAR frames for estimating large sensor displacements. A subsequent refinement step searches for a more accurate calibration in a small subspace of the calibration parameters. The paper also presents a novel way of evaluating the calibration precision using the projection error.

 

I61: Set of Texture Descriptors for Music Genre Classification

Nanni,L., Costa,Y., Brahnam,S.

Abstract:
This paper presents a comparison among different texture descriptors and ensembles of descriptors for music genre classification. The features are extracted from the spectrogram calculated starting from the audio signal. The best results are obtained by extracting features from subwindows taken from the entire spectrogram by Mel scale zoning. To assess the performance of our method, two different databases are used: the Latin Music Database (LMD) and the ISMIR 2004 database. The best descriptors proposed in this work greatly outperform previous results using texture descriptors on both databases: we obtain 86.1% accuracy with LMD and 82.9% accuracy with ISMIR 2004. Our descriptors and the MATLAB code for all experiments reported in this paper will be available at https://www.dei.unipd.it/node/2357.

 

I83: Consistent Pose Uncertainty Estimation for Spherical Cameras

Krolla,B., Gava,C., Pagani,A., Stricker,D.

Abstract:
In this work, we discuss and evaluate the reliability of first order uncertainty propagation in the context of spherical Structure from Motion, concluding that it is not valid without restrictions but depends on the choice of the objective function. We furthermore show that the choice of the widely used geodesic error as the objective function for reprojection error optimization leads to disproportional pose uncertainty results for spherical cameras. This work identifies and outlines alternative objective functions to bypass those obstacles by deriving Jacobian matrices according to the chosen objective functions and subsequently conducting first order uncertainty propagation. We evaluate the performance of the different objective functions in different optimization scenarios and show that the best results for uncertainty propagation are obtained using the Euclidean distance to measure deviations of image points on the spherical image.

 

I97: Mass-spring parameters definition in 2D for simulation

Trouvain,G., Gagnol,V., Duc,E., Sancho,J.F.

Abstract:
In computer graphics and in industrial contexts, the Mass-Spring model is used to obtain fast and visual results in physical simulations. A disadvantage of the method is the difficulty of obtaining accurate results, owing to the difficulty of defining the parameters of a Mass-Spring model. Various works have defined Mass-Spring parameters in other domains, such as cloth animation or soft tissue modeling. However, the Mass-Spring model is not used in some contexts where real-time computation could be useful, such as the tire manufacturing industry. In this paper, a method is presented to define the geometric configuration of a Mass-Spring system and the tuning of the mass, stiffness and damper parameters according to physical material behaviours. Different load cases are studied and used to conduct a sensitivity study on the network spring parameters. The results are then compared to a Finite Element model of the same cases in order to evaluate the precision of the proposed approach.

 

J07: A GPU-Based Level of Detail System for the Real-Time Simulation and Rendering of Large-Scale Granular Terrain

Leach,C., Marais,P.

Abstract:
We describe a system that is able to efficiently render large-scale particle-based granular terrains in real-time. This is achieved by integrating a particle-based granular terrain simulation with a heightfield-based terrain system, effectively creating a level of detail system. By quickly converting areas of terrain from the heightfield-based representation to the particle-based representation around dynamic objects which collide with the terrain, we are able to create the appearance of a large-scale particle-based granular terrain, whilst maintaining real-time frame rates.

 

J17: Multiresolution Smoothing of NURBS Curves based on Non-uniform B-Spline Wavelets

Li,A., Kou,F., Niu,Q., Tian,H.

Abstract:
An energy method is widely adopted for B-spline curve smoothing, but it has disadvantages such as massive calculation, computational complexity and low efficiency. Compared with the energy method, multi-resolution smoothing approaches nicely overcome these obstacles. Some research has been conducted on multi-resolution smoothing, but these efforts mainly target uniform or quasi-uniform B-spline curves. Uniform and quasi-uniform B-spline curves are only special cases of NURBS curves, and multi-resolution smoothing for these types of curves mostly depends on uniform B-spline wavelets, so this smoothing method cannot be directly applied to NURBS curves. In this paper, new non-uniform B-spline wavelets are first created based on discrete B-spline basis functions in the light of the particularities of NURBS curves, and the wavelet reconstruction and decomposition algorithms are provided. The wavelets have greater flexibility and applicability than uniform B-spline wavelets because they do not depend on the distances between neighboring nodes in the knot vectors. The paper then presents a multi-resolution smoothing method for NURBS curves based on the newly built wavelets. Lastly, an example is presented to confirm the effectiveness of this multi-resolution smoothing method. Furthermore, the method can also be applied to NURBS surfaces if extended properly.

 

J19: Feature Extraction and Localisation using Scale-Invariant Feature Transform on 2.5D Image

Pui, S.T., Minoi, J.L., Terrin, L., Oliveira, J.F., Gillies, D.F.

Abstract:
The standard starting point for the extraction of information from human face image data is the detection of key anatomical landmarks, which is a vital initial stage for several applications, such as face recognition, facial analysis and synthesis. Locating facial landmarks in images is an important task in image processing, and detecting them automatically remains challenging. The appearance of facial landmarks may vary tremendously due to facial variations. Detecting and extracting landmarks from raw face data is usually done manually by trained and experienced scientists or clinicians, and the landmarking is a laborious process. Hence, we aim to develop methods that automate the facial feature landmarking process as much as possible. In this paper, we present and discuss our new automatic landmarking method on face data using 2.5-dimensional (2.5D) range images. We applied the Scale-Invariant Feature Transform (SIFT) method to extract feature vectors and Otsu's method to obtain a general threshold value for landmark localisation. We have also developed an interactive tool to ease the visualisation of the overall landmarking process. The interactive visualization tool has a function which allows users to adjust and explore the threshold values for further analysis, thus enabling one to determine the threshold values for the detection and extraction of important keypoints and/or regions of facial features that are suitable for later automatic use with new datasets under the same controlled lighting and pose restrictions. We measured the accuracy of the automatic landmarking versus manual landmarking and found the differences to be marginal. This paper describes our own implementation of the SIFT and Otsu algorithms, analyzes the results of the landmark detection, and highlights future work.

 

J29: A methodological approach to BIM design

Barbato, D.

Abstract:
More and more often, design engineers must produce information-rich planning graphics in a very short time, driven on the one hand by clients' requests and on the other by decision-makers, which demands considerable conceptual and executive effort. Increasing representation quality while reducing time and cost are just a few of the 'enemies' against which stakeholders in the construction industry are compelled to fight. Three-dimensional models, conceptual and photo-realistic renders, videos, etc., represent a part of the infographic representations available and required by the market. Given the sectorialization of specific professional skills, Building Information Modeling, if properly implemented, becomes an 'ally' for facing the interdisciplinary inefficiencies in building projects and reducing planning time without affecting quality. Attention must be paid to the evolution of the techniques for exchanging and managing the information and data that constitute a complex project. With this contribution we analyze the level of interoperability between three BIM software packages, ArchiCAD 16 by Graphisoft, and Revit Architecture 2013 and Robot Structural Analysis, both by Autodesk, verifying the compatibility of data exchange and, where necessary, how to proceed in case of loss of information in the transition from one package to another. In particular, the Graphisoft program is used for the architectural modeling, Revit Architecture as the control software of the BIM management project, and finally Robot Structural Analysis for the structural analysis of the frames in the realized building.

 

J37: Exploiting Spatial Redundancy with Adaptive Pyramidal Rendering

Lawlor,O., Genetti,J.

Abstract:
Just as image data compression is designed to save space while preserving the essence of an image, we present an adaptive pyramidal rendering scheme designed to save rendering time while maintaining acceptable image quality. Our coarse-to-fine scheme predicts when and where it is safe to take less than one sample per output pixel, and exploits spatial redundancy to predict pixel colors in the resulting gaps, both of which can be performed at frame rate in real time on a modern GPU. As a lossy compression method, we present experimental data on the rendering time versus image quality tradeoff for several example renderers.

 

J41: Group analysis based on multilevel Bayesian for FMRI Data

Feng Yang, Kuang Fu, Ai Zhou

Abstract:
This paper suggests a method for processing fMRI time series based on Bayesian inference for group analysis. The method uses Bayesian inference to divide the analysis into multiple levels: session, subject and group. It compares covariances to select a prior that reinforces the posterior probability in the group analysis. At the same time it combines classical statistics, i.e., t-statistics, to obtain voxel activation at the subject level as a prior for Bayesian inference at the group level. In this way, the method effectively decreases computational expense and reduces complexity. The experimental results show that Bayesian inference for group analysis is robust.

 


 

J43: A Simple Method for Generating of Facial Barcodes

Matveev, Yu., Kukharev, G., Shchegoleva, N.

Abstract:
In this paper we propose a simple method for generating standard type linear barcodes from facial images. The method uses the difference in gradients of image brightness. It involves averaging the gradients into a limited number of intervals, quantization of the results into the range of decimal numbers from 0 to 9, and table conversion into the final barcode. The proposed solution is computationally low-cost and does not require the use of any specialized image processing software, which makes it possible to generate facial barcodes in mobile systems. Results of tests conducted on the Face94 database and a database of composite faces at different ages show that the proposed method is a new solution for use in real-world practice. It ensures the stability of the generated barcodes against changes of scale, pose and mirroring of facial images, as well as changes of facial expressions and shadows on faces from local lighting.
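Editor's note: a rough sketch of the gradient-averaging and 0-9 quantization steps described above; the interval count, the row-wise gradient profile and the final table lookup into a barcode symbology are placeholder assumptions, not the authors' exact parameters.

```python
# Illustrative sketch: turn a face image's brightness-gradient profile into
# a string of decimal digits 0-9 (placeholder parameters, not the paper's).
import numpy as np

def face_to_digits(gray_face, n_intervals=13):
    """gray_face: 2D array of brightness values for a cropped face image."""
    profile = gray_face.mean(axis=1)              # row-wise brightness
    grad = np.abs(np.diff(profile))               # brightness gradient
    chunks = np.array_split(grad, n_intervals)    # average into intervals
    means = np.array([c.mean() for c in chunks])
    span = means.max() - means.min()
    if span == 0:
        return np.zeros(n_intervals, dtype=int)
    return np.round(9 * (means - means.min()) / span).astype(int)

digits = face_to_digits(np.random.rand(96, 96))
print("".join(map(str, digits)))   # digit string then fed to a barcode encoder
```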

 

J47: GPU-based discrepancy check for 3D fabrication

Wu,F., Dellinger,A.

Abstract:
A GPU-based visualization approach is presented to show the discrepancies between the 3D model and the fabrication of that model. To show the differences, a 3D scanner is constructed to scan the fabrication for comparison with the 3D model. To compare the detailed differences, a high resolution camera with a projector is used. To demonstrate its application, a sculpting assistance system is implemented. The system can capture the three-dimensional model iteratively and provide information by rendering different colors on the surface to show the topological and geometric differences between the final target model and the current model. Then the user can see how to modify the current model to best approximate the final target model. The topological difference is obtained by rendering on the screen the 3D data from both the final target model and the current model. The user can manipulate and observe their differences. The local geometry is compared in the GPU and rendered on the real scene. Users can easily see the geometry directly on the fabrication. To keep a fixed relationship between the 3D model and the fabrication, a reference image is tracked at the bottom stage to recover the related transformation.

 

J83: A Cave Based 3D Immersive Interactive City With Gesture Interface

Zhang,Z., McInerney,T., Zhang,N., Guan,L.

Abstract:
3D city models have greatly changed the way we interact with geographic information. However, both visualization and interaction are limited on conventional 2D displays. This paper presents a system that visualizes 3D cities and allows gesture interaction in the fully immersive Cave Automatic Virtual Environment (CAVE). The proposed system enables navigation, selection, and object manipulation, which are basic functions in applications such as urban planning, virtual tourism, etc. We propose the use of pattern recognition methods for gesture recognition as a new type of interaction in a Virtual Reality (VR) environment. In this work, we apply Hidden Markov Models to facilitate real-time dynamic gesture recognition and achieve good recognition results. We also propose a novel selection method for selecting multiple objects in VR. A user study shows preference for the proposed system, both for its realistic immersive visualization and its natural gesture interface.

 

K02: Manipulation of Motion Capture Animation by Characteristics

De Martino,J. M., Angare,L.M.G.

Abstract:
Three-dimensional animation is an area in vast expansion: continuous research in the field has enabled an increasing number of users to access powerful tools with intuitive interfaces. We present our work-in-progress methodology by which artists can manipulate existing animation segments using intuitive characteristics instead of manually changing keyframe values and interpolations. To achieve this goal, motion capture is used to create a database in which actors perform the same movement with different characteristics; keyframes from those movements are analyzed and used to create a transformation of animation curves that describes the differences in values and timing between keyframes of a neutral movement and a movement with a specific characteristic. This transformation can be used to change a large set of keyframes, embedding a desired characteristic into the segment. To test our methodology, we used as a proof of concept a character performing a walk, represented by 59 joints with 172 degrees of freedom (DOF), and a set of 12 physical and emotional characteristics. Using our methodology we embedded a neutral walk with these desired characteristics and evaluated the results with a survey comparing our modified animations with direct motion capture movements, with partial results. With this methodology, one can drastically decrease the time needed to tweak large sets of keyframes, embedding a desired characteristic in a fashion more closely related to the artistic universe of animators than the mathematical representations of angles, translations and interpolations in animation curves commonly used in commercial software.

 

K17: Accelerated SQLite Database using GPUs

Hordemann,G., Lee,J.K., Smith,A.H.

Abstract:
This paper introduces the development of a new GPU-based database to accelerate data retrieval. The main goal is to explore new ways of handling complex data types and managing data and workloads in massively parallel databases. This paper presents three novel innovations to create an efficient virtual database engine that executes the majority of database operations directly on the GPU. The GPU database executes a subset of SQLite's SELECT queries, which are typically the most computationally expensive operations in a transactional database. This database engine extends existing research by exploring methods of table caching on the GPU, handling irregular and complex data types, and executing multiple table joins and managing the resulting workload on the GPU. The GPU database discussed in this paper is implemented on a consumer grade GPU to demonstrate the high-performance computing benefits of relatively inexpensive hardware. Advances are compared both to existing CPU standards and to alternate implementations of the GPU database.

 

K19: SAMI: SAliency based Metrics of Identification for object concealment evaluation

Gosseaume,J., Kpalma,K., Ronsin,J.

Abstract:
We propose original metrics for estimating the detection and identification of an object in an image. SAMI (SAliency based Metrics of Identification) allows comparison of the performance of detection and identification of a given Region Of Interest (ROI) within a test image. The metrics give a saliency score and a structural score, for the detection evaluation and the identification evaluation, respectively, for this ROI. The identification evaluation requires the ground truth edges of the test image. SAMI was initially conceived to estimate the performance of SCOTT, a "Synthesis COncealment Two-level Texture" algorithm. However, a directly derived application of such metrics could be the evaluation of saliency algorithms for object segmentation: given the ground truth area of a salient object in the ROI, the best segmentation would be the one with the highest SAMI saliency score in this ROI. Another possible application could be the use of SAMI inside a saliency algorithm, to compute a dense modified saliency map in which each pixel has the SAMI score corresponding to its neighborhood. Such a resulting map would be more robust to saliency noise from small spots.

 

K29: 3D Visualization of UCG Process

Iwaszenko S., Nurzyńska K.

Abstract:
This paper presents aspects of the use of 3D graphics in the visualization of the underground coal gasification (UCG) process. Data gathered during the process, as well as data obtained by mathematical modelling, describe the three-dimensional structure of the georeactor where the process takes place. Proper visualization of this information is crucial for a better understanding of the process and for further possibilities of its development and practical usage. Therefore, a dedicated software tool was developed to support the visualization of the data. This program enables visualization of the data in a user-friendly environment which supports the monitoring of the process during each stage. It is developed with XNA technology in C#.NET. A system overview with possible applications is presented.

 

K59: Pose Estimation of Small-articulated Animals using Multiple View Images

Hwang,S., Choi,Y.

Abstract:
Research on robots that mimic the behavior of small animals such as lizards and arthropods has been actively carried out. However, research on systematic analysis of the gait behavior of small animals is not easy to find. Motion analysis of most living creatures uses optical motion capture systems. However, such systems are inapplicable to small animals because it is difficult to attach optical markers. To solve this problem, markerless motion capture is being researched, but most studies of markerless motion capture have been performed with human subjects. Therefore, to apply a markerless motion capture system to insects, which have many legs and high degrees of freedom, additional research is needed. The purpose of this study is to develop a system that estimates the continuous pose of small articulated animals using a three-dimensional skeleton model of the animal from multi-view video sequences. It includes extraction of the extremity and root of each leg, and calculation of joint kinematics using the FABRIK (Forward And Backward Reaching Inverse Kinematics) algorithm with the extracted extremity and root. The method developed in this study will contribute to a better understanding of the gait behavior of small articulated animals.

 

K79: Physics-based modelling and animation of saccadic eye movement

Papapavlou,C., Moustakas,K.

Abstract:
In this paper we present a new approach to producing realistic saccadic eye movement animations by incorporating anatomical details of the oculomotor system into the dynamics of the eye model. Unlike abstract models of eye motor behaviour, we use a biomedical framework to model the eye globe along with the three extraocular muscle pairs in sufficient detail that applying the corresponding muscle activation signals naturally results in realistic motions. In this way, we avoid the need to explicitly provide trajectory information and therefore simplify the process of eye animation. To calculate the muscle activation signals needed to drive the animation so that it imitates a real human eye, we rely on existing knowledge about the way the nervous system utilizes the extraocular muscles during saccades.

 

L07: Semi-automatic Segmentation of Prostate by Directional Search for Edge Boundaries

Kortelainen,J.M., Antila,K., Schmitt,A., Mougenot,C., Ehnholm,G., Chopra,R.

Abstract:
Semi-automatic segmentation of the prostate boundary is presented for the pre-operational images used in MRI-guided ultrasonic thermal therapy of prostate cancer. The deformable surface method is based on first fitting an ellipsoid to the given manual landmark points, then modifying the shape of the initial surface mesh by masking out the regions of the separately segmented bladder and rectum, and finally adapting the surface mesh by searching the image for edge boundaries in the direction of the surface normal. The suggested segmentation method combines information from two types of pre-operational MR images showing different contrast for the tissue structure. The Dice similarity coefficient (DSC) between the semi-automatic segmentation and the manual reference was on average 0.89 for a group of N=5 patients undergoing the MRI-guided ultrasound thermal treatment. The robustness of the surface fitting method was tested by simulating 30 randomized initialization sets of the landmark points for each patient; the resulting standard deviation of the DSC was 0.01.
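
For reference, the Dice similarity coefficient reported above is 2|A∩B| / (|A| + |B|); a minimal sketch with synthetic volumetric masks:

import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

auto = np.zeros((64, 64, 64), bool);   auto[20:44, 20:44, 20:44] = True     # automatic result
manual = np.zeros((64, 64, 64), bool); manual[22:46, 20:44, 20:44] = True   # manual reference
print(round(dice(auto, manual), 3))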

 

L29: Three Dimensional Blood Vessel Modeling Method Considering IVUS Catheter Insertion

Son,J., Choi,Y., Lee,W.S., Kim,S.W.

Abstract:
In this paper, we propose a new 3D blood vessel modeling method for FSI (Fluid-Structure Interaction) analysis. Because of the characteristics of medical images, a 3D blood vessel model including intima and adventitia cannot be reconstructed from a single medical image. To obtain detailed intima and adventitia information, many researchers use IVUS (Intravascular Ultrasound) images, and X-ray angiogram images are used to calculate the position and orientation of the IVUS images. Therefore, by combining these medical images, a 3D blood vessel model can be generated. However, when an IVUS image is taken, a catheter with an attached miniaturized ultrasound device is inserted into the blood vessel, so the shape of the vessel is deformed. Accordingly, the 3D blood vessel model combining IVUS and X-ray angiogram images is in a state deformed by the IVUS catheter. To solve this problem, we propose a novel method for building a 3D blood vessel model whose intima and adventitia are not deformed by the IVUS catheter.

 

L37: Head Pose Estimation Based on 2D and 3D for Driving Assistance Systems

Peláez,G., García,F., Armingol,J., Escalera,A.

Abstract:
A solution for driver monitoring and event detection based on 3D information from a range imaging camera is presented. The system combines 2D and 3D algorithms to provide head pose estimation and identification of regions of interest. Starting with the captured cloud of 3D points from the sensor and the detection of a face in its 2D projection, the points that correspond to the head are determined and extracted for further analysis. The head pose with 3 degrees of freedom (Euler angles) is then estimated using the ICP algorithm. As a final step, the important regions of the face are identified and used for further experimentation, e.g. gaze orientation, behavior analysis and more. The result is a complete 3D driver monitoring application based on a low-cost sensor; it is described how to combine 2D and 3D computer vision algorithms for future human factors research, enabling the study of specific factors like driver drowsiness, gaze orientation or the head pose itself. The experimental results shown are compared with ground-truth head movements obtained using an IMU.
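
A minimal point-to-point ICP loop (nearest-neighbour matching plus SVD/Kabsch alignment) in the spirit of the registration step described above; this is an illustrative sketch, not the authors' implementation, and the test data are synthetic.

import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=20):
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                   # closest model point for each source point
        matched = dst[idx]
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:                  # avoid reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_d - Ri @ mu_s
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti                 # accumulate the rigid transform
    return R, t

pts = np.random.rand(200, 3)
a = np.radians(10)
R_true = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
R_est, t_est = icp(pts, pts @ R_true.T + 0.05)
# should approximately recover the 10 degree yaw (one Euler angle, convention-dependent)
print(round(np.degrees(np.arctan2(R_est[1, 0], R_est[0, 0])), 1))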

 

L47: Towards building a diving simulator for organizing dives in real conditions

Aristidou,K., Michael,D.

Abstract:
We present a prototype of a diving simulator system that can be used for organizing dives in real conditions. Our simulator comprises (a) an accurate visualization of a real wreck site in the Mediterranean Sea, the Zenobia in Cyprus, one of the most well-known wrecks worldwide, and (b) a visualization of marine life based on the real types of species that gather near the wreck. Moreover, a first attempt at integrating an existing diving computation algorithm has been made. The simulator's purpose, in its complete framework, is to be used by divers to organize their dives in advance at this specific wreck and, moreover, to serve as a tool to promote diving tourism. The diving computation part of the simulator has been validated against the Professional Association of Diving Instructors' data, while the complete prototype of the system has been evaluated by expert users (divers), who noted the importance of such a simulator.

 

L53: 3D Registration of Multi-modal Data Using Surface Fitting

Mahiddine,A., Merad,D., Drap,P., Boi,J.-M.

Abstract:
The registration of two 3D point clouds is an essential step in many applications. The objective of our work is to estimate the best geometric transformation to merge two point clouds obtained from different sensors. In this paper, we present a new approach to feature extraction which is distinguished by the nature of the signature extracted for each point. The descriptor we propose is invariant to rotation and overcomes the problem of multi-resolution. To validate our approach, we have tested it on synthetic data and applied it to heterogeneous real data.

 

L83: Innovative Solutions for Immersive 3D Visualization Laboratory

Lebiedz,J., Mazikowski,A.

Abstract:
The paper presents results of the work concerning the technical specification of Immersive 3D Visualization Laboratory to be opened in late summer 2014 at the Faculty of Electronics, Telecommunications and Informatics at Gdańsk University of Technology in Poland. The person immersed in VR space will be placed in a transparent sphere with a diameter of over 10 feet, supported on rollers and equipped with a motion tracking system. Walking motion of the person will inflict the revolution of the sphere and trigger changes in the computer generated images on screens surrounding the sphere (CAVE with six walls) thus creating an illusion of motion. The projection system will be equipped with a 3D visualization capability and supplemented with a spatial sound generation system. The analysis prior to the specification was based on extended studies including literature and site visits to selected installations in Europe as well as consulting with companies which are leading European manufacturers and 3D systems integrators.

 

L97: Reconstruction and Interaction with 3D Simplified Bone Models

Pulido,R., Paulano,F., Jiménez,J.J.

Abstract:
Visualization and interaction with 3D bone models reconstructed from medical images are fundamental for bio-medical applications. They are useful for surgeons in order to diagnose and plan surgical operations. Although traditional reconstruction techniques based on isosurfacing are mainly used for visualization, it is desirable to obtain labelled models without outliers in order to improve the interaction. This paper presents the integration of digital image processing and computer graphics techniques to enable not only the reconstruction of simplified 3D bone models, but also the interaction with them. To that end, the segmentation of CT images is performed in order to obtain different bone regions and to label them. This approach is divided into three main parts: segmentation, reconstruction and interaction. The goal of the segmentation is to extract closed contours and to generate labelled regions which represent the bone structures to be reconstructed. Then, three-dimensional bone models are obtained from these regions by isosurfacing. Finally, a detailed collision detection can be calculated between the 3D models in the scene in order to provide visual aid when the user is interacting with them. This interaction includes the calculation of distances, nearest points and overlapping triangles. Moreover, an immersive experience is provided by integrating the Leonar3Do stereo system.

 

M03: User-based Visual-interactive Similarity Definition for Mixed Data Objects - Concept and First Implementation

Bernard,J., Sessler,D., Ruppert,T., Davey,J., Kuijper,A., Kohlhammer,J.

Abstract:
The definition of similarity between data objects plays a key role in many analytical systems. The process of similarity definition comprises several challenges, as three main problems occur: different stakeholders, mixed data, and changing requirements. Firstly, in many applications the developers of the analytical system (data scientists) model the similarity, while the users (domain experts) have distinct (mental) similarity notions. Secondly, the definition of similarity for mixed data types is challenging. Thirdly, many systems use static similarity models that cannot adapt to changing data or user needs. We present a concept for the visual-interactive definition of similarity for mixed data objects, emphasizing 15 crucial steps for the development of appropriate systems. For each step, different design considerations and implementation variants are presented, revealing a tremendous design space. Moreover, we present a first implementation of our concept, enabling domain experts to express mental similarity notions through a visual-interactive system. An additional system mode enables data scientists to monitor this similarity-based expert feedback and thereby to validate our implementation. The provided implementation tackles the different-stakeholders problem, the mixed-data problem, and the changing-requirements problem. We show the applicability of our implementation in a case study where a mental similarity notion is transferred into a functional similarity model.

 

M11: Personal Health Data: Visualization Modalities and Their Perceived Values

Fens,P., Funk,M.

Abstract:
In this paper we focus on universal human values as defined by Schwartz [Schw92] in the context of visualizing personal health data. Can data visualizations convey human values? We have explored various modalities of presenting health data and found that personal health visualizations indeed can convey values. This is currently work in progress, an initial step towards value-based design in the area of data visualization of personal health data.

 

M29: Recognising Tables Using Multiple Spatial Relationships Between Table Cells

Mohamed Alkalai

Abstract:
While much work has been done on table recognition, this research has been primarily concerned with tables in ordinary text. However, tables containing mathematical structures can differ quite significantly from ordinary text tables, and therefore specialist treatment is often necessary. In fact, it is even difficult to clearly distinguish table recognition in mathematics from layout analysis of mathematical formulae. This blurring often leads to a number of possible, equally valid interpretations. However, a reliable understanding of the layout of a formula is often a necessary prerequisite to further semantic interpretation. In this paper, a new table representation method is introduced which attempts to overcome unsolved issues mentioned in several published works, including the lack of work that deals with tables with misaligned cells. I utilise multiple spatial relationships between cells to recognise tabular components. Experimental evaluation on two different datasets shows promising results.

 

N05: Multi Scale Color Coding of Derived Curvature and Torsion Fields on a Multi-Block Curvilinear Grid

Brener,N., Harhad,F., Karki,B., Benger,W., Acharya,S., Ritter,M., Iyengar,S.

Abstract:
We present a method to compute and visualize the curvature and torsion scalar fields derived from a vector field defined on a multi-block curvilinear grid. In order to compute the curvature and torsion fields, we define a uniform Cartesian grid of points in the volume occupied by the curvilinear grid and interpolate from the curvilinear grid to the Cartesian grid to get the vector field at the Cartesian grid points. We can then use finite difference formulas to numerically compute the derivatives needed in the curvature and torsion formulas. Once the curvature and torsion have been computed at the Cartesian grid points, we employ a multi scale color coding technique to visualize these scalar fields in orthoslices of the Cartesian grid. This multi scale technique allows one to observe the entire range of values of the scalar field, including small, medium and large values. In contrast, if uniform color coding is used to visualize curvature and torsion fields, it sometimes shows most of the values in a single predominant color, which makes it impossible to distinguish between the small, medium and large values. As an example of this multi-scale technique, we displayed the curvature and torsion fields in a computational fluid dynamics (CFD) simulation of an industrial stirred tank and used these images to identify regions of low, medium and high fluid mixing in the tank.
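
For a streamline with velocity v and first and second derivatives a and j along the flow (the quantities the paper obtains by finite differences on the Cartesian grid), curvature and torsion follow the classical formulas kappa = |v x a| / |v|^3 and tau = ((v x a) . j) / |v x a|^2; a pointwise sketch with sample values:

import numpy as np

def curvature_torsion(v, a, j):
    v, a, j = map(np.asarray, (v, a, j))
    c = np.cross(v, a)
    kappa = np.linalg.norm(c) / np.linalg.norm(v) ** 3
    tau = np.dot(c, j) / np.dot(c, c)
    return kappa, tau

print(curvature_torsion(v=(1, 0, 0), a=(0, 2, 0), j=(0, 0, 3)))   # sample values only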

 

M61: Histogram of Structure Tensors: Application to Pattern Clustering

Walha,R., Drira,F., Lebourgeois,F., Garcia,C., Alimi,A.M.

Abstract:
Pattern clustering is an important data analysis process useful in a wide spectrum of computer vision applications. In addition to choosing appropriate clustering methods, particular attention should be paid to the choice of the features describing patterns in order to improve clustering performance. This paper presents a novel feature descriptor, referred to as the Histogram of Structure Tensors (HoST), which captures the local information of an image. The basic idea is that a local pattern can be described by the distribution of the structure tensors' orientations and shapes. The proposed HoST descriptor has two major advantages. On the one hand, it captures the dominant orientations in a local spatial region while taking into account the local shape of the edge structure; it is based on the structure tensor, a very useful concept for characterizing local shape. On the other hand, the use of histograms makes the proposed descriptor effective and useful when a reduced feature representation is required. In this paper, the proposed HoST descriptor is applied to the pattern clustering task. An extensive experimental validation demonstrates its performance compared to existing feature descriptors such as Local Binary Patterns and the Histogram of Oriented Gradients. In addition, the proposed descriptor succeeds in improving the performance of clustering-based resolution enhancement approaches.
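
A hedged sketch of a structure-tensor orientation histogram for an image patch; the exact binning and weighting of HoST may differ, so the bin count and the coherence weight below are illustrative choices.

import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_histogram(patch, bins=9, sigma=1.5):
    ix = sobel(patch.astype(float), axis=1)
    iy = sobel(patch.astype(float), axis=0)
    jxx = gaussian_filter(ix * ix, sigma)            # smoothed structure tensor entries
    jxy = gaussian_filter(ix * iy, sigma)
    jyy = gaussian_filter(iy * iy, sigma)
    theta = 0.5 * np.arctan2(2 * jxy, jxx - jyy)     # dominant local orientation
    coherence = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2) / (jxx + jyy + 1e-12)
    hist, _ = np.histogram(theta, bins=bins, range=(-np.pi / 2, np.pi / 2), weights=coherence)
    return hist / (hist.sum() + 1e-12)

print(structure_tensor_histogram(np.random.rand(32, 32)))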

 

N47: A Framework for Wait-Free Data Exchange in Massively Threaded VR Systems

Lange,P., Weller,R., Zachmann,G.

Abstract:
A central part of virtual reality systems and game engines is the generation, management and distribution of all relevant world states. In modern interactive graphics software systems, many independent software components usually need to communicate and exchange data. Standard approaches suffer from the n^2 problem, because the number of interfaces grows quadratically with the number of component functionalities. Such many-to-many architectures quickly become unmaintainable, not to mention the latencies of standard concurrency control mechanisms. We present a novel method to manage concurrent multithreaded access to shared data in virtual environments. Our highly efficient, low-latency and lightweight architecture is based on a new wait-free hash map using key-value pairs. This allows us to reduce the traditional many-to-many problem to a simple many-to-one approach. Our results show that our framework outperforms standard lock-based as well as modern lock-free methods by an order of magnitude.

 

N17: Parallel iso-surface extraction and simplification

Ulrich,Ch., Grund,N., Derzapf,E., Lobachev,O., Guthe,M.

Abstract:
When extracting iso-surfaces from large volume data sets, long processing times are required and a high number of polygons is generated. We propose a massively parallel iso-surface extraction and simplification algorithm. The extraction is based on the marching cubes algorithm. In order to process large volume data sets, we perform the extraction with an interleaved simplification step using parallel edge collapses and the quadric error metrics.
Interleaving extraction and simplification is based on locally postponing collapse operations close to the processing front. In contrast to previous methods, we do not need an explicit simplification error fall-off close to the front. Thus we can produce meshes with the same quality as if we had simplified the complete mesh after extraction. By implementing both extraction and simplification on the GPU, we can reconstruct high quality iso-surfaces from large data sets within a few seconds.
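
A compact sketch of the quadric error metric used to rank the edge collapses: each triangle contributes the quadric of its supporting plane, and the cost of placing a vertex at position v is v^T Q v in homogeneous coordinates (the values below are synthetic).

import numpy as np

def plane_quadric(p0, p1, p2):
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    plane = np.append(n, -np.dot(n, p0))   # (a, b, c, d) with ax + by + cz + d = 0
    return np.outer(plane, plane)          # 4x4 quadric of this face

def collapse_cost(Q, v):
    vh = np.append(v, 1.0)
    return vh @ Q @ vh                     # squared distance to the accumulated planes

tri = [np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])]
print(collapse_cost(plane_quadric(*tri), np.array([0.2, 0.2, 0.5])))   # 0.25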

 

N19: Estimating the Vanishing Point of a Sidewalk

Jeon,S.-H., Chung,Y.-K., Yoon,H.-S.

Abstract:
In this paper, we propose an algorithm for estimating the vanishing point (VP) of a sidewalk in a man-made environment from a single image. To find the VP, the lines in an image are exploited as a clue, because the projections of parallel lines in an image intersect at a VP. However, there is much noise disturbing line detection in natural scene images, for example trees, pedestrians and shadows. Thus, we suggest a noise reduction technique called orientation consistency pass filtering (OCPF) for improving line detection performance. The edge orientation at the window center of OCPF is compared to its neighboring edge orientations to calculate the orientation difference; the center pixel is removed if the difference of edge orientations is greater than a threshold, and preserved otherwise. In addition, we suggest a novel vanishing point detection method using edge orientation voting (EOV), which predicts the VP position accurately. The lines filtered by OCPF generate VP candidates using bottom-up extended lines. The VP candidates receive support from all edges below the currently inspected candidate, and the most supported candidate is selected as the dominant VP. The proposed method was implemented and tested on a database of 600 sidewalk images with 640x320 resolution. For 81% of the test images, the estimated VP lies within 0 to 30 pixels of the manually marked VP.
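
The basic operation behind generating VP candidates, intersecting two extended lines, can be written with homogeneous coordinates; the pixel coordinates below are hypothetical and the edge-orientation voting itself is not reproduced.

import numpy as np

def line_through(p, q):
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])   # homogeneous line through two points

def intersect(l1, l2):
    x = np.cross(l1, l2)
    return x[:2] / x[2]                                      # back to pixel coordinates

l1 = line_through((100, 300), (260, 120))   # left sidewalk border (made-up pixels)
l2 = line_through((540, 300), (380, 120))   # right sidewalk border
print(intersect(l1, l2))                    # a vanishing point candidate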

 

N41: Using metrics to evaluate and improve text-based information visualization in 3D urban environment

Zhang,F., Tourre,V., Moreau,G.

Abstract:
Information visualization is widely present in our daily life and develops rapidly in both 2D and 3D environments. In the 3D case, evaluation is a critical problem. Existing evaluation metrics are first introduced in this paper. We chose to focus on mathematical metrics only, and several metrics based on existing ones are designed to evaluate a text-based information visualization in a 3D urban environment. Afterwards, modifications of the visualization are obtained by constructing processing functions which take into account the object space distance and the screen space distance. A re-evaluation of the new results is conducted to see whether the visualization is improved. Results show that screen space functions perform better at improving visualization performance, which can provide references for visualization designers to diversify and characterize their visualizations.


 

Poster papers

H17: Ocean wave simulation by the mix of FFT and Perlin Noise

Tian,L.

Abstract:
For ocean wave applications, a new height-field simulation method is proposed that mixes FFT and Perlin noise; OpenSceneGraph (OSG) and VC++ 2008 are used to simulate realistic ocean waves. The method takes wind effects into consideration and uses the Phillips model and Gaussian random numbers to produce the ocean wave spectrum, which is then transformed into a wave height field by FFT. Perlin noise is overlaid to perturb the wave height and generate a vivid, random sea surface. Simulation results show the effectiveness of the proposed method.
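
A compact sketch of the FFT part described above: a Phillips-spectrum amplitude with complex Gaussian random numbers is transformed into a height field (the Perlin-noise perturbation and time animation are omitted, and all constants are illustrative).

import numpy as np

def phillips(kx, ky, wind=(1.0, 0.0), wind_speed=31.0, g=9.81, A=1e-3):
    k2 = kx * kx + ky * ky + 1e-12
    L = wind_speed ** 2 / g                              # largest wave arising from the wind
    wk = (kx * wind[0] + ky * wind[1]) / np.sqrt(k2)     # alignment with the wind direction
    return A * np.exp(-1.0 / (k2 * L * L)) / k2 ** 2 * wk ** 2

N, size = 128, 200.0
k = np.fft.fftfreq(N, d=size / N) * 2 * np.pi
kx, ky = np.meshgrid(k, k)
rng = np.random.default_rng(0)
xi = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))      # Gaussian random numbers
height = np.real(np.fft.ifft2(xi * np.sqrt(phillips(kx, ky) / 2.0)))
print(height.shape, float(height.std()))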

 

H31: Visualizing multi-channel networks

Antemijczuk,P., Magiera,M., Lehmann,S., Cuttone,A., Larsen,J.

Abstract:
In this paper, we propose a visualization to illustrate social interactions, built from multiple distinct channels of communication. The visualization displays a summary of dense personal information in a compact graphical notation. The starting point is an abstract drawing of a spider's web. Below, we describe the meaning of each data dimension along with the background and motivation for their inclusion. Finally, we present feedback provided by the users (31 individuals) of the visualization.

 

H53: A Low-Cost IR Imaging System for Hand Vein Biometrics Parameters Extraction

Benziane,S., Merouane,A.

Abstract:
This paper presents a new approach to authenticating individuals using hand vein images. The proposed method is fully automated and employs palm dorsal hand vein images acquired with a low-cost, contactless near-infrared imaging setup, which is the aim of our work. In order to evaluate the system performance, a prototype was designed and a dataset was collected from 34 persons of different genders, all aged above 20; 10 images per person were acquired at different intervals, 5 for the left hand and 5 for the right hand. The vein detection process consists of an easy-to-implement device that takes a snapshot of the subject's veins under a source of infrared radiation at a specific wavelength. The system is able to detect veins but not arteries due to the specific absorption of infrared radiation in blood vessels. Almost any part of the body could be analyzed in order to extract an image of the vascular pattern, but the hand and the fingers are preferred.

 

H83: Method of Discrete Wavelet Analysis of Edges on the Random Background

Bezuglov,D.A., Bezuglov,Yu.D., Shvidchenko,S.A.

Abstract:
Currently known algorithms assume preliminary image filtering followed by solving the edge extraction problem. Developing image filtering algorithms also requires a priori knowledge of the distortion and interference characteristics; in practice, in most cases such information is not available or is only approximate. A new method for extracting object edges in images in the presence of distortion was developed, based on the direct and inverse wavelet transform. The results of mathematical modeling are provided. The proposed approach may be used in the development of digital picture processing systems, in autonomous robot engineering, and under surveillance conditions that complicate the registration process in the absence of a priori data about the type of background noise.

 

I05: Polynomiography with Non-standard Iterations

Gdawiec,K., Kotarski,W., Lisowska,A.

Abstract:
In the paper, visualizations of some modifications of Newton's root finding method for complex polynomials are presented. Instead of the standard Picard iteration, several different iterative processes described in the literature, which we call non-standard ones, are used. Following Kalantari, such visualizations are called polynomiographs. Polynomiographs are interesting from scientific, educational and artistic points of view. By using different kinds of iterations we obtain polynomiographs that are quite new compared to the standard Picard iteration and look aesthetically pleasing. As examples, we present some polynomiographs for the complex polynomial equation z^3 - 1 = 0. Polynomiographs graphically present the dynamic behaviour of different iterative processes, but we are not interested in that aspect here; we focus on polynomiographs from the artistic point of view. We believe that the new polynomiographs can be interesting as a source of aesthetic patterns created automatically. They can also be used to increase the functionality of existing polynomiography software.
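
For the quoted example z^3 - 1 = 0, the standard Picard iteration of Newton's method already yields a polynomiograph; the sketch below colours each pixel by the root its orbit converges to and by the iteration count (the non-standard iterations of the paper would replace the update rule).

import numpy as np

n, iters = 400, 30
x = np.linspace(-2, 2, n)
z = x[None, :] + 1j * x[:, None]
count = np.zeros(z.shape, int)
for _ in range(iters):
    mask = np.abs(z ** 3 - 1) > 1e-6
    z[mask] -= (z[mask] ** 3 - 1) / (3 * z[mask] ** 2)     # Picard (plain Newton) step
    count += mask                                          # iterations to convergence -> shading
roots = np.array([1, np.exp(2j * np.pi / 3), np.exp(-2j * np.pi / 3)])
basin = np.argmin(np.abs(z[..., None] - roots), axis=-1)   # which root -> hue
print(np.bincount(basin.ravel()))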

 

I71: Spatter Tracking in Laser- and Manual Arc Welding with Sensor-level Pre-processing

Lahdenoja,O., Säntti,T., Laiho,M., Poikonen,J.

Abstract:
This paper presents methods for automated visual tracking of spatters in laser and manual arc welding. Imaging of the welding process is challenging due to the extreme conditions of high radiated light intensity variation. The formation and the number of spatters ejected in the welding process depend on the parameters of the welding process and can potentially be used to tune the process towards better quality (either on-line or off-line). In our case, the spatter segmentation is based either on a test of object elongatedness or on the Hough transform, applied to pre-processed image sequences captured by a high-speed smart camera. Part of the segmentation process (adaptive image capture and edge extraction) is performed on the camera, while the other parts of the algorithm are performed off-line in Matlab. However, our intention is to move the computation towards the camera or an attached FPGA.

 

I73: Fast and Robust Tessellation-Based Silhouette Shadows

Milet,T., Kobrtek,J., Zemčík,P., Pečiva,J.

Abstract:
This paper presents a new simple, fast and robust approach to the computation of per-sample precise shadows. The method uses tessellation shaders to compute silhouettes on an arbitrary triangle soup. We achieve robustness with our previously published algorithm using deterministic shadow volume computation. We also propose a new simplification of the silhouette computation by introducing reference edge testing. Our new method was compared with other methods and evaluated on multiple hardware platforms and different scenes, providing better performance than current state-of-the-art algorithms. Finally, conclusions are drawn and future work is outlined.

 

I79: A method of micro facial expression recognition based on dense facial motion data

Akagi,Y., Kawasaki,H.

Abstract:
In this paper, we propose a method for recognizing micro expressions, small motions appearing on a face, using a high-density, high-frame-rate 3D reconstruction method. Some studies report that micro expressions are caused by changes of mental state; if we can recognize them, this information could help machines understand the mental state of a human. With advancements in 3D reconstruction, methods have been proposed to reconstruct dynamic objects, such as the motions of a human body, with high accuracy and high frame rate. Based on the data obtained from such a high quality shape reconstruction method, the proposed method recognizes micro expressions. To detect the parts of the face where micro expressions appear, we propose an experimental estimation of those parts. We also report the recognition rate of changes of mental state using the experimental system.

 

I89: Human-Computer Interaction Using Robust Gesture Recognition

Endler,M., Lobachev,O., Guthe,M.

Abstract:
We present a detector cascade for robust real-time tracking of hand movements on consumer-level hardware. We adapt existing detectors to our setting: Haar, CAMSHIFT, shape detector, skin detector. We use all these detectors at once. Our main contributions are: first, utilization of bootstrapping: Haar bootstraps itself, then its results are used to bootstrap the other filters; second, the usage of temporal filtering for more robust detection and to remove outliers; third, we adapted the detectors for more robust hand detection. The resulting system produces very robust results in real time. We evaluate both the robustness and the real-time capability.
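
One CAMSHIFT update on a skin-probability image, roughly the role it plays inside the cascade above; this OpenCV-based sketch uses a synthetic frame and crude skin thresholds, so all parameters are illustrative rather than the authors' bootstrapped models.

import cv2
import numpy as np

frame = np.zeros((240, 320, 3), np.uint8)                           # synthetic frame
cv2.rectangle(frame, (140, 90), (200, 170), (120, 160, 220), -1)    # skin-coloured blob as a "hand"
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
prob = cv2.inRange(hsv, (0, 40, 60), (30, 255, 255))                # crude skin probability mask
window = (100, 60, 80, 80)                                          # track window from the previous frame
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
rot_rect, window = cv2.CamShift(prob, window, criteria)
print(window)                                                       # updated hand window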

 

J71: Particle systems-based riverbed modelling over a terrain with hardness layer

Warszawski,K.K., Nikiel,S.S.

Abstract:
This paper proposes a method that applies particle systems to simulate the results of hydrological erosion caused by flowing water, such as riverbeds. The terrain model is divided into two layers: the first stores height data (a typical height field), while the second is reserved for hardness data. This data structure enables fast and simple implementation of terrain deformation. We present the construction of a particle-system terrain modifier, its main attributes, and how they influence the final product of the modelling process. The proposed technique behaves like a classical particle system: it uses an emitter as the element that controls the starting location, direction and quantity of particles in a given simulation environment. We choose parameters for the particles such as the current position, directional angle, linear velocity, rotation angle, rotation velocity and size, which defines the zone of influence for the landscape modification process. Each emitted particle moves (rolls) over the surface of the terrain structure, making deformations at its current position. The scale of the modifications depends on the particle parameters and on the susceptibility of the landscape structure to modification within the particle's influence zone. The proposed method is not intended to simulate the erosion process physically, but focuses on its results, for use in virtual environments, real-time simulations and rapid prototyping of virtual terrains.

 

J79: Adaptive Projection Displays: a low cost system for public interactivity

Dundas,J., Wagner,M.

Abstract:
Interactive digital public displays that track the viewer's position are currently inaccessible to the average consumer. Many tracking systems available on the market are prohibitively expensive and out of reach for small business owners. This research tests various consumer-level tracking technologies to ascertain whether such a system can be developed in a low-cost and accessible manner. Microsoft's Kinect in tandem with Unity3D offers a system that is straightforward to use and easy to implement. The resulting technique can be quickly carried out to create an interactive digital public display.

 

L61: Connect-S: a physical visualization through tangible interaction

Giang,K., Funk,M.

Abstract:
In our current society, open data streams are increasingly available through the Internet, and this data can have a growing impact on everyday life. Its full potential can, however, only be reached through better integration and new interfaces. The goal of this project is to explore the possibilities of repurposing public information in a developing area of a large city in the Netherlands. Can we create a tangible interaction using a physical visualization of these data streams? A series of prototypes was made to develop a physical visualization through the method of research through design. Users were involved in expert panels and interviews to fine-tune and create a final prototype, Connect-S. The concept shows the opportunities of using physical visualization in connection with physical interaction for browsing and navigation.

 

L73: The study of features of expert signature for left ventricle on ultrasound images

Zyuzin,V., Porshnev,S., Bobkova,A.

Abstract:
The article presents the results of a study of the signature of left ventricle (LV) contours built by experts. The result is part of the task of automatically contouring the LV area in ultrasound frames with an apical four-chamber view. The signature is the LV contour curve built in polar coordinates. The study found the optimal point at the center of the LV base. The resulting signature is approximated by three polynomials of second and third degree: right side, left side and top. The result of such an approximation is qualitatively and quantitatively better than approximating the whole curve.
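
The signature idea can be sketched as a radius-versus-angle curve about a base point, with a piecewise polynomial fit; the contour, split points and polynomial degrees below are synthetic stand-ins for the expert data.

import numpy as np

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = np.c_[np.cos(t) * 30, np.sin(t) * 45] + (64, 64)    # synthetic LV contour
base = np.array([64.0, 64.0])                                 # chosen centre of the LV base

d = contour - base
angle = np.arctan2(d[:, 1], d[:, 0])
radius = np.hypot(d[:, 0], d[:, 1])                           # the signature r(angle)

order = np.argsort(angle)
angle, radius = angle[order], radius[order]
parts = np.array_split(np.arange(len(angle)), 3)              # three pieces of the contour
fits = [np.polyfit(angle[i], radius[i], deg) for i, deg in zip(parts, (2, 3, 2))]
print([np.round(f, 2) for f in fits])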

 

M02: Real time vehicle detection and tracking on multiple lanes

Kovačić,K., Ivanjko,E., Gold,H.

Abstract:
The development of computing power and cheap video cameras has enabled today's traffic management systems to include more cameras and computer vision applications for transportation system monitoring and control. Combined with image processing algorithms, cameras are used as sensors to measure road traffic parameters like flow and origin-destination matrices, to classify vehicles, etc. In this paper, the development of a system capable of measuring traffic flow and estimating vehicle trajectories on multiple lanes using one static camera is described. Vehicles are detected as moving objects using foreground and background image segmentation. Adjacent pixels in the moving-object image are grouped together, and a weight factor based on cluster area, cluster overlapping area and distance between multiple clusters is computed to enable multiple moving object tracking. To ensure real-time capabilities, image processing algorithm parallelization is applied. The described system is tested using real traffic video footage obtained from Croatian highways.
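
The foreground/background step described above, sketched with OpenCV's MOG2 subtractor and connected-component grouping of adjacent foreground pixels; the weighting and trajectory estimation are omitted and the thresholds are illustrative.

import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=True)

def detect_vehicles(frame, min_area=400):
    fg = subtractor.apply(frame)                              # moving-object mask
    fg = cv2.medianBlur(fg, 5)
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)    # drop the shadow label (127)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(fg)
    return [stats[i, :4] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]

# frames would normally come from a traffic video, e.g. cv2.VideoCapture("highway.mp4")
print(detect_vehicles(np.zeros((240, 320, 3), np.uint8)))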

 

M53: VisNow - a Modular, Extensible Visual Analysis Platform

Nowinski,K.S., Borucki,B.

Abstract:
A new, dataflow-driven, modular visual data analysis platform with extensive data processing and visualization capabilities is presented. VisNow is written in Java and easily extendable to incorporate new modules and module libraries. Dataflow networks built with the help of an interactive network editor can be wrapped into stand-alone applications for end users.

 

M67: A new way of Rich Image Representation (VectorPixels)

Simons,A., Prakash,E., Wood,J.

Abstract:
These are preliminary results. Generating images on computer systems has been done with the same technology for several decades now: bitmap or pixel technology is used for the representation of rich color images, while simple graphics like line drawings and logos usually use vector graphics. As is well known, both have their advantages and disadvantages. In recent years a lot of research has been done to combine the advantages of both techniques in a comprehensive solution. However, all this research started from one of these two technologies as a base, so new techniques were built on top of existing ones. There was no development from scratch, and that is exactly what this paper proposes: an algorithm to reinvent the pixel. By always applying new developments to existing technologies, generating images has become a mix of various techniques and a rather complex matter; a recent contribution, for instance, is the use of Diffusion Curves. As said before, previous research has focused on improving existing pixel- or vector-based graphics. With the use of VectorPixels (VP), the concept of a pixel is redefined. In contrast to a pixel (Picture Element), a VectorPixel is defined by a mathematical description and is resolution independent. Instead of using a classical, resolution-dependent grid to position a VectorPixel, a reference point is defined and the positions of other VectorPixels refer to that reference point. VectorPixels can, for example, overlap each other to present a smooth curve. When the image is enlarged, VectorPixels are scaled proportionally.

 

N23: Lane Marking Detection in Various Lighting Conditions using Robust Feature Extraction

Jang,H.J., Baek,S.-H., Park,S.-Y.

Abstract:
Robustly extracting the features of lane markings under different lighting and weather conditions, such as shadows, glow, sunset and night, is a key technology of the lane departure warning system (LDWS). In this paper, we propose a robust lane marking feature extraction method that uses the characteristics of the lane marking to detect candidate areas. The final lane marking features are extracted by first finding the center points of the lane marking in the candidate area; these center point pixels are then labeled according to intensity similarity along the direction of the vanishing point. The performance of the proposed method is evaluated by experimental results using real-world lane data.

 

N31: Survey on Automated LEGO Assembly Construction

Kim,J.-W., Kang,K.-K., Lee,J.-H.

Abstract:
LEGO has been a very popular toy around the world because it is attractive, fun to play with, and stimulates creativity by providing the means to conveniently assemble a variety of interesting shapes from a limited set of brick types. However, it is hard for beginners to design and assemble the complex models they wish to make without instructions, and building a LEGO assembly manually usually requires a significant amount of trial and error. The LEGO company therefore presented the LEGO construction problem in 1998 and in 2001. The problem statement is: "Given any 3D body, how can it be built from LEGO bricks?" In this paper we investigate current research efforts to address the LEGO construction problem. We review the problem definition, its formulation, and a variety of approaches to solve it. We discuss the data representations for input 3D polygonal models and for the LEGO assembly structures, the cost functions that guide the search for the optimal solution, and various solution methods.

 

N43: A Control Cluster Approach to Non-linear Deformation

Bukatov,A.A., Gridchina,E.E., Zastavnoy,D.A.

Abstract:
Modeling plausible deformation of objects is an important task in the computer animation and game design industries. The approach proposed in the paper deals with polygonal mesh deformation by splitting the vertices of the mesh into two types: cluster vertices and free vertices. With the user defining the shape of the mesh's key areas with the help of cluster vertices, the algorithm takes advantage of non-linear geometric deformation to calculate the positions of the free vertices. The approach can be used both for creating a sequence of altered model shapes to produce a character animation (with the help of user-created control cluster data) and for visualizing some ecological processes.

 

City Tour

Thursday June 5, 2014

The afternoon City Tour schedule is expected as follows [read the NOTE at the registration desk – MIGHT CHANGE]:

You can join me for dinner and beer(s) at the world-famous Plzen brewery - Prazdroj - Pilsner Urquell.


WSCG 2014 Conference Schedule

Registration: Monday, June 2, 17:30 - 20:30. The conference office is open during breaks only.

Tuesday, June 3, 2014

 8:30 - 10:00   Session A1 (Room A) / Session A2 (Room B)
30'             Break
10:30 - 12:00   Session B1 (Room A) / Session B2 (Room B)
20'             Welcome
12:20           Lunch Break
13:30 - 14:30   C1: Keynote talk J05 (Oliveira)
60'             Break & Posters A
15:30 - 17:30   Session D1 (Room A) / Session D2 (Room B)
16:30 - 17:30   Late Registration

Wednesday, June 4, 2014

 8:30 - 10:00   Session F1 (Room A) / Session F2 (Room B)
30'             Break & Posters B
10:30 - 12:00   Session G1 (Room A) / Session G2 (Room B)
20'             Common Photo
12:20           Lunch Break
13:30 - 14:30   H1: Keynote talk J03 (Weinkauf)
60'             Break & Posters C
15:30 - 17:30   Session K1 (Room A) / Session K2 (Room B)
                FREE
18:59*          DINNER

Thursday, June 5, 2014

 8:30 - 9:30    Session L1 (Room A) / Session L2 (Room B)
30'             Break
10:00 - 11:30   Session M1 (Room A) / Session M2 (Room B)
30'             N1: Keynote talk N59 (Barsky)
10'             Closing Session
                FREE
14:30 - ????    Plzen City Sightseeing Tour ** & Grand Beer Tasting

* Expected          ** NOT ORGANIZED – JOIN US AND PASS THE CITY CENTER

Conference Dinner: Wednesday, June 4 – buy a ticket [at a symbolic price of 10 EUR] at the registration – offer limited. Place and time to be announced. We recommend buying a 24-hour/1-day ticket for public transport.

We recommend visiting (not organized tours):

·       Explore Plzen City http://web.zcu.cz/plzen/

·       Techmania Science Center http://www.techmania.cz

·       ZOO and Botanical Garden
http://www.zooplzen.cz/ (45 mins.)

·       Stara Sladovna – Medieval Pub – city center (40 mins.)
http://www.starasladovna.cz/video/slad2.mp4

·       Purkmister Brewery
http://www.purkmistr.cz/ (10 mins. by trolleybus)

·       Pilsner Urquell Brewery and Brewery Museum -
http://www.prazdrojvisit.cz/en/ (30 mins.)
