Computer Graphics Forum

September 2012

Volume 31, Issue 6

Pages 1797–1985

  1. Issue Information

    1. Issue Information (page i)

      Article first published online: 21 SEP 2012 | DOI: 10.1111/j.1467-8659.2012.03223.x

  2. Articles

    1. Parallel Surface Reconstruction for Particle-Based Fluids (pages 1797–1809)

      G. Akinci, M. Ihmsen, N. Akinci and M. Teschner

      Article first published online: 4 APR 2012 | DOI: 10.1111/j.1467-8659.2012.02096.x

      This paper presents a novel method that improves the efficiency of high-quality surface reconstructions for particle-based fluids using Marching Cubes. By constructing the scalar field only in a narrow band around the surface, the computational complexity and the memory consumption scale with the fluid surface instead of the volume. Furthermore, a parallel implementation of the method is proposed. The presented method works with various scalar field construction approaches. Experiments show that our method reconstructs high-quality surface meshes efficiently even on single-core CPUs. It scales nearly linearly on multi-core CPUs and runs up to fifty times faster on GPUs compared to the original scalar field construction approaches.
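
      As a rough illustration of the narrow-band idea, here is a minimal NumPy sketch (the signed-distance ‘blob’ field and all parameter names are assumptions; the paper works with several scalar field construction approaches). Cells far from every particle are never touched, so cost scales with the surface rather than the grid volume:

        import numpy as np

        def narrow_band_scalar_field(particles, grid_res, cell_size, radius):
            # Cells that are never visited keep a large 'outside' value,
            # so a Marching Cubes pass extracts no geometry there.
            field = np.full((grid_res,) * 3, np.inf)
            band = int(np.ceil(radius / cell_size))   # band half-width in cells
            for p in particles:                       # particles: (n, 3) array
                c = (p / cell_size).astype(int)
                lo = np.maximum(c - band, 0)
                hi = np.minimum(c + band + 1, grid_res)
                for i in range(lo[0], hi[0]):
                    for j in range(lo[1], hi[1]):
                        for k in range(lo[2], hi[2]):
                            x = (np.array([i, j, k]) + 0.5) * cell_size
                            d = np.linalg.norm(x - p) - radius
                            field[i, j, k] = min(field[i, j, k], d)
            return field   # e.g. feed into skimage.measure.marching_cubes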

    2. Efficiently Simulating the Bokeh of Polygonal Apertures in a Post-Process Depth of Field Shader (pages 1810–1822)

      L. McIntosh, B. E. Riecke and S. DiPaola

      Article first published online: 5 MAR 2012 | DOI: 10.1111/j.1467-8659.2012.02097.x

      The effect of aperture shape on an image, known in photography as ‘bokeh’, is an important characteristic of depth of field in real-world cameras. However, most real-time depth of field techniques produce Gaussian bokeh rather than the circular or polygonal bokeh that is almost universal in real-world cameras. ‘Scattering’ (i.e. point-splatting) techniques provide a flexible way to model any aperture shape, but tend to have prohibitively slow performance, and require geometry shaders or significant engine changes to implement. This paper shows that simple post-process ‘gathering’ depth of field shaders can be easily extended to simulate certain bokeh effects.
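
      As a rough CPU analogue of such a gathering shader (all helper names here are illustrative; a real implementation would run in a fragment shader), gather offsets can be distributed over a regular polygon and scaled by the per-pixel circle of confusion:

        import numpy as np

        def polygon_offsets(n_sides=5, rings=3):
            # Sample offsets covering a regular polygon, ring by ring.
            verts = [np.array([np.cos(a), np.sin(a)])
                     for a in 2 * np.pi * np.arange(n_sides) / n_sides]
            offsets = [np.zeros(2)]
            for r in range(1, rings + 1):
                t = r / rings
                for s in range(n_sides):
                    a, b = verts[s], verts[(s + 1) % n_sides]
                    for u in np.linspace(0.0, 1.0, r, endpoint=False):
                        offsets.append(t * ((1 - u) * a + u * b))
            return np.array(offsets)

        def gather_dof(image, x, y, coc):
            # Average greyscale samples over the polygonal kernel scaled
            # by the circle of confusion (coc, in pixels): pentagonal bokeh.
            h, w = image.shape
            acc, n = 0.0, 0
            for dx, dy in polygon_offsets() * coc:
                xi, yi = int(round(x + dx)), int(round(y + dy))
                if 0 <= xi < w and 0 <= yi < h:
                    acc, n = acc + image[yi, xi], n + 1
            return acc / max(n, 1)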

    3. Cultural Heritage Predictive Rendering (pages 1823–1836)

      Jassim Happa, Tom Bashford-Rogers, Alexander Wilkie, Alessandro Artusi, Kurt Debattista and Alan Chalmers

      Article first published online: 21 JUN 2012 | DOI: 10.1111/j.1467-8659.2012.02098.x

      High-fidelity rendering can be used to investigate Cultural Heritage (CH) sites in a scientifically rigorous manner. However, a high degree of realism in the reconstruction of a CH site can be misleading insofar as it can be seen to imply a high degree of certainty about the displayed scene, which is frequently not the case, especially when investigating the past. So far, little effort has gone into adapting and formulating a Predictive Rendering pipeline for CH research applications. In this paper, we first discuss the goals and the workflow of CH reconstructions in general, as well as those of traditional Predictive Rendering. Based on this, we then propose a research framework for CH research, which we refer to as ‘Cultural Heritage Predictive Rendering’ (CHPR).

    4. A Significance Cache for Accelerating Global Illumination (pages 1837–1851)

      Thomas Bashford-Rogers, Kurt Debattista and Alan Chalmers

      Article first published online: 19 MAR 2012 | DOI: 10.1111/j.1467-8659.2012.02099.x

      Rendering using physically based methods requires substantial computational resources. Most such methods use straightforward sampling techniques that may spend excessive effort computing certain types of light transport while ignoring more important ones. Importance sampling is an effective and commonly used technique to reduce variance in such methods. Most current Monte Carlo approaches to physically based rendering sample the BRDF and cosine term, but are unable to sample the indirect illumination, as this is precisely the term being computed. Knowledge of the incoming illumination can be especially useful in the case of hard-to-find light paths, such as caustics, or scenes which rely primarily on indirect illumination. To facilitate the determination of such paths, we propose a caching scheme which stores important directions and is analytically sampled to calculate important paths.
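
      A toy version of such a cache (discrete direction bins purely for illustration; the paper's cache is sampled analytically, and every name below is an assumption):

        import numpy as np

        class SignificanceCache:
            # At a cache point, accumulate the significance of a small
            # set of direction bins, then importance-sample those bins.
            def __init__(self, n_bins=16, seed=0):
                rng = np.random.default_rng(seed)
                v = rng.normal(size=(n_bins, 3))
                self.dirs = v / np.linalg.norm(v, axis=1, keepdims=True)
                self.weights = np.full(n_bins, 1e-3)   # avoid zero pdfs

            def record(self, direction, radiance):
                b = int(np.argmax(self.dirs @ direction))  # nearest bin
                self.weights[b] += radiance

            def sample(self, rng):
                p = self.weights / self.weights.sum()
                b = rng.choice(len(p), p=p)
                return self.dirs[b], p[b]   # direction and its pdf mass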

    5. In at the Deep End: An Activity-Led Introduction to First Year Creative Computing (pages 1852–1866)

      E. F. Anderson, C. E. Peters, J. Halloran, P. Every, J. Shuttleworth, F. Liarokapis, R. Lane and M. Richards

      Article first published online: 11 APR 2012 | DOI: 10.1111/j.1467-8659.2012.03066.x

      Misconceptions about the nature of the computing disciplines pose a serious problem to university faculties that offer computing degrees, as students enrolling on their programmes may come to realise that their expectations are not met by reality. This frequently results in the students' early disengagement from the subject of their degrees, which in turn can lead to excessive ‘wastage’, that is, reduced retention. In this paper, we report on our academic group's attempts within creative computing degrees at a UK university to counter these problems through the introduction of a six-week project that newly enrolled students embark on at the very beginning of their studies. This group project, involving the creation of a 3D etch-a-sketch-like computer graphics application with a hardware interface, provides a breadth-first, activity-led introduction to the students' chosen academic discipline, aiming to increase student engagement while providing a stimulating learning experience, with the overall goal of increasing retention.

    6. Perceptually Optimized Coded Apertures for Defocus Deblurring (pages 1867–1879)

      Belen Masia, Lara Presa, Adrian Corrales and Diego Gutierrez

      Article first published online: 4 APR 2012 | DOI: 10.1111/j.1467-8659.2012.03067.x

      The field of computational photography, and in particular the design and implementation of coded apertures, has yielded impressive results in recent years. In this paper we introduce perceptually optimized coded apertures for defocus deblurring. We obtain near-optimal apertures by means of optimization, with a novel evaluation function that includes two existing perceptual image quality metrics. These metrics favour results in which errors in the final deblurred images will not be perceived by a human observer. Our work improves on the results obtained with a similar approach that takes only the L2 metric into account in the evaluation function.
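
      The optimization can be sketched as a stochastic search over binary aperture masks. The spectral-flatness score below is only a stand-in: the paper evaluates candidate apertures with perceptual metrics on simulated deblurred images:

        import numpy as np

        def aperture_score(aperture):
            # Proxy objective: penalize near-zeros in the power spectrum,
            # which are what make defocus deblurring ill-posed.
            spectrum = np.abs(np.fft.fft2(aperture)) ** 2 + 1e-8
            return np.mean(1.0 / spectrum)

        def optimize_aperture(size=11, iters=5000, seed=1):
            rng = np.random.default_rng(seed)
            best = rng.integers(0, 2, (size, size)).astype(float)
            best_err = aperture_score(best)
            for _ in range(iters):
                cand = best.copy()
                i, j = rng.integers(0, size, 2)
                cand[i, j] = 1.0 - cand[i, j]   # flip one aperture cell
                err = aperture_score(cand)
                if err < best_err:
                    best, best_err = cand, err
            return best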

    7. Feature-Preserving Displacement Mapping With Graphics Processing Unit (GPU) Tessellation (pages 1880–1894)

      Hanyoung Jang and JungHyun Han

      Article first published online: 4 APR 2012 | DOI: 10.1111/j.1467-8659.2012.03068.x

      Displacement mapping reconstructs a high-frequency surface by adding geometric details encoded in the displacement map to the coarse base surface. In the context of hardware tessellation supported by GPUs, this paper aims at feature-preserving surface reconstruction, and proposes the generation of a displacement map that displaces more vertices towards the higher-frequency feature parts of the target mesh. In order to generate the feature-preserving displacement map, surface features of the target mesh are estimated, and then the target mesh is parametrized and sampled using the features. At run time, the base surface is semi-uniformly tessellated by hardware, and then the vertices of the tessellated mesh are displaced non-uniformly along the 3-D vectors stored in the displacement map.
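
      At run time, applying such a map amounts to a per-vertex vector lookup; a minimal sketch (nearest-texel sampling, and the (H, W, 3) map layout is an assumption; hardware would filter the lookup):

        import numpy as np

        def displace(base_vertices, uvs, disp_map):
            # Move each tessellated vertex along the 3D vector stored at
            # its texture coordinate: vector displacement, not a height.
            h, w, _ = disp_map.shape
            out = base_vertices.copy()          # (n, 3) array
            for n, (u, v) in enumerate(uvs):    # uvs in [0, 1]^2
                tx = min(int(u * (w - 1)), w - 1)
                ty = min(int(v * (h - 1)), h - 1)
                out[n] += disp_map[ty, tx]
            return out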

    8. Selecting Coherent and Relevant Plots in Large Scatterplot Matrices (pages 1895–1908)

      Dirk J. Lehmann, Georgia Albuquerque, Martin Eisemann, Marcus Magnor and Holger Theisel

      Article first published online: 4 APR 2012 | DOI: 10.1111/j.1467-8659.2012.03069.x

      The scatterplot matrix (SPLOM) is a well-established technique to visually explore high-dimensional data sets. It is characterized by the number of scatterplots (plots) of which it consists. Unfortunately, this number grows quadratically with the number of the data set's dimensions, so an SPLOM scales very poorly, and its usefulness is consequently restricted to a small number of dimensions. Several approaches already exist to explore such ‘small’ SPLOMs, but they address the scalability problem only indirectly, without solving it. Therefore, we introduce a new greedy approach to manage ‘large’ SPLOMs with more than 100 dimensions. We establish a combined visualization and interaction scheme that produces intuitively interpretable SPLOMs by combining known quality measures, a pre-process reordering and a perception-based abstraction.
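
      The greedy flavour of the approach can be sketched as follows (absolute Pearson correlation stands in for the paper's quality measures, and the redundancy rule is deliberately crude):

        import numpy as np
        from itertools import combinations

        def select_plots(data, k, quality):
            # Rank all dimension pairs by a quality measure, then greedily
            # keep the best k while skipping fully redundant pairs.
            dims = data.shape[1]
            ranked = sorted(combinations(range(dims), 2),
                            key=lambda p: quality(data[:, p[0]], data[:, p[1]]),
                            reverse=True)
            chosen, used = [], set()
            for a, b in ranked:
                if len(chosen) == k:
                    break
                if a in used and b in used:   # both dimensions already shown
                    continue
                chosen.append((a, b))
                used.update((a, b))
            return chosen

        quality = lambda x, y: abs(np.corrcoef(x, y)[0, 1])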

    9. Real-Time Fluid Effects on Surfaces using the Closest Point Method (pages 1909–1923)

      S. Auer, C. B. Macdonald, M. Treib, J. Schneider and R. Westermann

      Article first published online: 10 MAY 2012 | DOI: 10.1111/j.1467-8659.2012.03071.x

      The Closest Point Method (CPM) is a method for numerically solving partial differential equations (PDEs) on arbitrary surfaces, independent of the existence of a surface parametrization. The CPM uses a closest point representation of the surface to solve the unmodified Cartesian version of a surface PDE in a 3D volume embedding, using simple and well-understood techniques. In this paper, we present the numerical solution of the wave equation and the incompressible Navier-Stokes equations on surfaces via the CPM, and we demonstrate surface appearance and shape variations in real time using this method. To fully exploit the potential of the CPM, we present a novel GPU realization of the entire CPM pipeline.
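
      The core CPM loop alternates a standard Cartesian PDE step with a closest point extension. A minimal 2D example for the heat equation on the unit circle (nearest-node extension for brevity; the real method uses higher-order interpolation and a narrow computational band):

        import numpy as np

        def cpm_heat_step(u, xs, ys, h, dt):
            # xs, ys: np.meshgrid(coords, coords, indexing='ij') grids.
            # (1) Unmodified Cartesian heat step: 5-point Laplacian.
            lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                   np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / h ** 2
            u = u + dt * lap
            # (2) Closest point extension: replace every grid value by the
            #     value at its closest point on the unit circle.
            r = np.maximum(np.hypot(xs, ys), 1e-12)
            cx, cy = xs / r, ys / r
            ix = np.clip(np.round((cx - xs[0, 0]) / h).astype(int), 0, u.shape[0] - 1)
            iy = np.clip(np.round((cy - ys[0, 0]) / h).astype(int), 0, u.shape[1] - 1)
            return u[ix, iy]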

    10. Multi-Class Anisotropic Electrostatic Halftoning (pages 1924–1935)

      C. Schmaltz, P. Gwosdek and J. Weickert

      Article first published online: 10 MAY 2012 | DOI: 10.1111/j.1467-8659.2012.03072.x

      Electrostatic halftoning, a sampling algorithm based on electrostatic principles, is among the leading methods for stippling, dithering and sampling. However, this approach is only applicable to a single class of dots of uniform size and colour. In our work, we complement these ideas with advanced features for real-world applications. We propose a versatile framework for colour halftoning, hatching and multi-class importance sampling with individual weights. Our novel approach is the first method that globally optimizes the distribution of different objects of varying sizes relative to multiple given density functions. The quality, versatility and adaptability of our approach are demonstrated in various experiments.
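
      The electrostatic analogy reduces to particles that repel each other while image ‘charge’ attracts them. A brute-force single-class step (O(n²) purely for illustration; the paper uses fast summation and adds multi-class, anisotropic interactions):

        import numpy as np

        def electrostatic_step(pts, charge, step=0.05):
            # pts: (n, 2) particle positions; charge: (m, 3) rows of
            # (x, y, weight) sampled from the image density.
            force = np.zeros_like(pts)
            for i, p in enumerate(pts):
                d = pts - p                           # vectors to other particles
                r2 = (d ** 2).sum(1) + 1e-9
                force[i] -= (d / r2[:, None]).sum(0)  # pairwise repulsion
                a = charge[:, :2] - p
                ar2 = (a ** 2).sum(1) + 1e-9
                force[i] += (charge[:, 2:] * a / ar2[:, None]).sum(0)  # attraction
            return pts + step * force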

    11. Improving Data Locality for Efficient In-Core Path Tracing (pages 1936–1947)

      J. Bikker

      Article first published online: 10 MAY 2012 | DOI: 10.1111/j.1467-8659.2012.03073.x

      In this paper, we investigate the efficiency of ray queries on the CPU in the context of path tracing, where ray distributions are mostly random. We show that existing schemes that exploit data locality to improve ray tracing efficiency fail to do so beyond the first diffuse bounce, and analyze the cause of this. We then present an alternative scheme, inspired by the work of Pharr et al., in which we improve data locality by using a data-centric, breadth-first approach. We show that our scheme improves on state-of-the-art performance for ray distributions in a path tracer.
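
      The scheduling idea can be sketched as follows (scene_region and trace_region are hypothetical hooks standing in for treelet assignment and traversal): instead of tracing each path to completion with incoherent memory access, active rays are bucketed by the scene region they traverse next, so one region's geometry is processed while it is hot in cache:

        def trace_breadth_first(rays, scene_region, trace_region):
            active, results = list(rays), []
            while active:
                buckets = {}
                for ray in active:
                    buckets.setdefault(scene_region(ray), []).append(ray)
                active = []
                for region, batch in buckets.items():
                    for ray in batch:
                        hit, next_ray = trace_region(region, ray)
                        if hit is not None:
                            results.append(hit)
                        if next_ray is not None:
                            active.append(next_ray)   # continue path next wave
            return results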

    12. Local Poisson SPH For Viscous Incompressible Fluids (pages 1948–1958)

      Xiaowei He, Ning Liu, Sheng Li, Hongan Wang and Guoping Wang

      Article first published online: 10 MAY 2012 | DOI: 10.1111/j.1467-8659.2012.03074.x

      Enforcing fluid incompressibility is one of the time-consuming aspects of SPH. In this paper, we present a local Poisson SPH (LPSPH) method to solve incompressibility for particle-based fluid simulation. Considering the pressure Poisson equation, we first convert it into an integral form, and then apply a discretization to convert the continuous integral equation into a discretized summation over all the particles in the local pressure integration domain determined by the local geometry. To control the approximation error, we further integrate our local pressure solver into the predictive-corrective framework to avoid the computational cost of solving a pressure Poisson equation globally. Our method can effectively eliminate the large density deviations mainly caused by the solid boundary treatment and free-surface topological changes, and shows the advantage of a higher convergence rate over predictive-corrective incompressible SPH (PCISPH).
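
      The outer predictive-corrective loop around the local pressure solve might look like this (the particle interface is entirely hypothetical; only the control flow is the point):

        def predict_correct(particles, dt, rest_density,
                            local_pressure, max_iters=50, tol=0.01):
            # Predict positions, measure density error, correct pressures
            # with the local integral-domain solve, repeat until converged.
            for _ in range(max_iters):
                for p in particles:
                    p.predict(dt)                    # advect with current forces
                err = max(abs(p.density() - rest_density) / rest_density
                          for p in particles)
                if err < tol:
                    break
                for p in particles:
                    p.pressure += local_pressure(p)  # local Poisson solve
                    p.apply_pressure_force()
            return particles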

    13. Enhanced Texture-Based Terrain Synthesis on Graphics Hardware (pages 1959–1972)

      F. P. Tasse, J. Gain and P. Marais

      Article first published online: 14 MAY 2012 | DOI: 10.1111/j.1467-8659.2012.03076.x

      Curvilinear features extracted from a 2D user-sketched feature map have been used successfully to constrain a patch-based texture synthesis of real landscapes. However, this map-based user interface does not give fine control over the height profile of the generated terrain. We propose a new texture-based terrain synthesis framework controllable by a terrain sketching interface. We enhance the realism of the generated landscapes with a novel patch merging method that reduces the boundary artefacts caused by overlapping terrain patches. A more constrained synthesis process is used to produce landscapes that better match user requirements. The high computational cost of texture synthesis is reduced with a parallel implementation on graphics hardware.
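
      The simplest form of patch merging is a feathered blend across the overlap region; a toy version (the paper's merging method is more sophisticated, and this assumes the patch lies fully inside the terrain):

        import numpy as np

        def merge_patch(terrain, patch, y, x, overlap):
            # Linear ramp across the top/left overlap, so the patch fades
            # into the existing terrain instead of cutting a hard seam.
            h, w = patch.shape
            alpha = np.ones((h, w))
            ramp = np.linspace(0.0, 1.0, overlap)
            alpha[:overlap, :] *= ramp[:, None]
            alpha[:, :overlap] *= ramp[None, :]
            region = terrain[y:y + h, x:x + w]
            terrain[y:y + h, x:x + w] = (1 - alpha) * region + alpha * patch
            return terrain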

    14. Adaptive Compression of Texture Pyramids (pages 1973–1983)

      C. Andujar

      Article first published online: 15 MAY 2012 | DOI: 10.1111/j.1467-8659.2012.03077.x

      High-quality texture minification techniques, including trilinear and anisotropic filtering, require texture data to be arranged into a collection of pre-filtered texture maps called mipmaps. In this paper we present a compression scheme for mipmapped textures which achieves much higher quality than current native schemes by exploiting image coherence across mipmap levels. The basic idea is to use a high-quality native compressed format for the upper levels of the mipmap pyramid (to retain efficient minification filtering) together with a novel compact representation of the detail provided by the highest-resolution mipmap.
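
      The two-part representation can be sketched in a few lines (greyscale, even dimensions assumed; the native compression of the upper levels and the compact residual coding are omitted):

        import numpy as np

        def split_pyramid(image):
            # Level 1 is a 2x2 box-filtered half-resolution map; level 0 is
            # stored only as a residual against upsampled level 1, which is
            # low-energy wherever the image is coherent across mip levels.
            h, w = image.shape
            level1 = image.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
            predicted = np.repeat(np.repeat(level1, 2, axis=0), 2, axis=1)
            return level1, image - predicted

        def reconstruct(level1, residual):
            predicted = np.repeat(np.repeat(level1, 2, axis=0), 2, axis=1)
            return predicted + residual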

  3. Reports

    1. New EUROGRAPHICS Fellows (pages 1984–1985)

      Article first published online: 21 SEP 2012 | DOI: 10.1111/j.1467-8659.2012.03131.x
