Computer Animation and Virtual Worlds


Special Issue: CASA 2013

May-August 2013

Volume 24, Issue 3-4

Pages i–ii, 153–441

  1. Issue Information

    1. Issue Information (pages i–ii)

      Article first published online: 15 MAY 2013 | DOI: 10.1002/cav.1532

  2. Editorial

      Editorial (pages 153–154)

      Daniel Thalmann, Tolga Capin and Selim Balcisoy

      Article first published online: 15 MAY 2013 | DOI: 10.1002/cav.1533

  3. Special Issue Papers

    1. Simulating and animating social dynamics: embedding small pedestrian groups in crowds (pages 155–164)

      Seung In Park, Francis Quek and Yong Cao

      Article first published online: 13 MAY 2013 | DOI: 10.1002/cav.1512


      We present a crowd model informed by common ground theory to accommodate high-level socially aware behavioral realism of characters in crowd simulations. In our approach, group members maintain group cohesiveness by communicating and adapting their behaviors to each other. In the course of social interaction, agents present gestures or other behavioral cues according to their communicative purposes. We demonstrate that our model produces more believable animations from the viewpoint of human observers through a series of user studies.

    2. Simulating realistic crowd based on agent trajectories (pages 165–172)

      Libo Sun, Xiaona Li and Wenhu Qin

      Article first published online: 3 MAY 2013 | DOI: 10.1002/cav.1507


      This paper presents a model for simulating realistic crowd behaviors at low computational cost. In our approach, we classify the crowd into two categories, main characters and background characters, based on agents' trajectories extracted from video data and on changes in the environment. We adopt separate approaches to simulate the behaviors of the two categories, improving the realism of the scenario while guaranteeing the simulation rate.

    3. A collision avoidance behavior model for crowd simulation based on psychological findings (pages 173–183)

      Jin Hyoung Park, Francisco Arturo Rojas and Hyun Seung Yang

      Article first published online: 15 MAY 2013 | DOI: 10.1002/cav.1504


      This paper proposes a collision avoidance behavior model for crowd simulation based on psychological findings of human behaviors such as gaze-movement angle (GMA), side stepping, gait motion, and personal reaction bubble (PRB) to get better results in crowd simulation. The total loss of kinetic energy accumulated during an agent's movement and the ratio of the length of the path actually traveled to the length of the original path are used as key metrics.
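The two evaluation metrics named above can be sketched in a few lines (an illustrative sketch, not the authors' implementation; all function names and the sample paths are our own):

```python
import math

def path_length(points):
    """Total Euclidean length of a polyline given as (x, y) points."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def detour_ratio(traveled, planned):
    """Ratio of the length of the path actually traveled to the length
    of the originally planned path (1.0 means no detour at all)."""
    return path_length(traveled) / path_length(planned)

def kinetic_energy_loss(mass, speeds):
    """Accumulated loss of kinetic energy over an agent's movement:
    the sum of the drops in 0.5*m*v^2 between consecutive time steps."""
    ke = [0.5 * mass * v * v for v in speeds]
    return sum(max(a - b, 0.0) for a, b in zip(ke, ke[1:]))

# An agent that side-steps slightly around an obstacle travels a bit
# farther than the straight planned path, so the ratio is just above 1.
planned = [(0, 0), (10, 0)]
traveled = [(0, 0), (5, 1), (10, 0)]
print(detour_ratio(traveled, planned))
```

Lower values of both metrics indicate smoother, more energy-efficient avoidance behavior.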

    4. A heterogeneous CPU–GPU parallel approach to a multigrid Poisson solver for incompressible fluid simulation (pages 185–193)

      Hwi-Ryong Jung, Sun-Tae Kim, Junyong Noh and Jeong-Mo Hong

      Article first published online: 7 MAY 2013 | DOI: 10.1002/cav.1498


      We propose a novel heterogeneous CPU-GPU parallel multigrid Poisson solver that decomposes the high-frequency components of the residual field using a wavelet decomposition and conducts an additional smoothing on the CPU while the GPU handles the coarsest level evaluation. We demonstrate the efficiency of our solver with animations of smoke and turbulent flow with thermal buoyancy.

    5. Coupling elastic solids with smoothed particle hydrodynamics fluids (pages 195–203)

      Nadir Akinci, Jens Cornelis, Gizem Akinci and Matthias Teschner

      Article first published online: 13 MAY 2013 | DOI: 10.1002/cav.1499


      We propose a method for handling elastic solids in smoothed particle hydrodynamics fluids. Our approach samples triangulated surfaces of solids using boundary particles. In the case of large expansions, additional boundary particles are adaptively generated to close gaps and prevent fluid particle tunneling and undesired leakage. Furthermore, as an object compresses, particles are adaptively removed to avoid unnecessary computations. We demonstrate that our approach produces plausible interactions of smoothed particle hydrodynamics fluids with both slowly and rapidly deforming solids.

    6. Rigid-motion-inspired liquid character animation (pages 205–213)

      Guijuan Zhang, Dianjie Lu, Dengming Zhu, Lei Lv, Hong Liu and Xiangxu Meng

      Article first published online: 3 MAY 2013 | DOI: 10.1002/cav.1502


      We present a rigid-motion-inspired method for animating liquid characters. Our method allows an animator to control the motion of liquid characters with motion capture data. It animates the most visually interesting part of a liquid character, that is, it preserves the character's shape while producing sufficient liquid detail. The method is easy and intuitive to use while incurring little additional cost.

    7. Flexible and rapid animation of brittle fracture using the smoothed particle hydrodynamics formulation (pages 215–224)

      Feibin Chen, Changbo Wang, Buying Xie and Hong Qin

      Article first published online: 3 MAY 2013 | DOI: 10.1002/cav.1514


      A novel hybrid particle-based animation approach is presented for the flexible and rapid simulation of cracks in brittle material. A smoothed particle hydrodynamics formulation is adapted to solve the linear elastic mechanics for fracture animation; the hybrid sampling approach and effective shape representation scheme keep the computational burden low and offer advantages in mesh processing and crack propagation.

    8. Fire pattern analysis and synthesis using EigenFires and motion transitions (pages 225–235)

      Nima Nikfetrat and Won-Sook Lee

      Article first published online: 7 MAY 2013 | DOI: 10.1002/cav.1501


      We propose novel approaches to generating realistic fire animation using image-based techniques, statistical analysis, and motion transitions. By introducing “EigenFires,” we visualize the main features of fire, compress them, and even synthesize new fires from our database.

    9.
      Procedural modeling of trees based on convolution sums of divisor functions for real-time virtual ecosystems (pages 237–246)

      Jinmo Kim, Daeyeoul Kim and Hyungje Cho

      Article first published online: 15 MAY 2013 | DOI: 10.1002/cav.1506


      To model the variety of natural trees growing in a virtual ecosystem, a new procedural tree modeling method is proposed on the basis of the convolution sums of divisor functions. For the efficient construction of a real-time virtual ecosystem, a growth pattern based on the convolution sums of divisor functions is defined so that branch propagation follows a uniform pattern. In addition, a growth grammar and a procedural tree modeling method, by which processes such as branch propagation, the growth pattern of branches and leaves, and growth deformation for various tree generations are controlled intuitively and conveniently, are designed and evaluated through experiments.
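The number-theoretic quantity driving the growth pattern, the convolution sum of divisor functions, can be computed directly (a minimal sketch of the underlying quantity; the growth grammar itself is not reproduced here, and the function names are our own):

```python
def sigma(n, k=1):
    """Divisor function sigma_k(n): sum of the k-th powers of the divisors of n."""
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

def divisor_convolution(n, k=1):
    """Convolution sum of divisor functions:
    sum over m = 1..n-1 of sigma_k(m) * sigma_k(n - m)."""
    return sum(sigma(m, k) * sigma(n - m, k) for m in range(1, n))

print([sigma(m) for m in range(1, 7)])  # [1, 3, 4, 7, 6, 12]
print(divisor_convolution(6))           # 6+21+16+21+6 = 70
```

Such sums grow in a regular, predictable way, which is what makes them usable as a uniform branch-propagation pattern.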

    10. Porous deformable shell simulation with surface water flow and saturation (pages 247–254)

      Kiwon Um, Tae-Yong Kim, Youngdon Kwon and JungHyun Han

      Article first published online: 7 MAY 2013 | DOI: 10.1002/cav.1497


      This paper proposes a method for simulating the dynamics of porous deformable shells in the presence of water that floats on the surface or is absorbed into the interior. The proposed method enables various effects such as surface flow, capillary flow involving absorption and saturation of water, changes of the material properties caused by water saturation, and the deformable body dynamics including tearing.

    11. Facial performance illumination transfer from a single video using interpolation in non-skin region (pages 255–263)

      Hongyu Wu, Xiaowu Chen, Mengxia Yang and Zhihong Fang

      Article first published online: 3 MAY 2013 | DOI: 10.1002/cav.1519


      This paper proposes a novel video-based method to transfer the illumination from a single reference facial performance video to a target one taken under nearly uniform illumination. We use an edge-preserving filter and illumination component interpolation in non-skin regions to ensure the spatial smoothness and consistency of the illumination component. The illumination components of key frames are propagated to non-key frames to ensure temporal consistency of the illumination component between adjacent frames.

    12. Relighting abstracted image via salient edge-guided luminance field optimization (pages 265–274)

      Chunxiao Liu, Hong Li, Qunsheng Peng, Xun Wang and Enhua Wu

      Article first published online: 3 MAY 2013 | DOI: 10.1002/cav.1516


      We present an integrated image abstraction and relighting rendering system that incorporates dynamic lighting effects into abstracted images in an artistic style. It carries out, in order, prior-based image illumination decomposition, message-passing-based salient edge extraction, salient edge-guided image abstraction, and relighting. Experimental results show that our system can artistically adjust the illumination of the abstracted image and make it more vivid.

    13. Physically based cosmetic rendering (pages 275–283)

      Cheng-Guo Huang, Tsung-Shian Huang, Wen-Chieh Lin and Jung-Hong Chuang

      Article first published online: 13 MAY 2013 | DOI: 10.1002/cav.1523


      In this paper, we propose an integrated approach, which combines the Kubelka–Munk model and a screen-space skin rendering approach, to simulate 3D makeup effects. The Kubelka–Munk model is used to compute total transmittance when light passes through cosmetic layers, whereas the screen-space translucent rendering approach simulates the subsurface scattering effects inside human skin. The parameters of Kubelka–Munk model are obtained by measuring the optical properties of different cosmetic materials, such as foundations, blushes, and lipsticks.
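The single-layer reflectance and transmittance in the Kubelka-Munk model have standard closed forms, sketched below (a minimal illustration of the standard model, not the authors' measured-parameter pipeline; the sample K, S, X values are purely illustrative):

```python
import math

def kubelka_munk(K, S, X):
    """Reflectance R and transmittance T of a single pigment layer under
    the Kubelka-Munk model, for absorption coefficient K and scattering
    coefficient S (both per unit thickness) and layer thickness X."""
    a = 1.0 + K / S
    b = math.sqrt(a * a - 1.0)
    sinh_bSX = math.sinh(b * S * X)
    cosh_bSX = math.cosh(b * S * X)
    denom = a * sinh_bSX + b * cosh_bSX
    R = sinh_bSX / denom   # fraction of light reflected by the layer
    T = b / denom          # fraction transmitted through the layer
    return R, T

# A thin, weakly absorbing cosmetic layer transmits most of the light.
R, T = kubelka_munk(K=0.1, S=1.0, X=0.2)
print(R, T)
```

Because K > 0, some light is absorbed, so R + T stays below 1; stacking layers (e.g., foundation under blush) multiplies transmittances accordingly.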

    14. Real-time path planning in heterogeneous environments (pages 285–295)

      Norman Jaklin, Atlas Cook IV and Roland Geraerts

      Article first published online: 14 MAY 2013 | DOI: 10.1002/cav.1511


      Modern virtual environments can contain a variety of characters and traversable regions. Each character may have different preferences for the traversable region types. We present a novel path planning method named MIRAN that computes visually convincing paths while taking a character's region preferences into account.

    15.
      Realistic paint simulation based on fluidity, diffusion, and absorption (pages 297–306)

      Mi You, Taekwon Jang, Seunghoon Cha, Jihwan Kim and Junyong Noh

      Article first published online: 15 MAY 2013 | DOI: 10.1002/cav.1500


      We present a new method to create realistic paint simulations, utilizing characteristics of paint such as fluidity, diffusion, and absorption. Adopting smoothed particle hydrodynamics with a consideration of viscoelastic movement, we simulate the fluid motion of the paint and the solvent. To handle the diffusion of the pigment in the solvent, we utilize Fick's law. As time elapses, the Lucas-Washburn equation determines the absorption distance of the binder and the solvent.
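The absorption step can be sketched with the standard form of the Lucas-Washburn equation (an illustrative sketch, not the authors' implementation; the parameter values are made-up, water-like numbers):

```python
import math

def lucas_washburn_distance(gamma, radius, theta, viscosity, t):
    """Capillary penetration distance after time t under the
    Lucas-Washburn equation:
        L(t) = sqrt(gamma * radius * cos(theta) * t / (2 * viscosity))
    gamma: surface tension, radius: pore radius,
    theta: contact angle (radians), viscosity: dynamic viscosity."""
    return math.sqrt(gamma * radius * math.cos(theta) * t / (2.0 * viscosity))

# Water-like solvent in a micron-scale pore (SI units; illustrative values).
L1 = lucas_washburn_distance(0.072, 1e-6, 0.0, 1e-3, t=1.0)
L4 = lucas_washburn_distance(0.072, 1e-6, 0.0, 1e-3, t=4.0)
print(L1, L4)  # distance grows with the square root of time
```

The square-root time dependence is what makes absorption fast at first and then progressively slower, which matches the visual behavior of paint soaking into a medium.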

    16. Pencil drawing animation from a video (pages 307–316)

      Dongxue Liang and Kyoungju Park

      Article first published online: 13 MAY 2013 | DOI: 10.1002/cav.1520


      We present an automatic, efficient, and simple technique to create pencil drawing animation starting from a video. We combine pencil drawing stylization with a rigid body dynamics framework to translate and rotate strokes according to temporally filtered per-pixel optical flow vectors. Our framework effectively generates coherent animation of pencil strokes that preserves the structured appearance of charcoal or pastel, which is difficult to achieve with previous abstraction-based non-photorealistic animation.

    17. Realistic deformation of 3D human blood vessels (pages 317–325)

      Jaesung Park, Minsub Shim, Seon-Young Park, Yunku Kang and Myung-Soo Kim

      Article first published online: 3 MAY 2013 | DOI: 10.1002/cav.1510


      Using a dynamic bounding volume hierarchy for sweep surfaces, we present a real-time algorithm for realistically deforming 3D human blood vessels, while automatically detecting and avoiding interference among a large number of blood vessels under deformation.

    18. Toward socially responsible agents: integrating attachment and learning in emotional decision-making (pages 327–334)

      Maher Ben Moussa and Nadia Magnenat-Thalmann

      Article first published online: 7 MAY 2013 | DOI: 10.1002/cav.1515


      This paper aims at the creation of socially responsible agents. On the basis of emerging psychological theories, it presents an integration of emotions, attachment, and learning in emotional decision making, where emotions play a central role. It also presents an approach to emotion appraisal in which emotional attachment determines the intensities of emotions. Emotions, in turn, are used to calculate the emotional attachment toward users and to learn to predict future consequences.

    19.
      Towards polite virtual agents using social reasoning techniques (pages 335–343)

      JeeHang Lee, Tingting Li and Julian Padget

      Article first published online: 15 MAY 2013 | DOI: 10.1002/cav.1517


      This paper aims to model the politeness of virtual humans in social situations using logic-based approaches, through a high-level agent architecture combined with a normative framework capable of social reasoning. Using experiments with a simple collision avoidance model, we show the effectiveness of the polite navigation behaviour designed with our approach and the adequacy of this architecture for modelling theories of politeness across circumstances.

    20. Interactive scenario generation for mission-based virtual training (pages 345–354)

      Linbo Luo, Haiyan Yin, Wentong Cai, Michael Lees and Suiping Zhou

      Article first published online: 14 MAY 2013 | DOI: 10.1002/cav.1525


      For a virtual training system, effectively and quickly generating training scenarios is a challenging issue. In this paper, we introduce a scenario generation framework for mission-based virtual training, which aims to generate scenarios from both the trainer's and trainees' perspectives. The framework is designed to generate scenarios that reflect the trainer's preferences over different mission objectives and adapt to different trainees' skill levels.

    21.
      Classification of human motion based on affective state descriptors (pages 355–363)

      Gokcen Cimen, Hacer Ilhan, Tolga Capin and Hasmet Gurcay

      Article first published online: 15 MAY 2013 | DOI: 10.1002/cav.1509


      The objective of this study is to analyze human body movements and postures in the spatial and temporal structure of motion capture data and to extract features that are indicative of certain emotions in terms of affective state descriptors. Our contribution comprises identifying the descriptors directly or indirectly related to emotion classification in human motion and conducting a comprehensive analysis of these descriptors (features), which fall into three categories: posture descriptors, dynamic descriptors, and frequency-based descriptors.

    22. Compression of 3D mesh sequences by temporal segmentation (pages 365–375)

      Guoliang Luo, Frederic Cordier and Hyewon Seo

      Article first published online: 9 MAY 2013 | DOI: 10.1002/cav.1522


      We describe a compression method for 3D animated mesh sequences that has notable advantages over existing techniques. The key idea of this method is to cluster the animation frames according to their pose similarity and to compress each cluster using principal component analysis, in search of a smaller number of eigen-bases.
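The per-cluster PCA step can be sketched with a plain SVD (a minimal sketch of the stated idea, not the authors' implementation; array shapes, function names, and the random test data are our own):

```python
import numpy as np

def compress_cluster(frames, k):
    """Compress one cluster of animation frames with PCA.
    frames: (F, 3V) array, one flattened mesh (V vertices) per row.
    Returns the mean frame, the top-k eigen-bases, and per-frame coefficients."""
    mean = frames.mean(axis=0)
    centered = frames - mean
    # SVD of the centered data; rows of Vt are the principal directions.
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:k]               # (k, 3V) eigen-bases
    coeffs = centered @ basis.T  # (F, k) per-frame coefficients
    return mean, basis, coeffs

def decompress(mean, basis, coeffs):
    """Reconstruct the frames from the compact PCA representation."""
    return mean + coeffs @ basis

# Toy cluster: 20 frames of a 10-vertex mesh (30 coordinates each).
rng = np.random.default_rng(0)
frames = rng.standard_normal((20, 30))
mean, basis, coeffs = compress_cluster(frames, k=10)
recon = decompress(mean, basis, coeffs)
print(np.abs(frames - recon).max())  # error shrinks as k grows
```

Storing the mean, k basis rows, and F×k coefficients instead of F×3V coordinates is what yields the compression; similar poses within a cluster need few bases.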

    23. Draft-space warping: grading of clothes based on parametrized draft (pages 377–386)

      Moon-Hwan Jeong and Hyeong-Seok Ko

      Article first published online: 3 MAY 2013 | DOI: 10.1002/cav.1503


      Draft-space warping is a fast and automatic method for garment grading. To develop this grading technique, we introduce a retargeting technique widely used in the computer graphics field and draw insight from the process of drawing the patternmaking draft (sloper) in the clothing field. Our approach minimizes the designer's reliance on specialized know-how and saves time when grading both real and virtual garments.

    24. Live accurate and dense reconstruction from a handheld camera (pages 387–397)

      Yadang Chen, Chuanyan Hao, Zhongmou Cai, Wen Wu and Enhua Wu

      Article first published online: 3 MAY 2013 | DOI: 10.1002/cav.1508


      The live reconstruction system is designed as a parallel pipeline: live camera tracking is first implemented by a real-time structure-from-motion algorithm; accurate depth maps are then generated from selected frame bundles under three restrictions; finally, the depth maps are fused into a dense mesh by a linear algorithm.

    25. A semantic feature for human motion retrieval (pages 399–407)

      Tian Qi, Yinfu Feng, Jun Xiao, Yueting Zhuang, Xiaosong Yang and Jianjun Zhang

      Article first published online: 15 MAY 2013 | DOI: 10.1002/cav.1505


      We propose a high-level semantic feature in a low-dimensional space to represent the essential characteristics of different motion classes. On the basis of statistical training of a Gaussian mixture model, this feature can effectively achieve motion matching at both the global clip level and the local frame level. Experimental results show that our approach can retrieve similar motions, with rankings, from a large motion database in real time and can also annotate motions automatically on the fly.
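The likelihood-based matching idea can be illustrated with a single Gaussian per motion class (a deliberate simplification of the paper's Gaussian mixture models; all names and toy feature values below are our own):

```python
import numpy as np

def gaussian_logpdf(x, mean, cov):
    """Log-density of feature vector x under a multivariate Gaussian."""
    d = x.shape[-1]
    diff = x - mean
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + diff @ inv @ diff)

def classify_frame(frame, class_models):
    """Assign a motion frame to the class whose trained (single-component)
    Gaussian model gives it the highest likelihood."""
    return max(class_models, key=lambda c: gaussian_logpdf(frame, *class_models[c]))

# Two toy motion classes in a 2D semantic feature space.
models = {
    "walk": (np.array([0.0, 0.0]), np.eye(2)),
    "run":  (np.array([5.0, 5.0]), np.eye(2)),
}
print(classify_frame(np.array([4.5, 5.2]), models))  # → run
```

Clip-level matching would aggregate these per-frame log-likelihoods over a whole sequence before ranking database entries.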

    26. Interactive elastic motion editing through space–time position constraints (pages 409–417)

      Siwang Li, Jin Huang, Mathieu Desbrun and Xiaogang Jin

      Article first published online: 13 MAY 2013 | DOI: 10.1002/cav.1521


      Given an input motion of an elastic body, our approach enables the user to interactively edit node positions in order to alter and fine-tune the motion. When the user edits the motion through position constraints, our system produces a new animation at interactive rates in which the constraints are visually met.

    27. Spatiotemporal coupling with the 3D+t motion Laplacian (pages 419–428)

      T. Le Naour, N. Courty and S. Gibet

      Article first published online: 14 MAY 2013 | DOI: 10.1002/cav.1518


      This paper proposes a new representation of motion based on the Laplacian expression of a 3D+t graph: the set of connected graphs given by the skeleton over time. Our approach enables easy and interactive editing, correction, or retargeting of motion. Using several examples, we demonstrate the benefits of our method, in particular the preservation of the spatiotemporal properties of the motion in an interactive context.

    28. Introducing tangible objects into motion controlled gameplay using Microsoft® Kinect™ (pages 429–441)

      Gamze Bozgeyikli, Evren Bozgeyikli and Veysi İşler

      Article first published online: 13 MAY 2013 | DOI: 10.1002/cav.1513


      In this study, a tangible gameplay interaction method is developed using Microsoft Kinect, which senses hand-held objects together with their dimensions and incorporates them into gameplay. The proposed algorithm is implemented in an experimental game, and a user study is performed to measure the effects of tangible interaction on the Kinect gameplay experience. The results reveal that improved gameplay with more natural and accurate motion control is achieved.
