Computer Animation and Virtual Worlds

Special Issue: CASA 2014

May-August 2014

Volume 25, Issue 3-4

Pages i–ii, 199–519

  1. Issue Information

      Issue information (pages i–ii)

      Article first published online: 23 MAY 2014 | DOI: 10.1002/cav.1606

  2. Editorial

  3. Special Issue Papers

    1. Rapid avatar capture and simulation using commodity depth sensors (pages 201–211)

      Ari Shapiro, Andrew Feng, Ruizhe Wang, Hao Li, Mark Bolas, Gerard Medioni and Evan Suma

      Article first published online: 23 MAY 2014 | DOI: 10.1002/cav.1579

      We demonstrate a method of acquiring a 3D model of a human using commodity scanning hardware and then controlling that 3D figure in a simulated environment in only a few minutes. The 3D models that are captured are suitable for use in applications where recognition and distinction among characters by shape, form, or clothing are important, such as small group or crowd simulations or other socially oriented applications.

    2. Interactive model-based reconstruction of the human head using an RGB-D sensor (pages 213–222)

      Michael Zollhöfer, Justus Thies, Matteo Colaianni, Marc Stamminger and Günther Greiner

      Article first published online: 23 MAY 2014 | DOI: 10.1002/cav.1584

      We present a novel method for the interactive, markerless reconstruction of human heads using a single commodity RGB-D sensor. Our entire reconstruction pipeline is implemented on the GPU and yields high-quality reconstructions of the human head through an interactive and intuitive reconstruction paradigm. All obtained reconstructions share a common topology and can be used directly as assets for games, films, and various virtual reality applications.

    3. The semantic space for facial communication (pages 225–233)

      Susana Castillo, Christian Wallraven and Douglas W. Cunningham

      Article first published online: 23 MAY 2014 | DOI: 10.1002/cav.1593

      Harnessing non-verbal communication can lend artificial communication agents greater depth and realism, but it requires a sound understanding of the relationship between cognition and expressive behaviour. We extend the traditional word-based methodology to use actual videos and then extract the semantic/cognitive space of facial expressions. The recovered space captures the full range of facial communication well and is highly suitable for semantic-driven facial animation.

    4. Real-time depth-of-field rendering using single-layer composition (pages 235–243)

      Xiaoxin Fang, Bin Sheng, Wen Wu, Zengzhi Fan and Lizhuang Ma

      Article first published online: 16 MAY 2014 | DOI: 10.1002/cav.1591

      We propose a real-time depth-of-field rendering algorithm that uses a pinhole image (left) and a depth map (middle) to produce an image with depth of field (right). Two different blurring functions are used in different situations to reduce artifacts, and the blurring process is implemented on the GPU to improve performance.

    5. A hybrid level-of-detail representation for large-scale urban scenes rendering (pages 245–255)

      Shengchuan Zhou, Innfarn Yoo, Bedrich Benes and Ge Chen

      Article first published online: 16 MAY 2014 | DOI: 10.1002/cav.1582

      We propose a novel hybrid level-of-detail (LOD) approach that combines point-based, line-based, and splat-based representations for rendering large-scale urban scenes. It provides a 10× speed-up over the ground-truth models and is about four times faster than geometric LOD. Perceptual evaluations also show that our approach produces visual quality similar to that of textured triangle meshes.

    6. A haptic-enabled novel approach to cardiovascular visualization (pages 257–271)

      Shamima Yasmin, Nan Du, James Chen and Yusheng Feng

      Article first published online: 16 MAY 2014 | DOI: 10.1002/cav.1586

      This paper proposes a fully automatic technique for reconstructing a patient-specific 3D coronary artery model from a series of 2D intravascular ultrasound images, from which the coronary plaque deposit can be properly delineated in a 3D visual-haptic environment.

    7. Human motion retrieval based on freehand sketch (pages 273–281)

      Zhangpeng Tang, Jun Xiao, Yinfu Feng, Xiaosong Yang and Jian Zhang

      Article first published online: 19 MAY 2014 | DOI: 10.1002/cav.1602

      We present an integrated framework for human motion retrieval based on freehand sketches. Using the limb-direction feature, motions are indexed with a k-d tree, and a posture-by-posture retrieval algorithm then retrieves consecutive motions from the large motion database. Moreover, our method can retrieve combined motions, spliced together from existing motions in the database, by using the motion transition graph.

    8. A genetic algorithm approach to human motion capture data segmentation (pages 283–292)

      Na Lv, Yan Huang, Zhiquan Feng and Jingliang Peng

      Article first published online: 23 MAY 2014 | DOI: 10.1002/cav.1597

      In this paper, we propose a novel genetic algorithm approach to human motion capture (MoCap) data segmentation. To the best of our knowledge, we are the first to apply genetic algorithms and sparse learning to MoCap data segmentation, leading to excellent segmentation performance, as demonstrated experimentally.

    9. Real-time motion data annotation via action string (pages 293–302)

      Tian Qi, Jun Xiao, Yueting Zhuang, Hanzhi Zhang, Xiaosong Yang, Jianjun Zhang and Yinfu Feng

      Article first published online: 23 MAY 2014 | DOI: 10.1002/cav.1590

      This paper presents a novel online motion annotation method that uses a probabilistic pose feature based on a Gaussian mixture model to represent each pose, so that a motion clip can be represented as an action string. A dynamic programming-based string matching method is then introduced to compare action strings, and a hierarchical action string structure is constructed to label a given action string in real time. Experimental results demonstrate the efficacy and efficiency of our method.

    10. Human motion variation synthesis with multivariate Gaussian processes (pages 303–311)

      Liuyang Zhou, Lifeng Shang, Hubert P.H. Shum and Howard Leung

      Article first published online: 23 MAY 2014 | DOI: 10.1002/cav.1599

      In this paper, we propose a novel generative probabilistic model to synthesize variations of human motion. Our key idea is to model the conditional distribution of each joint via SLFM, which can effectively model the correlations between the degrees of freedom of joints. Motions generated by our method show richer variations than those of existing methods.

    11. Interactive motion synthesis with optimal blending (pages 313–321)

      Masaki Oshita

      Article first published online: 16 MAY 2014 | DOI: 10.1002/cav.1578

      In this paper, we propose an interactive motion synthesis technique that synthesizes a continuous motion sequence from given elementary motions. We have extended a previous approach that determined the appropriate synthesis method and blending range and have introduced an optimal blending range and a weight function, which are determined for each blending segment for the upper and lower body. Our method can be used for both animation generation and interactive character control.

    12. Bulging-free dual quaternion skinning (pages 323–331)

      YoungBeom Kim and JungHyun Han

      Article first published online: 16 MAY 2014 | DOI: 10.1002/cav.1604

      We propose to post-process the dual quaternion skinning algorithm; the bulging-joint artifact can be removed by correcting the vertex positions, and the distorted-normal artifact can be removed by correcting the vertex normals. The proposed method is simple yet does not suffer from the collapsing-joint artifact, the candy-wrapper artifact, and the bulging-joint artifact.

    13. Hierarchical structures for collision checking between virtual characters (pages 333–342)

      Sybren A. Stüvel, Nadia Magnenat-Thalmann, Daniel Thalmann, Arjan Egges and A. Frank van der Stappen

      Article first published online: 23 MAY 2014 | DOI: 10.1002/cav.1592

      Simulating a crowded scene like a busy shopping street requires tight packing of virtual characters. We introduce the bounding cylinder hierarchy (BCH), a bounding volume hierarchy that uses vertical cylinders as bounding shapes. We compare our BCH with common collision shapes, in terms of query time, construction time, and represented volume. To get an indication of possible crowd densities, we investigate how close characters can be before collision is detected and finally propose a critical maximum depth for the BCH.

    14. Deformable polygonal agents in crowd simulation (pages 343–352)

      Thomas Pitiot, David Cazier, Thomas Jund, Arash Habibi and Pierre Kraemer

      Article first published online: 23 MAY 2014 | DOI: 10.1002/cav.1581

      To produce impressive virtual worlds, real-time crowd simulations require large, detailed scenes populated by agents with complex shapes and geometry. This paper addresses the issue of handling deformable polygonal agents with arbitrary shapes in real-time crowd simulations. The proposed multiresolution framework supports environments with arbitrary topologies and provides tools for efficient proximity queries.

    15. Flock morphing animation (pages 353–362)

      Xinjie Wang, Linling Zhou, Zhigang Deng and Xiaogang Jin

      Article first published online: 23 MAY 2014 | DOI: 10.1002/cav.1580

      We propose a new morphing technique with four main contributions: (i) it provides a complete solution by seamlessly combining 3D shape morphing and crowd simulation, including tetrahedralization, path planning and control, and deformation; (ii) we introduce a new style control into local tetrahedron trajectory planning; (iii) we design a new obstacle avoidance algorithm for velocity generation of the crowd; and (iv) we introduce a new smooth shape interpolation algorithm between two arbitrary tetrahedra, which can potentially be used in other morphing applications.

    16. A personality model for animating heterogeneous traffic behaviors (pages 363–373)

      Xuequan Lu, Zonghui Wang, Mingliang Xu, Wenzhi Chen and Zhigang Deng

      Article first published online: 20 MAY 2014 | DOI: 10.1002/cav.1575

      We propose a novel approach to modeling heterogeneous traffic behaviors by adapting a well-established personality trait model (Eysenck's PEN model: psychoticism, extraversion, and neuroticism) to widely used traffic simulation approaches. We trained regression models to bridge low-level traffic simulation parameters and high-level perceived traffic behaviors (i.e., adjectives of the PEN model and the three PEN traits). An additional user study validates the effectiveness and usefulness of our approach, which can also produce interesting emergent traffic patterns, including the faster-is-slower effect and the sticking-in-a-pin-wherever-there-is-room effect.

    17. Modeling social behaviors in an evacuation simulator (pages 375–384)

      Mei Ling Chu, Paolo Parigi, Kincho Law and Jean-Claude Latombe

      Article first published online: 23 MAY 2014 | DOI: 10.1002/cav.1595

      Building occupants often make evacuation decisions by perceiving information about the environment and interacting with other occupants. We present SAFEgress (Social Agent For Egress), an agent-based simulation platform that models occupants as social agents who decide their evacuation actions based on their knowledge of the building and their interactions with social groups and the neighboring crowd. Results from the SAFEgress prototype show that both the agents' familiarity with the building and social influence can significantly affect egress performance.

    18. An all-in-one efficient lane-changing model for virtual traffic (pages 385–393)

      Hua Wang, Tianlu Mao, Xingchen Kang and Zhaoqi Wang

      Article first published online: 23 MAY 2014 | DOI: 10.1002/cav.1576

      This paper presents a new lane-changing model that takes both the decision-making and the actual lane-changing action into account for a more detailed and accurate traffic simulation. A whole lane-changing process, whether it can occur and how it proceeds, is determined by whether a valid trajectory exists, and this determination takes only O(1) time. Experimental results show that the method can describe lane-changing processes realistically and efficiently.

    19. Towards a data-driven approach to scenario generation for serious games (pages 395–404)

      Linbo Luo, Haiyan Yin, Wentong Cai, Michael Lees, Nasri Bin Othman and Suiping Zhou

      Article first published online: 19 MAY 2014 | DOI: 10.1002/cav.1588

      In the development of serious games, one critical challenge is authoring a large set of scenarios for different training objectives. In this paper, we propose a data-driven approach to automatically generate scenarios for serious games. By designing an artificial intelligence (AI) player model that imitates a human player's behaviors, the proposed approach uses simulated AI player performance data to automatically construct the scenario evaluation function for scenario generation.

    20. Populating semantic virtual environments (pages 405–412)

      Cameron D. Pelkey and Jan M. Allbeck

      Article first published online: 20 MAY 2014 | DOI: 10.1002/cav.1587

      Knowledge representation within simulated environments is a growing field of research, aimed at embedding detailed descriptors in the environment that give a virtual agent the knowledge necessary for decision-making and interaction. Here, we offer an affordable method for semi-automating the generation and injection of semantic properties into a virtual environment, with the goal of producing more natural agent-object interaction.

    21. Hierarchical mesh deformation with shape preservation (pages 413–422)

      Yong Zhao, Junyu Dong, Bin Pan and Chunxia Xiao

      Article first published online: 16 MAY 2014 | DOI: 10.1002/cav.1596

      Deforming flexible objects is a difficult problem in computer animation. This paper presents a novel hierarchical, shape-preserving approach to address it. Experiments on various data demonstrate that our algorithm is intuitive, efficient, and effective in deforming large meshes.

    22. Real-time physical deformation and cutting of heterogeneous objects via hybrid coupling of meshless approach and finite element method (pages 423–435)

      Chen Yang, Shuai Li, Lili Wang, Aimin Hao and Hong Qin

      Article first published online: 23 MAY 2014 | DOI: 10.1002/cav.1594

      This paper advocates a novel method for real-time, large-scale physical deformation and arbitrary cutting simulation of heterogeneous objects with multi-material distribution, whose originality centers on the tight coupling of domain-specific finite element method and material distance-aware meshless approach in a CUDA-centric parallel simulation framework.

    23. Macroscopic and microscopic deformation coupling in up-sampled cloth simulation (pages 437–446)

      Shunsuke Saito, Nobuyuki Umetani and Shigeo Morishima

      Article first published online: 23 MAY 2014 | DOI: 10.1002/cav.1589

      The left side shows the elastic properties of a layered object built from unit cells (the hard material is shown as a purple mesh, the soft one as an off-white mesh). The right side shows the results of homogenizing the fine heterogeneous cloth on the left.

    24. Adaptive skeleton-driven cages for mesh sequences (pages 447–455)

      Xue Chen and Jieqing Feng

      Article first published online: 23 MAY 2014 | DOI: 10.1002/cav.1577

      Our method combines the strengths of skeleton-based and cage-based structures to represent a mesh sequence for reuse and post-editing. A simple sketch-based method is adopted to extract a hierarchical skeleton and construct a skeleton-driven cage from the rest-pose mesh. The proposed adaptive cage generation method makes it possible to faithfully reproduce mesh sequences, and the cage can be refined automatically to further improve quality.

    25. Real-time simulation of ductile fracture with oriented particles (pages 457–465)

      Min Gyu Choi

      Article first published online: 23 MAY 2014 | DOI: 10.1002/cav.1601

      This paper presents a practical approach for real-time simulation of large deformation and ductile fracture with oriented particles. The proposed method finds the optimal rotation and the optimal stretch in shape matching. The newly introduced optimal stretch leads to a material strain that can be employed in the plastic flow and fracture criteria. Experimental results show that the proposed method can robustly simulate large, elastoplastic deformation and ductile fracture of large visual meshes in real time.

    26. Turbulence synthesis for shape-controllable smoke animation (pages 467–474)

      Ben Yang and Xiaogang Jin

      Article first published online: 16 MAY 2014 | DOI: 10.1002/cav.1585

      We introduce procedural synthesis methods into shape-controllable smoke animation to enhance fine-scale details. From a technical point of view, we develop a new synthesis parameter that can adjust turbulence behaviors according to control force and vorticity velocity. As a result, our synthesis algorithm can enhance turbulence details while reducing unsatisfactory fluid control effects of large-scale noises.

    27. Visual fluid animation via lifting wavelet transform (pages 475–485)

      Shiguang Liu, Yixin Xu, Junyong Noh and Yiying Tong

      Article first published online: 14 MAY 2014 | DOI: 10.1002/cav.1574

      This paper proposes a novel method for efficiently enhancing visually small-scale details. Unlike previous work, our method detects and improves fluid details in the frequency domain via lifting wavelet decomposition. Compared with previous work, our results reproduce more visually important details at a similar cost.

    28. Collaborative virtual training with physical and communicative autonomous agents (pages 487–495)

      Thomas Lopez, Pierre Chevaillier, Valérie Gouranton, Paul Evrard, Florian Nouviale, Mukesh Barange, Rozenn Bouville and Bruno Arnaldi

      Article first published online: 16 MAY 2014 | DOI: 10.1002/cav.1583

      We present an integrated model of a collaborative virtual environment for training (CVET), focusing on abstracting the actors' nature to define a homogeneous collaboration model for users and virtual agents. To this end, we define a complete model of a physically simulated autonomous agent able to collaborate and communicate in natural language with real users in such environments. This contribution can be seen as a baseline of the reasoning components to consider when building a CVET.

    29. Assistant agents to advice users in hybrid structured 3D virtual environments (pages 497–506)

      Pablo Almajano, Maite Lopez-Sanchez, Inmaculada Rodriguez and Tomas Trescak

      Article first published online: 23 MAY 2014 | DOI: 10.1002/cav.1598

      Hybrid structured 3D virtual environments model serious activities in 3D spaces where participants' interactions are regulated by an organisation-centered multi-agent system. We propose personal assistants that advise the user on how to achieve goals in these spaces. Advice is computed using our proposed Plan-eA algorithm, which generates plans for the user that include not only his or her actions but also those of other users. A comparative analysis, with and without assistance, demonstrates that both efficiency and efficacy improve with assistance.

    30. A model for social spatial behavior in virtual characters (pages 507–519)

      Nahid Karimaghalou, Ulysses Bernardet and Steve DiPaola

      Article first published online: 16 MAY 2014 | DOI: 10.1002/cav.1600

      We present a social navigation model that aims at generating human-like spatial behavior for a virtual human in a social setting with group dynamics. We employ an engineering approach by defining a dynamic representation of interest and then using it as the psychometric function that regulates the behavior of the agent. Our work is a step toward models for generating more plausible social spatial behavior for virtual characters based on both internal dynamics and attributes of the social environment.
