Computer Animation and Virtual Worlds

Cover image for Vol. 28 Issue 2

Edited By: Nadia Magnenat Thalmann and Daniel Thalmann

Impact Factor: 0.548

ISI Journal Citation Reports © Ranking: 2015: 88/106 (Computer Science, Software Engineering)

Online ISSN: 1546-427X

Featured

  • Animating synthetic dyadic conversations with variations based on context and agent attributes

    The marketplace scenario.

  • An embodied approach to arthropod animation

    Select frames of a tarantula climbing onto a vertical wall. Green dots show the intersection bounds of the rudimentary sensing mechanism, used to judge the proximity and relative angle of objects in the creature's path.

  • A system for automatic animation of piano performances

    Key poses of finger crossover while playing scales. The first row shows a key frame of the thumb crossing over the middle finger while playing the C-major scale, and the second row shows a key frame of the thumb crossing over the ring finger while playing the D-major scale, both from three perspectives. Note that the ring/middle finger firmly presses down the keys, the fingers avoid collisions with black keys in C-major, the wrist maintains a natural rotation, and the thumb is positioned well on the key to play it after crossing over.

  • Painterly rendering techniques: a state-of-the-art review of current approaches

Sample source input images (A and A′) along with the target input images (B) and the result outputs (B′) produced using one of the reviewed techniques.

  • Haptic collision handling for simulation of transnasal surgery

    The endoscope (green) inside the nasal cavity during simulation of transnasal surgery. In the larger image, some interior structures were made visible. The anatomy is complex and challenging for computation of tissue–tool interaction.


Recently Published Issues

Read the latest research from Computer Animation and Virtual Worlds

Recently Published Articles

  1. A comparative study of k-nearest neighbour techniques in crowd simulation

    Jordi L. Vermeulen, Arne Hillebrand and Roland Geraerts

    Version of Record online: 21 APR 2017 | DOI: 10.1002/cav.1775

We compare nine different implementations of data structures used to answer k-nearest neighbour queries in the context of crowd simulation. We find that the nanoflann implementation of a k-d tree offers by far the best performance across many different scenarios, processing 100,000 agents in about 35 ms on a fast consumer PC.

  2. Constructive approach for smoke plume animation using turbulent toroidal vortices

    Oyundolgor Khorloo and Enkhbayar Altantsetseg

    Version of Record online: 21 APR 2017 | DOI: 10.1002/cav.1772

In this paper, we propose an efficient approach for generating plausible smoke animation at interactive rates. Our approach simulates the behavior of gaseous phenomena such as the turbulent smoke from a steam locomotive. The key idea is that vortex flows generated by torus-shaped smoke primitives are passively advected in a wind field to reproduce the turbulent flow of smoke.

  3. ALET: Agents Learning their Environment through Text

    J. Timothy Balint and Jan Allbeck

    Version of Record online: 21 APR 2017 | DOI: 10.1002/cav.1759

    We present Agents Learning their Environment through Text, ALET, in order to connect graphical objects to virtual agent actions. ALET creates these connections through large unstructured text data and natural language knowledge bases. We compare ALET to other generation methods and show that it is able to more accurately differentiate both the meaning of actions and connections between actions and graphical objects.

  4. High-fidelity iridal light transport simulations at interactive rates

    Boris Kravchenko, Gladimir V. G. Baranoski, Tenn Francis Chen, Erik Miranda and Spencer R. Van Leeuwen

    Version of Record online: 19 APR 2017 | DOI: 10.1002/cav.1755

    First-principles models of light interaction with complex organic materials like the human iris are considered excessively time consuming for rendering and visualization applications requiring interactive rates. In this paper, we propose a strategy to achieve an optimal balance between fidelity and performance in the reproduction of iridal chromatic attributes. We believe that the proposed strategy represents a step toward the real-time and predictive synthesis of high-fidelity iridal images for such applications, and it can be extended to other biological structures.

  5. MAVE: Maze-based immersive virtual environment for new presence and experience

    Jiwon Lee, Kisung Jeong and Jinmo Kim

    Version of Record online: 19 APR 2017 | DOI: 10.1002/cav.1756

To provide users with a sense of presence and experience, this study presents a MAze-based immersive Virtual Environment (MAVE). MAVE consists of a new immersive virtual scene based on a user-oriented maze terrain authoring system and immersive interaction using a novel portable walking simulator. Through various technical and statistical experiments, this study confirms that MAVE can open new research directions on enhancing immersion in virtual reality content without inducing VR sickness.
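The k-nearest-neighbour query benchmarked in the first article above can be illustrated with a minimal brute-force baseline. This is only a sketch of the query semantics, not the paper's implementation or the nanoflann k-d tree it recommends; the function name and the random agent data are hypothetical.

```python
import heapq
import math
import random

def knn_brute_force(agents, query, k):
    """Return the k agents nearest to `query` by Euclidean distance.

    A brute-force linear scan, O(n) per query; accelerated structures
    such as k-d trees answer the same query in roughly O(log n).
    """
    return heapq.nsmallest(k, agents, key=lambda a: math.dist(a, query))

# Hypothetical crowd: 1,000 agents at random 2D positions.
random.seed(42)
agents = [(random.uniform(0, 100), random.uniform(0, 100))
          for _ in range(1000)]

# The 5 agents closest to the point (50, 50), nearest first.
neighbours = knn_brute_force(agents, (50.0, 50.0), k=5)
```

A crowd simulator would issue one such query per agent per frame (here 100,000 agents), which is why the article's comparison of accelerated spatial data structures matters for interactive rates.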
