
SPECIAL ISSUE ARTICLE

Enhancing multimodal learning through personalized gesture recognition

M.J. Junokas (corresponding author)

University of Illinois at Urbana‐Champaign, United States

Correspondence: Michael Junokas, National Center for Supercomputing Applications, University of Illinois at Urbana‐Champaign, 1205 W Clark St, Room 2103, Urbana, IL 61801. Email: junokas@illinois.edu

R. Lindgren

University of Illinois at Urbana‐Champaign, United States

J. Kang

University of Illinois at Urbana‐Champaign, United States

J.W. Morphew

University of Illinois at Urbana‐Champaign, United States

First published: 15 April 2018

Abstract

Gesture recognition systems are important tools for leveraging movement‐based interactions in multimodal learning environments, but personalizing these interactions has proven difficult. We offer an adaptable model that uses multimodal analytics, enabling students to define their physical interactions with computer‐assisted learning environments. We argue that these interactions are foundational to developing stronger connections between students' physical actions and digital representations within a multimodal space. Our model uses real‐time learning analytics for gesture recognition, training a hierarchical hidden Markov model with a "one‐shot" construct that learns from user‐defined gestures and draws on three modes of data: skeleton positions, kinematic features, and internal model parameters. Through an empirical comparison with a "pretrained" model, we show that our model achieves higher recognition accuracy in repeatability and recall tasks. This suggests that our approach is a promising way to create productive experiences with gesture‐based educational simulations, promoting both personalized interfaces and analytics for multimodal learning scenarios.
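To make the pipeline concrete, the sketch below shows what "one‐shot" training on a user‐defined gesture might look like, assuming Python with NumPy and the hmmlearn library. It is a minimal illustration, not the authors' implementation: the paper's model is a hierarchical hidden Markov model, whereas hmmlearn provides only flat HMMs, so a GaussianHMM stands in, and the feature step (joint positions plus frame‐to‐frame velocities) is a hypothetical reading of the skeleton and kinematic data modes.

    import numpy as np
    from hmmlearn import hmm  # flat Gaussian HMM as a stand-in for the paper's hierarchical HMM

    def kinematic_features(skeleton):
        # skeleton: (n_frames, n_joints * 3) array of joint positions, e.g. from a depth camera
        velocity = np.gradient(skeleton, axis=0)   # frame-to-frame kinematics
        return np.hstack([skeleton, velocity])     # positions + kinematic features

    def train_one_shot(demonstration, n_states=5):
        # Fit an HMM to a single user-recorded gesture ("one-shot" training)
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=25)
        model.fit(kinematic_features(demonstration))
        return model

    def recognize(models, observation):
        # Label a new movement with the gesture whose model scores it highest (log-likelihood)
        feats = kinematic_features(observation)
        return max(models, key=lambda name: models[name].score(feats))

    # Usage: each student defines each gesture with a single demonstration, then
    # new movements are labeled against that personal gesture vocabulary:
    # models = {"orbit": train_one_shot(orbit_demo), "push": train_one_shot(push_demo)}
    # label = recognize(models, new_movement)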

Lay Description

What is already known about this topic:

  • Gesture recognition systems provide opportunities for embodiment in learning environments.
  • Current systems do not take full advantage of active movement and control research paradigms.
  • This leads to unnatural interactions and less accurate representations of user expression.

What this paper adds:

  • We contribute a "one‐shot" gesture recognition model that enables personalized human–computer interaction.
  • This personalization can lead to more accurate multimodal capture and analysis.
  • It also creates an intuitive, user‐defined interaction with gesture recognition systems.
  • In comparison with a "pretrained" model, our model recognizes users' gestures more accurately.

Implications for practice and/or policy:

  • This empowers students, enabling a more complete analysis of their interactions.
  • It forms deeper connections between students' movements and conceptions, promoting embodiment.
  • It transforms human–computer interaction in educational settings by personalizing interaction in real time.

Number of times cited: 1

  • Exploring Emergent Features of Student Interaction within an Embodied Science Learning Simulation. Multimodal Technologies and Interaction, 2(3), 39 (2018). https://doi.org/10.3390/mti2030039