Navigation is the process by which people control their movement in virtual environments, and it is a core functional requirement for all virtual environment (VE) applications. Users require the ability to move, controlling orientation, direction of movement and speed, in order to achieve a particular goal within a VE. Navigation is rarely an end in itself (the end point is typically interaction with the visual representations of data), but applications often place a high demand on navigation skills, which in turn means that a high level of support for navigation is required from the application. On desktop systems, navigation in non-immersive environments is usually supported through the usual hardware devices of mouse and keyboard. Previous work by the authors shows that many users experience frustration when trying to perform even simple navigation tasks: users complain about getting lost, becoming disorientated and finding the interface 'difficult to use'. In this paper we report on work in progress in exploiting natural language processing (NLP) technology to support navigation in non-immersive virtual environments. A multi-modal system has been developed which supports a range of high-level (spoken) navigation commands, and indications are that spoken dialogue interaction is an effective alternative to mouse and keyboard interaction for many tasks. We conclude that multi-modal interaction, combining technologies such as NLP with mouse and keyboard, may offer the most effective interaction with VEs, and we identify a number of areas where further work is necessary.
ACM CCS: I.3.6 Computer Graphics Methodology and Techniques—Interaction techniques, I.3.7 Three-Dimensional Graphics and Realism—Virtual Reality, I.2.7 Natural Language Processing—Speech Recognition and Synthesis