Keywords:

  • virtual patient;
  • artificial intelligence;
  • medical education;
  • anatomy;
  • patient simulation;
  • problem-based learning;
  • PBL;
  • access grid;
  • traumatic head injury;
  • TOUCH

Abstract


Project TOUCH (Telehealth Outreach for Unified Community Health; http://hsc.unm.edu/touch) investigates the feasibility of using advanced technologies to enhance education in an innovative problem-based learning format currently being used in medical school curricula, applying specific clinical case models, and deploying to remote sites/workstations. The University of New Mexico's School of Medicine and the John A. Burns School of Medicine at the University of Hawai'i face similar health care challenges in providing and delivering services and training to remote and rural areas. Recognizing that health care needs are local and require local solutions, both states are committed to improving health care delivery to their unique populations by sharing information and experiences through emerging telehealth technologies by using high-performance computing and communications resources. The purpose of this study is to describe the deployment of a problem-based learning case distributed over the National Computational Science Alliance's Access Grid. Emphasis is placed on the underlying technical components of the TOUCH project, including the virtual reality development tool Flatland, the artificial intelligence–based simulation engine, the Access Grid, high-performance computing platforms, and the software that connects them all. In addition, educational and technical challenges for Project TOUCH are identified. Anat Rec (Part B: New Anat) 270B:23–29, 2003. © 2003 Wiley-Liss, Inc.


INTRODUCTION


Project TOUCH is a multi-year program initiated in August of 2000 as a collaborative effort between the University of New Mexico and the University of Hawai'i and their associated high-performance computing centers (Alverson et al., 2001; Jacobs et al., 2003). The purpose of the project is to demonstrate the feasibility of using advanced computing methods, such as virtual reality, to enhance education in a problem-based learning (PBL) format currently being used in the curricula of the two schools (Kaufman et al., 1989; Anderson, 1991; Bereiter and Scardamalia, 2000). The demonstration case consists of a traumatic head injury and is deployed to remote sites and associated workstations over the Next Generation Internet Access Grid (AG; http://www-fp.mcs.anl.gov/fl/). Recognizing that health care needs are local and require local solutions, both states are focused on improving health care delivery to their unique populations and have begun to benefit from sharing information and experiences. Emerging telehealth technologies can be applied by using existing high-performance computing and communications resources present in both states.

The primary objective of this project is to determine whether an integrated, collaborative, immersive virtual environment can be developed that facilitates enhanced human comprehension, and whether such a system can be applied to PBL across distance.

The first phase has been exploratory and has involved initial development of advanced computing tools using immersive virtual reality and a newly developed virtual patient simulator, built with a novel virtual environment development tool called Flatland (http://www.ahpcc.unm.edu/homunculus/) and distributed over the AG to distant learning sites (Jacobs et al., 2003). The purpose of this study is to describe a real-time artificial intelligence (AI) simulation engine, a real-time three-dimensional (3D) virtual reality environment, a system for human-simulation interaction, and finally, an Internet "teleconferencing" system that distributes the learning experience to remote sites.

DESCRIPTION OF THE SYSTEM


One primary objective of the TOUCH project is to develop a computing environment that facilitates student-directed learning within a group setting. The group consists of individuals located at remote sites while the student-directed learning exercise generates learning issues resulting from the treatment of a virtual patient (Jacobs et al., 2003). Therefore, technical components of the TOUCH system are developed and integrated to achieve this objective. The system diagram in Figure 1 shows the relative location and interconnection of all system components from a network point of view.


Figure 1. The components of the TOUCH system discussed in this study. A single student user is immersed in the Flatland environment. The artificial intelligence–based simulator interacts with the user and the environment, and controls the virtual patient. The Access Grid (AG) nodes are connected to Flatland through graphical image transmission and control transmission.


The AG

The National Computational Science Alliance AG (NCSA AG) is an Internet-based conferencing system supporting real-time, multipoint, group-to-group communication and collaboration. AG nodes, or studios, are the meeting venues and typically combine large-screen multimedia displays using conventional projectors with high-end audio support (Figure 2). The AG substrate is the Internet, using Internet protocol (IP) multicast and middleware to feed the nodes live video and sound. AG users can share presentations, visualization environments, browsers, whiteboards, and graphics tablets. The AG nodes provide a research environment for the development of distributed data and visualization conduits as well as for studying issues related to collaborative work in distributed environments. The AG uses the Video Conferencing Tool (VIC; McCanne and Jacobson, 1995) for transmitting and receiving video. VIC is a multimedia tool built by Lawrence Berkeley National Laboratory for real-time video conferencing over the Internet. It is intended to link multiple sites with multiple simultaneous video streams over a multicast infrastructure. VIC performs two basic functions: (1) it obtains information from video capture cards to which cameras or other video devices are attached and sends it over the network; and (2) it receives data from the network and displays them on a video monitor or on some other attached video device such as a video projector. VIC is based on the Real-time Transport Protocol (RTP; Schulzrinne et al., 1996), which is widely used on the Internet for the real-time transmission of audio and video; the video streams themselves are encoded and decoded with standard codecs such as H.261 (International Telecommunication Union, 1993). Although VIC can be run point-to-point by using standard unicast IP addresses, it is primarily intended as a multiparty conferencing application. To use VIC's conferencing capabilities, systems must support IP multicast, and ideally, the network should be connected to the IP Multicast Backbone (Mbone; Macedonia and Brutzman, 1994). Mbone is the multicast-capable backbone of the Internet. It currently consists of a network of tunnels linking the islands of multicast-capable subnetworks around the world.
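
To make the multicast substrate concrete, the following is a minimal sketch, using the plain POSIX socket interface, of the single step of joining an IP multicast group and receiving datagrams. It is not code from VIC or the AG toolkit; the group address and port are placeholders, and the RTP framing, codecs, and session management that VIC layers on top are omitted.

    /* Minimal sketch of IP multicast reception using the POSIX socket API.
     * The group address and port are placeholders; VIC adds RTP framing,
     * codec handling, and session management on top of this layer. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        /* Bind to the conference port (placeholder value). */
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(51234);
        if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind");
            return 1;
        }

        /* Join the multicast group; the kernel then delivers group traffic
         * arriving on this subnet (or via an Mbone tunnel) to the socket. */
        struct ip_mreq mreq;
        mreq.imr_multiaddr.s_addr = inet_addr("224.2.0.1"); /* placeholder group */
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);
        if (setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                       &mreq, sizeof(mreq)) < 0) {
            perror("IP_ADD_MEMBERSHIP");
            return 1;
        }

        /* Receive a few datagrams; a real tool would parse RTP headers here. */
        char buf[2048];
        for (int i = 0; i < 5; ++i) {
            ssize_t n = recv(sock, buf, sizeof(buf), 0);
            if (n > 0) printf("received %zd bytes\n", n);
        }
        close(sock);
        return 0;
    }

Sites without native multicast reach such a session through a unicast bridge, as described later for the Flatland Transmitter.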


Figure 2. A typical Access Grid (AG) studio consists of a meeting room with a multiprojector wall screen, multiview cameras, microphones, and speakers. On the screen are live images of remote collaborators, a Spycam view into Flatland, and a Power Point presentation of the TOUCH traumatic head injury storyboard. [Color figure can be viewed in the online issue, which is available at www.interscience.wiley.com.]


The TOUCH project is using the Internet for its underlying telecommunications infrastructure. The AG provides a collaborative environment for remote visualization and interactive applications. A Flatland application was developed that allows real-time graphics to be multicast out to the AG for viewing at remote sites. This strategy involves a coordinated process of copying the graphics out of Flatland, encoding them into video formats, and finally transmitting the images using the Flatland Transmitter.

Flatland: Virtual Environments Tool

Flatland is a visualization/virtual reality application development environment, created at the University of New Mexico (http://www.ahpcc.unm.edu/homunculus). It allows software authors to construct and users to interact with arbitrarily complex graphical and aural representations of data and systems. The system is described in more detail in Box 1. The end result is a virtual-reality immersive environment with sight and sound, in which students using joywands and virtual controls can interact with computer-generated learning scenarios that respond logically to user interaction. Virtual patients can be simulated in any of several circumstances, with any imaginable disease or injury. The activities of a participant can be monitored by faculty and other students for educational and instructional purposes.

Flatland: Technical Details

Flatland is written in C/C++ and uses the standard OpenGL graphics language to produce all graphics. In addition, Flatland uses the standard GLUT library for window, mouse, joywand, and keyboard management. Flatland is object-oriented and multithreaded, uses dynamically loaded libraries to build user applications in the virtual environment, and runs under the Linux and IRIX operating systems. At the core of Flatland is an open, custom, transformation graph data structure that maintains and potentially animates the geometric relationships between the objects contained in the graph. Graph objects contain all of the information necessary to draw, sound, and control the entity represented by the object. The transformation graph is one part of a higher-level structure referred to in Flatland as a universe. The universe contains the transformation graph, a flat database of objects in the graph, and a reference to the graph vertex that is currently acting as the root of a hierarchically organized tree. This root is usually the graphical camera viewpoint.
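
As an illustration of the structures just described, a minimal C++ sketch of such a transformation-graph "universe" might look like the following. All type and member names are hypothetical and do not reproduce the actual Flatland API; the sketch only mirrors the description above (objects carrying their own draw and sound callbacks, a flat database, and a current root).

    // Illustrative sketch of a transformation-graph "universe"; all type and
    // member names are hypothetical and do not reproduce the Flatland API.
    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    struct Object;

    // A graph edge holds the transform placing a child relative to its parent.
    struct Edge {
        float transform[16];  // 4x4 homogeneous transform, column-major as in OpenGL
        Object* child;
    };

    // Each object carries its own draw and sound behavior plus its subgraph.
    struct Object {
        std::string name;
        std::function<void()> draw;   // issues OpenGL calls in local coordinates
        std::function<void()> sound;  // starts/updates point-source sounds
        std::vector<Edge> children;   // subgraph attached below this object
    };

    // The universe: the graph, a flat database of objects, and the current root
    // (usually the object carrying the camera viewpoint).
    struct Universe {
        std::map<std::string, Object*> database;
        Object* root = nullptr;

        void add(Object* obj) { database[obj->name] = obj; }

        // Traverse from a node, drawing each object in its own coordinate frame.
        void drawFrom(Object* node) const {
            if (!node) return;
            if (node->draw) node->draw();
            for (const Edge& e : node->children) {
                // A real renderer would push e.transform onto the matrix stack here.
                drawFrom(e.child);
            }
        }

        void render() const { drawFrom(root); }
    };

    int main() {
        Object world{"world", nullptr, nullptr, {}};
        Object patient{"patient",
                       [] { std::cout << "drawing patient model\n"; },
                       nullptr, {}};
        Edge e{};  // identity transform omitted for brevity
        e.child = &patient;
        world.children.push_back(e);

        Universe u;
        u.add(&world);
        u.add(&patient);
        u.root = &world;
        u.render();
        return 0;
    }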

Flatland is intrinsically multithreaded, allowing the system to make use of computer systems with multiprocessors and shared memory. The main thread spawns an OpenGL graphics thread, a Flatland sound thread, and a real-time tracker thread. The optional tracker thread allows applications to use 3D interaction metaphors, such as head tracking and 3D joywands or wands. An application in the context of Flatland is a relatively self-contained collection of objects, functions, and data that can be dynamically loaded (and unloaded) into the graph of an environment instantaneously. An application is responsible for creating and attaching its objects to the graph, and for supplying all object functionality. An application is added to Flatland through the use of a configuration file. This structured file is read and parsed when Flatland starts, and contains the name and location of the libraries that have been created for the application, as well as a formal list of parameters and an arbitrary set of arguments for the application.
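
The configuration-driven loading of an application can be pictured with the standard POSIX dynamic-loading calls. In the sketch below, the "fl_install" entry point, the library path, and the argument string are illustrative assumptions rather than Flatland's actual conventions; only dlopen and dlsym themselves are standard.

    /* Sketch of dynamically loading an application library named in a config
     * file, using POSIX dlopen/dlsym. The "fl_install" entry point, library
     * path, and argument string are hypothetical, not Flatland's actual scheme. */
    #include <dlfcn.h>
    #include <stdio.h>

    typedef void (*install_fn)(const char* args);

    static int load_application(const char* lib_path, const char* args) {
        void* handle = dlopen(lib_path, RTLD_NOW | RTLD_LOCAL);
        if (!handle) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return -1;
        }
        /* Look up the application's install function and call it so that it can
         * create its objects and attach them to the transformation graph. */
        install_fn install = (install_fn)dlsym(handle, "fl_install");
        if (!install) {
            fprintf(stderr, "dlsym failed: %s\n", dlerror());
            dlclose(handle);
            return -1;
        }
        install(args);
        return 0;
    }

    int main(void) {
        /* A configuration line might name the library and its arguments, e.g.
         *   ./libtouch_patient.so  scenario=head_trauma
         * (illustrative path and parameter only). */
        return load_application("./libtouch_patient.so", "scenario=head_trauma") ? 1 : 0;
    }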

In Flatland, graphics and sound are treated symmetrically. Each object in the graph contains, among other things, a draw function and a sound function. The draw function contains or calls all of the code to draw and animate the graphics that represent the object. From an author's perspective, all object graphics are based on and drawn in a local coordinate system. Other structures in the graph handle the placement and orientation of the object's model coordinates relative to other objects in the graph and subsequently to the camera. The sound function within an object contains all of the calls or code to make the sounds that represent that object. Flatland maintains a library of sound function calls that are designed to resemble OpenGL. Wave sound files are treated like OpenGL display lists and are called sound lists. In addition to opening sound lists, functions exist that allow the author to control the starting, looping, stopping, volume, and 3D location of the sound. All sound in Flatland is emitted from point sources in the 3D space. The author specifies the location of the sounds in the same model coordinate system used for the graphics.

Although position-tracking technology is not generally available on computers today, Flatland is designed to make use of it when present. A tracker is a multiple degree-of-freedom measurement device that can, in real time, monitor the position and/or orientation of multiple receiver devices in space, relative to a transmitter device of some sort. As such, Flatland launches a tracker thread to sample the available tracker information and make it available for use by applications. In the standard Flatland configuration, trackers are used to locate hand-held wands and to track the position of the user's head. Head position and orientation are needed in cases that involve the use of head-mounted displays or stereo shutter glasses.

User interaction is a central component of Flatland, and as such, each object is controllable in arbitrary ways defined by the designer. Currently, there are four possible methods for the control of objects: (1) GLUT pop-up menus in the main viewer window, (2) the console keyboard, (3) Flatland 2D control panels either in the environment or in separate windows, and (4) external systems or simulations. In the future, 3D menus and controls within the virtual environment, as well as voice recognition, will also be available.

An array of controls may be defined when an object is coded by the designer. These controls are managed by Flatland and can be exercised through a designer-defined function that is invoked when either a keystroke is made or a menu item is selected. This function may be arbitrarily complex and may affect objects other than the owner of the control. The control functions associated with objects are the preferred method for changing any internal states or data of the object. The mouse and keyboard interactions are provided through the GLUT libraries and a custom 2D widget library. The latter is available to the designer for the creation of standard 2D control panel windows. Finally, external systems may control an object, for example, through a threaded simulation, serial communication (trackers), or Unix sockets to another process.
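
One way to picture these designer-defined controls is as a table of named callback functions attached to an object and dispatched when a keystroke or menu selection arrives. The sketch below uses hypothetical names and a scripted key sequence in place of GLUT callbacks; it is not the Flatland widget or menu code.

    // Illustrative sketch of per-object controls dispatched from keyboard or
    // menu events; all names are hypothetical and not taken from Flatland.
    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>

    struct PatientState { bool airwayInserted = false; };

    struct Control {
        std::string description;
        std::function<void(PatientState&)> action;  // may affect other objects too
    };

    int main() {
        PatientState patient;

        // Controls registered by the application author, keyed by keystroke.
        std::map<char, Control> controls = {
            {'a', {"insert airway", [](PatientState& p) { p.airwayInserted = true; }}},
            {'r', {"remove airway", [](PatientState& p) { p.airwayInserted = false; }}},
        };

        // Dispatch loop: in Flatland this would be driven by GLUT keyboard or
        // menu callbacks rather than a scripted sequence of keys.
        for (char key : {'a', 'r', 'a'}) {
            auto it = controls.find(key);
            if (it != controls.end()) {
                it->second.action(patient);
                std::cout << it->second.description
                          << " -> airway inserted: " << patient.airwayInserted << "\n";
            }
        }
        return 0;
    }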

THE TOUCH APPLICATION


Application systems, such as the TOUCH demonstration case, are dynamically loaded into the basic Flatland system, and their associated objects are attached to the Flatland graph. The TOUCH demonstration case is composed of three Flatland application modules: (1) the Virtual Patient Environment, (2) the Artificial Intelligence simulator, and (3) the Spycam (Figure 3A). The immersed student interacts with the virtual patient through the virtual reality (VR) effector represented as a floating hand (Figure 3B). For the current head trauma case, the Virtual Patient Environment consists of a car accident scene and an emergency room (Jacobs et al., 2003). Following the development of AI rules governing the patient's condition after head trauma resulting from an automobile accident, a storyboard was developed as a visual timeline for the simulation (Jacobs et al., 2003). Graphical models were created for this scenario in the commercial modeling tool Maya and imported into Flatland. A virtual patient body model was created in another commercial tool, Poser, and imported into the case scenario (Figure 4). Medical tool kit models (e.g., otoscope, neck brace, pen light) were also produced in Maya and loaded onto patient-side trays (Figure 4). The system is supported by a VR operator, who tracks the user's activities on a computer monitor and assists with virtual body positioning and movements when necessary (Figure 5).


Figure 3. A: Diagram showing the relationships between the components of the TOUCH System in Flatland. The artificial intelligence contains all of the necessary knowledge extracted from medical experts to monitor and control the entire system. The user is virtually present in the scene and controls their viewpoint through a head tracking system. The user interacts with the scene through hand-held joywands represented as a floating hand with the corresponding orientation (B, inset). The outside world views the action in the scene through the SpyCam that can be moved arbitrarily in the environment by the virtual reality system operator or connected directly to the head of the user to share the view. [Color figure can be viewed in the online issue, which is available at www.interscience.wiley.com.]



Figure 4. A: A view of the TOUCH system in a standard Flatland environment, showing the virtual patient with a blood pressure cuff, neck brace, and head bandage. On the medical trays are located an airway, an otoscope, a stethoscope, and a penlight. B: Vital signs and data are presented to the immersed user on a "heads-up" display, in this case, the stethoscope and blood pressure readings as well as the time since the accident. C: The artificial intelligence timeline results in the virtual reality patient becoming cyanotic, at which point an airway must be inserted or death ensues. D: After the airway is inserted, the patient regains normal color.



Figure 5. A view of the Access Grid studio, screen, and a student user with the TOUCH system during one of the experimental sessions. The screen shows a mixture of live video images of the students participating in the learning session and the Spycam view of the virtual environment as seen by the immersed student (upper right corner of screen). The dark-colored box over the user's head is part of the tracking system. The student holds a joywand in his right hand. The virtual reality operator can also be seen below the screen. [Color figure can be viewed in the online issue, which is available at www.interscience.wiley.com.]


The immersed student interacts with the virtual patient through a joywand equipped with a tracking sensor (six degrees of freedom), buttons, and a trigger (Figure 3). The user may pick up and place objects by moving the virtual hand and pulling the wand's trigger. The AI is a custom forward-chaining IF-THEN rule-based system that contains the knowledge of medical experts for this particular case as well as knowledge of how objects interact (Luger, 2002). The rules are coded, at this time, in a C computer language format as logical antecedents and consequents and currently have limited human readability. The AI loops over the rule base, applying each rule's antecedents to the state of the system, including time, by using a double-buffering method to maintain consistency, and testing for logical matches. Matching rules are "fired," modifying the next state of the system and controlling the status of dynamically launched real-time control functions. These functions operate at the rate of the graphics engine to smoothly control all time-varying states of the patient, including physiology and interaction.
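
A minimal sketch of such a forward-chaining loop with a double-buffered state is shown below. The state variables and rules are simplified placeholders for the much richer medical knowledge base, and the code follows only the description given above (antecedents tested against the current state, consequents fired into the next state), not the project's actual implementation.

    // Minimal sketch of a forward-chaining IF-THEN rule loop with a
    // double-buffered state; the state variables and rules are placeholders.
    #include <functional>
    #include <iostream>
    #include <vector>

    struct State {
        double time = 0.0;            // virtual seconds since the accident
        bool airwayInserted = false;
        bool cyanotic = false;
        bool deceased = false;
    };

    struct Rule {
        std::function<bool(const State&)> antecedent;  // tested against current state
        std::function<void(State&)> consequent;        // fired into the next state
    };

    int main() {
        std::vector<Rule> rules = {
            // Without an airway, the patient becomes cyanotic after a delay.
            {[](const State& s) { return !s.airwayInserted && s.time > 60.0; },
             [](State& n) { n.cyanotic = true; }},
            // Prolonged cyanosis without intervention leads to death.
            {[](const State& s) { return s.cyanotic && !s.airwayInserted && s.time > 180.0; },
             [](State& n) { n.deceased = true; }},
            // Inserting the airway reverses the cyanosis.
            {[](const State& s) { return s.cyanotic && s.airwayInserted; },
             [](State& n) { n.cyanotic = false; }},
        };

        State current;
        for (int step = 0; step < 240 && !current.deceased; ++step) {
            State next = current;                 // write buffer starts as a copy
            next.time = current.time + 1.0;
            if (step == 150) next.airwayInserted = true;  // simulated user action

            // Match antecedents against the *current* state and fire into *next*,
            // so that rule order within one cycle cannot change the outcome.
            for (const Rule& r : rules)
                if (r.antecedent(current)) r.consequent(next);

            current = next;
        }
        std::cout << "t=" << current.time
                  << " cyanotic=" << current.cyanotic
                  << " deceased=" << current.deceased << "\n";
        return 0;
    }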

Time is a special state of the system that is not directly modified by the AI but whose rate is controlled by an adjustable clock. Because the rate of inference within the AI is controlled by this clock, the operator is able to speed up, slow down, or stop the action controlled by the AI. The AI is currently represented in the TOUCH system as a crystal rotating synchronously with the passage of virtual time, providing the developers with a monitor of the AI's status. In the future, the immersed student is expected to interact directly with representations of the AI, which may therefore need to take on other forms, such as human avatars. For the current TOUCH project experiments, the AI representation was not visible to the student user during interaction with the virtual patient.
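
The adjustable clock can be pictured as a small accumulator that advances virtual time by a scaled wall-clock increment each frame; setting the rate to zero pauses the AI. The sketch below, using the standard C++ chrono facilities, is an illustration of this idea only and not the project's implementation.

    // Sketch of an adjustable virtual clock: the operator can speed up, slow
    // down, or pause the AI by changing the rate factor. Illustrative only.
    #include <chrono>
    #include <iostream>
    #include <thread>

    class VirtualClock {
    public:
        void setRate(double r) { rate_ = r; }   // 0.0 pauses the simulation
        double now() const { return virtualTime_; }

        // Called once per frame with the elapsed wall-clock seconds.
        void tick(double wallDelta) { virtualTime_ += rate_ * wallDelta; }

    private:
        double rate_ = 1.0;
        double virtualTime_ = 0.0;
    };

    int main() {
        VirtualClock clock;
        auto last = std::chrono::steady_clock::now();

        for (int frame = 0; frame < 5; ++frame) {
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            auto now = std::chrono::steady_clock::now();
            double wallDelta = std::chrono::duration<double>(now - last).count();
            last = now;

            if (frame == 3) clock.setRate(0.0);  // operator pauses the action
            clock.tick(wallDelta);
            std::cout << "virtual t = " << clock.now() << " s\n";
        }
        return 0;
    }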

The camera-probe application, Spycam, captures images from the Flatland environment and transmits them over the AG for viewing at remote sites (Figure 3). This camera is used to capture the third-person independent view of the applications within Flatland. The Spycam can move around within Flatland and stop at any position. The image captured by the camera is copied into an auxiliary buffer and prepared for transmitting. Multiple Spycams may be launched simultaneously and separately flown for multiview transmission into the AG.
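
Copying a rendered frame out of an OpenGL framebuffer into an auxiliary buffer can be done with glReadPixels. The sketch below shows that single step, assuming a current OpenGL context created with GLUT (already a Flatland dependency); the frame size is an arbitrary CIF-like placeholder, and the hand-off to the encoder is only stubbed out rather than being the actual Flatland Transmitter interface.

    /* Sketch of capturing a rendered Spycam view into an auxiliary buffer,
     * assuming a current OpenGL context created with GLUT. The buffer
     * hand-off to a video encoder is only stubbed out. */
    #include <GL/glut.h>
    #include <stdio.h>
    #include <stdlib.h>

    enum { WIDTH = 352, HEIGHT = 288 };   /* CIF-sized frame, a common VIC format */
    static unsigned char frame[WIDTH * HEIGHT * 3];

    static void handoff_to_encoder(const unsigned char* rgb, int w, int h) {
        /* In the real system this buffer would feed the VIC-based Flatland
         * Transmitter; here we only report the frame size. */
        printf("captured %dx%d frame (%d bytes)\n", w, h, w * h * 3);
        (void)rgb;
    }

    static void display(void) {
        glClearColor(0.1f, 0.1f, 0.3f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        /* ... scene drawing would happen here ... */
        glFinish();

        /* Copy the finished frame out of the framebuffer into our buffer. */
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, frame);
        handoff_to_encoder(frame, WIDTH, HEIGHT);

        glutSwapBuffers();
        exit(0);   /* one frame is enough for this sketch */
    }

    int main(int argc, char** argv) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutInitWindowSize(WIDTH, HEIGHT);
        glutCreateWindow("spycam capture sketch");
        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }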

The Flatland Transmitter is a VIC-based multimedia tool that translates the output from the virtual environment into a video stream for multicast. The Transmitter receives the images to transmit from Flatland, encodes them, and then sends the video stream over the AG. The Transmitter depends on the Mbone to broadcast the video streams. To accommodate sites without multicast capabilities, we use a multicast/unicast bridge that provides Mbone gateway services so that users can run their Mbone tools in unicast mode and join a multicast session (Lehman, 1999).

DISCUSSION


The TOUCH project is determining the feasibility of using emerging technologies to overcome geographic barriers to delivery of medical education in the communities of need and to enhance the learning process with immersive virtual reality, patient simulation, and Internet-based distribution of knowledge.

The project builds upon previous data supporting the PBL system of medical education as well as an AI tool initially conceived as a patient simulator (Stansfield et al., 2000). However, TOUCH has provided many unique applications and technological advances. Within this context, considerable advancement in distance learning is being achieved.

PBL was pioneered during the 1960s at Case Western Reserve University and McMaster University (Boud and Feletti, 1991), and it has been applied in numerous forms over the decades. Its initial intention was to provide a learning approach that facilitated knowledge integration across academic disciplines and promoted problem-solving skills (Barrows and Tamblyn, 1980). Although still debated, possibly because of differing definitions of the process as well as its conceptual underpinnings (Maudsley, 1999), PBL has been shown to yield successful educational outcomes with measurable benefits (Blake et al., 2000). Small group interaction provides an opportunity for students to work toward the understanding and resolution of a specific patient problem. This problem serves as the focus for hypothesis testing and the generation of learning issues, ultimately stimulating problem-solving and reasoning skills. Project TOUCH capitalizes on this philosophy, because a specific problem is presented to the student group. However, the presentation is novel, because a virtual patient is used, thus providing a sense of realism and urgency, particularly because the simulation responds to a timeline. In addition, students can dynamically determine the direction of the scenario, with each decision potentially resulting in a unique outcome.

The TOUCH project places students in a position of decision-making, requiring intergroup analysis and reasoning. Yet, numerous effects remain untested. For example, the role of case distribution with the uncertainties of AG transmission must be examined. It is unknown whether the personal interaction within a PBL group is retained over the AG. Although the realism of a virtual patient should facilitate associative relationships providing a more effective learning experience for the student, this hypothesis remains to be tested. The effectiveness of the AI system and Flatland in providing effective reification must be validated. Another source of uncertainty is the transmission capability of the AG, which can introduce issues of latency and jitter. In more extreme cases, network congestion may cause transmission interruptions and “downtime” with a potentially adverse effect on dynamic PBL tutorial group interaction.

Currently, experiments are being undertaken to compare the presentation of the virtual patient and distributed learning structure with a standard paper case tutorial to assess the effects of the virtual environment as well as remote distribution of the case through various iterations (Jacobs et al., 2003; Lozanoff et al., 2003). In particular, the AG has, until now, not been used to support these types of applications. If evaluation of AG distribution of PBL cases is successful, more uniform access to these enhancements would become possible regardless of location as Internet access becomes more ubiquitous. Thus, a major goal of the project is to understand the impact of the TOUCH technology on learning dynamics and knowledge processing.

The system described here provides unique opportunities to navigate a case at numerous levels of anatomical complexity. A zoom capability is being developed that allows participants to maneuver across levels of the system while interacting within the virtual environment. For example, the student interacts at the patient level, performing a physical examination and assessing the physical condition. The student could then change levels and investigate learning issues at the cellular or molecular level, exploring, for example, the effect of a drug given to the patient at the current level or at a level above or below. In addition, participants could zoom out to witness and interact with the consequences of the patient's condition on phenomena at a community, population, or even a global level. Thus, Project TOUCH provides a framework for initial evaluation of the potential benefit of these methods to further enhance medical education, as well as a means of defining their strengths, weaknesses, and barriers to their use in medical education. In addition, this project sets the stage for future development and potential integration into a medical school curriculum and provides a "touchstone" for other applications using these methods.

Acknowledgements


The project described was partially supported by grant 2 DIB TM00003-02 from the Office for the Advancement of Telehealth, Health Resources and Services Administration, Department of Health and Human Services. The contents of this study are solely the responsibility of the authors and do not necessarily represent the official views of the Health Resources and Services Administration. The authors thank the Maui High Performance Computing Center, UNM High Performance Computing and Research Center, the UNM Health Sciences Library, and the UNM Center for Telehealth for their support, as well as Dr. Sharon Stansfield of Ithaca College, Ithaca, NY, and her former team at Sandia National Laboratories in Albuquerque, NM, for their helpful discussions. Dr. Robert Trelease, UCLA, is thanked for providing helpful comments during the preparation of this manuscript.

LITERATURE CITED

  • Alverson DM, Saiki S, Buchanan H. 2001. Telehealth for Unified Community Health (TOUCH). 5th Annual Distributed Medical Intelligence Conference, Breckenridge, CO.
  • Anderson A. 1991. Conversion to problem-based learning in 15 months. In: Boud D, Feletti G, editors. The challenge of problem based learning. New York: St. Martin's Press. p 72-79.
  • Barrows HS, Tamblyn RM. 1980. Problem-based learning: An approach to medical education. Medical education series, Vol. 1. New York: Springer Verlag.
  • Bereiter C, Scardamalia M. 2000. Commentary on part I: Process and product in problem-based learning (PBL) research. In: Evensen DH, Hmelo CE, editors. Problem-based learning: A research perspective on learning interactions. New Jersey: Lawrence Erlbaum Assoc. Publishers. p 185-195.
  • Blake RL, Hosokawa MC, Riley SL. 2000. Student performances on step 1 and step 2 of the United States Medical Licensing Examination following implementation of a problem-based learning curriculum. Acad Med 75: 66-70.
  • Boud D, Feletti GI. 1991. Introduction. In: The challenge of problem-based learning. London: Kogan Page. p 13-20.
  • International Telecommunication Union. 1993. Video codec for audiovisual services at p*64kb/s, ITU-T Recommendation H.261.
  • Jacobs J, Caudell TP, Wilks D, et al. 2003. Integration of advanced technologies to enhance experiential problem-based learning over distance: Project TOUCH. Anat Rec (New Anat) 270B: 16-22.
  • Kaufman A, Mennin S, Waterman R, et al. 1989. The New Mexico experiment: Educational innovation and institutional change. Acad Med 64: 285-294.
  • Lehman T. 1999. Mbone Multicast/Unicast Gateway and Reflector (C Program). Pasadena, CA: University of Southern California.
  • Lozanoff S, Lozanoff B, Sora M-C, et al. 2003. Anatomy and the Access Grid: exploiting plastinated brain sections for use in distributed medical education. Anat Rec (New Anat) 270B: 30-37.
  • Luger G. 2002. Artificial intelligence. New York: Addison-Wesley.
  • Macedonia MR, Brutzman DP. 1994. MBone provides audio and video across the Internet. IEEE Comp 27: 30-36.
  • Maudsley G. 1999. Do we all mean the same thing by "problem-based learning"? A review of the concepts and a formulation of ground rules. Acad Med 74: 178-185.
  • McCanne S, Jacobson V. 1995. Vic: A flexible framework for packet video. ACM Multimedia, p 511-522.
  • Schulzrinne H, Casner S, Frederick R, Jacobson V. 1996. RTP: A transport protocol for real-time applications. IETF Audio-Video Transport Working Group RFC1889.
  • Stansfield S, Shawver D, Sobel A, Prasad M, Tapia L. 2000. Design and implementation of a virtual reality system and its application to training medical first responders. Presence: Teleoperators and Virtual Environments (MIT Press) 9: 524-556.

Biographical Information


Drs. Caudell and Mowafi are in the Department of Electrical and Computer Engineering, University of New Mexico. Drs. Summers, Holten, and Hakamata are in the High Performance Computing, Education & Research Center, University of New Mexico. Dr. Jacobs is in the Department of Internal Medicine, University of Hawai'i School of Medicine. Drs. S. Lozanoff and Keep, and Ms. B. Lozanoff, are in the Department of Anatomy and Reproductive Biology, University of Hawai'i School of Medicine. Dr. Wilks is in the Department of Radiology, University of New Mexico School of Medicine. Dr. Saiki is in the Department of Internal Medicine, University of Hawai'i School of Medicine and the Tripler Army Medical Center, Honolulu, Hawai'i. Dr. Alverson is in the Department of Pediatrics, University of New Mexico.