
Keywords:

  • cataract surgery;
  • simulator;
  • skills training;
  • virtual reality

Abstract


Purpose:  To investigate initial learning curves on a virtual reality (VR) eye surgery simulator and whether achieved skills are transferable between tasks.

Methods:  Thirty-five medical students were randomized to complete ten iterations on either the Cataract navigation training module (group A) or the VR Capsulorhexis module (group B) and then two iterations on the other module. Learning curves were compared between groups. The second Capsulorhexis video was saved and evaluated with the performance rating tool Objective Structured Assessment of Cataract Surgical Skill (OSACSS). The students’ stereoacuity was examined.

Results:  Both groups demonstrated significant improvements in performance over the 10 iterations: group A for all parameters analysed including score (p < 0.0001), time (p < 0.0001) and corneal damage (p = 0.0003), group B for time (p < 0.0001), corneal damage (p < 0.0001) but not for score (p = 0.752). Training on one module did not improve performance on the other. Capsulorhexis score correlated significantly with evaluation of the videos using the OSACSS performance rating tool. For stereoacuity < and ≥120 seconds of arc, sum of both modules’ second iteration score was 73.5 and 41.0, respectively (p = 0.062).

Conclusion:  An initial rapid improvement in performance on a simulator with repeated practice was shown. For capsulorhexis, 10 iterations with only simulator feedback are not enough to reach a plateau for overall score. Skills transfer between modules was not found, suggesting benefits from training on both modules. Stereoacuity may be of importance in the recruitment and training of new cataract surgeons; additional studies are needed to investigate this further. Concurrent validity was found for the Capsulorhexis module.


Introduction


Learning cataract surgery is technically challenging and demands well-developed psychomotor skills (Binenbaum & Volpe 2006). Teaching cataract surgery is costly and time-consuming. The rate of complications is higher for surgeons under training compared to more experienced ones (Randleman et al. 2007).

Capsulorhexis is one of the most difficult skills for new cataract surgeons to master (Dooley & O’Brien 2006). Apart from scarce wet-lab training, most new surgeons train in the operating room on real patients (Henderson & Ali 2007; Lee et al. 2007). It is desirable to move this initial, higher-risk training out of the operating room.

Surgical simulators have long been used for training and assessment in other surgical disciplines and can improve operating skills (Seymour et al. 2002; Grantcharov et al. 2004; Ahlberg et al. 2007; Kundhal & Grantcharov 2009; Schijven et al. 2010). The EYESi simulator (VR Magic, Mannheim, Germany) is a commercially available virtual reality (VR) eye surgery simulator for training in both anterior and posterior segment intraocular surgery. The VR simulator provides metrics and scoring at the end of each performed task. These scores correlate with intraocular surgery experience, indicating construct validity (Rossi et al. 2004; Mahr & Hodge 2008; Solverson et al. 2009), and VR training can improve capsulorhexis wet-laboratory performance (Feudner et al. 2009). Posterior segment VR training has been investigated, but little is known about the learning curves associated with training on the simulator’s anterior segment modules or about the validity of the simulator’s scoring system.

The aim of this study was to examine learning curves on the EYESi simulator anterior segment modules and whether achieved skills are transferable between tasks. Furthermore, we wanted to compare the performance score of the Capsulorhexis task on the simulator to a video-based scoring system of the same procedure.

Material and Methods


Thirty-five medical students at Skåne University Hospital participated in the study (Table 1). They were attending the ophthalmology rotation in their 9th semester. All of them underwent simulator training. They received standard oral instructions from one test leader, who also supervised all tasks. Before performing each task on the simulator, the students were shown a short instructional film incorporated in the simulator system. Before commencing the simulator tasks, the students were screened for previous experience with eye surgery simulators as an exclusion criterion. Age was recorded for each student. After the simulator training, stereoacuity was measured using the TNO charts (Laméris Ootech BV, Nieuwegein, the Netherlands), plates V–VII. Informed consent was acquired from each student.

Table 1.  Age and stereoacuity of participants.

                         Group A (n = 17)   Group B (n = 18)
Age, median (range)      25 (23–38)         26 (25–35)
TNO 30                   5                  4
TNO 60                   9                  12
TNO 120                  1                  1
TNO 480                  0                  0
TNO >480                 2                  1

EYESi simulator

The EYESi surgical simulator (software version 2.4) was used in the study. The simulator has software for training in both cataract and vitreoretinal surgery. It is provided with a virtual operating microscope, a model eye and handheld probes (forceps, cannula/cystotome and pin) that are inserted into the model eye. It generates a virtual stereoscopic image through the oculars. The simulator comes with several different modules for cataract surgery, including both cataract-specific tasks, such as capsulorhexis and phacoemulsification, and manipulation exercises. For each module, there are several levels of progressive difficulty. The simulator calculates a performance score between 0 and 100 for each iteration and gives metrics providing feedback on microscope handling, tissue treatment, target achievement, efficiency and instrument handling. Momentary written or oral simulator feedback is available on request. At task completion, the entire task sequence can be saved on a USB stick for later use.

The participants in the study were tested on the Cataract navigation training module on level three of three (Fig. 1). Here, the trainee has to hold an instrument steady in spheres spread in the anterior chamber. The challenge is to manoeuvre the instrument efficiently in the anterior chamber and hold it still in each sphere. We also used the Capsulorhexis module, level four (of 10), where the trainee has to inject viscoelastics through a cannula, use a cystotome to start a capsulorhexis flap and finally use forceps to form and complete a circular capsulorhexis (Fig. 1). Metric data collected in this study were the parameters common to both modules: overall score, procedure time with instrument inserted, injured cornea area value, injured lens area value, iris contact score and incision stress value, and, for the Capsulorhexis module, also centring and roundness.
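The collected parameters lend themselves to a simple per-iteration record. The sketch below is hypothetical: the field names merely paraphrase the parameters listed above and are not part of EYESi's software or any exported data format.

```python
# Hypothetical per-iteration record for the metrics collected in the study.
# Field names paraphrase the parameters described in the text; they are
# NOT EYESi's API or file format.
from dataclasses import dataclass
from typing import Optional


@dataclass
class IterationMetrics:
    overall_score: int                  # 0-100, computed by the simulator
    time_s: float                       # procedure time with instrument inserted
    injured_cornea_area: float
    injured_lens_area: float
    iris_contact: float
    incision_stress: float
    centring: Optional[float] = None    # Capsulorhexis module only
    roundness: Optional[float] = None   # Capsulorhexis module only
```

Keeping the module-specific parameters optional lets the same record type hold iterations from both modules, so the mutual parameters can be compared directly across tasks.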

Figure 1.  EYESi virtual reality simulator modules for anterior segment surgery. Cataract navigation training module (left): the trainee has to insert and hold an instrument steady in spheres that are spread in the anterior chamber. Capsulorhexis module (right): the trainee has to form and complete a capsulorhexis.

The students were randomly divided into two groups, A and B (Fig. 2). Each student in group A performed 10 iterations on the Cataract navigation training module and thereafter two iterations on the Capsulorhexis module. The students in group B started with ten iterations on the Capsulorhexis module and then performed two iterations on the Cataract navigation training module. The second iteration on the Capsulorhexis module for each student was recorded, and the video was saved for later evaluation. Five videos could not be evaluated (two from group A and three from group B): a corrupt USB memory card made three videos nonviewable, and for two individuals the tenth instead of the second video was recorded.

Figure 2.  Study set-up. Training on one module (10 iterations) was immediately followed by two iterations on the other module at the same session.

Simulator film evaluation

The saved videos from the second Capsulorhexis iteration were evaluated by a cataract surgeon according to the applicable parts of the cataract performance rating tool Objective Structured Assessment of Cataract Surgical Skill (OSACSS) (Saleh et al. 2007). The simulator videos were also evaluated using the video-based modified Objective Structured Assessment of Technical Surgical Skills (OSATS) scoring system (Martin et al. 1997; Grantcharov et al. 2004), which has shown an ability to distinguish different levels of technical surgical skill in other ophthalmology areas (Ezra et al. 2009). The evaluator was masked regarding the simulator score, student and study group. The evaluation scores were correlated with the simulator performance score on the Capsulorhexis module.

Statistical analysis

The Friedman test was used for analysing the learning curves. Multiple comparisons were made to identify when a learning plateau had occurred. The Spearman correlation test was used to analyse the correlation between the visual evaluation of the capsulorhexis simulator videos and the simulator performance score. For comparisons between groups A and B, the second iteration of Cataract navigation training and Capsulorhexis was analysed for each parameter with the Mann–Whitney U-test. The Mann–Whitney U-test was also used to compare median scores between individuals with stereoacuity of 60 seconds of arc or better and those with 120 seconds of arc or worse. A level of p < 0.05 was considered statistically significant.
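As an illustration of the learning-curve analysis, the Friedman statistic compares each student's scores across the ten iterations using within-subject ranks. The sketch below is a minimal pure-Python version with hypothetical data; the study itself presumably used standard statistical software, which would also supply the p-value from the chi-square distribution.

```python
# Minimal sketch of the Friedman chi-square statistic for repeated
# measures (e.g. one score per student per iteration). Illustrative
# only; any real analysis would use standard statistical software.

def friedman_chi2(scores):
    """scores: list of per-subject lists, one value per iteration.

    Returns the Friedman chi-square statistic (larger = stronger
    evidence that performance differs across iterations).
    """
    n = len(scores)        # number of subjects
    k = len(scores[0])     # number of iterations (conditions)
    rank_sums = [0.0] * k
    for subject in scores:
        # Rank this subject's scores across iterations (1 = lowest),
        # averaging ranks within tie groups.
        order = sorted(range(k), key=lambda j: subject[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and subject[order[j + 1]] == subject[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1          # average rank for the tie group
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)
```

For example, if every subject improves monotonically over the iterations, the statistic reaches its maximum of n(k − 1); if scores do not change at all, it is 0, consistent with the flat capsulorhexis overall-score curve reported below.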

Results


Both group A (Cataract navigation training module) and group B (Capsulorhexis module) demonstrated significant improvements in performance over the ten iterations (Fig. 3). Improvement in capsulorhexis overall score was not significant (p = 0.752), even though a significant difference could be detected between the first and last iterations (iteration 1 versus 10, p = 0.047, Wilcoxon). For the Cataract navigation training, improvement in overall score was marked (p = 0.004), reaching a plateau at the third iteration. Time with instruments inserted decreased significantly for both modules (p < 0.0001), with a plateau reached at the third iteration. Injured cornea area value decreased for capsulorhexis (p < 0.0001), reaching a plateau at the sixth iteration, and for Cataract navigation training (p = 0.0003), reaching a plateau at the seventh iteration. The injured lens area value decreased but did not reach a plateau for the Cataract navigation training module (p = 0.0033). Injured lens area value for capsulorhexis was significantly lower at the 10th iteration than at the 1st (p = 0.022, Wilcoxon), but a significant learning curve could not be demonstrated over the ten iterations (p = 0.336). No significant improvement was observed for the specific capsulorhexis parameters centring and roundness (p = 0.091 and p = 0.873, respectively). For the parameters incision stress value and iris contact value, there were too few nonzero events on either module to allow meaningful statistical analyses of improvement.

Figure 3.  Initial learning curves for the Capsulorhexis module (circles) and the Cataract navigation training module (triangles). (A) Improvement over the ten iterations for capsulorhexis overall score was not significant (p = 0.752), but the score at the 10th iteration was significantly higher than at the 1st (p = 0.047). For Cataract navigation training, improvement over the 10 iterations was significant (p = 0.004), reaching a plateau at the third iteration. (B) Time with instruments inserted decreased significantly for both modules (p < 0.0001), with a plateau reached at the third iteration. (C) Injured cornea area value decreased for the Capsulorhexis module (p < 0.0001), reaching a plateau at the sixth iteration, and for Cataract navigation training (p = 0.0003), reaching a plateau at the seventh iteration. (D) Injured lens area value for capsulorhexis did not decrease significantly over the 10 iterations (p = 0.336) but was significantly lower at the 10th iteration than at the 1st (p = 0.022). Injured lens area value for cataract navigation decreased significantly (p = 0.0033) but did not reach a plateau.

The simulator overall score on the Capsulorhexis module had a significant positive correlation with the modified OSATS score (r2 = 0.59, p < 0.0001) and with the OSACSS score (r2 = 0.704, p < 0.0001).

There was no significant difference in performance between groups A and B when comparing the second iteration of Cataract navigation training module (Table 2). Likewise, we could not detect any significant difference for the Capsulorhexis module between the groups. Comparing the evaluations of the simulator video recordings, there was no significant difference in evaluation score between groups A and B (Table 2).

Table 2.  Comparison between groups A and B, iteration #2 (median, range).

Parameter                                    Group A          Group B           p-value
Cataract navigation training module
 Overall score (points)                      51 (0–78)        46 (0–78)         0.364
 Time with instruments inserted (seconds)    202 (78–448)     272 (84–639)      0.291
 Incision stress value                       0.0 (0–8.0)      0.0 (0.0–136.0)   0.106
 Injured cornea area value                   0.0 (0.0–0.4)    0 (0.0–1.8)       0.296
 Injured lens area value                     0.0 (0.0–0.0)    0.0 (0.0–0.73)    0.163
 Iris contact value                          0.0 (0.0–1.4)    0.0 (0.0)         0.303
Capsulorhexis module
 Overall score (points)                      0 (0–93)         7.5 (0–56)        0.773
 Time with instruments inserted (seconds)    114 (67–275)     141 (73–435)      0.151
 Incision stress value                       0.0 (0.0–5.7)    0.0 (0.0–2.26)    0.119
 Injured cornea area value                   0.1 (0.0–4.4)    0.2 (0.0–9.5)     0.763
 Injured lens area value                     1.5 (0.0–12.7)   2.2 (0.0–18.1)    0.817
 Iris contact value                          0.0 (0.0–0.9)    0.0 (0.0–0.2)     0.080
Evaluation of video-recorded Capsulorhexis
 Evaluation OSACSS (points)                  8 (6–11)         8 (5–10)          0.64
 Evaluation modified OSATS (points)          7 (4–13)         10 (4–13)         0.52

OSACSS = Objective Structured Assessment of Cataract Surgical Skill; OSATS = Objective Structured Assessment of Technical Surgical Skills.

The median value for the sum of overall scores for the second iteration of the Capsulorhexis and Cataract navigation training modules was 73.5 for individuals with stereoacuity of 60 seconds of arc or better and 41.0 for those with stereoacuity of 120 seconds of arc or worse. This difference was, however, not statistically significant (p = 0.062, Mann–Whitney) (Fig. 4).

Figure 4.  Sum of overall scores for the second iteration on both Capsulorhexis and Cataract navigation training modules for various levels of stereoacuity. Median = thick horizontal line; boxes = 25th and 75th percentiles; whiskers = 10th and 90th percentiles.

No student had previous experience with eye surgery simulators.

Discussion


Our study has demonstrated the initial learning curves for two different modules on the EYESi ophthalmic intraocular surgery simulator. A plateau for overall score, as well as for time, occurred after very few iterations on the Cataract navigation training module. Similar results have been reported for individuals more experienced in ophthalmology, such as residents and experienced surgeons (Mahr & Hodge 2008). Rapid learning is common for other surgical simulator tasks as well (Park et al. 2007). Simulator training has been shown to be beneficial for early clinical performance in other medical fields such as colonoscopy (Park et al. 2007) and laparoscopic surgery (Seymour et al. 2002). Seymour et al. (2002) showed that tissue damage such as injury and burns was five times more likely to occur in the nontrained group than in the VR-trained group. On both of our studied modules, the students learned how to handle instruments inside the model eye more efficiently and cautiously. The simulator therefore has the potential to be part of the initial training of new cataract surgeons.

The capsulorhexis procedure is considered to be one of the most difficult steps in a cataract operation (Dooley & O’Brien 2006). The trainee has to focus attention on both the instrument and the rhexis formation. It is thus likely that considerably more training than 10 iterations is needed to reach a level of proficiency. In our study, the trainees reached a plateau regarding time but not regarding overall score. The overall score includes quality parameters of the final rhexis and is a better representative of capsulorhexis skill acquisition than time. In the report by Feudner et al. (2009), designed to improve capsulorhexis wet-laboratory performance, all 30 students and 29 of 32 residents were able to reach a score of 90 of 100 after two training sessions. Each session included two rounds of nine different tasks, four of which were capsulorhexis simulation tasks. They also showed that this training improved performance in the capsulorhexis wet-laboratory procedure. However, considering that experienced surgeons reached only a disappointing 155 of a possible 300 points in a report from Le et al. (2008), which also included the Capsulorhexis procedure, a lower, more time-efficient performance goal might well be enough to strive for. (In that report, experienced surgeons performed the Capsulorhexis task, in conjunction with two manipulating tasks, significantly better than novices at initial practice.) More studies are needed to establish a true level of proficiency for experienced cataract surgeons and how many iterations a novice would in general need to reach that level. As a comparison, on a laparoscopy simulator, participants receptive to training needed an average of 25 iterations to reach proficiency (Schijven & Jakimowicz 2004).

The set-up of training is important. Ten iterations on the same level is probably not the most efficient way to learn. Instead, gradually increasing difficulty, termed ‘shaping’ in the behavioural literature, has been suggested as one training methodology (Gallagher et al. 2005). On the other hand, the increments have to be large enough to be sufficiently challenging for efficient training (Ali et al. 2002). As noticed in our material, skills at baseline vary between individuals, and we could see that some individuals with poor stereoacuity had difficulties improving their performance score. Others have shown that previous computer experience and visuo-spatial skills affect performance on VR surgery simulators (Hassan et al. 2007; Rosser et al. 2007). Schijven & Jakimowicz (2004) found one group in their study that did not benefit from training despite low initial scores, and in ophthalmology it has been reported that around 10% of residents have difficulties learning surgical skills (Binenbaum & Volpe 2006). Considering these facts, individualized training towards a level of proficiency is desirable. This is supported by research on the learning of motor skills, where self-controlled practice leads to more effective training (Wulf et al. 2010). To the best of our knowledge, no previous studies on the EYESi anterior segment modules have included more than five iterations of the same module on the same level. More studies are therefore needed to further investigate the capsulorhexis learning curve and training conditions.

Simulator overall score correlated with the OSACSS evaluation score. Because this evaluation tool has demonstrated construct validity for video-based evaluations of real cataract operations (Saleh et al. 2007), this correlation strengthens the validity of the simulator’s scoring system. In a similar manner, we found a correlation with the video-based modified OSATS scoring system. To our knowledge, this scoring system has not been used before to evaluate intraocular operations, but it has shown an ability to distinguish technical surgical skill in ophthalmic microsurgery (Ezra et al. 2009). The OSATS scoring system is also widely used in video-based assessment in other surgical areas (Kundhal & Grantcharov 2009; Schijven et al. 2010).

All our study subjects took a stereoacuity test. The estimated statistical power after grouping stereoacuity into <120 and ≥120 seconds of arc was, however, too low (<80%) to allow any definite conclusions regarding effects on performance. Rossi et al. (2004) demonstrated that stereopsis correlated with performance in vitreoretinal simulation; a confounding factor in their study, however, was that the subjects with vitreoretinal experience had better stereoacuity. We believe, nevertheless, that good stereoacuity is probably an advantage in cataract surgery. Notably, two of the three students with TNO >480 (one from group A and one from group B) scored zero points on seven and eight, respectively, of their first 10 attempts on the simulator. This study was not designed to investigate performance differences depending on stereoscopic vision, but it gives an indication of its importance in intraocular surgery. This factor may have a large impact on the recruitment and training of new cataract surgeons. Additional studies are needed, however, to investigate this further.

In conclusion, when training on the EYESi simulator, the trainees quickly learned how to handle instruments inside the model eye more efficiently and cautiously. The simulator therefore has the potential to be part of the initial training of new cataract surgeons, and it would be beneficial to train on both modules. However, the structure of training, especially for more complex tasks such as capsulorhexis, calls for individualization. Our experience with individuals with poor stereoacuity supports this strategy and may have a large impact on the recruitment and training of new cataract surgeons. Further studies are needed to optimize training programmes and make them time efficient.

Acknowledgments


This study was supported by a grant from the Herman Järnhardt Foundation, Malmö, Sweden. The authors have no commercial or proprietary interest in the instrument described.

References
