With the growing adoption of minimally invasive surgery, many urology residency programmes have recognized the need to strengthen laparoscopic and robotic surgical training. Surveys of both US and Canadian urology trainees point to dissatisfaction with the exposure to, and training in, minimally invasive surgery during residency [1, 2].
In recent years, attention has focused increasingly on the need for training tools for robotic surgery. Despite the rapid and widespread clinical adoption of robotic surgery, methods for training and establishing competency have been slow to develop. At present, no validated or standardized curriculum exists for training in basic robotic surgical skills. As has happened with the concept of Fundamentals of Laparoscopic Surgery (FLS), significant efforts have been directed toward the development of basic robotic exercises for training in robotic surgery [5, 6]. Recently, virtual reality simulation has also been developed and validated in three predominant platforms: the Robotic Surgical Simulator (Simulated Surgical Systems, Williamsville, NY, USA) [7, 8]; the dv-Trainer (Mimic Technologies, Seattle, WA, USA) [9-13]; and the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA, USA/Mimic Technologies) [14-17]. For clinical training and evaluation, a validated assessment tool of robotic skills, the Global Evaluative Assessment of Robotic Skills (GEARS), has been created to determine clinical competency and to monitor robotic skills acquisition. GEARS has been previously validated in the clinical setting.
Traditional validation of training tools entails a stepwise progression of evaluation: face validity (realism of the tool); content validity (utility as a training tool); construct validity (ability to discern between novice and expert performance); and concurrent validity (correlation of performance with the ‘gold standard’). The challenge in robotic surgery today is the lack of an accepted ‘gold standard’ training method. Clinical robotic training (operating on real patients with expert supervision), as the default method of training, may not be the ideal starting point for novice robotic surgeons unfamiliar with the da Vinci interface. While concurrent validity traditionally assesses a novel training tool against the gold standard (by correlation of comparative performance), it may be more practical to evaluate a novel tool against other tools or methods that are being developed simultaneously. By correlating performance across different training methods, one may infer the relative utility of new robotic training tools (cross-method validity).
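To make the cross-method validity concept concrete, the short sketch below shows one way that per-participant scores from different training methods could be correlated; the example scores, the composite-score scales and the choice of Spearman's rank correlation are illustrative assumptions only and do not describe the analysis actually performed in the present study.

```python
# Illustrative sketch of a cross-method validity check: correlate each
# participant's composite score on one training method with the same
# participant's score on another. All numbers are hypothetical.
from scipy.stats import spearmanr

# One entry per participant, aligned by index across the three methods.
inanimate_scores = [42, 55, 61, 68, 74, 80, 88, 93]   # hypothetical composite task scores
vr_scores        = [38, 52, 58, 70, 71, 83, 85, 95]   # hypothetical simulator overall scores
in_vivo_gears    = [12, 15, 16, 19, 20, 23, 24, 27]   # hypothetical GEARS totals (6-30)

# A significant positive rank correlation between two methods would be
# interpreted here as evidence of cross-method validity.
for label, scores in [("inanimate", inanimate_scores), ("virtual reality", vr_scores)]:
    rho, p = spearmanr(scores, in_vivo_gears)
    print(f"{label} vs in vivo (GEARS): rho = {rho:.2f}, P = {p:.3f}")
```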
Inanimate tasks, virtual reality simulation, and in vivo training are three components of robotic training that have been independently developed and tested. In the present study, we externally evaluate these three standardized training methods for their construct validity and explore the concept of cross-method validity by correlating relative performance across the different methods.
To our knowledge, this is the first study in a single setting to simultaneously correlate the performance of expert and novice/trainee surgeons across inanimate, virtual reality and, in particular, in vivo platforms. Cross-correlation of individually validated tools is a novel concept (cross-method validity) that we propose as a method of providing comparative assessment of novel training tools and of establishing the internal consistency of a training curriculum; for the latter, this means that each component of the educational programme is inter-related and directly supports the global goal of robotic surgical skills acquisition. In the present study, we confirmed the construct validity of the three robotic training methods and demonstrated significant cross-method correlation amongst a diverse cohort of both expert surgeons and novice operators.
External validation of the outcomes of construct validation studies, in large and diverse cohorts, is a valuable exercise, especially for emerging and novel training methods before their widespread adoption. Additionally, while a traditional validation step may compare a novel method with an established gold standard method (concurrent validity), robotic training has thus far lacked an established training method. With several training methods being explored and developed simultaneously (i.e. inanimate, virtual reality and in vivo), an alternative or additional validation step may be to prospectively compare them against each other: cross-method validity. While it may seem intuitive that training methods demonstrating construct validity would also have cross-method validity between them, such a relationship has not previously been demonstrated.
As individual training tools are developed for robotic surgery, residency programmes must determine which of these should be integrated into their robotic surgery curriculum. The American College of Surgeons has proposed a model for surgical skills acquisition, which includes expert demonstration and error avoidance, proficiency-based practice, and structured assessment. As part of the pre-clinical component of robotic training, we propose a multi-method approach (inanimate, virtual reality, and in vivo) to robotic surgery training, which would provide trainees with the opportunity to develop and demonstrate their proficiency with basic robotic surgical skills before proceeding to the clinical arena.
Inanimate training requires minimal additional cost once a robot is available at an institution. Simply constructed homemade materials or, preferably, validated standardized task kits (e.g. the Fundamental Inanimate Robotic Skills Tasks described in the present study) can be used to provide hands-on experience with the controls and handling of the robotic instruments. The featured inanimate exercises are analogous to the widely practised FLS tasks for laparoscopic training. Standardized, validated training tasks and methods of evaluation are important for establishing consistent performance outcomes. One limitation of this method is the requirement for an available robotic system for training, which may pose an accessibility challenge at high-volume centres.
Virtual reality simulation is a novel and emerging method for robotic surgery training. While it involves an additional cost for either a stand-alone simulator (the Robotic Surgical Simulator or the dv-Trainer) or an add-on simulation unit for the robotic surgeon console (the da Vinci Skills Simulator), it allows familiarization with the robotic interface and facilitates training of basic surgical skills (e.g. needle handling). Currently, all commercially available simulators that have been extensively validated in the literature [7-17] are limited to basic skills training. Virtual reality has the potential to play a larger role in training once cognitive-based and procedure-specific modules (e.g. prostatectomy and partial nephrectomy) are developed. Initial validation studies already suggest that extended simulation training of basic skills has an impact on real tissue surgical performance.
In vivo training in the animal model is perhaps the most sophisticated training method before intraoperative clinical training. High-fidelity simulation becomes important once basic skills have been acquired and procedural learning begins, but it is expensive, requiring a dedicated training robot and an animal facility that few programmes can afford. Because of cost constraints, in vivo animal training is likely to be limited to advanced procedural training at select centres. At institutions where a robotic animal laboratory is available, in vivo training should be performed and assessed using a standardized assessment tool to track progress and proficiency. GEARS has previously been validated in the clinical setting. In the present study, GEARS was used to assess performance on a standardized task that required both delicate tissue handling and suturing, which is currently not replicable in inanimate or virtual reality training. We confirmed that this assessment tool can reliably differentiate trainee and expert robotic performance (P < 0.001). Future efforts will be directed toward the development and validation of procedure-based competency assessment tools, including hilar dissection, tumour excision and renorrhaphy for partial nephrectomy, and bladder neck transection, pedicle control, nerve dissection and reconstruction for prostatectomy.
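As an illustration only, a construct validity check of this kind could be computed by comparing expert and trainee GEARS totals with a nonparametric test; the scores below are hypothetical, and the test shown (Mann-Whitney U) is an assumption rather than necessarily the statistical procedure used in the present study.

```python
# Illustrative sketch of a construct validity check: do expert GEARS totals
# differ from trainee totals? All scores are hypothetical.
from scipy.stats import mannwhitneyu

trainee_gears = [13, 15, 16, 17, 18, 18, 19, 20]   # hypothetical GEARS totals (6-30)
expert_gears  = [25, 26, 27, 27, 28, 29, 29, 30]

# A small P value (one-sided: experts score higher) supports the tool's
# ability to discriminate expert from trainee performance.
u_stat, p = mannwhitneyu(expert_gears, trainee_gears, alternative="greater")
print(f"Mann-Whitney U = {u_stat}, P = {p:.4g}")
```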
Clinical training involving patients should follow the establishment of proficiency with the above-mentioned methods. There are several challenges to robotic clinical training, most stemming from the robotic interface, which precludes hands-on teaching and limits control to a single surgeon. Clinical assessment tools, such as GEARS, can provide informative feedback on trainee performance and serve as a method of evaluating the clinical outcome of a skills training programme.
In the present study, we provide a novel assessment method (cross-method validity) that may help in the development of an integrated robotic surgery curriculum. To maximize efficiency, synchronized development of standardized training tools into a training curriculum is needed, with best practices established through rigorous, performance data-based validation efforts. We propose evaluation of standard trainee-expert performance characteristics on new training tools using traditional forms of validation (i.e. face, content, construct and concurrent validities) as well as comparative assessment (cross-method validity), as described in the present study. While training and assessment are two different domains, they are integral components of the education process, and a well-designed and validated training tool may also be used as an assessment tool. As further performance data are accumulated for different levels of experience, proficiency benchmarks for skills evaluation can be generated.
The different methods, whether inanimate, virtual reality or in vivo, should concurrently target the same global skill sets; therefore, observation of performance correlation across methods is relevant and can serve to internally evaluate the individual components of the training programme for their value as both training and assessment tools. For example, the simplest featured virtual reality task, ‘Peg Board’, probably has limited utility, as it did not exhibit a significant correlation with the other two methods (inanimate and in vivo), while performance of the three more sophisticated tasks demonstrated significant correlation with the other methods (Table 3). In addition, performance of the inanimate tasks showed the strongest correlation with in vivo robotic performance, supporting the important role of such inanimate exercises in a robotic training programme. Accordingly, cross-method correlation can be used to select the most useful training and assessment tools when constructing a robotic training curriculum. Further efforts are needed to demonstrate the ability of different training methods to result in better clinical outcomes (predictive validity).
The present study is not without limitations. Its primary limitation is that it provides a static, snapshot assessment of robotic training in a single, although broad, cohort. Longitudinal studies to assess the impact of the training programme on robotic skill acquisition are needed. Another limitation may be the lack of comparison of deconstructed skills across methods. This reflects a limitation of today's robotic training tools, namely the lack of equally developed assessment metrics. For example, current limitations of virtual reality simulation prevent certain skills comparisons (e.g. suturing and realistic tissue deformation). Furthermore, inanimate task training, even the FLS, presently lacks validated complex metrics apart from time and error counts. We expect that, with growing attention and sophistication in training tools, these limitations will be addressed. The present study provides a global comparative assessment of skills.
Efforts to integrate a robotics training programme with didactic and cognitive components are under way, incorporating the three methods and prospectively tracking trainee performance over time. As minimum standards of proficiency are defined, the tools used in the present study may assist in establishing benchmarks for competency and credentialing for robotic surgical privileging. Further validation of the cross-method concept is actively being pursued, with broader application across multiple novel training platforms.
In conclusion, the present findings externally confirm the construct validity of the featured training methods and demonstrate a significant performance correlation across virtual reality, inanimate, and in vivo settings. We present the concept of cross-method validation of individual training tasks, which may provide a method of comparatively evaluating novel tools developed for robotic surgery training.