Design and evaluation of an image-guidance system for robot-assisted radical prostatectomy

Abstract

What's known on the subject? and What does the study add?

  • Systems for image guidance during laparoscopic surgery can be broadly defined as systems that enable the surgeon to refer to preoperatively gathered information during the procedure. For a laparoscopic system the preoperative information can be overlaid onto the laparoscopic video screen. Examples of surgical image-guidance systems and the results of early testing have been reported, but the technical methodologies used vary widely, as do the visualisation methods.
  • This study reports our experience of using an image-guidance system on 13 patients. Furthermore, we use previously proposed methodology to form a development and evaluation framework specific to image-guided laparoscopic radical prostatectomy. Finally, we propose that if the system development process is properly designed, it should be possible to correlate system technical parameters with clinical outcomes. We present a possible plot of this correlation for the key technical parameter, accuracy. Better understanding of this correlation should enable robust development and evaluation of surgical image-guidance systems to optimise patient outcomes.

Objective

  • To implement and test the feasibility of an image-guidance system for robot-assisted radical prostatectomy (RARP). Laparoscopic surgical outcomes may be improved through image guidance; however, demonstrating improved outcomes requires rigorous evaluation techniques. We therefore also present our work in establishing robust evaluation techniques.

Patients and Methods

  • Development work used three cadavers and an anatomy phantom. The system has been used on 13 patients.
  • During surgery the surgeon can refer to the patient's magnetic resonance imaging (MRI), acquired before the operation, overlaid on the endoscopic video image.
  • The result of the overlay process was measured qualitatively by the surgeon with reference to the desired clinical outcomes.

Results

  • The use of the overlay system has not resulted in any measurable change in clinical outcomes.
  • The surgeons found the system to be a useful tool for reference during surgery.
  • A more rigorous evaluation method is proposed that will enable on-going development.

Conclusion

  • Image guidance during RARP is feasible. We propose a series of measures that will improve further development and evaluation.
Abbreviations
AR: augmented reality
3D: three-dimensional
(L)(RA)RP: (laparoscopic) (robot-assisted) radical prostatectomy
RMS: root mean square
US: ultrasound/ultrasonography

Introduction

The introduction of image guidance to robot-assisted radical prostatectomy (RARP) is an area of increasing interest. This paper describes our recent experience in implementing a simple image-guidance system in theatre and its initial use on 13 patients. It became clear during early use of the system that we lacked a rigorous way to evaluate the system's performance. Without a rigorous approach to system evaluation, it is not possible to show clinical benefit or to develop the system to improve patient outcomes. The intent of the present paper is therefore two-fold. The paper begins with a description of the image-guidance system as implemented to date. The second part of the paper attempts to define an evaluation protocol that will enable proper development of the system.

Patients and Methods

Image-Guided Surgery Systems

The present study was concerned with augmented reality (AR) systems for image-guided laparoscopic surgery. Such systems work by showing the surgeon a processed preoperatively acquired image overlaid on the visible patient anatomy. The idea of using AR to aid surgery is not new, with various systems having been proposed [1-5], to name but a few. Systems tailored to the daVinci® surgical robot have also been proposed [6, 7]. Any AR image-guidance system must contain four core subsystems. These are:

  1. the preoperative imaging system,
  2. the registration system,
  3. the user interface, and
  4. the display system.

The various systems described in the literature each take a different approach to implementing each of the subsystems. Robust evaluation, of the type put forward by McCulloch et al. [8], requires a common way to compare such systems and evaluate their performance. In some cases, methods are emerging to do this, e.g. different display systems can be classified using the taxonomy proposed by Kersten-Oertel et al. [9]. Similarly, the use of open-source software toolkits such as IGSTK (Image-Guided Surgery Toolkit) [10] enables easier comparison of systems. The present paper is primarily concerned with developing a method to evaluate the registration system.

A minimalist image-guided surgery system

The goal of this project is to develop the registration subsystem for an image-guidance system for RARP, and to evaluate how the performance of the registration subsystem affects the clinical outcome. To this end, the first task was to build a minimalist image-guidance system that could serve as a baseline for ongoing development and evaluation.

The remaining subsystems were kept as simple as possible. The imaging system used unprocessed T2-weighted MRI images of the patient's prostate. These were already in clinical use for preoperative assessment by the surgical team, so their use for image guidance was straightforward. The display system simply shows the MRI images overlaid on the surgical scene with user-variable opacity, on a separate laptop screen or on the daVinci S auxiliary display. Figure 1 shows an example of the system in use.

Figure 1.

The surgeon's view of the image-guidance system. A preoperative image of the patient is overlaid onto the surgical scene. The opacity of the overlay can be varied between 0 and 100%. (20% shown at left, 100% at right).

We have maintained a simple MRI slice overlay for two reasons. Firstly, systems that show rendered three-dimensional (3D) anatomy, e.g. [4, 6], require that the preoperative image (MRI) is first ‘segmented’: the anatomy of interest (prostate, tumour location, neurovascular bundles) must be defined in the image, and at present this would be done manually by a radiologist. Secondly, this segmentation process may introduce errors to the system that are poorly quantified and difficult to control for.

The user interface is keyboard control via the laptop computer. Two different registration approaches were used; a more complete discussion of the registration system follows.

Image registration for AR image-guided surgery

To implement an image overlay system it is necessary to know the correct position of the preoperative image relative to the camera lens. Failure to correctly determine this relationship will result in a mismatch between the anatomy visible on the overlay and the actual patient anatomy visible through the camera. We refer to the process of determining the correct position and pose for the preoperative anatomy as a ‘registration’ process. Two images, the intraoperative camera view and the preoperative image, are registered so that they are aligned. For cases such as abdominal laparoscopic surgery, where the shape of the anatomy can change during surgery, it may also be necessary to deform the preoperative images to achieve an accurate registration. Within the literature there are many proposed methods for performing the registration process. To date the most common approach, used in [1-3, 5], is a calibrated and tracked camera together with fiducial markers. Such systems have the advantage that they will function regardless of what anatomy is visible through the laparoscope, i.e. they can operate blind. In cases where landmarks are visible through the camera the visible anatomy can itself be used for registration, as proposed for the daVinci in [6, 7].

The registration process can be defined as the determination of a set of mathematical transforms between coordinate systems. Figure 2 defines the coordinate systems of interest for this application.

Figure 2.

Any image-guided surgery system is defined by a set of geometric transforms between different coordinate systems. The figure defines the coordinate systems relevant here. The transforms between individual coordinate systems are combined to give the transform between the preoperative MRI (CS_MRI) and the laparoscope's video screen (CS_Screen).

Whilst the eventual clinical utility of any image-guidance system will depend heavily on the visualisation and user interface used, at the technical core of any image-guidance system must lie the registration process. Our chosen area of study is therefore how to estimate the relevant transforms and how errors in the estimation will influence the clinical utility of the finished system.
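To make the composition of transforms in Fig. 2 concrete, the following is a minimal sketch (in Python, not the system's actual code) of how a point defined in CS_MRI might be mapped to CS_Screen by chaining rigid transforms and a pinhole projection. All transform values, intrinsic parameters, and names here are assumptions for illustration only.

    import numpy as np

    def rigid(R, t):
        """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    # Hypothetical transforms (identity rotations, arbitrary offsets in mm):
    # in a real system the first comes from the registration algorithm and the
    # second from laparoscope tracking/calibration.
    T_tracker_from_MRI = rigid(np.eye(3), [10.0, -5.0, 200.0])
    T_camera_from_tracker = rigid(np.eye(3), [0.0, 0.0, -150.0])

    # Chain the transforms: a point defined in CS_MRI expressed in camera coordinates
    T_camera_from_MRI = T_camera_from_tracker @ T_tracker_from_MRI

    # Project onto CS_Screen with a simple pinhole model (focal length and
    # principal point are assumed values)
    K = np.array([[1000.0, 0.0, 960.0],
                  [0.0, 1000.0, 540.0],
                  [0.0, 0.0, 1.0]])

    p_mri = np.array([0.0, 0.0, 0.0, 1.0])   # a landmark defined in the MRI
    p_cam = T_camera_from_MRI @ p_mri        # ...expressed in camera coordinates
    u, v, w = K @ p_cam[:3]
    print("screen position (pixels):", u / w, v / w)

Errors in any one of the chained transforms propagate directly into the on-screen overlay position, which is why the registration transform is the focus of the evaluation framework described later.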

We designed and tested two methods to estimate the registration transform from the camera lens to the preoperative image. Neither method accounts for non-rigid deformation of the tissue, which will occur in practice. Their accuracy is therefore limited by the degree of shape change between preoperative imaging and surgery.

Both methods avoid the need for fiducial markers by using the pelvic bone. The pelvic bone is useful because it can be seen in various preoperative imaging modalities and intraoperatively, and its shape does not change. Additionally, the prostate lies near the centroid of the pelvic bone, meaning registration errors at the prostate will be minimised. Both methods estimate the position of the camera lens using an optical tracking system, shown in Fig. 3. Optical tracking was used in preference to daVinci kinematic data as the literature indicated it should be the more accurate tracking method [11-13].

Figure 3.

The laparoscope is tracked using 14 infrared-emitting diodes attached to a collar. The position of each diode is measured using a three-camera Optotrak Certus system.

The methods differ in how they determine the position of the patient relative to the laparoscope lens. The first method uses the fact that the internal surface of the pubic arch is visible through the laparoscope during the latter stages of RP. Before surgery, an ordered set of 42 points on the inner surface of the pubic arch is manually defined in the MRI image. A wire-frame image of these points is shown overlaid on the surgical scene and is manually aligned in two dimensions with the visible pubic arch using a simple keyboard interface (a minimal sketch of this step is given after Fig. 4). Figure 4 shows the alignment process. Once the alignment is established it is, in theory, possible to maintain it using the optical tracking data for the laparoscope.

Figure 4.

A set of 42 points on the inner surface of the pubic arch is manually identified in the MRI image. These form a wire frame that can be projected over the surgical scene. A simple user interface is then used to align the projected wire frame to the visible anatomy. The left-hand image shows the wire frame and visible anatomy out of alignment; the right-hand image shows them after alignment. Alignment takes <30 s.
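The fragment below is a minimal illustrative sketch of the manual alignment step, not the clinical software: an in-plane pixel offset is updated by keyboard nudges and applied to the projected wire-frame points before drawing. The key bindings and step size are assumptions.

    import numpy as np

    # Screen-space direction for each supported key (assumed bindings)
    NUDGE_DIRECTIONS = {
        'left':  np.array([-1.0, 0.0]),
        'right': np.array([1.0, 0.0]),
        'up':    np.array([0.0, -1.0]),
        'down':  np.array([0.0, 1.0]),
    }

    def update_offset(offset_px, key, step_px=2.0):
        """Return the new in-plane overlay offset after one key press."""
        return offset_px + step_px * NUDGE_DIRECTIONS.get(key, np.zeros(2))

    def shifted_wireframe(projected_points_px, offset_px):
        """Apply the current offset to the projected wire-frame vertices (N x 2 array)."""
        return projected_points_px + offset_px

    # Example: two nudges to the right, one down
    offset = np.zeros(2)
    for key in ['right', 'right', 'down']:
        offset = update_offset(offset, key)
    print("current overlay offset (pixels):", offset)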

Using the manual alignment method was perceived to have two significant drawbacks. Firstly, it is difficult to properly quantify the alignment accuracy, although early results indicate that the system has an apparent error of ≈20 mm. Secondly, the requirement for visibility of the pubic arch prevents the use of the system in the early stages of the operation. Whether the second of these problems is significant will be discussed later, as it is an important point in the process of designing a clinically useful system. To enable the system to be used whether or not the pubic arch was visible, an alternative method was developed, using a B-mode ultrasound (US) probe to percutaneously image the patient's pelvic bone in the operating room [14, 15]; Fig. 5 shows the process.

Figure 5.

A set of US images of the patient's pelvic bone is acquired immediately before surgery, with the patient in the operating position. These are aligned to a pseudo-CT image of the patient's pelvis using an image-to-image registration algorithm.

Finding the pelvic bone using ultrasonography should be more accurate than simple visual alignment and enables image guidance in the earliest stages of the procedure. These improvements come at the cost of significantly increased complexity. There is the obvious need for a US machine in the operating theatre, but there is also significant computational complexity within the algorithm that registers the US images to the preoperative MRI images. Thus, having implemented two possible registration methods, the question arises of how they are to be compared and assessed. Furthermore, as the methods are developed and improved, how can any future improvements be assessed? The next section puts forward a framework to assess the performance of an image-guidance system for RARP.
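Purely as an illustration of the kind of intensity-based rigid registration involved (this is not the algorithm used in [14, 15], and the file names and parameter values are assumptions), a generic sketch using the SimpleITK toolkit might look like the following.

    import SimpleITK as sitk

    # Load the pseudo-CT derived from the preoperative data and a compounded 3D US
    # volume (hypothetical file names).
    fixed = sitk.ReadImage("pelvis_pseudo_ct.nii.gz", sitk.sitkFloat32)
    moving = sitk.ReadImage("pelvis_us_compound.nii.gz", sitk.sitkFloat32)

    registration = sitk.ImageRegistrationMethod()
    registration.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    registration.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    registration.SetInterpolator(sitk.sitkLinear)

    # Start from a rough alignment of the image centres, then refine a rigid transform
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    registration.SetInitialTransform(initial, inPlace=False)

    rigid_transform = registration.Execute(fixed, moving)
    print(rigid_transform)

Even a generic pipeline of this kind involves metric evaluation over large image volumes and iterative optimisation, which is the source of the computational complexity referred to above.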

Defining the Image-Guidance System

Defining the clinical goals

The success or otherwise of any surgical innovation, including an image-guidance system, can only be judged by its impact on clinical outcomes. The first stage in designing both the image-guidance system and the evaluation method is therefore to define the relevant clinical outcomes. Table 1 lists the relevant clinical outcomes for RP.

Table 1. Clinical outcomes that define the success or otherwise of an RP. To be deemed a success, an image-guidance system for RP should have some demonstrable positive impact on some or all of these factors.
Clinical outcome | Measure
Positive margin rate | %
Biochemical PSA recurrence | %
Urinary continence | Months, %
Erectile function | Months, %
Damage to rectum | %
Conversion to open surgery | %
Postoperative pain | Visual analogue scale
Length of hospital stay | Days
Survival | Years

However, measuring clinical outcomes is of little direct use for the design and development of an image-guidance system: listing the desired clinical outcomes tells us nothing of the system's design goals. In general, assessing the clinical outcomes requires substantial sample sizes to account for confounding factors, and sufficient follow-up time. Measuring changes in the clinical outcomes is therefore not a practical way to evaluate and compare the performance of image-guidance systems during development. A more practical approach is to use the desired clinical outcomes to define a set of system design goals.

Defining the technical goals

Translating the clinical goals to technical goals is done by reviewing each clinical goal and determining what the image-guidance system needs to show to aid the surgeon in achieving it. Each technical aim defines something the system should ‘show’ the surgeon. By ‘show’ we mean that the system is passive, only informing the surgeon of the system's estimate of the position of the anatomy, but leaving any decision making in the hands of the surgeon. How the system shows the anatomy is a feature of the user interface and does not need to be defined at this stage. Reducing the positive margin rate and the recurrence of high PSA levels, and improving survival, are all functions of being able to see the tumour location and the prostate capsule. Improving urinary continence is a function of the clean resection and subsequent reconstruction of the urethra. The urethra is cut in two places, at the interface between the bladder and prostate, and at the prostate apex; aiding the identification of these areas should help improve continence outcomes. Preserving erectile function is a direct function of the preservation of the neurovascular bundles, which would be aided by showing the location of both the neurovascular bundles and the tumour. Avoiding damage to the rectum would be aided by helping to define the plane of the rectum below the prostate. One factor that leads to conversion to an open procedure is the occurrence of uncontrollable bleeding; this could be mitigated by showing the surrounding blood vessels. Reducing the hospital stay and postoperative pain would both be achieved by preventing conversion to open surgery. Table 2 summarises the resulting technical goals.

Table 2. The design goals. To improve the outcomes shown in Table 1 the system should meet some or all of the goals shown here.
Design goal | Measure
Show prostate | Accuracy
Show tumour location | Accuracy
Show bladder neck/prostate plane | Accuracy
Show position of prostate apex | Accuracy
Show prostate capsule | Accuracy
Show neurovascular bundles | Accuracy
Show plane of rectum | Accuracy
Show surrounding blood vessels | Accuracy

How well the system meets the design goals can be measured in the very early stages of clinical trials, for example through the use of questionnaires after surgery. Defining technical goals that are relevant to the desired clinical outcomes enables assessment of system performance much earlier than relying on measuring clinical outcomes. The next stage is to determine which measurable system parameters will influence the design goals.

Design goals to system parameters

In practice the user interface and visualisation method will have a very large impact on how well the system meets its design goals; accuracy is not very useful if the user cannot interpret the display. However, within the scope of this paper, accuracy is the primary measurable system design parameter. Joining Tables 1 and 2 and adding a column for the system parameters yields Table 3 [8, 9].

Table 3. An image-guided surgery system is defined by the system parameters in the left-most column. It is reasonable to expect that these will change significantly during system development. Further, less significant, changes can be expected after release of the system. However, the system parameters are not of interest clinically. The success or failure of the system will be judged by the outcomes in the right-hand column. A key requirement for an effective development process is therefore to link the system parameters with the outcomes. Whilst the system parameters can in theory be quantified via experiment or measurement, the system outcomes cannot be assessed without using the system on a significant number of patients. The outermost columns can be linked by the careful assignment of system design goals. If these can be measured, even subjectively, during system development, the development cycle can be significantly shortened.
System parameters | Design goals | Clinical outcomes
Measures
Accuracy | Tumour location | Positive margin rate
Preop. image resolution | Bladder/prostate interface | Biochemical PSA recurrence
Preop. image distortion | Extent of prostate capsule | Urinary continence (months, %)
Preop. image contrast | Show rectum | Erectile function (months, %)
Delay between image and surgery | Show neurovascular bundles | Damage to rectum
Update rate | Aid preop. planning | Conversion to open
Visualisation design [9] | | Postop. pain
User interface design | | Length of hospital stay
 | | Improved training
 | | Conversion
 | | Survival
Measurement methods
Direct measurement and laboratory experiment | Observation of system in use and user questionnaire | Analysis of trial results
Development stage [8]
1 Idea, 2a Development | 2a Development, 2b Exploration | 2b Exploration, 3 Assessment, 4 Long-term study
Preop., preoperative; postop., postoperative.

Table 3 links the desired clinical outcomes with system parameters and design goals that can be measured at the earliest stages of system development. Using this approach enables a development programme that follows the guidelines set down by McCulloch et al. [8], increasing the likelihood that the system will produce clinical benefits.

Accuracy

There will always be a difference between the position of a given anatomical point estimated by the guidance system and its actual position. For a daVinci system with 3D laparoscopic video, the system accuracy is defined as the magnitude of the distance between the estimated and actual anatomical point. In general, the system accuracy can be described as a statistical distribution around the true position of the point of interest. In this paper the accuracy figures given are root mean square (RMS) values. To aid visualisation of this error, Fig. 6 shows a 5 mm RMS error projected onto a typical prostatectomy image. For several reasons, but primarily because of the movement of soft tissues, the accuracy will be different for each of the anatomical targets.
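For reference, an RMS accuracy figure of this kind can be computed from a set of corresponding estimated and actual point positions as in the short sketch below (illustrative only; the point sets shown are invented placeholders, not measured data).

    import numpy as np

    def rms_error(estimated_points, actual_points):
        """RMS of the Euclidean distances between corresponding 3D points (in mm)."""
        estimated = np.asarray(estimated_points, dtype=float)
        actual = np.asarray(actual_points, dtype=float)
        distances = np.linalg.norm(estimated - actual, axis=1)
        return float(np.sqrt(np.mean(distances ** 2)))

    # Hypothetical example: three landmarks, each with a few millimetres of error
    estimated = [[10.0, 20.0, 30.0], [15.0, 22.0, 28.0], [12.0, 18.0, 33.0]]
    actual = [[12.0, 21.0, 29.0], [13.0, 20.0, 30.0], [12.0, 20.0, 30.0]]
    print("RMS error (mm):", rms_error(estimated, actual))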

Figure 6.

A visualisation of a 5 mm RMS error. A single point, lying near the apex of the prostate and shown at the centre of the cross hairs, has been projected onto the screen 1000 times under the influence of a normally distributed error with a standard deviation of 5 mm. The green ellipse represents a single standard deviation for the projected points, the blue ellipse two standard deviations. If the image-guidance system as presented was used in multiple operations, we would expect the overlaid point to fall within the green ellipse ≈68% of the time and within the blue ellipse ≈93% of the time.

For an image-guidance system where the surgeon would otherwise be operating blind, e.g. neurosurgery where the needle tip is not visible, there is a reasonably straightforward relationship between the accuracy of the system and its clinical utility. However, for RARP all of the anatomy listed in Table 2, with the exception of the tumour itself, is already visible to some extent to the surgeon through the laparoscopic cameras. Thus there may not be the same clear relationship between system accuracy and clinical utility. Whilst it is clear that a system that shows the anatomy to within 1 mm will be more useful than a system that shows the anatomy to within 5 mm, it would be wrong to set a threshold error above which the system becomes unusable: the surgeon retains the ability to mentally correct an inaccurately displayed image, using the visible anatomy.

An interesting feature of the development and evaluation of an image-guidance system for RP is therefore trying to determine a relationship between the system's accuracy and the system's clinical utility. This is important because it is likely that increasing the accuracy of the system will also increase the system's complexity. Increased complexity suggests potential increases in cost, and reductions in robustness and intuitive behaviour.

One of the goals of the present study was to establish a framework to examine the compromise between accuracy and complexity. Such a framework would enable three key outcomes. Firstly, it enables the objective evaluation of changes during the development of an image-guidance system. Secondly, it enables the effects of potential improvements to be estimated before implementation, so that planned improvements can be plotted on a development roadmap. Thirdly, it enables objective comparison of competing systems. At the core of this framework are two measures of system performance, clinical utility and system complexity. By attempting to measure how these change in relation to system accuracy it should be possible to strike an intelligent balance between accuracy and complexity.

Clinical utility

We can define a measure, clinical utility, which measures the effect of using the system on the outcomes listed in Table 1. A system that has a beneficial effect will have a positive clinical utility; a system with no impact on the clinical outcomes will have a score of zero. In reality it is unlikely that clinical utility could be measured absolutely; rather, it can only be used as a way of comparing two or more systems. As a system's accuracy improves, so should its clinical utility.

System complexity

We can define a similar measure, system complexity, which measures the complexity of a system. In the context of system accuracy this attempts to quantify the algorithmic complexity required to achieve a certain accuracy. An example of an image-guidance system with zero complexity is the daVinci S. Here there is no attempt to register the pre- and intra-operative images, hence the zero complexity. At the other end of the scale, an imaginary system with zero registration error could be defined as having a complexity of 1. All real systems that use some sort of registration algorithm to attempt to align the pre- and intra-operative images can then be placed between these two extremes. At present the numbers used for complexity are a purely subjective estimate. Attempts to improve the accuracy of a registration system will in general increase the system complexity.

Clinical utility vs complexity

With the two measures defined, we can attempt to examine the compromise between clinical utility and system complexity for given systems. By using the links between clinical utility and system parameters (accuracy) developed in Table 3, it is possible to use accuracy as a proxy for clinical utility. Complexity and accuracy can be quantified for the existing systems; more usefully, it should be possible to estimate the effect of proposed improvements, to determine whether they are likely to significantly improve clinical utility.

Measuring accuracy

The accuracy of the system using pelvic bone US for registration was determined using a combination of numerical simulation and laboratory experiment [16]. These experiments indicated that the system accuracy is ≈9 mm, not allowing for non-rigid deformation of the tissue, with the US-based registration process accounting for ≈7 mm of error and the laparoscope tracking for ≈5 mm. Measurement of the accuracy of the system using the visible surface of the pubic arch has not yet been completed; the measurement is complicated by the motion of the laparoscope. However, based on a subjective evaluation, an accuracy of ≈20 mm was estimated.
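For reference, the ≈9 mm overall figure is consistent with treating the two error sources as approximately independent and combining them in quadrature (this combination rule is our assumption here, not stated explicitly in [16]):

    \sigma_{\text{total}} \approx \sqrt{\sigma_{\text{US}}^{2} + \sigma_{\text{tracking}}^{2}} = \sqrt{7^{2} + 5^{2}}\ \text{mm} \approx 8.6\ \text{mm}

which rounds to the quoted ≈9 mm.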

System complexity cannot be measured absolutely, but it is possible to plot the systems' relative positions. The system that uses US for registration is more complex than the system using manual visual alignment. We suspect that the system using US registration will also be more clinically useful, in part because it is more accurate, but also because it enables overlay before the pelvic bone becomes visible.

Whilst using complexity and clinical utility to quantify the image-guidance systems is interesting, of more use is the ability to use these values to map out the development of an image-guidance system. During development of our systems we have identified several ways in which the system accuracy could be improved, including using fiducial markers and improving the laparoscope tracking algorithm. In the longer term, it is theoretically possible to account for non-rigid motion of the patient during surgery, allowing a guidance system with errors of <2 mm [17]. Such methods increase the system accuracy and, in general, the system complexity. By estimating their potential accuracy, clinical utility, and complexity, it is possible to plot the likely development trajectory of the system; Fig. 7 shows such a plot.

Figure 7.

Plots of complexity and predicted clinical utility vs system accuracy, for the two systems tested to date and a number of potential developments. NVB, neurovascular bundle.

Whilst we do not expect the numerical values used in Fig. 7 to be correct, they do form a useful framework for controlling system development. Furthermore, as development progresses, the plots in Fig. 7 can be populated with more accurate values of clinical utility, accuracy, and complexity. This forms a useful way to transfer knowledge to future development of similar systems.

Results

Table 4 summarises the clinical outcomes for the 13 patients included in the clinical study. Qualitatively, the surgeons found the system a useful addition in theatre. A proper understanding of the system design goals has enabled the development of a meaningful surgeon questionnaire to assess how well the current systems meet the design goals. This was not in place for the first nine cases, but was used for cases 10–13 and will be used for future cases.

Table 4. Clinical outcomes for the first 13 patients. There is no reason to expect that the system as implemented would have affected clinical outcomes.
Age, years | Preoperative PSA level, ng/mL | Stage | Gleason grade | Margins | Postoperative PSA level, ng/mL | Urinary continence at 8 weeks
58 | 7.1 | pT2c | 3+4 | Clear | <0.03 | Dry
70 | 7.8 | pT3a | 3+4 | Focal – base | <0.03 | Dry
60 | 6.4 | pT3a | 3+4 | Focal – apex | <0.03 | Dry
68 | 11 | pT2c | 3+4 | Clear | <0.03 | 1 safety pad
69 | 9.8 | pT2a | 3+3 | Clear | <0.03 | Dry
57 | 13.23 | pT2c | 3+4 | Clear | <0.03 | Dry
52 | 7.3 | pT2c | 3+4 | Clear | <0.03 | Dry
57 | 14.2 | pT2c | 3+4 | Focal – apex | <0.03 | Dry
58 | 5.6 | pT2c | 3+4 | Clear | <0.03 | 1 safety pad
72 | 10.4 | pT2c | 4+4 | Clear | <0.03 | Dry
66 | 5.8 | pT2c | 4+3 | Focal – apex | <0.03 | Dry
58 | 16.6 | pT3a | 3+4 | Clear | <0.03 | Dry
64 | 6.7 | pT3a | 3+4 | Clear | <0.03 | Dry

Discussion

We have developed and tested a simple image-guidance system for RARP. More importantly, we have introduced methods to quantify the system performance in a clinically useful way. Quantifying the system performance will enable the control of the system development process, as per McCulloch et al. [8]. Controlling the system-development process should yield a system that maximises positive patient outcomes, whilst ensuring a robust system. Furthermore, by showing a link between the measured system parameters, the system development goals, and the desired clinical outcomes it should be possible to show the clinical benefit of the system at an early stage. Potentially this could avoid the usual difficulties in setting up randomised controlled trials for surgical innovations.

Conflict of Interest

The work was funded by EPSRC DTA funding, Prostate Action, and the Guy's and St Thomas' Charity.

P. Dasgupta and B. Challacombe acknowledge financial support from the Department of Health via the National Institute for Health Research (NIHR) comprehensive Biomedical Research Centre award to Guy's & St Thomas' NHS Foundation Trust in partnership with King's College London and King's College Hospital NHS Foundation Trust. They also acknowledge support from the MRC Centre for Transplantation and project grant funding from the Guy's and St Thomas' Charity.
