Augmented reality display of neurosurgery craniotomy lesions based on feature contour matching

Traditional neurosurgical craniotomy relies primarily on two-dimensional cranial medical images to estimate the location of a patient's intracranial lesions. This approach depends on the experience and skill of the doctor and may result in accidental injury to important intracranial physiological tissues. To help doctors determine lesion information more intuitively and to improve the accuracy of surgical route planning and the safety of craniotomy, an augmented reality method for displaying neurosurgery craniotomy lesions based on feature contour matching is proposed. The method uses threshold segmentation and region growing algorithms to reconstruct a 3-D model from computed tomography (CT) images of the patient's head. An augmented reality engine is used to adjust the relevant parameters of the reconstructed model to meet the doctor's requirements, and feature contour matching is adopted as the augmented reality matching method. When a mobile terminal is aimed at the real skull model, the virtual lesion model is displayed. Through the designed user interface, doctors can view the patient's personal information and can zoom in, zoom out, and rotate the virtual model.


| INTRODUCTION
Traditional craniotomy has several clinical problems, such as a high risk of injury to critical tissues, high operative intensity, and heavy reliance on the doctor's experience and skill. Medical staff usually obtain two-dimensional medical images of the patient's head through computed tomography (CT) or MRI scans to understand the patient's tumour, but these images do not convey the relative position of lesions in 3-D space [1]. Doctors can only judge a patient's 3-D injury area by observing these two-dimensional images and estimating the location of the brain lesion, which causes large positioning errors. In the preparation stage of craniotomy, the doctor therefore enlarges the area of the skull fenestration, increasing the patient's trauma and the risk of surgery [2]. Augmented reality technology is thus needed to assist in the 3-D visualization of lesions during craniotomy. Augmented reality supplements the real environment with virtual images, providing observers with more comprehensive information or a better experience [3].
With the integration of science, technology, and medicine, augmented reality and wearable devices have shown great potential in the medical field. The 3-D reconstruction of medical images and the visualization of lesions have become a focus of research worldwide. Fusing augmented reality virtual data into the real scene of neurosurgery reduces the difficulty of analysis and planning for surgeons during the operation, and the technique is widely used in medicine [4]. In maxillofacial surgery, Zhu et al. [5,6] used an identifiable dental splint to register the splint with reconstructed images and, on this basis, built an intraoperative augmented reality navigation system that determines the positional relationship between the occlusal splint and a reference marker and presents maxillofacial information through a head-mounted display. The system can be used to treat maxillofacial hypoplasia, mandibular retraction, and mandibular angle hypertrophy. Zinser et al. [7] brought a virtual surgical plan into clinical application: the maxilla reconstructed from medical images is superimposed on the corresponding position of the patient through a video graphics array camera, improving the stereoscopic perception of medical staff. Basnet et al. [8] increased the accuracy of augmented reality alignment by reducing noise and enhancing image edges.
In abdominal and cardiac surgery, Oizumi et al. [9] showed anatomical positions to the doctor by superimposing 3-D images on the abdomen. Mahmoud et al. [10] used simultaneous localization and mapping to track patients and augmented reality to provide preoperative anatomical models for liver surgery; by comparing virtual marker points to determine the location of the virtual model, surgical errors are reduced. Uchida et al. [11] identified the intubation position in cardiac surgery through an augmented reality projection. Zhang et al. [12] optimized a volume deformation algorithm and proposed an augmented reality projection method adapted to the navigation of laparoscopic partial nephrectomy, providing visual support for laparoscopic surgery. Kalia et al. [13] proposed a calibration method that addresses the blind area in the visual field of the da Vinci surgical robot in prostate surgery, allowing the doctor to avoid removing the endoscope during the operation.
In breast surgery, Chang et al. [14] combined augmented reality with breast surgery, providing preoperative navigation through 3-D imaging, improving the doctor's perception, and increasing the safety of the operation.
In the field of neurosurgical craniotomy, Besharati Tabrizi et al. [15] projected an image of the region of interest onto the surface of the patient's head with augmented reality; fiducial markers were added before the operation, the stereoscopic image was reconstructed from CT scans, and the registration error was measured manually. Li et al. [16] developed an augmented reality navigation system for sinus and skull base surgery that fuses endoscopic images with virtual images of the patient, obtains spatial coordinates through a tracking device fixed on the patient's skull, and provides visual information for surgery. Dai et al. [17] used augmented reality and a standardized interactive 3-D orthogonal transformation method to map lesions onto a scalp model and generate conformal virtual incisions in real time; through marked points on the patient's head, the contour of the craniotomy is observed in the virtual surgical space. Wen et al. [18] developed a robot-assisted surgery system based on augmented reality and Kinect that helps surgeons determine the position of internal organs and interact with them. Fan et al. [19] proposed an interactive system suitable for surgical teaching, surgical planning and navigation, and telemedicine; the system fuses preoperative and intraoperative images and uses an observation-based 3-D visualization scheme. As early as the mid-1990s, heads-up display systems based on integrated operating microscopes began to be developed, but mainly for intraoperative use.
Most of the operations above rely on marking equipment, which requires patients to undergo multiple examinations to obtain medical images with marker points, increasing the burden on patients. Other operations require projection onto the body surface; because the body surface is irregular, the accuracy of the projection is low.
This work aims to reduce the burden on patients, eliminate the use of marking equipment, and avoid projection errors so that doctors can intuitively observe a patient's virtual 3-D lesions. In the initial stage of craniotomy, the path planning of the scalp incision and craniectomy is also very important; once the intracranial lesion is visualized, the doctor can obtain an optimal solution when planning the path of the subsequent operation. This article studies in depth an augmented reality display method for neurosurgery craniotomy lesions. By comparing point cloud matching with feature contour matching, the accuracy of feature contour matching is verified. This provides a theoretical basis for the accuracy of the 3-D display of the lesion location in the real surgical environment and a visual basis for the doctor to plan the craniotomy approach in the early stage of neurosurgical craniotomy.

| AUGMENTED REALITY VISUALIZATION OF LESION
To provide spatial information for craniotomy using augmented reality, this paper proposes an augmented reality display method for neurosurgery craniotomy lesions based on feature contour matching (see Figure 1). The 2-D image set of the patient's head is reconstructed into a 3-D model, and the tissues that affect the safety of the operation are visually adjusted according to the doctor's requirements. The optimal matching method is then selected by comparing feature point cloud matching with feature contour matching, and the virtual model is accurately matched with the real model and presented to the doctor in augmented reality. The doctor can thus intuitively observe the location of the lesion and become familiar with the surrounding tissue structure through the acquired spatial position of the lesion. When designing the operative route before surgery, arteries, blood vessels, and other dangerous areas can be avoided as much as possible, increasing the accuracy of the craniotomy location and the safety of the operation.

| MEDICAL IMAGE RECONSTRUCTION
When viewing a patient's two-dimensional medical images, doctors often must rely solely on imagination to determine the relative position of the lesion in the real environment. To address this problem, an augmented reality virtual environment is created to express the 3-D information of the patient's head. First, the patient file is imported into the Mimics software, and the patient's CT images can be observed on its interface. The interface is divided into the coronal view (upper left), the axial view (upper right), the sagittal view (lower left), and the 3-D image display area (lower right) (see Figure 2). These four views are linked to each other: any position in the coronal view corresponds to a position in the axial and sagittal views, and the 3-D coordinates of that position can be expressed through the coordinates of the three views. Through this interface, the doctor can accurately view information such as the depth and position of the tumour relative to the head.
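The cross-view coordinate linkage described above amounts to mapping a voxel index to a 3-D position using the volume's origin and voxel spacing. A minimal illustrative sketch in Python, assuming axis-aligned DICOM-style geometry with no gantry tilt (the origin and spacing values below are hypothetical, not taken from the paper's data):

```python
import numpy as np

def voxel_to_world(index, origin, spacing):
    """Map a (row, col, slice) voxel index to 3-D world coordinates (mm),
    given the volume origin (mm) and voxel spacing (mm)."""
    return np.asarray(origin) + np.asarray(index) * np.asarray(spacing)

# Hypothetical values: 0.5 x 0.5 mm in-plane resolution, 1.0 mm slice spacing
origin = (-120.0, -120.0, 0.0)
spacing = (0.5, 0.5, 1.0)
print(voxel_to_world((100, 240, 30), origin, spacing))  # mm coordinates of that voxel
```

The same index appears as a row/column pair in each of the three orthogonal views, which is what lets the linked interface resolve a single 3-D coordinate from them.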
To enable the doctor to view a stereoscopic image of the patient, the CT images are processed with a threshold segmentation algorithm. Different tissues have different CT values, and there are large differences between the skull and soft tissue. Therefore, by setting different thresholds, stereoscopic images of the skull and soft tissue can be extracted (see Figure 3). Doctors usually judge a tumour's location from abnormal tissue density in the patient's CT images, since different densities represent different tumour signs.
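Threshold segmentation of this kind reduces to masking voxels whose CT value (in Hounsfield units) falls inside a tissue-specific window. A minimal sketch on a synthetic slice (the HU windows below are common approximations for bone and soft tissue, not values taken from the paper):

```python
import numpy as np

def threshold_segment(ct, lo, hi):
    """Return a binary mask of voxels whose CT value (HU) lies in [lo, hi]."""
    return (ct >= lo) & (ct <= hi)

# Synthetic 2-D CT slice: air background (-1000 HU), soft tissue (~40 HU), bone (~700 HU)
ct = np.full((8, 8), -1000, dtype=np.int16)
ct[2:6, 2:6] = 40    # soft-tissue region
ct[3:5, 3:5] = 700   # bone region embedded in it

bone_mask = threshold_segment(ct, 300, 3000)   # assumed bone window
soft_mask = threshold_segment(ct, -100, 299)   # assumed soft-tissue window
print(bone_mask.sum(), soft_mask.sum())
```

Applying the two windows to each slice and stacking the resulting masks yields the separate skull and soft-tissue volumes used for reconstruction.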
Because of noise [20], blood vessels cannot be displayed clearly in CT images, and their positions cannot be observed completely. To avoid missing the pathological area and its blood vessels, MRI images are compared to check the relative positions. The brightness of the CT image is adjusted to increase clarity, different cross-sectional layers are observed, and the same position is compared across the sagittal, coronal, and axial views to check the position of the tumour and its blood vessels. The region growing algorithm and the 3-D magnetic lasso function are used to segment and trim the lesion area and its blood vessels: seed points are diffused from the selected tumour centre with the region growing method, and the region of similar pixels is limited to a certain range by the 3-D magnetic lasso. Then, by comparing the corresponding positions in the three views, the central area of each blood vessel is diffused from seed points, and a 3-D view of the tumour and its blood vessels is obtained (see Figure 4). By further adjusting the level of detail, smoothness is increased and defects are eliminated. To meet the technical requirements for enhancing the structure of the lesion, the reconstructed models of the patient's skull, soft tissue, lesion, and blood vessels are imported into Unity 3-D. To satisfy the doctor's visual requirements and fully display the patient's intracranial information, the skull and soft tissue are made transparent, the meningioma is rendered purple, the intracranial vessels are rendered blue, and the location and volume of the lesion are highlighted (see Figure 5). To approximate the real neurosurgical craniotomy environment, a parallel light source is added; by adjusting its brightness and angle, the head is illuminated in a state similar to the real environment.
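The region growing step can be sketched as a breadth-first flood from a seed point, accepting neighbours whose intensity stays within a tolerance of the seed value. An illustrative 2-D version (the image, seed location, and tolerance are made up for demonstration; the actual segmentation runs in 3-D inside Mimics):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed`, adding 4-connected pixels whose intensity
    differs from the seed intensity by at most `tol`."""
    h, w = img.shape
    seed_val = int(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and abs(int(img[nr, nc]) - seed_val) <= tol:
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask

# Toy image: a connected low-intensity region (the "tumour") amid other tissue
img = np.array([[10, 10, 50, 50],
                [10, 10, 50, 50],
                [10, 10, 10, 50],
                [90, 90, 90, 50]], dtype=np.uint8)
mask = region_grow(img, (0, 0), tol=5)
print(mask.sum())  # size of the connected region around the seed
```

The magnetic lasso mentioned above would then act as a spatial bound that keeps such a flood from leaking into adjacent tissue of similar intensity.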
The Vuforia kit, which integrates seamlessly with Unity 3-D, is combined through the Unity 3-D engine. The reconstructed skull, soft tissue, and 3-D lesion models are superimposed on the skull model, and the device camera is called to provide the real-object recognition function of augmented reality, which maximizes the scope of each anatomical area, increases contrast, and improves the clarity of lesion visualization.

| MATCHING VIRTUAL IMAGING WITH REAL SKULL MODEL
In neurosurgical craniotomy, the augmented reality display must match the virtual lesion information with the patient's real skull to provide the doctor with accurate spatial information. However, the most critical and difficult problem in augmented reality technology is the accurate registration of virtual objects relative to real objects.
Commonly used registration methods are divided into hardware-based and software-based methods [21]. Hardware-based registration is expensive and its accuracy is relatively low, so software-based methods have become mainstream. These generally fall into two categories: (i) marker-based registration, in which identifiable markers are added to the actual scene and tracked; the markers are identified to obtain scene information, and a transformation matrix is computed to achieve 3-D registration and fusion of the virtual and real objects; and (ii) natural-feature-based registration, in which natural feature points are extracted from the actual scene and a transformation matrix maps the corresponding points to achieve 3-D registration and fusion of the virtual and real objects.
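In either category, the core of the transformation-matrix step is estimating a rigid transform from corresponding points. A standard way to do this is the Kabsch/SVD method, sketched below on synthetic marker coordinates (a generic illustration of point-based rigid registration, not the specific pipeline used in this paper):

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Kabsch: find rotation R and translation t such that dst ≈ R @ src + t,
    given corresponding point sets src and dst (N x 3)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Hypothetical marker coordinates in the virtual (src) and real (dst) frames
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.],
                   [np.sin(theta),  np.cos(theta), 0.],
                   [0.,             0.,            1.]])
t_true = np.array([2., -1., 0.5])
dst = src @ R_true.T + t_true

R, t = estimate_rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

With noisy correspondences the same procedure returns the least-squares rigid fit, which is why at least three or four well-spread, non-collinear markers are used in practice.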
In craniotomy, marker-based registration often comes at a cost to the patient: either invasive markers are attached to the patient's head to obtain position information, or non-invasive markers require additional medical imaging scans, which increases the burden on both doctors and patients. Natural-feature-based registration instead uses the feature information already present in the actual scene, namely the patient's head, and does not harm the patient.
To improve the authenticity of the experiment, the patient model reconstructed from the CT image data was produced by 3-D printing. The skull model and the soft tissue model are shown in Figure 6. Because the skull model is 3-D printed from the CT-derived skull model, the two models are the same size, so the experiment can achieve the same effect as a real craniotomy.
In neurosurgical craniotomy, the patient's head is usually fixed on a positioning frame, so craniotomy failure caused by skull movement is avoided. Two augmented reality matching methods are proposed and compared through experimental verification.

| Virtual and real matching by feature contour
The virtual model is matched with the real skull by obtaining the feature contour of the real skull model (see Figure 7). The feature contour is the most stable expression of the target consistent with doctors' visual perception; more importantly, compared with other features, it is more robust and less affected by the operating room environment. Model Target is an augmented reality module in Vuforia that can identify an object from its shape. By using the outline of the virtual stereoscopic model as a guide view, the virtual model is matched to the real model through this guide view, achieving the augmented reality display of the virtual model.
The 3-D model of the skull is selected as the specific object for the Model Target. Because the object is geometrically rigid (neither deformed nor malleable) and has stable surface features (no glossy or reflective surfaces), the 3-D skull model can be used as the target to be tracked.
By setting the best viewing angle at which the doctor observes the patient's condition, the feature contour of the head at that angle is obtained, yielding the guide view described by that contour. On the Unity 3-D platform, the virtual model with its guide view and the model representing the real skull are unified into the world coordinate system, and the coordinates of the two models are overlapped so that the model remains fixed under the guide view. In this position, the guide view can match the features of the real skull in the actual scene. When the doctor aims the camera of the mobile terminal at the real model in the actual scene, the guide view matches the outline it represents, and the virtual model and the highlighted lesion features are displayed on the screen.
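The feature contour underlying the guide view is essentially the silhouette boundary of the model at the chosen viewing angle. A minimal 2-D sketch of extracting such a boundary from a binary silhouette (illustrative only; Vuforia generates its guide views internally):

```python
import numpy as np

def silhouette_contour(mask):
    """Return the boundary pixels of a binary silhouette: foreground pixels
    with at least one 4-connected background neighbour."""
    padded = np.pad(mask, 1, constant_values=False)
    # A pixel is interior if all four 4-connected neighbours are foreground
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior

mask = np.zeros((6, 6), dtype=bool)
mask[1:5, 1:5] = True              # 4x4 filled silhouette
contour = silhouette_contour(mask)
print(contour.sum())  # 12 boundary pixels of a 4x4 square
```

Matching then amounts to aligning this boundary with the edges detected in the live camera image, which is why the contour is relatively insensitive to lighting and texture in the operating room.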
According to the matched model displayed on the mobile terminal, the lesion and its surrounding environment are highlighted. Doctors can mark the size and location of the tumour on the displayed virtual image (see Figure 8), discuss the surgical route, and display the planned trajectory on the scalp.

| Virtual and real matching through feature point clouds
By collecting a data set representing a large number of cranial feature points, a cranial feature point cloud is constructed so that the virtual head can be matched with the real patient's head through the feature point cloud (see Figure 9). The position and orientation of the 3-D object relative to the origin of the coordinate space are first defined. The skull model is placed on a target image composed of overlapping triangles, and the model's position is determined from the coordinate origin and axes displayed by the scanned target image. When the model is scanned by a scanner, as many feature points as possible are obtained from the skull model. After multiple scans, the scan with the most feature points, the most complete point cloud, and the best scanning effect is selected as the experimental sample. By matching the feature point cloud of the virtual head model with the corresponding feature point coordinates of the real model in the real world, the doctor can use the camera of the mobile terminal to match the virtual model with the real model and observe the patient's virtual model at that position on the screen.
After many experiments with the feature point cloud-based matching method, it was found that the matching process is difficult and that it takes a long time for the virtual objects to appear. Even when the mobile terminal is kept stationary, the virtual model shakes severely after being displayed, which leads to a large registration error.
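A registration error of the kind observed here can be quantified with a simple nearest-neighbour residual between the model's feature points and the displayed (possibly offset) points. An illustrative sketch with synthetic points (this is not the evaluation procedure used in the paper, just a minimal way to put a number on an offset):

```python
import numpy as np

def nn_rms_error(model_pts, scene_pts):
    """RMS of each model point's distance to its nearest scene point —
    a simple score for how well two point clouds are registered."""
    d = np.linalg.norm(model_pts[:, None, :] - scene_pts[None, :, :], axis=2)
    return float(np.sqrt((d.min(axis=1) ** 2).mean()))

model = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
aligned = model.copy()                          # perfectly registered
offset  = model + np.array([0.5, 0., 0.])       # shifted by 0.5 units
print(nn_rms_error(model, aligned))  # 0.0 — perfect match
print(nn_rms_error(model, offset))   # > 0 — registration offset
```

Tracking jitter shows up in such a score as frame-to-frame variation of the residual even when both the camera and the object are stationary.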
When the virtual model is matched with the real model, the images obtained by the two methods are processed. The red line segments represent the edges of the virtual model, and the white line segments represent the edges of the real model; corresponding locations are marked with arrows. It can be seen that the matched model is offset in the feature point cloud-based method, whereas in the contour feature-based method the matched model is consistent (see Figure 10). Comparing the two experiments, the feature point cloud-based method yields a poor virtual-real matching effect: the virtual model jitters severely, the steps of the method are tedious, and obtaining the feature points of the real model takes a long time. The contour feature-based method yields a good matching effect: it is easier to operate, the guide view is easy to obtain, the virtual model is stable once matched, and the rendering effect meets the requirements of the highlighted display. It is worth noting that both matching methods assume that the patient is fixed on a head positioning frame; the target of virtual object matching is a stationary object, so when the object to be matched in the actual scene is non-stationary, these two methods are not yet applicable.

| HUMAN-COMPUTER INTERACTION
Given the success of the above matching method, a human-computer interaction interface is designed so that the doctor can better observe fine details around the lesion, view hidden details that are not easily noticed, and plan the craniotomy path while avoiding dangerous areas. The doctor operates the virtual objects displayed on the screen by pressing buttons. For the convenience of doctors, the interface has only four buttons and a text area showing patient information (see Figure 11).
In this interface, the key functions of the different buttons are as follows: The ON button controls the display of the virtual lesion model (see Figure 12a).
The OFF button controls the disappearance of the virtual lesion model (see Figure 12b).
After clicking the RESET button, the adjusted virtual model is restored to its original state.

| CONCLUSION
Preoperative craniotomy path planning has a considerable influence on the subsequent intraoperative procedure. An augmented reality display method for neurosurgical craniotomy lesions based on feature contour matching is proposed, which provides doctors with accurate lesion information through an augmented reality display. It can visualize the patient's intracranial information and help doctors plan the path of the scalp incision and craniectomy. The method performs markerless matching for the patient, eliminating additional injury, reducing the extra work of attaching marker points, and sparing the patient multiple medical scans. Experiments comparing feature point cloud matching and feature contour matching show that feature contour matching has a better display effect. In addition, a user interface is designed: the doctor can view the patient's personal information in the text displayed in the upper left corner of the interface and can zoom in, zoom out, and rotate the virtual model on the mobile terminal screen by pressing buttons. This provides a visual basis for the doctor's preoperative preparation. The method described in this article effectively improves the efficiency of the doctor's operation as well as patient safety, and the proposed feature contour-based augmented reality matching method also provides basic theoretical support for applying augmented reality to neurosurgery in the future.