Identifying anatomical structures on ultrasound: assistive artificial intelligence in ultrasound-guided regional anesthesia

Ultrasound-guided regional anesthesia involves visualizing sono-anatomy to guide needle insertion and the perineural injection of local anesthetic. Anatomical knowledge and recognition of anatomical structures on ultrasound are known to be imperfect amongst anesthesiologists. This investigation evaluates the performance of an assistive artificial intelligence (AI) system in aiding the identification of anatomical structures on ultrasound. Three independent experts in regional anesthesia reviewed 40 ultrasound scans of each of seven body regions. Unmodified ultrasound videos were presented side-by-side with AI-highlighted ultrasound videos. Experts rated the overall system performance, ascertained whether highlighting helped identify specific anatomical structures, and provided an opinion on whether it would help confirm the correct ultrasound view to a less experienced practitioner. Two hundred and seventy-five assessments were performed (five videos contained inadequate views); mean highlighting scores ranged from 7.87 to 8.69 (out of 10). The Kruskal-Wallis H-test showed a statistically significant difference in the overall performance rating (χ²[6] = 36.719, asymptotic p < 0.001); regions containing a prominent vascular landmark ranked most highly. AI highlighting was rated as helpful in identifying specific anatomical structures in 1330/1334 cases (99.7%) and for confirming the correct ultrasound view in 273/275 scans (99.3%). These data demonstrate the clinical utility of an assistive AI system in aiding the identification of anatomical structures on ultrasound during ultrasound-guided regional anesthesia. Whilst further evaluation must follow, such technology may present an opportunity to enhance clinical practice and to energize the important field of clinical anatomy amongst clinicians.

"Anatomical knowledge is clearly relevant to the invasive procedures undertaken in anaesthetic practice, and possibly vital to the interpretation of images generated by ultrasound devices."

| INTRODUCTION
Ultrasound-guided regional anesthesia (UGRA) involves visualizing sono-anatomy in real time to guide needle insertion and the subsequent perineural deposition of local anesthetic. This provides selective blockade of sensory and motor stimuli conveyed by peripheral nerves in order to produce anesthesia and/or analgesia of the affected region. Ultrasound has become the predominant technique used to guide the performance of regional anesthesia (Helen et al., 2015; Munimara & McLeod, 2015). Its use has several potential advantages, including visualization of the relevant anatomical structures (Henderson & Dolan, 2016; Hutton et al., 2018). Safe and effective conduct therefore requires a good understanding of the sono-anatomy and clear sonographic visualization of the area of interest (Henderson & Dolan, 2016; Sites et al., 2009; Taylor & Grant, 2019).
Despite this, anatomical knowledge amongst anesthesiologists may be flawed, as demonstrated by the following report on the Fellowship of the Royal College of Anaesthetists (FRCA) examination (Tremlett, 2014): "The lack of even basic knowledge of anatomy has been identified over a number of years, reflecting the fall in teaching of basic sciences at undergraduate level. The need to learn and test anatomy remains of fundamental importance particularly with the resurgence of Regional Anaesthesia in the UK in the last decade. Anaesthetists are commonly placing needles in a range of sites for local anaesthetic blocks and must understand key structures the needles may approach/hit." We have previously discussed the potential for variable recognition of anatomical structures on ultrasound, even by experienced regional anesthesiologists (Bowness, Turnbull, Taylor, Halcrow, Chisholm, et al., 2019; Bowness, Turnbull, Taylor, Halcrow, Raju, et al., 2019). On this basis, we presented the case for the use of assistive artificial intelligence (AI) technology to facilitate the recognition of anatomical structures in UGRA (Bowness et al., 2020). This concept has also been proposed by other groups, both for UGRA (Alkhatib et al., 2019; Huang et al., 2019) and for central neuraxial blockade (spinal and epidural) (Oh et al., 2019; Smistad et al., 2018; Tran & Rohling, 2010).
The current investigation presents an initial evaluation of an AI system called ScanNav Anatomy Peripheral Nerve Block (also known as ScanNav Anatomy PNB and formerly known as AnatomyGuide; Intelligent Ultrasound Ltd [IUL], Cardiff, UK). This system uses deep convolutional neural networks based on the U-Net architecture (Ronneberger et al., 2015) to perform semantic segmentation of the input ultrasound videos. A separate network was created for the anatomical region relevant to each specific peripheral nerve block.
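As a point of reference for readers less familiar with this architecture, the sketch below shows a minimal U-Net-style segmentation network in PyTorch. It is an illustrative toy only: the channel counts, depth, input format, and number of output classes are assumptions, and the actual ScanNav Anatomy PNB networks are not described at this level of detail here.

```python
# Minimal U-Net-style sketch (after Ronneberger et al., 2015); illustrative only,
# not the ScanNav Anatomy PNB implementation.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.enc1 = double_conv(1, 32)   # single-channel (grayscale) ultrasound input
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = double_conv(128, 64)  # 64 skip channels + 64 upsampled channels
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = double_conv(64, 32)
        self.head = nn.Conv2d(32, n_classes, kernel_size=1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)  # (batch, n_classes, H, W) segmentation logits
```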
Ultrasound scans of each region were recorded and manually segmented to identify the specific anatomical structures relevant to regional anesthesia. Through this process, the neural network learns to perform segmentation (color overlay highlighting) of the anatomical structures on ultrasound scans in real time, to aid in identifying anatomy during UGRA. To our knowledge, this is the first investigation to present the performance of such a system across multiple anatomical regions from a clinical perspective.
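The color overlay itself can be illustrated with a simple alpha-blending step applied to each frame. The function below is a hypothetical sketch (the function name, palette, and blending weight are our own, not details of the commercial system): it takes the per-pixel class map produced by a segmentation network and tints the corresponding regions of the grayscale ultrasound frame.

```python
# Illustrative rendering of a segmentation result as a color overlay;
# all names and values here are assumptions for demonstration.
import numpy as np

def overlay_mask(frame_gray, class_map, palette, alpha=0.4):
    """Blend per-pixel class labels onto a grayscale ultrasound frame.

    frame_gray: (H, W) uint8 ultrasound image
    class_map:  (H, W) int array, e.g. the argmax of per-pixel logits
    palette:    dict mapping class id -> (R, G, B); class 0 = background
    """
    rgb = np.stack([frame_gray] * 3, axis=-1).astype(np.float32)
    for cls, color in palette.items():
        if cls == 0:
            continue  # leave background un-highlighted
        mask = class_map == cls
        rgb[mask] = (1 - alpha) * rgb[mask] + alpha * np.array(color, dtype=np.float32)
    return rgb.astype(np.uint8)

# Example usage with a hypothetical palette: artery in red, nerve in yellow
# highlighted = overlay_mask(frame, class_map, {1: (255, 0, 0), 2: (255, 255, 0)})
```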
The primary aims were to assess, in the opinion of expert regional anesthesiologists, the following:
• Overall performance of the system when highlighting structures on ultrasound scans
• The benefit of highlighting on the identification of individual structures on ultrasound scans
• The benefit of highlighting in aiding confirmation of the correct ultrasound view to a less experienced practitioner.

| METHODS

| Ultrasound assessment
Basic demographic information was collected for all participants, including age, height, weight, and BMI (available in Supplementary Material S1). The regions scanned were relevant to specific peripheral nerve blocks as follows:
• Interscalene-supraclavicular level brachial plexus (anterolateral neck, from the level of the C5 vertebra inferiorly to the supraclavicular fossa)
• Axillary level brachial plexus (medial arm, adjacent to the anterior axillary fold)
Data from all studies were aggregated and, from these, 40 scans from each region were selected at random for this investigation.

| ScanNav Anatomy Peripheral Nerve Block highlighting
ScanNav Anatomy PNB performs highlighting of the key anatomical structures in each block region; the structures highlighted for each region are shown in Figure 1 and the supplementary material.

| Expert assessment
Three independent experts in regional anesthesia (with no involvement in the design of ScanNav Anatomy PNB) reviewed 40 videos of each block region. One expert is a consultant anesthetist in the UK and two are attending anesthesiologists in the USA. All have completed advanced training in regional anesthesia through a postgraduate fellowship and regularly conduct anesthesia using advanced UGRA techniques. The original ultrasound video was presented side-by-side with the ultrasound video overlaid by AI highlighting. The experts were asked to answer the following questions for each video:
• Does the video contain clinically relevant images for this block area?
• How would you rate the overall performance of the highlighting, from 0 (very poor) to 10 (very good)?
• Did the highlighting help to identify the specific anatomical structures?
• Would the highlighting help to confirm the correct ultrasound view to a less experienced practitioner?

| Statistical analysis
Anonymized data were initially recorded in Microsoft Excel and transferred to SPSS version 27 (IBM Corp., 2020). Analysis was conducted by an independent researcher, who was not involved in data collection, to reduce the risk of bias.
If the majority answer to the first question ("does the video contain clinically relevant images for this block area?") was "no", the data relating to that video were excluded from the analysis.
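The between-region comparison reported in the Abstract (a Kruskal-Wallis H-test across the seven block regions) was performed in SPSS. Purely for illustration, an equivalent open-source computation is sketched below using scipy.stats.kruskal; the ratings shown are invented placeholders, not the study data.

```python
# Hedged sketch of a Kruskal-Wallis H-test across block regions;
# placeholder ratings only, substituting scipy for SPSS.
from scipy.stats import kruskal

# One list of expert ratings (0-10) per block region; seven regions in the study.
region_scores = {
    "interscalene_supraclavicular": [8, 9, 9, 7, 10],  # placeholder values
    "axillary": [9, 8, 10, 9, 8],                      # placeholder values
    # ... remaining regions would be added here ...
}

h_stat, p_value = kruskal(*region_scores.values())
print(f"H({len(region_scores) - 1}) = {h_stat:.3f}, p = {p_value:.4f}")
```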
| RESULTS

Ultrasound scans of the interscalene-supraclavicular and axillary levels of the brachial plexus were both scored as helpful in 39/40 (97.5%) of cases (Table 3).

| DISCUSSION
This paper reports a clinician-rated assessment of the utility of an assistive AI system to facilitate the identification of key anatomical structures on ultrasound for the purposes of UGRA. As far as the authors are aware, this is the first assessment of AI technology in this field which presents the evaluation from the perspective of the end user.
Three independent experts in regional anesthesia concluded that the overall performance of system highlighting, rated from 0 (very poor) to 10 (very good), was high across all seven block regions (mean scores 7.87 to 8.69).

There has been a recent move to increase the use, and standardize the practice, of UGRA amongst non-experts in the UK (Turbitt et al., 2020). The utilization of assistive technology may facilitate this approach and enhance the consistency of ultrasound interpretation compared with unaided human performance. There is potential for application of this system in other specialties, for example emergency medicine. Emergency medicine physicians perform UGRA less frequently than anesthesiologists, hence may be less confident in recognizing key structures on ultrasound, and so may benefit from this standardized assistive technology. Furthermore, this approach may support other image-guided interventional specialties and practice, such as interventional radiology.
This clinically orientated evaluation of AI anatomy identification is a novel approach to assessing such technology, as it is taken from the point of view of the end user. Statistical techniques providing a quantitative assessment of system performance have been used in prior publications, such as Intersection over Union (Huang et al., 2019) and the Dice coefficient (Smistad et al., 2018). However, as little work has been done to determine the clinical utility of any given threshold in these metrics, the approach in this investigation emphasizes the ultimate need for the clinician to recognize the salient anatomical structures (which the system is designed to aid). Furthermore, such metrics evaluate still-image labelling, whilst the practice of UGRA relies on the interpretation of ultrasound videos in real time. Therefore, whilst evaluation of still frames is one component, the entire ultrasound video must also be considered, as it is in clinical practice.
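For clarity, the two quantitative metrics mentioned above can be stated compactly in code. The functions below are a generic sketch for binary (single-structure) masks, not the evaluation code of the cited publications; note that the two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)).

```python
# Reference sketch of Intersection over Union and the Dice coefficient
# for binary segmentation masks.
import numpy as np

def iou(pred, target):
    # Intersection over Union (Jaccard index) for boolean masks
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, target).sum() / union

def dice(pred, target):
    # Dice coefficient = 2|A ∩ B| / (|A| + |B|)
    pred, target = pred.astype(bool), target.astype(bool)
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0
    return 2 * np.logical_and(pred, target).sum() / total
```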
The authors acknowledge that this is a preliminary and subjective assessment of the system. For example, there was a statistically significant difference between block regions in the overall performance rating. Potential limitations to the system also exist: if the observation that regions containing major vascular landmarks and distinct nerve targets (rather than fascial planes as targets) score more highly is consistent on further assessment, this must be explored. It will be important to determine whether this is due to human input to the system or a performance characteristic of the algorithm. If it represents a deficiency in the algorithm, this must clearly be addressed.

| CONCLUSIONS
This paper reports a preliminary evaluation of an assistive AI system which facilitates the recognition of anatomical structures on ultrasound for the purposes of UGRA. It is performed from a clinical viewpoint, with experts in the field rating the overall performance of the system, assessing whether highlighting helped identify the relevant anatomical structures, and judging whether it would help confirm the correct ultrasound view to a less experienced practitioner. Whilst they must be validated with further study, the results show promise for the accuracy and clinical utility of the system, particularly for non-experts in UGRA. If "ultrasound … has given new life to the appreciation of clinical anatomy" (Soeding & Eizenberg, 2009), then new technology such as this may enhance the opportunity to energize the field of clinical anatomy and engage clinicians in a discipline for which current evidence indicates that anatomical knowledge is imperfect.

ACKNOWLEDGMENTS
The authors would like to thank Professor Pete Wall for his advice and guidance in undertaking this project.

FINANCIAL DISCLOSURE
This work, undertaken as part of the validation study for medical device regulatory approval, was funded by Intelligent Ultrasound Limited (Cardiff, UK).