MINI REVIEW article

Front. Med., 25 October 2022
Sec. Intensive Care Medicine and Anesthesiology
This article is part of the Research Topic "New Trends in Regional Analgesia and Anesthesia."

Artificial intelligence in ultrasound-guided regional anesthesia: A scoping review

Dmitriy Viderman1*, Mukhit Dossov2, Serik Seitenov2 and Min-Ho Lee3
  • 1Department of Biomedical Sciences, Nazarbayev University School of Medicine, Nur-Sultan, Kazakhstan
  • 2Department of Anesthesiology and Critical Care, Presidential Hospital, Nur-Sultan, Kazakhstan
  • 3Department of Computer Sciences, Nazarbayev University School of Engineering and Digital Sciences, Nur-Sultan, Kazakhstan

Background: Regional anesthesia is increasingly used in acute postoperative pain management. Ultrasound has been used to facilitate the performance of regional blocks, increase the percentage of successfully performed procedures, and reduce the complication rate. Artificial intelligence (AI) has been studied in many medical disciplines and has achieved high success, especially in radiology. The purpose of this review was to examine the evidence on the application of artificial intelligence for optimization and interpretation of the sonographic image, and visualization of needle advancement and injection of local anesthetic.

Methods: To conduct this scoping review, we followed the PRISMA-S guidelines. We included studies if they met the following criteria: (1) application of artificial intelligence in ultrasound-guided regional anesthesia; (2) any human subject (of any age), object (manikin), or animal; (3) study design: prospective, retrospective, RCTs; (4) any method of regional anesthesia (epidural, spinal anesthesia, peripheral nerves); (5) any anatomical localization of regional anesthesia (any nerve or plexus); (6) any methods of artificial intelligence; (7) settings: any healthcare setting (medical centers, hospitals, clinics, laboratories).

Results: The systematic searches identified 78 citations. After removal of duplicates, 19 full-text articles were assessed, and 15 studies were eligible for inclusion in the review.

Conclusions: AI solutions might be useful in anatomical landmark identification, reducing or even avoiding possible complications. AI-guided solutions can improve the optimization and interpretation of the sonographic image, visualization of needle advancement, and injection of local anesthetic. AI-guided solutions might also improve the training process in UGRA. Although significant progress has been made in the application of AI-guided UGRA, randomized controlled trials are still missing.

Background

Regional anesthesia (RA) is increasingly used in pain management for various surgical procedures. Ultrasound (US) has been used to facilitate the performance of regional blocks, increase the percentage of successfully performed procedures, and reduce the complication rate. US rapidly gained popularity among practitioners due to its portability, absence of radiation, and the ability to track the performance of the procedure in real time (1). Other benefits of US in regional anesthesia include direct visualization of nerves, blood vessels, muscles, bones, and tendons; faster sensory onset time; visualization of the local anesthetic spread during injection; timely recognition of maldistribution of local anesthetics; possible prevention of complications (e.g., inadvertent intravascular injection, intra-neuronal injection of local anesthetic); longer duration of the block; possible avoidance of painful muscular contractions during nerve stimulation in cases of fractures; and possible improvement of the quality of the block (2–7).

However, the application of ultrasound-guided regional anesthesia is associated with several technical challenges, which are especially prevalent among trainees and inexperienced clinicians. The performance of a block can be complicated by the loss of the reflective signal between the needle and the probe, which decreases needle visibility, especially if a deep block is performed or the patient is overweight. Moreover, bone or hyperechoic soft tissue along the needle trajectory may worsen needle visibility. Therefore, clear needle localization is challenging, especially in deep blocks.

Artificial intelligence (AI) has been studied in many medical disciplines and has achieved high success, especially in radiology (8). Since sonographic visualization is commonly used in regional anesthesia, AI solutions might be useful for practitioners in anatomical landmark identification and in reducing or avoiding possible complications such as injury to a nerve, artery, or vein, puncture of the peritoneum, pleura, or internal organs, as well as local anesthetic systemic toxicity. AI-guided solutions can improve the optimization and interpretation of the sonographic image, and visualization of needle advancement and injection of local anesthetic (3–7).

The purpose of this scoping review (SR) was to synthesize and analyze the evidence on the application of artificial intelligence for optimization and interpretation of the sonographic image, and visualization of needle advancement and injection of local anesthetic.

Methods

Protocol

To conduct this SR, we followed the PRISMA guidelines during the design, implementation, and reporting of this review.

We followed the PICO items:

P (patient population): age 18 years and older;

I (intervention): artificial intelligence assistance in ultrasound-guided regional anesthesia.

C (comparator): standard methods.

Participants/population: Patients undergoing surgery under regional anesthesia.

Goals of the SR

1. To review and assess the value and performance of AI-assisted UGRA in different anatomical regions and nerves;

2. To review the machine learning models and algorithms used;

3. To assess the benefits of automatic target detection;

4. To assess risks, failures and limitations of the AI-assisted UGRA.

Inclusion criteria

1) Application of artificial intelligence in ultrasound-guided regional anesthesia;

2) Any human subject (of any age), object (manikin), or animal;

3) Study design: prospective, retrospective, RCTs;

4) Any method of regional anesthesia (epidural, spinal anesthesia, peripheral nerve blocks);

5) Any anatomical localization of regional anesthesia (any nerve or plexus);

6) Any methods of artificial intelligence;

7) Settings: any healthcare setting (medical centers, hospitals, clinics, laboratories).

Exclusion criteria

1) Not enough data reported;

2) Out of inclusion criteria;

3) Application of AI other than anatomic landmark identification and guidance in UGRA (e.g., AI-based prediction of the need for nerve blocks, AI for robotic nerve blocks, or prediction of response to regional anesthesia).

Literature search

Search strategy

Studies were identified by electronic search in PubMed, Google Scholar, and Embase using the following search terms: “Artificial intelligence,” “Deep learning,” “Ultrasound,” “Ultrasound-guided,” “Needle identification,” “Needle tracking,” “Regional anesthesia,” “Peripheral nerve block.” Additionally, we performed a manual search using the references of the published studies. Publications in English, German, and Russian were considered.
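Such a search can also be reproduced programmatically. The sketch below is a hypothetical example using Biopython's Entrez interface to PubMed; the combined boolean query is an assumption built from the terms listed above, not the authors' exact search string.

```python
# Hypothetical sketch: querying PubMed via Biopython's Entrez interface.
# The boolean query below is an illustrative combination of the review's
# listed search terms, not the authors' published search strategy.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # required by NCBI; placeholder address

query = ('("Artificial intelligence" OR "Deep learning") AND '
         '("Ultrasound" OR "Ultrasound-guided") AND '
         '("Regional anesthesia" OR "Peripheral nerve block" OR '
         '"Needle identification" OR "Needle tracking")')

handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
print(record["Count"], record["IdList"][:5])  # hit count and first few PMIDs
```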

Data collection and extraction

The data were extracted into a standardized form. Two authors independently screened the titles and abstracts for eligibility. The following data were extracted: citation, author, year, gender, study goals, sample size, types of surgery, nerve block, AI algorithm, comparator, purpose of AI, benefits, risks and limitations of the study, model performance data, and conclusions.

Results

The systematic searches identified 78 citations. After removal of duplicates, 19 full-text articles were assessed, and 15 studies were eligible for inclusion in the review (Supplementary Figure 1). The studies were conducted on healthy subjects, parturients in labor or scheduled for cesarean delivery, bovine/porcine lumbosacral spines, and lumbosacral spine phantoms.

Characteristics of study goals

The included studies aimed to assess the value of AI by the following methods:

- Studying nerve structure and ultrasound image tracking (9);

- Assessing deep-learning performance for nerve tracking in ultrasound images (10);

- Studying the accuracy of real-time AI-based anatomical identification (11);

- Assessment of a CNN-based framework for needle detection in curvilinear 2D US (12);

- Evaluation of the success rate of AI-assisted spinal anesthesia (13);

- Using AI for precise needle target localization (14);

- Identification of nerves (musculocutaneous, median, ulnar, and radial) and blood vessels (15);

- Assessment of the utility of ScanNav for identifying structures, teaching and learning UGRA, and increasing operator confidence (16);

- Assessment of UGRA experts' perception of the risks of using ScanNav (risk of block failure; unwanted needle trauma, e.g., to arteries, nerves, and pleura/peritoneum) (16);

- Identification of the difference in accuracy between deep learning (DL)-powered ultrasound guidance and regular ultrasound imaging, the use of artificial intelligence to optimize the regional anesthesia puncture path, and the effectiveness of ultrasound-guided scapular nerve block for surgical pain in scapular fracture (17).

Anatomical region and the nerves

It was found that AI-assisted UGRA has the potential to facilitate the identification of anatomical structures and assist non-experts in locating the correct ultrasound anatomy to perform the intervention. Previous reports highlighted apparent deficiencies in anatomical knowledge among junior anesthesiologists (18). These deficiencies may be offset by AI assistance in ultrasound image interpretation. Therefore, such assistive AI approaches could improve the probability of successful interventions and reduce their risks (18).

Thus, artificial intelligence-assisted ultrasound-guided target identification was used for the identification of the following anatomical structures (nerves): musculocutaneous, median, ulnar, and radial nerves, “interscalene-supraclavicular” and “infraclavicular brachial plexus,” “axillary level brachial plexus,” “erector spinae plane,” rectus sheath, “suprainguinal fascia iliaca,” adductor canal, “popliteal sciatic nerve,” “transversus abdominis plane,” anesthesia in the lower vertebral regions (sacrum, intervertebral gaps, and vertebral bones), sciatic nerves, femoral nerve, subarachnoid and epidural spaces, facet blocks, and navigation of blood vessels during UGRA (9–15, 18–21) (Table 1).

Table 1. Study and cohort information.

Machine learning models and algorithms

The goal of the included studies was to accurately identify the target region (i.e., the nerve to block) on ultrasound images in real time (4). To this end, several machine-learning methods have been proposed (Table 1), and their key techniques can be divided into (1) anatomic region segmentation, (2) target detection (i.e., feature extraction), and (3) tracking algorithms (9–15, 18–21).

U-Net is a popular deep neural network (DNN) architecture for finding the region of interest, owing to its fast and precise segmentation performance (Table 2); a minimal sketch of the architecture follows the table.

Table 2. Artificial intelligence method and its purpose.
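To make the architecture concrete, the following is a minimal, illustrative U-Net-style encoder-decoder in PyTorch. It is a generic sketch of the architecture class, not any included study's published model; the layer widths, input resolution, and two-class output are assumptions chosen for brevity.

```python
# Minimal U-Net-style encoder-decoder sketch in PyTorch (illustrative only).
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc1 = double_conv(1, 16)       # single-channel B-mode input
        self.enc2 = double_conv(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = double_conv(32, 16)      # 32 = 16 (skip) + 16 (upsampled)
        self.head = nn.Conv2d(16, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)                    # skip-connection source
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)                 # (N, n_classes, H, W)

# Usage: segment a 256x256 frame into background/nerve pixel labels.
model = TinyUNet(n_classes=2)
logits = model(torch.randn(1, 1, 256, 256))
mask = logits.argmax(dim=1)                  # predicted per-pixel labels
```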

The feature extraction methods can be divided into typical hand-crafted features and CNN approaches. In general, hand-crafted features are more suitable for smaller datasets, while CNNs are stronger for more complex classification problems, with automatic feature extraction in an end-to-end framework. SIFT, LBP, AMBP, HOG, and bag-of-features are well-known hand-crafted features and have shown promising results on US images (9, 21, 24); a brief sketch of two such descriptors follows.
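As a minimal sketch, the code below extracts two of the cited hand-crafted descriptors (LBP and HOG) from an image patch with scikit-image. The random patch and the parameter values are illustrative assumptions, not settings from the included studies.

```python
# Illustrative hand-crafted feature extraction (LBP + HOG) with scikit-image.
import numpy as np
from skimage.feature import local_binary_pattern, hog

patch = np.random.rand(64, 64)  # stand-in for a grayscale US region of interest

# LBP: encode each pixel by thresholding its circular neighborhood, then
# summarize the patch as a histogram of pattern codes.
lbp = local_binary_pattern(patch, P=8, R=1.0, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

# HOG: histograms of gradient orientations over small cells, capturing edge
# structure such as a hyperechoic nerve boundary.
hog_vec = hog(patch, orientations=9, pixels_per_cell=(8, 8),
              cells_per_block=(2, 2))

# Concatenated descriptor, usable as input to a classic classifier (e.g., SVM).
feature_vector = np.concatenate([lbp_hist, hog_vec])
```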

Deep-learning models are less optimized with respect to time complexity, and they predict each sequential input image independently. Therefore, model performance is highly sensitive to nerve disappearance due to artifact noise, illumination, or occlusion. Tracking algorithms are one solution for not losing the target object (i.e., the nerve), starting from the initially represented features in the ROI. Previous studies have shown efficient tracking performance with conventional MI algorithms, such as the Kalman/particle filter (25), mean shift (26), and Kanade-Lucas-Tomasi (KLT) (8); a minimal Kalman-filter sketch follows. DNN-based tracking approaches have recently been proposed in the computer vision domain; however, they are rarely used in sonographic imaging. Alkhatib et al. (10) first investigated the performance of 13 DNN models (e.g., ECO, SANet, SiamFC, CFNet) and compared their performance with a hand-crafted feature (AMBP-PF). The study indicates that the CNN models outperformed the traditional MI algorithms in terms of accuracy and stability, and reported some important findings for enhancing performance: (1) using deeper layers, (2) reducing redundancies, and (3) incorporating a particle filter (or RNN) into the network.
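The sketch below shows a constant-velocity Kalman filter smoothing a tracked nerve center across frames, as one illustration of the conventional tracking algorithms cited above; the noise covariances and detections are illustrative assumptions.

```python
# Minimal constant-velocity Kalman filter for tracking a nerve center (x, y).
import numpy as np

dt = 1.0                                    # one frame per step
F = np.array([[1, 0, dt, 0],                # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],                 # we only measure (x, y)
              [0, 1, 0, 0]], float)
Q = np.eye(4) * 1e-2                        # process noise (assumed)
R = np.eye(2) * 2.0                         # measurement noise (detector jitter)

x = np.zeros(4)                             # initial state estimate
P = np.eye(4) * 10.0                        # initial uncertainty

def kalman_step(z):
    """Predict, then correct with detection z = (x, y); if the detector lost
    the nerve (z is None), skip correction and coast on the motion model."""
    global x, P
    x = F @ x                               # predict
    P = F @ P @ F.T + Q
    if z is not None:                       # correct
        y = np.asarray(z, float) - H @ x    # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
    return x[:2]                            # smoothed nerve center

# Usage: per-frame detections with one dropout frame (None).
for z in [(120, 80), (122, 81), None, (126, 84)]:
    print(kalman_step(z))
```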

In many cases, DNN approaches have been implemented along with data augmentation, knowledge transfer, and visualization to overcome particular limitations, i.e., small datasets, parameter optimization, and low interpretability, respectively. Positional augmentations (scaling, affine transformation, etc.) are common techniques (a minimal sketch follows); Pesteie et al. (14) proposed the Walsh-Hadamard transform to train a deep network with a set of distinctive directional features from the spatial domain. Mwikirize et al. (12) employed transfer learning, where the network weights are initialized from non-medical images and then fine-tuned with US images.
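The block below sketches the positional augmentations mentioned above (scaling, affine transforms, flips) using torchvision; the specific ranges are illustrative assumptions, not the pipelines of the cited studies.

```python
# Illustrative positional augmentation pipeline with torchvision transforms.
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomAffine(degrees=10,              # small rotations
                            translate=(0.05, 0.05),  # small shifts
                            scale=(0.9, 1.1)),       # scaling
    transforms.RandomHorizontalFlip(p=0.5),
])

frame = torch.rand(1, 256, 256)   # stand-in grayscale ultrasound frame (C, H, W)
augmented = augment(frame)        # yields a new training sample each call
```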

Overall detection performance was between 88 and 95% in terms of precision and between 0.638 and 0.722 in terms of intersection over union (IoU) (19, 20), and tracking performance was above 85% (10). The IoU metric is sketched below.
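For clarity, this is a minimal sketch of the IoU metric used in the reported evaluations, computed here on binary segmentation masks; the square "nerve" masks are illustrative.

```python
# Intersection-over-union (IoU) for binary masks.
import numpy as np

def iou(pred, target):
    """IoU = |pred AND target| / |pred OR target| for boolean masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return np.logical_and(pred, target).sum() / union

# Example: two overlapping 40x40 square masks.
a = np.zeros((100, 100)); a[20:60, 20:60] = 1
b = np.zeros((100, 100)); b[30:70, 30:70] = 1
print(round(iou(a, b), 3))  # 0.391: overlap 900 px / union 2300 px
```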

Benefits of automatic target detection

The main benefits included automatic detection and tracking of nerve structures, overall good performance, assistance in successful recognition of specific anatomical structures, confirmation of correct needle placement, provision of an ultrasound view to anesthetists and standardization of the clinical procedure, real-time interpretation of anatomic structures for immediate decision-making during blocks, automated nerve blocks using a remote control system, and successful detection of vertebral regions at real-time speed (9–15, 18–21, 26, 27). It was reported that artificial intelligence can assist both novice trainees and experienced clinicians unfamiliar with ultrasound techniques. Procedure time does not appear to increase, as the automated ultrasound-guided neuraxial technique takes less than a minute. The automated approach was reported to result in a high first-attempt success rate, which could reduce the complications from multiple entry attempts (19, 25–28). In another study, DL-assisted ultrasound-guided imaging for scapular nerve block in scapular fracture surgery was more efficient, significantly shortened the time of performing the nerve block, and reduced the complication rate compared to the traditional method (17).

Risks, failures, and limitations of the AI-assisted UGRA

Although the application of automated solutions has several benefits, risks, failures, and limitations were also reported. The most important limitations were detection and tracking failure (when the nerve appearance is similar to the surrounding areas), the risk of losing the nerve when it disappears or blends into the surrounding tissue, and real-time tracking errors that accumulate over numerous iterations, with the risk of failing to re-track a lost nerve (9–15, 18–21). Another limitation of this technology is failure to distinguish osseous images. Although real-time imaging allows proper scanning of block regions, it does not always result in detection of the whole needle, particularly at a steep insertion angle. The evidence on the application of AI-assisted technologies in regional anesthesia is still at an initial stage. Thus, only limited evidence on accuracy in many patient populations, such as pediatric and geriatric patients, is currently available. Overreliance on an expert sonographer to determine the ground-truth tip localization is a limitation, especially if the tip is completely invisible. The algorithm is highly specific only if all landmarks are detected. AI algorithms have not been designed or validated for complex spinal anatomy, geriatric patients, obese patients, or pediatric patients. The risk of image misinterpretation could be high in cases of abnormal anatomy (e.g., fusion or reduced interspinous distance).

The following risks were assessed and reported in the studies:

- increased risk of block failure;

- risk of needle trauma to structures (e.g., arteries, nerves, pleura, peritoneum).

The assessed complications included:

- nerve injury and “postoperative neurological manifestations”;

- “local anesthetic systemic toxicity”;

- pleural injury (pneumothorax);

- peritoneal injury.

Discussion

Artificial intelligence-assisted medical image interpretation is one particularly popular research direction in healthcare artificial intelligence (18). Artificial intelligence has been used for detection of the optimal needle insertion site, estimation of the trajectory of needle insertion, and facilitation of automatic tip localization. Tracking is one of the most widely used tasks in computer vision, with applications such as medical video imaging, compression, and robotics.

Several artificial intelligence models have been reported to improve the quality of sonographic anatomical target detection. For example, a multiple-model data association tracker has been used to track the left ventricle in cardiac examination (8).

AI was reported to be helpful in 99.7% of the cases. Identification of specific anatomical structures by ultrasound and confirming the correct view are essential components of ultrasound-guided regional anesthesia (18).

A recent study reported a statistically significant difference between the performance of blocks in different regions. The rectus sheath and interscalene-supraclavicular level brachial plexus regions yielded the lowest results, whereas the adductor canal block and axillary brachial plexus yielded the highest results (18). It is noteworthy that two of the three lowest-ranked blocks were plane blocks in anatomical regions without major vascular landmarks in close proximity. Conversely, the highest-ranked anatomic regions have bones and vessels.

The results demonstrate the potential clinical utility of AI in UGRA, especially for non-expert users (18). It is challenging to develop AI algorithms to identify all anatomical features on ultrasound de novo due to diversity, complexity, and operator dependence, such as inter- and intra-individual variation (25). Therefore, automated image interpretation technologies can be trained to identify a wide variety of structures using machine learning (25). This technology could be used to improve the interpretation of ultrasound anatomy by improving target identification (such as peripheral nerves and fascial planes) and the mapping of the optimal insertion site by detecting relevant landmarks and guidance structures (such as muscles and bones). The safety profile can be improved by highlighting anatomical structures (such as blood vessels) to reduce or even avoid unwanted injury (26).

Although AI-assisted techniques appear promising, only a few applications have so far been introduced into clinical practice; therefore, the potential for their utilization is yet to be proven (28). Understanding sonographic anatomy and image interpretation is of critical importance in UGRA. Robust AI-assisted technologies could help clinicians improve performance and training in ultrasound-guided nerve blocks (26).

AI-assisted technologies can change the practice of UGRA and its education. Anesthesia practitioners should contribute to the transformation of UGRA (28).

Although training can be performed in non-clinical settings, such as educational courses, training in clinical practice plays a fundamental role.

AI-assisted UGRA is a novel class of medical device with which many clinicians might not be familiar. Therefore, its initial use may be associated with lower confidence, which should improve with training and practice.

Generally, the included studies reported a low perception of increased risk associated with using AI assistance, although complications may be clinically important (e.g., nerve injury or local anesthetic systemic toxicity). Possible causes of error are related to technological performance, e.g., improper highlighting, which may result in misinterpretation of the ultrasound images. Block failure and undesirable trauma to critical structures may be more likely if the practitioner is misleadingly reassured by the color on the screen. Other risks may be related to usage of the device, e.g., highlighting resulting in distraction, or focusing on one object while neglecting another structure.

AI-assisted technology should therefore be used as a source of additional information (an image augmentation system) rather than as a decision-maker. Furthermore, correct anatomical structure identification can be useful for anesthesiologists, although it neither ensures safe UGRA nor guides needle placement. Therefore, it is the performer's responsibility to take these hazards into consideration (26, 28).

Challenges in using AI in regional anesthesia

Tracking anatomical targets in ultrasound-guided procedures can be challenging due to illumination changes, occlusion, noise, and deformation of the target, which can result in tracking failure. Moreover, the object motion may exhibit abrupt changes; the images may be corrupted by multiplicative noise, leading to false alarms and misdetection; and some detected features may not belong to the object. It is important to highlight that wrongly detected features should be neglected by the tracker, because they may mislead medical professionals and jeopardize the performance of the procedure (8). Finally, the object shape might change during tracking (8).

Barriers to the development of AI-guided UGRA

AI, especially CNNs, has shown improving success in image recognition for many years, since the development of LeNet-5 (29). The major reasons for this success are the development of new algorithms, the availability of large datasets, and improvements in hardware (30). The major limitation of training deep CNNs is the requirement for a large number of images; therefore, it is challenging to achieve good results when training deep CNNs on small datasets (24). This challenge, however, can be overcome with transfer learning, which can be used to train CNNs on relatively small datasets (24, 27). Transfer learning takes knowledge learned in one area and applies it in another; it can solve classification tasks in a new domain using pre-trained CNNs (27), which can be especially useful in medical image classification. To perform image classification, trained CNNs extract features via ascending layers of the network (27). CNNs that have been trained on a large number of images have parameters optimized for image recognition, and that knowledge can therefore be transferred to other tasks; a minimal fine-tuning sketch follows. Moreover, only a few products, especially those assessing images in real time, have received regulatory approval.
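The sketch below illustrates transfer learning as described above: start from a CNN pre-trained on non-medical images, freeze its feature extractor, and fine-tune a new classification head on a small dataset. The ResNet-18 backbone, two-class head, and dummy batch are illustrative assumptions (requires a recent torchvision for the weights API).

```python
# Illustrative transfer-learning sketch: fine-tune a pre-trained CNN head.
import torch
import torch.nn as nn
from torchvision import models

# Backbone pre-trained on ImageNet (non-medical images).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False               # keep pre-trained features fixed

# Replace the final layer with a new two-class head (e.g., nerve vs. background).
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 3-channel image patches
# (grayscale US frames would be replicated to 3 channels in practice).
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
```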

Limitations of the current study

The main limitation of this review is that the included studies had small sample sizes; therefore, the results should be replicated in studies with larger numbers of participants with different anatomical abnormalities and comorbidities. Other limitations were an insufficient number of images with a large field of view and great imaging depth, and the absence of data augmentation, which limited the image segmentation properties of the studied methods. Some studies did not have a comparator arm.

An additional limitation was the question of “trustworthiness”: clinicians who are under-confident in their anatomical and sonographic expertise may over-rely on AI assistance. Therefore, it is important to appreciate that the AI may identify an incorrect anatomical location, and a robust understanding of sonographic anatomy is required even when AI-assisted technologies are used for such procedures (18). Regional anesthesia educators with suitable expertise must be central to training in UGRA, and “AI-assisted devices” should not replace expert educators. Trainees should still practice standard methods of sonographic scanning and probe angulation, rotation, pressure, and tilt to enhance image acquisition (26).

The next limitation is that the highest scores were demonstrated in regions with major vascular structures and nerves, rather than in blocks that target fascial planes. Therefore, it is important to determine whether this is due to the operator's input to the system or to the algorithm itself. This may help to identify which anatomical landmarks and structures are the most beneficial for AI-assisted UGRA (18).

Additionally, the performance of AI-assisted UGRA could be evaluated by diverse criteria, such as accuracy, consistency, time complexity, and robustness to noise, and in some cases the visualization results should be qualitatively evaluated by humans. However, current CNN studies have not been fully investigated in terms of model generalization toward large datasets with sufficient evaluation assessments.

Future development

Ultrasound has become an integral part of regional anesthesia and significantly contributed to its development. Nevertheless, it is challenging to develop excellent skills to interpret ultrasound images and achieve the necessary level of proficiency to perform regional anesthesia safely and reduce the rate of block failure, especially for beginners. Moreover, there is a degree of subjectivity in interpreting ultrasound images, which leads to heterogeneous interpretation even among experienced users. Therefore, the application of AI in UGRA might maximize the benefits of ultrasound guidance, improve efficacy and safety and reduce the failure rate.

Computer vision is one of the most promising areas of application of AI in medicine. Deep learning may hold the highest potential to advance image interpretation in UGRA, but a large number of images would be required for training, followed by validation prior to implementation in clinical practice. Therefore, a close collaboration between clinicians and engineers is crucial. Clinicians should play a more active role in these collaborations, since they are instrumental in image acquisition, conducting clinical trials, advising, and overall moving this field forward.

Conclusion

Since sonographic visualization is commonly used in regional anesthesia, AI solutions might be useful in anatomical landmark identification, reducing or even avoiding possible complications (such as injury to anatomical structures and local anesthetic systemic toxicity). AI-guided solutions can improve the optimization and interpretation of the sonographic image, visualization of needle advancement, and injection of local anesthetic. AI-guided solutions might also improve the training process in UGRA. Although significant progress has been made in the application of AI-guided UGRA, randomized controlled trials are still missing. More high-quality studies are warranted to generate evidence on the application of AI-guided UGRA in different patient populations, such as pediatric and geriatric patients, and in different anatomical regions, nerve blocks, and surgeries. This SR could potentially be used as a basis for future clinical trials and systematic reviews and enable future researchers to identify directions for applications of AI in regional anesthesia. This review can also enable researchers to avoid the limitations of previous studies, which will be useful for future systematic reviews and meta-analyses.

Author contributions

DV: conceptualization, design and methodology, writing initial draft, and editing. MD and SS: data extraction. M-HL: editing and writing. All authors approved the manuscript.

Funding

This review was supported by the Nazarbayev University Faculty Development Competitive Research Grant 2021-2023. Funder Project Reference: 021220FD2851.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fmed.2022.994805/full#supplementary-material

References

1. Lee CS, Tyring AJ, Wu Y, Xiao S, Rokem AS, DeRuyter NP, et al. Generating retinal flow maps from structural optical coherence tomography with artificial intelligence. Sci Rep. (2019) 9:1. doi: 10.1038/s41598-019-42042-y

2. Choy G, Khalilzadeh O, Michalski M, Do S, Samir AE, Pianykh OS, et al. Current applications and future impact of machine learning in radiology. Radiology. (2018) 288:318–28. doi: 10.1148/radiol.2018171820

3. Sites BD, Chan VW, Neal JM, Weller R, Grau T, Koscielniak-Nielsen ZJ, et al. The American society of regional anesthesia and pain medicine and the European society of regional Anaesthesia and pain therapy joint committee recommendations for education and training in ultrasound-guided regional anesthesia. Reg Anesth Pain Med. (2009) 34:40–6. doi: 10.1097/AAP.0b013e3181926779

4. Strakowski JA. Ultrasound-guided peripheral nerve procedures. Phys Med Rehabil Clin N Am. (2016) 27:687–715. doi: 10.1016/j.pmr.2016.04.006

5. Greher M, Retzl G, Niel P, Kamolz L, Marhofer P, Kapral S. Ultrasonographic assessment of topographic anatomy in volunteers suggests a modification of the infraclavicular vertical brachial plexus block. Br J Anaesth. (2002) 88:632–6. doi: 10.1093/bja/88.5.632

6. Marhofer P, Greher M, Kapral S. Ultrasound guidance in regional anaesthesia. Br J Anaesth. (2005) 94:7–17. doi: 10.1093/bja/aei002

7. Kapral S, Krafft P, Eibenberger K, Fitzgerald R, Gosch M, Weinstabl C. Ultrasound-guided supraclavicular approach for regional anesthesia of the brachial plexus. Anesth Analg. (1994) 78:507–13. doi: 10.1213/00000539-199403000-00016

8. Nascimento JC, Marques JS. Robust shape tracking with multiple models in ultrasound images. IEEE Trans Image Process. (2008) 17:392–406. doi: 10.1109/TIP.2007.915552

9. Alkhatib M, Hafiane A, Tahri O, Vieyres P, Delbos A. Adaptive median binary patterns for fully automatic nerves tracking in ultrasound images. Comput Methods Programs Biomed. (2018) 160:129–40. doi: 10.1016/j.cmpb.2018.03.013

10. Alkhatib M, Hafiane A, Vieyres P, Delbos A. Deep visual nerve tracking in ultrasound images. Comput Med Imaging Graph. (2019) 76:101639. doi: 10.1016/j.compmedimag.2019.05.007

11. Gungor I, Gunaydin B, Oktar SO, Buyukgebiz BM, Bagcaz S, Ozdemir MG, et al. A real-time anatomy identification via tool based on artificial intelligence for ultrasound-guided peripheral nerve block procedures: an accuracy study. J Anesth. (2021) 35:591–4. doi: 10.1007/s00540-021-02947-3

12. Mwikirize C, Nosher JL, Hacihaliloglu I. Convolution neural networks for real-time needle detection and localization in 2D ultrasound. Int J Comput Assist Radiol Surg. (2018) 13:647–57. doi: 10.1007/s11548-018-1721-y

13. Oh TT, Ikhsan M, Tan KK, Rehena S, Han NL, Sia AT, et al. Novel approach to neuraxial anesthesia: application of an automated ultrasound spinal landmark identification. BMC Anesthesiol. (2019) 19:1–8. doi: 10.1186/s12871-019-0726-6

14. Pesteie M, Lessoway V, Abolmaesumi P, Rohling RN. Automatic localization of the needle target for ultrasound-guided epidural injections. IEEE Trans Med Imaging. (2017) 37:81–92. doi: 10.1109/TMI.2017.2739110

15. Smistad E, Johansen KF, Iversen DH, Reinertsen I. Highlighting nerves and blood vessels for ultrasound-guided axillary nerve block procedures using neural networks. J Med Imaging. (2018) 5:044004. doi: 10.1117/1.JMI.5.4.044004

16. Bowness JS, El-Boghdadly K, Woodworth G, Noble JA, Higham H, Burckett-St Laurent D. Exploring the utility of assistive artificial intelligence for ultrasound scanning in regional anesthesia. Reg Anesth Pain Med. (2022) 47:375–9. doi: 10.1136/rapm-2021-103368

17. Liu Y, Cheng L. Ultrasound images guided under deep learning in the anesthesia effect of the regional nerve block on scapular fracture surgery. J Healthc Eng. (2021) 7:2021. doi: 10.1155/2021/6231116

18. Bowness J, Varsou O, Turbitt L, Burkett-St Laurent D. Identifying anatomical structures on ultrasound: assistive artificial intelligence in ultrasound-guided regional anesthesia. Clin Anat. (2021) 34:802–9. doi: 10.1002/ca.23742

19. Hetherington J, Lessoway V, Gunka V, Abolmaesumi P, Rohling R. SLIDE: automatic spine level identification system using a deep convolutional neural network. Int J Comput Assist Radiol Surg. (2017) 12:1189–98. doi: 10.1007/s11548-017-1575-8

20. Huang C, Zhou Y, Tan W, Qiu Z, Zhou H, Song Y, et al. Applying deep learning in recognizing the femoral nerve block region on ultrasound images. Ann Transl Med. (2019) 7:453. doi: 10.21037/atm.2019.08.61

21. Tran D, Rohling RN. Automatic detection of lumbar anatomy in ultrasound images of human subjects. IEEE Trans Biomed Eng. (2010) 57:2248–56. doi: 10.1109/TBME.2010.2048709

22. Bowness JS, Burckett-St Laurent D, Hernandez N, Keane PA, Lobo C, Margetts S, et al. Assistive artificial intelligence for ultrasound image interpretation in regional anaesthesia: an external validation study. Br J Anaesth. (2022). doi: 10.1016/j.bja.2022.06.031. [Epub ahead of print].

23. Yang XY, Wang LT, Li GD, Yu ZK, Li DL, Guan QL, et al. Artificial intelligence using deep neural network learning for automatic location of the interscalene brachial plexus in ultrasound images. Eur J Anaesthesiol. (2022) 39:758–65. doi: 10.1097/EJA.0000000000001720

24. Oquab M, Bottou L, Laptev I, Sivic J. Learning and transferring mid-level image representations using convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2014). p. 1717–24.

25. Shen D, Wu G, Zhang D, Suzuki K, Wang F, Yan P. Machine learning in medical imaging. Comput Med Imag Graph. (2015) 41:1–2. doi: 10.1016/j.compmedimag.2015.02.001

26. Bowness J, El-Boghdadly K, Burckett-St Laurent D. Artificial intelligence for image interpretation in ultrasound-guided regional anaesthesia. Anaesthesia. (2021) 76:602–7. doi: 10.1111/anae.15212

27. Lloyd J, Morse R, Taylor A, Phillips D, Higham H, Burckett-St Laurent D, et al. Artificial intelligence: innovation to assist in the identification of Sono-anatomy for ultrasound-guided regional Anaesthesia. Adv Exp Med Biol. (2022) 1356:117–40. doi: 10.1007/978-3-030-87779-8_6

28. Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. (2019) 17:1–9. doi: 10.1186/s12916-019-1426-2

29. LeCun Y, Boser B, Denker JS, Henderson D, Howard RE, Hubbard W, et al. Backpropagation applied to handwritten zip code recognition. Neural Comput. (1989) 1:541–51. doi: 10.1162/neco.1989.1.4.541

30. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition. IEEE (2016). p. 2818–26.

Keywords: artificial intelligence, ultrasound, regional anesthesia, ultrasound-guided regional anesthesia, training, machine learning, peripheral nerve block, sono-anatomy

Citation: Viderman D, Dossov M, Seitenov S and Lee M-H (2022) Artificial intelligence in ultrasound-guided regional anesthesia: A scoping review. Front. Med. 9:994805. doi: 10.3389/fmed.2022.994805

Received: 15 July 2022; Accepted: 22 September 2022;
Published: 25 October 2022.

Edited by:

Shun Ming Chan, Tri-Service General Hospital, Taiwan

Reviewed by:

David Cárdenas-Peña, Technological University of Pereira, Colombia

Copyright © 2022 Viderman, Dossov, Seitenov and Lee. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Dmitriy Viderman, drviderman@gmail.com
