
REVIEW article

Front. Oncol., 12 January 2026

Sec. Genitourinary Oncology

Volume 15 - 2025 | https://doi.org/10.3389/fonc.2025.1730628

This article is part of the Research Topic: Kidney Cancer Awareness Month 2025: Current Progress and Future Prospects on Kidney Cancer Prevention, Diagnosis and Treatment.

Deep learning in renal ultrasound: applications, challenges, and future outlook

  • 1College of Biomedical Engineering, Sichuan University, Chengdu, China
  • 2Department of Medical Ultrasound, West China Hospital, Sichuan University, Chengdu, China

Kidney disease poses a significant global health burden, often progressing to end-stage renal disease with serious complications. Renal ultrasound, which is real-time, accessible, and noninvasive, serves as a primary imaging tool for evaluating renal structure and pathology. However, its diagnostic accuracy is limited by interobserver variability. Artificial intelligence (AI), particularly deep learning (DL), offers a promising solution for enhancing objectivity and automation throughout the renal ultrasound workflow. This review systematically summarizes DL applications across key tasks—including kidney segmentation, volume measurement, functional prediction, and disease diagnosis—and evaluates the performance of models such as CNNs and transformers. The results indicate that DL has significantly improved the accuracy and efficiency of kidney disease analysis, including chronic kidney disease (CKD), but challenges remain in terms of data quality, model interpretability, generalization, and clinical integration. In the future, combining DL with multimodal data, large-model technology, federated learning, and explainable AI will be essential to achieving intelligent, standardized, and personalized renal ultrasound.

Introduction

Renal diseases have become a major challenge in global public health (1). CKD affects more than 850 million people worldwide and is one of the leading causes of death (2). Owing to its noninvasiveness, real-time imaging, and low cost, ultrasound has become the core imaging method for diagnosing and treating renal diseases. As a primary diagnostic method, it can clearly display the structure of the kidneys (size, shape, and cortical thickness) and the state of the collecting system and has high sensitivity for detecting structural abnormalities such as hydronephrosis and renal stones (3–5). Doppler technology can also assess renal vascular hemodynamics and assist in diagnosing functional lesions (6–8). In terms of disease detection, ultrasound can detect various renal lesions, such as congenital abnormalities, stones, cysts, and tumorous lesions (9). In kidney transplant patients, ultrasound is indispensable for evaluating the function and vascular complications of the transplanted kidney (10, 11). Newer technologies such as ultrasound elastography can also be used to quantitatively assess the degree of renal fibrosis (12). However, traditional ultrasound has significant limitations: (1) it is highly dependent on the operator, and different physicians show low consistency in judging small tumors; (2) it relies mainly on qualitative assessment and lacks objective quantitative indicators such as elastic parameters; and (3) its diagnostic efficiency for complex cases is limited (13, 14).

In recent years, AI technology has provided a revolutionary solution to overcome the limitations of traditional diagnosis (15, 16). In the field of renal ultrasound, AI has evolved from traditional machine learning to deep learning, significantly improving the accuracy and efficiency of image analysis. Convolutional neural networks (CNNs) excel in local feature extraction and perform well in kidney image classification and segmentation (17, 18). ResNet solves the problem of vanishing gradients in deep networks through residual connections, improving the accuracy of identifying complex renal boundaries (19). The self-attention mechanism of the transformer model can capture global feature correlations, helping to analyze the spatial relationship between the kidney and surrounding tissues (20). With the optimization of deep learning algorithms, renal ultrasound diagnosis is shifting from an experience-dependent mode to an intelligent and standardized mode, providing a new path for improving the effectiveness of renal disease diagnosis and treatment (21, 22). In recent years, several reviews on AI in the diagnosis and treatment of kidney diseases have been published. For example, De Jesus-Rodriguez et al. (15) outlined the potential of deep learning in renal ultrasound from the perspectives of technical foundations and clinical applications, but their discussion is largely limited to algorithm performance and lacks a systematic pathway for clinical translation. Although Xu et al. (16) further explored the multitask application of AI in renal ultrasound, their work focused mainly on a technical review and proposed no concrete scheme for technology integration or multicenter collaboration. This article not only covers the full-chain application of AI in renal ultrasound (from image acquisition to clinical decision support) but also systematically summarizes its application status and frontier progress in kidney segmentation, functional assessment, disease diagnosis, and other areas. It also, for the first time, proposes a structured framework for clinical translation and discusses in depth frontier directions such as multimodal fusion, federated learning, and large models. This study provides a systematic reference for promoting the standardized application and precise decision-making of AI technology in renal ultrasound diagnosis. The main framework of this article is shown in Figure 1.

Figure 1
Diagram outlining deep learning in renal ultrasound, showing applications, challenges, and future outlook. Applications include segmentation, volume measurement, function prediction, and disease diagnosis. Challenges are data-related, technology-related, and clinical integration issues. Future outlook involves integrating emerging technologies, intelligent diagnosis systems, and multidisciplinary cooperation.

Figure 1. The main framework of this article.

Methodology

This study retrieved articles from the PubMed and Web of Science databases up to July 30, 2025. The search terms used were “artificial intelligence”, “ultrasound”, “kidney”, “renal”, and related terms. The literature screening process is shown in Figure 2. Following the PRISMA guidelines, we initially retrieved 426 articles. After removing duplicates, 280 articles were retained for the screening stage. In the first round of title/abstract screening, 117 articles were excluded for not meeting the criteria (reasons: studies that did not use DL methods; retracted articles). The remaining 163 articles entered the full-text review stage, in which 65 articles were excluded (reasons: non-English literature; conference abstracts without complete data; reviews; animal studies). Ultimately, 98 articles were included in this review.

Figure 2
Flowchart of a systematic review process with four stages: Identification, Screening, Eligibility, Inclusion. Starts with 426 publications from PubMed and Google Scholar. After removing duplicates, 280 remain. Title/abstract screening excluded 117 for not being DL methods or retracted, leaving 163 for full-text review. Finally, 65 were excluded for reasons like non-English documents and insufficient data, resulting in 98 included publications.

Figure 2. Selection criteria.

Technical foundation

The technical foundation of AI in the field of renal ultrasound relies mainly on core algorithmic paradigms such as supervised learning, unsupervised learning, transfer learning, and multitask learning, which jointly drive a series of breakthroughs from image processing to advanced cognitive tasks (23, 24). Currently, supervised learning is the mainstream approach; its core is the use of expert-labeled data to train models that learn the mapping relationships between inputs and outputs (25, 26). In renal ultrasound image analysis, the following types of deep neural network architectures play a key role:

In 2015, U-Net introduced an encoder-decoder architecture for medical image segmentation, laying the foundation for kidney ultrasound segmentation; its subsequent variants (U-Net++ and nnU-Net) became benchmark models for kidney segmentation. The outstanding local feature extraction ability of the convolutional neural network (CNN) significantly improves the classification and segmentation performance of renal ultrasound images (27, 28). Studies have shown that CNNs perform well in tasks such as automatic segmentation and volume measurement of kidneys, especially in the automatic identification of hydronephrosis, where CNN-based models can accurately capture the characteristic manifestations in images and effectively improve diagnostic consistency (29). ResNet alleviates the vanishing-gradient problem in deep networks by introducing residual connections, improving the recognition accuracy of complex anatomical structures (30). In renal ultrasound, architectures such as ResNet50 have been shown to identify renal boundaries and internal structures more clearly, indicating high value for CKD staging and renal tumor differentiation. DenseNet promotes feature reuse through dense connections, showing significant advantages when training samples are limited. To address the characteristics of ultrasound images, attention U-Net introduces an attention mechanism to highlight key areas; U-Net++ improves the segmentation of small targets through nested skip connections (31); and nnU-Net has become the benchmark model for multicenter research through its automatic configuration strategy (32). In 2017, the transformer model emerged. Its self-attention mechanism addresses the CNN's limited ability to capture global features, effectively modeling global context and significantly improving the modeling of spatial relationships between the kidney and surrounding tissues, thereby promoting the shift in diagnosis from “local feature dependence” to “global relationship modeling” (20).
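To make the encoder-decoder idea above concrete, the following minimal PyTorch sketch shows a toy U-Net-style network with skip connections for binary kidney segmentation; the input size, channel widths, and single-channel B-mode assumption are illustrative choices, not the configuration of any study cited here.

```python
# Minimal sketch of a U-Net-style encoder-decoder for kidney ultrasound
# segmentation (toy model for illustration only).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with batch norm and ReLU, as in typical U-Net blocks.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)   # 32 (skip) + 32 (upsampled) channels in
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)   # 16 (skip) + 16 (upsampled) channels in
        self.head = nn.Conv2d(16, 1, 1)  # one-channel logit map for the kidney mask

    def forward(self, x):
        e1 = self.enc1(x)                 # skip connection 1
        e2 = self.enc2(self.pool(e1))     # skip connection 2
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)              # apply sigmoid / BCEWithLogitsLoss outside

if __name__ == "__main__":
    model = TinyUNet()
    logits = model(torch.randn(1, 1, 256, 256))  # one 256x256 B-mode frame
    print(logits.shape)                          # torch.Size([1, 1, 256, 256])
```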

Unsupervised learning has unique value in renal ultrasound scenarios where labeled data are scarce. By mining the intrinsic patterns in unlabeled data, it effectively extends the boundaries of supervised learning. The main technical paths include 1) clustering analysis, which can be used to identify tissue characteristics of the renal cortex and medulla or to distinguish populations at different stages of chronic kidney disease; 2) autoencoders (AEs), which learn compact representations through an encoding-decoding structure, with derivatives such as the denoising autoencoder (DAE) improving image quality and the variational autoencoder (VAE) generating synthetic samples that conform to the real distribution to expand data; and 3) generative adversarial networks (GANs), which synthesize realistic pathological images through the game between generator and discriminator, alleviating the problem of insufficient samples (33). Around 2020, cross-modal fusion technology matured, and generative models such as CycleGAN realized domain conversion from CT to ultrasound, overcoming the bottleneck of scarce renal ultrasound labels and improving segmentation accuracy in small-sample settings. These methods perform well in tasks such as feature learning, anomaly detection, and domain adaptation (34).
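As a hedged illustration of the autoencoder family described above, the sketch below runs one training step of a small convolutional denoising autoencoder on synthetic ultrasound-like patches; Gaussian noise is used only as a crude stand-in for speckle, and the architecture is illustrative.

```python
# Minimal convolutional denoising autoencoder (DAE) sketch for 64x64 patches.
import torch
import torch.nn as nn

class PatchDAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(          # compress the patch to a compact code
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(          # reconstruct the clean patch
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = PatchDAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean = torch.rand(8, 1, 64, 64)               # stand-in for clean ultrasound patches
noisy = (clean + 0.1 * torch.randn_like(clean)).clamp(0, 1)

recon = model(noisy)                           # learn to map noisy -> clean
loss = loss_fn(recon, clean)
loss.backward()
optimizer.step()
print(float(loss))
```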

Transfer learning is another important way to alleviate the shortage of labeled data (35). Its core idea is to transfer the general features pretrained on large-scale source domains to the renal ultrasound task. Common strategies include the following: 1) feature extraction: fix the convolutional layer weights of the pretrained model as the feature extractor; and 2) fine-tuning: partially unlock the network layers and use the target data for iterative optimization. Practice has shown that this method can significantly reduce the dependence on the annotation scale, accelerate convergence, and improve the generalization performance. For example, by leveraging cross-modal transfer learning methods, accurate segmentation of renal ultrasound images can be achieved even under limited annotation conditions (36). Moreover, domain adaptation techniques further enhance the model’s robustness across different devices or centers by aligning the distributions between the source domain (annotated data) and the target domain (unannotated clinical data).
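The two strategies above can be sketched with a torchvision ResNet-50 pretrained on ImageNet; the class labels and layer-freezing choices below are illustrative assumptions rather than a validated renal ultrasound model, and a recent torchvision version is assumed.

```python
# Hedged sketch of the two transfer-learning strategies: feature extraction
# (freeze everything) vs. fine-tuning (unlock the last residual stage).
import torch.nn as nn
from torchvision import models

def build_model(num_classes=2, strategy="feature_extraction"):
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

    if strategy == "feature_extraction":
        # Strategy 1: freeze all pretrained convolutional weights.
        for p in model.parameters():
            p.requires_grad = False
    elif strategy == "fine_tuning":
        # Strategy 2: keep early layers frozen, unlock only the last stage.
        for name, p in model.named_parameters():
            p.requires_grad = name.startswith("layer4")

    # Replace the ImageNet head with a renal-ultrasound classification head;
    # the new head is always trainable.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = build_model(strategy="fine_tuning")
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```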

Multitask learning (MTL) efficiently utilizes annotation information by sharing underlying features and simultaneously optimizing multiple related tasks (such as segmentation, classification, and volume measurement) (37). In renal ultrasound, MTL has three advantages: first, it improves overall performance by leveraging task correlations (such as using segmentation to assist volume estimation); second, it enhances generalization through knowledge transfer; and third, it reduces the resource consumption of multi-model deployment. Since 2023, large models and multimodal data have been deeply integrated: by combining pretrained large models with cross-modal alignment technology, ultrasound, CT, and genomic data can be fused, improving the diagnostic accuracy of rare kidney diseases and expanding the technology from “single-task optimization” to “full-cycle kidney disease management” (34–36).
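A minimal sketch of the shared-encoder idea follows: one backbone feeds both a segmentation head and a classification head, and the two losses are summed with an arbitrary weight. The architecture and weighting are illustrative only, not drawn from a cited MTL system.

```python
# Toy multitask network: shared encoder, segmentation head + classification head.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
    def forward(self, x):
        return self.features(x)                # (B, 32, H/4, W/4)

class MultiTaskNet(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.encoder = SharedEncoder()
        self.seg_head = nn.Sequential(          # upsample back to a mask
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, 1),
        )
        self.cls_head = nn.Sequential(          # global pooling + linear classifier
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes),
        )
    def forward(self, x):
        z = self.encoder(x)
        return self.seg_head(z), self.cls_head(z)

model = MultiTaskNet()
x = torch.randn(2, 1, 128, 128)
mask_true = torch.randint(0, 2, (2, 1, 128, 128)).float()
label_true = torch.randint(0, 3, (2,))

seg_logits, cls_logits = model(x)
loss = nn.BCEWithLogitsLoss()(seg_logits, mask_true) + 0.5 * nn.CrossEntropyLoss()(cls_logits, label_true)
loss.backward()
print(float(loss))
```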

The three core algorithmic classes in renal ultrasound DL (CNNs, transformers, and multimodal fusion) differ significantly in their technical characteristics, applicable scenarios, and performance. The strengths, limitations, and typical clinical suitability of these methods are shown in Table 1.

Table 1

Table 1. Strengths, limitations and typical clinical suitability of the three core algorithmic classes.

The application of AI technology in renal ultrasound

Renal segmentation and volume measurement

Renal ultrasound image segmentation and its derived volume measurement technology together constitute the core technical system for AI analysis of kidney diseases, and they are closely linked in clinical application, with segmentation serving as the foundation and measurement as its extension (37–41). Segmentation is the premise of accurate measurement, and its accuracy directly determines the reliability of volume quantification, morphological evaluation, and lesion localization (42–44). In turn, the clinical demands of volume measurement drive iterative refinement of segmentation algorithms toward greater accuracy and broader scenarios, forming a complete chain from morphological recognition to functional evaluation (45). Accurate segmentation and measurement have multiple synergistic benefits. First, three-dimensional reconstruction of the renal cortex, medulla, and collecting system can be achieved by automatic segmentation, enabling accurate calculation of total renal volume and cortical thickness and providing key quantitative indicators for assessing CKD progression and monitoring renal allograft function. Studies have shown that renal cortical volume is significantly positively correlated with the glomerular filtration rate and that progressive volume reduction is an independent predictor of CKD progression (46). Second, the volume ratio of the renal pelvis to the renal parenchyma obtained by segmentation can inform the timing of surgical intervention in patients with hydronephrosis, and this ratio is significantly related to the degree of renal function injury (47). Third, in clinical operations such as tumor ablation planning, accurate segmentation is the cornerstone of lesion localization and classification, whereas monitoring dynamic volume changes can indicate graft rejection or dysfunction early (48, 49).

Traditional methods (such as thresholding, region growing, and active contour models) are limited by uneven gray levels in ultrasound images, noise interference, and weak boundaries, which make it difficult to meet the boundary-accuracy requirements of volume measurement (50, 51). In recent years, deep learning has driven breakthrough progress in both segmentation and measurement. Table 2 lists several representative studies on renal segmentation and volume measurement. In terms of segmentation algorithm innovation, Song et al. (52) and Guo et al. (53) used cross-modal data augmentation techniques (CycleGAN and the CUT network) to effectively alleviate the scarcity of labeled data through domain conversion from CT to ultrasound. For low-resolution images, Khan et al. (54) proposed MLOU-Net, which introduces a deeply supervised attention mechanism and a hybrid loss function and achieves a Dice coefficient of 90.21% on low-resolution renal ultrasound images. Alex et al. (55) designed YSegNet, a boundary-feature-enhancing network combining long and short skip connections, whose Dice coefficient still reached 97% under the weak-boundary challenge. Chen et al. (56) designed a multiscale feature fusion architecture (MSIP and MOS) to aggregate renal features at different scales, with a Dice index of 95.86%. Nipuna et al. (57) used 3D and multimodal fusion technology (3D U-Net fusing B-mode and power Doppler data) to achieve high-precision volume segmentation of the fetal kidney. Innovations in these segmentation techniques directly enable volumetric measurement applications. Jaidip M et al. (59) reported that 3D ultrasound automatic measurements in ADPKD patients based on 2D U-Net and transfer learning were highly consistent with those of MRI (Dice = 80%). The fast-unet++ proposed by Oghli et al. (60) achieves high-precision segmentation (DSC > 95%) of the sagittal and transverse planes and simultaneously predicts multidimensional parameters such as renal length, width, thickness, and volume. Kim et al. (61) developed a hybrid learning method combining U-Net and an active contour model for the automatic calculation of renal volume in children, which was highly correlated with CT measurements (ICC = 0.925). Esser et al. (62) verified good interobserver agreement (ICC 0.83–0.94) via semiautomatic 3D ultrasound segmentation in the assessment of pediatric hydronephrosis.
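For readers who want to connect segmentation output to the volume figures quoted above, the short sketch below computes a Dice score, a voxel-count volume from a 3D mask with an assumed voxel spacing, and the conventional prolate-ellipsoid estimate (V ≈ 0.523 × length × width × thickness) used in routine ultrasound; all values are synthetic.

```python
# Illustrative helpers linking segmentation to volumetry (synthetic data only).
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + 1e-8)

def volume_from_mask(mask: np.ndarray, spacing_mm=(0.5, 0.5, 0.5)) -> float:
    """Kidney volume in mL from a binary 3D mask and per-voxel spacing in mm."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0     # 1 mL = 1000 mm^3

def ellipsoid_volume(length_cm, width_cm, thickness_cm) -> float:
    """Classic prolate-ellipsoid approximation of renal volume in mL (cm^3)."""
    return 0.523 * length_cm * width_cm * thickness_cm

# Synthetic example: a 3D mask of roughly kidney-like extent.
mask = np.zeros((220, 120, 100), dtype=np.uint8)
mask[10:210, 10:110, 10:90] = 1
print(f"voxel-count volume: {volume_from_mask(mask):.1f} mL")
print(f"ellipsoid estimate: {ellipsoid_volume(11.0, 5.0, 4.0):.1f} mL")
print(f"dice(mask, mask) = {dice(mask, mask):.3f}")
```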

Table 2

Table 2. Application of DL technology in renal segmentation and volume measurements.

Existing studies have focused mostly on normal renal structures, and generalization to tumors, cysts, severe hydronephrosis, and other pathological conditions has not been fully verified. Moreover, these methods generally rely on single-center data and lack validation on large external datasets and in prospective clinical trials across devices and institutions (45, 63). Future work should focus on developing robust segmentation algorithms for abnormal kidneys, constructing multi-disease, multicenter collaborative datasets, promoting the transition of renal ultrasound AI systems from single-center research to real clinical scenarios, and verifying their feasibility and safety as routine diagnostic and treatment tools through multicenter clinical trials (64).

Renal function prediction

Renal function prediction is a key step in the diagnosis and treatment of renal diseases (65, 66). Traditional methods rely mainly on biochemical indicators such as serum creatinine and urea nitrogen and use formulas such as CKD-EPI to estimate the glomerular filtration rate (GFR) (67, 68). However, these indicators are easily affected by muscle mass, diet, and other factors and are less sensitive to early kidney injury (69). Traditional ultrasound interpretation is subjective and lacks quantitative analysis capability (70). In recent years, DL technology has significantly improved the objectivity and efficiency of renal function assessment through deep fusion of multimodal ultrasound data and clinical indicators, reducing human error (71, 72). Texture analysis can extract functional information from images, overcome morphological limitations, and achieve quantitative description of microstructural changes such as fibrosis and microangiopathy (73). In CKD, DL models can integrate multisource data such as electronic medical records, radiomics, and biomarkers to predict the risk of rapid progression (eGFR decline ≥5 mL/min/1.73 m² per year) (74–76). In acute kidney injury (AKI), DL systems can identify high-risk patients before biochemical changes occur (77–80). Table 3 lists several representative studies on renal function prediction.
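Before turning to the representative studies in Table 3, note that the rapid-progression label mentioned above (eGFR decline ≥5 mL/min/1.73 m² per year) can be operationalized from longitudinal eGFR values by fitting a per-patient slope, as in the hedged sketch below with synthetic data.

```python
# Sketch: derive a "rapid progression" label from a longitudinal eGFR series
# by least-squares slope fitting (synthetic patient data for illustration).
import numpy as np

def egfr_slope_per_year(days: np.ndarray, egfr: np.ndarray) -> float:
    """Least-squares eGFR slope in mL/min/1.73 m^2 per year."""
    years = days / 365.25
    slope, _intercept = np.polyfit(years, egfr, deg=1)
    return float(slope)

def is_rapid_progressor(days, egfr, threshold=-5.0) -> bool:
    return egfr_slope_per_year(np.asarray(days), np.asarray(egfr)) <= threshold

# Synthetic patient losing ~8 mL/min/1.73 m^2 per year -> labeled rapid progressor.
days = np.array([0, 120, 250, 380, 540, 730])
egfr = np.array([62, 60, 56, 54, 50, 46])
print(egfr_slope_per_year(days, egfr))
print(is_rapid_progressor(days, egfr))
```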

Table 3

Table 3. Application of DL technology in renal function prediction.

Ziman Chen et al. (81) used radiomics to extract many quantitative features and combined them with clinical indicators to construct a prediction model, which achieved a noninvasive assessment of moderate to severe renal fibrosis (AUC = 0.85). Han Yuan et al. (82) used ultrasound viscoelastic imaging technology to assess renal function effectively and the degree of fibrosis via mechanical parameters such as the Emean and Vmean (AUC = 0.91). Xinyue Huang et al. (83) fused clinical data, conventional ultrasound, shear wave elastography, and plane wave hypersensitivity flow imaging to construct a Fisher discriminant model, which successfully distinguished different fibrosis grades (the highest accuracy was 84.7%). Yidan Tang et al. (84) integrated conventional ultrasound, contrast-enhanced ultrasound, and elastography to construct a multimodal ultrasound knowledge map and AI prediction model for the risk prediction of sepsis-related acute kidney injury. Ahmed M et al.’s XAI-CKD system (85), which is based on an extra tree classifier combined with SHAP interpretability analysis, achieved near-perfect performance (AUC = 1.0) in CKD classification. Shuyuan Tian et al. (86) integrated ResNet34 depth features and traditional texture features (GLCM+HOG) to achieve CKD diagnosis, especially in the G5 stage (AUC = 0.931). Minyan Zhu et al. (87) used an SVM to integrate various types of ultrasound image information and successfully predicted the degree of renal interstitial fibrosis (AUC = 0.943 when the IFTA > 50%). Fuzhe Ma (65) and Fu Ying (71) proposed the HMAN-based detection model and PCNN-based image enhancement algorithm, respectively, which improved the image quality and diagnostic reliability. Chin-Chi Kuo et al. (88) combined ResNet-101 and XGBoost to achieve automatic estimation of the eGFR and CKD grade (AUC = 0.904).
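As a simplified illustration of the "deep features + tree-based model" pattern seen in several of these studies (for example, ResNet-101 combined with XGBoost by Kuo et al.), the sketch below pools ResNet-18 features and feeds them, together with placeholder clinical covariates, to a scikit-learn gradient-boosted classifier; it is not a reimplementation of any cited pipeline, and all images, labels, and covariates are random stand-ins.

```python
# Deep features from a frozen backbone + boosted classifier (illustrative only).
import torch
import numpy as np
from torchvision import models
from sklearn.ensemble import GradientBoostingClassifier

# 1) Frozen ImageNet backbone used purely as a feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()              # expose the 512-d pooled features
backbone.eval()

@torch.no_grad()
def extract_features(images: torch.Tensor) -> np.ndarray:
    return backbone(images).numpy()            # (N, 512)

# 2) Placeholder data: N "ultrasound" images and binary CKD labels.
images = torch.randn(40, 3, 224, 224)
labels = np.random.randint(0, 2, size=40)
features = extract_features(images)

# 3) Optionally concatenate clinical covariates (age, blood pressure, ...).
clinical = np.random.rand(40, 3)
X = np.concatenate([features, clinical], axis=1)

clf = GradientBoostingClassifier().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```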

However, an examination of these representative studies reveals the core challenges facing the field (89–91). First, the generalizability of the models is generally questionable (92–94). For example, the near-perfect performance (AUC = 1.0) reported by Ahmed M et al. (85) is highly unusual in real medical data, strongly suggesting that the model may be overfitted on a specific dataset, and its cross-center applicability urgently needs to be verified. Second, the clinical translation of the technology faces a realistic bottleneck. For example, the multimodal fusion scheme of Xinyue Huang et al. (83) has improved performance, but its dependence on a variety of advanced imaging technologies is difficult to popularize in primary medical institutions, and the actual application cost is high. In addition, the limitations of research methods urgently need to be overcome. Although the pioneering work of Chin-Chi Kuo et al. (88) verified its technical feasibility, the limitations of its single-center design and lack of prospective validation are still common problems in many subsequent studies. In summary, although current research has made continuous breakthroughs in model performance, it is generally limited by key bottlenecks such as single-center data dependence, insufficient cross-center validation, and insufficient consideration of clinical applicability (95, 96).

Renal disease diagnosis

Ultrasound imaging plays an irreplaceable role in the diagnosis of renal diseases (27). It is widely used in the assessment of renal morphology, screening of space-occupying lesions, diagnosis of hydronephrosis, monitoring after renal transplantation, and differentiation of cystic and solid lesions (97, 98). Especially in children, pregnant women, and patients with renal insufficiency, ultrasound has become the preferred imaging method because of its safety (99). However, traditional renal ultrasound diagnosis also has obvious shortcomings: the results depend heavily on the experience and skills of the operator and are strongly subjective (100); it is not sensitive to early changes in renal function or subtle structural changes (101); and its quantitative analysis capability is limited, making accurate assessment of renal fibrosis, diffuse lesions, or small hemodynamic changes difficult. In addition, the low degree of standardization across devices and scanning parameters affects the comparability and repeatability of results. DL technology has shown great potential in the diagnosis of renal diseases (102, 103). Table 4 lists several representative studies on renal disease diagnosis.

Table 4

Table 4. Application of DL technology in the diagnosis of renal diseases.

Miguel Molina-Moreno et al. (75) developed URI-CADS, an automatic system based on a multitask convolutional neural network, to integrate kidney image segmentation and multi-pathology diagnosis, achieving an AUC of 0.819 for multiple pathological diagnoses. Shi Yin et al. (104) proposed a multiple-instance deep learning framework that effectively distinguishes children with congenital anomalies of the kidney and urinary tract (CAKUT) from those with unilateral hydronephrosis by clustering multi-view ultrasound image features; the AUC of the MIL model was as high as 0.961. Umar Islam et al. (27) designed a novel dual-path convolutional neural network that was significantly superior to classical models (such as VGG16 and ResNet50) in the detection of hydronephrosis, reaching 99.8% accuracy. Jinjin Hai et al. (105) integrated 2D and 3D convolutional structures to construct CD-ConcatNet for fused feature extraction and disease classification of multi-view renal ultrasound images (AUC = 0.8667). In addition, Sudharson et al. (98) achieved a high-precision four-class classification task by ensembling multiple pretrained models (ResNet-101, ShuffleNet, and MobileNet-v2), with an accuracy of 96.54%. Ming-Chin Tsai et al. (106) used transfer learning to optimize ResNet-50 for screening kidney abnormalities in children, and the AUC of the model was 0.959. Maosheng Xu et al. (22) combined two-dimensional ultrasound, color Doppler, and shear wave elastography to construct a multimodal combined diagnostic model, which reached an AUC of 0.75 in the classification of glomerular diseases in children.
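The multi-view, patient-level reasoning used in the MIL study above can be illustrated with an attention-based pooling sketch: each view is embedded, attention weights are learned over views, and a single patient-level prediction is produced. The code below is a generic illustration in this spirit, not the cited framework.

```python
# Attention-based multiple-instance pooling over a "bag" of ultrasound views.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Sequential(              # per-view embedding network
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.attention = nn.Sequential(          # scalar attention score per view
            nn.Linear(feat_dim, 64), nn.Tanh(), nn.Linear(64, 1),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, views):                    # views: (num_views, 1, H, W)
        h = self.embed(views)                    # (num_views, feat_dim)
        a = torch.softmax(self.attention(h), dim=0)    # (num_views, 1)
        bag = (a * h).sum(dim=0, keepdim=True)   # weighted patient-level feature
        return self.classifier(bag), a.squeeze(-1)

model = AttentionMIL()
logits, weights = model(torch.randn(5, 1, 128, 128))  # one patient, five views
print(logits.shape, weights)                     # which views drove the decision
```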

These studies show the broad prospects of deep learning in improving the automation, quantification, and multi-disease discrimination of renal ultrasound diagnosis (107, 108). However, closer analysis identifies several key issues of concern. First, model performance must be evaluated prudently. For example, the 99.8% accuracy reported by Umar Islam et al. (27) is extremely rare in medical image analysis and may reflect improper dataset partitioning or an overfitting risk. Second, the clinical usefulness of complex models is questionable. Although the multitask system of Miguel Molina-Moreno et al. (75) has comprehensive functions, its stability under multicenter and different-equipment conditions has not been verified, whereas the multimodal method of Maosheng Xu et al. (22) showed relatively limited performance (AUC = 0.75), suggesting that complex technology fusion does not necessarily improve performance. In addition, the study populations were underrepresentative: most studies have focused on common conditions in children, and the applicability of these findings to complex abnormal renal structures (such as severe malformations and postoperative changes) remains to be investigated. In summary, current studies are constantly innovating at the technical level, but obvious deficiencies remain in verifying model generalization and assessing clinical practicality (109).

Challenges of DL technology in renal ultrasound applications

Data-related challenges

Data quality and labeling

The data quality of renal ultrasound images directly affects the performance of DL models. Common noise, artifacts, and inconsistent resolution in ultrasound images blur renal structural boundaries and obscure features, which hinders the extraction of key information by the model. Guo et al. (58) demonstrated that a 30% reduction in the signal-to-noise ratio of ultrasound images can reduce the Dice coefficient of deep learning renal segmentation models by 8%–12%. High-quality labeling is the basis for training a reliable model, but renal ultrasound labeling faces many challenges (110). The unclear boundaries of the kidney and lesion areas lead to large labeling differences between physicians, and it takes 10–15 minutes for a senior sonographer to label a single image (111). In addition, inconsistent labeling standards make it difficult to integrate multicenter data and affect the generalizability of the model. Wu et al. (112) demonstrated that inconsistent observer labeling leads to a significant decrease in the average precision (AP) of a deep learning model: the AP50 value was 92.17% with the full labeling method, whereas it increased to 98.57% with the local labeling method. To improve labeling quality, some studies have used enhanced data labeling strategies and automated pre-labeling techniques while evaluating labeling consistency through gradient mapping (105). These methods help improve the training of renal ultrasound AI models (113).

Insufficient data

In DL research on renal ultrasound, data scarcity, especially for rare diseases such as renal medullary cystic disease or hereditary nephritis, is a core challenge (114). Small samples easily lead to overfitting and poor generalization (115). Data heterogeneity across centers further aggravates the problem of uneven data distribution. The DL model developed by Akbari et al. (64) showed high consistency (correlation coefficient >0.9) on single-center data, whereas the correlation coefficient decreased to approximately 0.8 in external multicenter validation. To address these problems, researchers have used data augmentation techniques (rotation, scaling, noise, etc.), which can improve the accuracy of small-sample models by 5%–10% (116). Transfer learning can reduce the dependence on task-specific data by transferring generic features and still maintains high classification performance when the training samples are halved (117). In addition, cross-center data standardization and synthetic data generation, such as the diffusion model Med-DDPM, have been explored to mitigate data scarcity and privacy issues. However, these methods still require more clinical validation to address potential limitations such as algorithm transparency and data security.
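A typical small-dataset augmentation pipeline of the kind mentioned above might look like the following torchvision sketch; the rotation, cropping, and noise ranges are assumptions for illustration, not published settings.

```python
# Illustrative augmentation pipeline for small renal ultrasound datasets.
import torch
from torchvision import transforms

class AddGaussianNoise:
    """Simple additive noise as a crude stand-in for speckle-like perturbation."""
    def __init__(self, std=0.03):
        self.std = std
    def __call__(self, img: torch.Tensor) -> torch.Tensor:
        return (img + self.std * torch.randn_like(img)).clamp(0.0, 1.0)

augment = transforms.Compose([
    transforms.RandomRotation(degrees=10),                       # small rotations
    transforms.RandomResizedCrop(size=224, scale=(0.85, 1.0)),   # mild rescaling
    transforms.RandomHorizontalFlip(p=0.5),
    AddGaussianNoise(std=0.03),
])

image = torch.rand(1, 256, 256)       # a single-channel ultrasound frame in [0, 1]
augmented = augment(image)
print(augmented.shape)                # torch.Size([1, 224, 224])
```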

Technology-related challenges

Interpretability of the algorithm

At present, in renal ultrasound diagnosis, although complex DL algorithms such as deep neural networks achieve high accuracy, their “black box” nature makes the basis of their decisions difficult to understand, which severely restricts clinical trust and adoption (118). For example, models cannot explain why tumors with ill-defined boundaries are considered malignant, forcing physicians to rely on traditional pathological tests (119). Alderden et al. (120) used attention mechanisms to highlight key image areas, resulting in a 35% increase in physician trust. Feature visualization reveals decision logic by showing the texture, echo, and other features on which the model focuses (121). In addition, a lack of interpretability exacerbates ethical risks, making it difficult to assess potential biases of models against specific populations, such as different ages or genders (122). These challenges highlight the importance of developing explainable artificial intelligence (XAI) methods. It is necessary to enhance transparency through visual interpretation, rule extraction, and other techniques and to establish a standardized ethical review framework to promote the safe application of AI in renal ultrasound diagnosis (123).
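One widely used XAI technique compatible with the attention and feature-visualization ideas above is a Grad-CAM-style heatmap; the hedged sketch below computes one for a placeholder torchvision ResNet-18 and a random image, purely to show the mechanics rather than a clinically validated tool.

```python
# Minimal Grad-CAM-style saliency sketch on a placeholder classifier.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations, gradients = {}, {}
def fwd_hook(_m, _inp, out):
    activations["feat"] = out
def bwd_hook(_m, _gin, gout):
    gradients["feat"] = gout[0]

# Hook the last convolutional stage of the backbone.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

image = torch.randn(1, 3, 224, 224)                       # stand-in ultrasound frame
logits = model(image)
logits[0, logits.argmax()].backward()                     # gradient of the top class

# Channel weights = spatially averaged gradients; weighted sum of activations.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)                                          # (1, 1, 224, 224) heatmap
```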

Robustness and generalization ability of the model

The robustness and generalizability of renal ultrasound AI models face multiple challenges, reflected mainly in three aspects: equipment differences, operator factors, and individual patient differences (124). The imaging principles and parameter settings of different ultrasound devices lead to differences in image feature distributions (125). For example, images from high-end ultrasound devices have high resolution and low noise, whereas images from primary hospitals may have obvious artifacts. Image quality is also affected by the operator's scanning technique and section selection; images of the same patient acquired by different physicians can cause the Dice coefficient of model segmentation results to fluctuate by 5%–7% (126). Small kidney size and incompletely developed structures in children, as well as renal atrophy and fatty infiltration in elderly patients, often lead to a 22% decrease in the performance of adult-trained models on pediatric data (127).

Clinical integration challenges

Integration with the clinical workflow

The core challenges of DL technology in the standardization of renal ultrasound data are the lack of standardization of multisource heterogeneous data and the differences in equipment models, imaging parameters, and operating protocols across medical institutions, which limit the generalizability of AI models across institutions. One survey revealed that only 32% of hospitals used unified scanning standards, which severely affected clinical suitability (128). Števik et al. (129) explored the integration of AI into clinical workflows, but the need to manually upload images for analysis extended a single examination by 8–10 minutes, which did not meet clinical efficiency requirements. This review found that AI-assisted diagnosis can significantly improve diagnostic accuracy: the diagnostic accuracy of AI-assisted methods for complex renal diseases has increased by 21%, especially in the automatic detection of hydronephrosis and the classification of chronic kidney disease, highlighting the advantages of standardization. Current technical bottlenecks include process interruptions caused by offline analysis modes and the lack of uniform image quality assessment standards, but real-time image analysis and computer-aided diagnosis systems enabled by convolutional neural networks have shown the potential to optimize workflows (130, 131). In the future, it will be necessary to establish cross-platform data standards, develop embedded AI systems, and solve key problems such as algorithm interpretability and insufficient clinical validation to realize intelligent integration of the whole process from image acquisition to diagnostic reporting (132, 133).

Regulatory and ethical issues

Regulation and ethics cannot be ignored. Muralidharan et al. (134) reported that only 3.6% of FDA-approved AI/ML medical devices reported race/ethnicity, 99.1% did not provide socioeconomic data, and 81.6% did not report the age of the study subjects. The issue of data privacy is particularly prominent (135, 136). Because renal ultrasound images contain sensitive information, multicenter data sharing often conflicts with privacy regulations (137). In addition, ethical review should focus on algorithmic fairness, including the evaluation of diagnostic differences among patients of different races and economic levels. Current solutions emphasize multiparty collaboration: establishing unified regulatory guidelines, improving data anonymization technology, developing a liability identification framework, and reducing algorithmic bias through training on diverse datasets are essential to promote the safe application of AI in renal ultrasound.

Future prospects of DL technology in renal ultrasound

Integration of emerging technologies

The integration of AI and multimodal imaging technology has significantly improved the diagnostic ability for renal diseases. With the anatomical details and functional information provided by CT and MRI, combined with the real-time advantages of ultrasound, multimodal deep learning models can improve the diagnostic accuracy of renal tumor staging by 23% compared with ultrasound alone (138). The combination of molecular imaging and AI enables earlier molecular diagnosis; for example, targeted contrast-enhanced ultrasound molecular imaging can detect renal inflammation before renal dysfunction occurs (139). The integration of wearable devices and Internet of Things technology has created a new mode of remote monitoring: portable ultrasound devices can provide early warning through cloud-based AI analysis, and clinical trials have shown that acute exacerbations of chronic kidney disease can be flagged 3–5 days earlier (140). Given the limitations of single-center, single-modality, small-sample studies, large medical models pretrained on large-scale multimodal data can extract shared representations of ultrasound, CT, and MRI without massive labeling through cross-modal alignment and self-supervised learning. This approach significantly improves the ability to identify rare kidney diseases, such as hereditary nephritis, and alleviates the overfitting caused by scarce data. Edge computing deploys lightweight AI models on portable devices to achieve point-of-care diagnosis. Federated learning achieved a diagnostic accuracy of 90.2% in a collaboration of 10 hospitals through a parameter-sharing mechanism, effectively addressing data privacy concerns (141). Combined with federated learning and fine-tuning strategies, each center can share basic model parameters while retaining local data, realizing multicenter coevolution and overcoming the bottleneck of single-center generalization. Augmented reality technology superimposes AI-processed tumor boundary information on the surgical field in real time, improving the complete resection rate of renal tumors by 18% (84). A large medical model pretrained on massive multimodal data shows strong adaptability, achieving 82% accuracy in the ultrasound diagnosis of rare renal diseases (142). Future large models will provide an interpretable basis for malignancy diagnosis through visualization of the attention mechanism and chain-of-thought reasoning, transforming black-box decisions into traceable clinical logic and enhancing physician trust. These technical advances are pushing renal disease diagnosis from morphological evaluation toward functional, molecular, and real-time dynamic monitoring.
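The parameter-sharing idea behind federated learning can be illustrated with a toy FedAvg round in which each simulated hospital trains locally and only model weights are averaged centrally; the model, data, and client count below are synthetic stand-ins, not the cited 10-hospital system.

```python
# Toy federated-averaging (FedAvg) round: local training + central parameter averaging.
import copy
import torch
import torch.nn as nn

def local_update(global_model, data, targets, epochs=1, lr=1e-2):
    """One client's local training on data that never leaves the hospital."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(data), targets).backward()
        opt.step()
    return model.state_dict()

def federated_average(state_dicts):
    """Element-wise average of client parameters (equal client weighting)."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

global_model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))

# Three simulated hospitals with private feature vectors and binary CKD labels.
clients = [(torch.randn(20, 32), torch.randint(0, 2, (20,))) for _ in range(3)]
client_states = [local_update(global_model, x, y) for x, y in clients]
global_model.load_state_dict(federated_average(client_states))
print("completed one FedAvg round")
```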

Integrated intelligent diagnosis system

The ultimate goal of AI in the field of renal ultrasound is to develop an intelligent diagnostic system that can provide full-process, high-efficiency, and high-precision decision support for clinical practice through deep integration of a variety of AI functional modules and optimized human–computer interactions. The current research frontiers focus on building a one-stop diagnostic platform, deepening the application of personalized medicine, and promoting deep multimodal integration (143). The technical basis of the system is built on the framework of a multimodal large model fusion mechanism: unified representation learning is used to integrate ultrasound, CT, MRI and pathomics data, and a cross-attention module is used to achieve dynamic weighting of cross-modal features, which overcomes the limitations of traditional narrow AI, which only processes a single image. At the bottom of the platform, a lightweight real-time inference engine is deployed to compress the number of large model parameters to the scale that can be deployed on edge devices to meet the clinical needs of millisecond response. The core of the one-stop diagnostic platform seamlessly integrates the full chain of renal ultrasound AI applications, including real-time image quality assessment and standardized section guidance, automatic renal segmentation and volume measurement, dynamic prediction of renal function on the basis of image features and elastography parameters, intelligent identification and classification of common renal diseases, and automatic generation of structured diagnostic reports (144). The platform adopts the paradigm of large model pretraining + domain fine-tuning. First, self-supervised pretraining is carried out on millions of multicenter and multimodal data, and then efficient fine-tuning techniques such as LoRA are used to adapt local device parameters and population characteristics on the center-specific data to ensure cross-center generalization ability and localization accuracy. The platform uses a microservice architecture and workflow engine, allowing each AI module to call on demand, feed results to each other, and realize a closed loop of “scan, analysis, report” through a unified interface. Clinical verification shows that the integrated system can reduce the time of renal ultrasound examination and diagnosis by more than 40% and improve the consistency of diagnosis. By integrating the visualization research module of the attention mechanism, the closed loop can generate heatmaps in real time and overlay them on the original image, clearly label the decision basis of the model, and transform the black box into a transparent decision chain. The core of personalized medicine is the use of AI to mine multidimensional data of individual patients (such as dynamic changes in ultrasound imaging features, genetic background, serum/urine biomarker trajectories, comorbidities, and medication history in electronic health records) and the construction of patient-specific disease progression prediction models and treatment response models. For example, Bayesian deep learning models that combine trends in kidney texture features with genomic data can generate customized predictions of kidney decline trajectories for each CKD patient. 
The large-model-based longitudinal dynamic fusion framework uses a temporal fusion transformer, which captures the imaging evolution of patients over months or even years and, combined with time-series data from electronic medical records, realizes dynamic risk warning and adaptive adjustment of treatment plans. Multimodal fusion is the technical cornerstone of precision personalization. Future systems will move beyond the current simple fusion of “ultrasound + clinical data” and evolve toward deep fusion of heterogeneous data. Cross-image-modality fusion uses AI to align and fuse the complementary information of renal ultrasound and CT/MRI/PET-CT, whereas radiomics-and-environment fusion integrates ultrasound radiomics features, serum/urine proteomics/metabolomics, and environmental exposure factors (116). The final multicenter, multimodal, large-sample standardized platform aggregates multicenter data through federated learning; a foundation model of 100 million parameters is trained to form a clinical decision-making hub that is transferable, interpretable, and responsive in real time, fundamentally addressing the disconnect between traditional AI and clinical practice.
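The "pretraining + efficient domain fine-tuning" idea mentioned above (e.g., LoRA) can be sketched as a frozen linear layer augmented with a trainable low-rank update; the rank, scaling, and dimensions below are illustrative and not tied to any specific foundation model.

```python
# Minimal LoRA-style adapter: frozen base weight W plus trainable low-rank B·A.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank=4, alpha=8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze the pretrained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # y = base(x) + scale * x (B A)^T   -- only A and B are tuned per center.
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(nn.Linear(256, 128))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable adapter parameters: {trainable}")   # 4*256 + 128*4 = 1536
x = torch.randn(2, 256)
print(layer(x).shape)                                  # torch.Size([2, 128])
```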

Multidisciplinary cooperation

The breakthrough of DL in the field of renal ultrasound requires deep interdisciplinary integration, integrating the expertise of computer scientists (algorithm design), imaging experts (image annotation and clinical relevance), nephrologists (diagnosis and treatment decision-making) and biomedical engineers (signal and software and hardware optimization) (145). Through joint discussion and clinical rotation to establish a common understanding, collaboration should run through the full cycle from clinical requirement definition, data collection, and model development to clinical verification to avoid technology being divorced from reality (146). This collaborative model can not only improve the reliability and interpretability of AI in the diagnosis of hydronephrosis and nephropathy but also optimize its clinical applicability so that resources can be focused on real bottleneck problems (147).

Conclusions

DL has brought transformative advances to renal ultrasound, enhancing diagnostic accuracy, efficiency, and standardization across image segmentation, volumetry, disease diagnosis, and functional prediction while addressing traditional ultrasound limitations such as operator dependence and insufficient quantification, with performance comparable to that of professional physicians in renal structure recognition and lesion detection (148). However, critical research gaps hinder clinical translation, including uneven data quality, inadequate standardization and labeling, limited algorithm interpretability, poor cross-device generalization, clinical integration barriers, and incomplete regulatory and ethical frameworks. Accelerating this process requires targeted steps such as establishing unified data standards, developing explainable AI, deepening interdisciplinary collaboration, and refining regulatory guidelines. Future advances in multimodal fusion, federated learning, and medical large language models will drive AI toward intelligent, personalized renal ultrasound systems that optimize the full workflow from image acquisition to clinical decision-making, ultimately enabling AI to become a core tool for precise renal disease diagnosis and treatment and supporting global kidney health management.

Author contributions

YZha: Writing – original draft, Methodology, Supervision, Data curation, Conceptualization, Software, Investigation, Resources, Validation, Formal Analysis, Project administration, Funding acquisition, Visualization, Writing – review & editing. YH: Writing – review & editing, Conceptualization, Resources, Data curation. YZhu: Investigation, Methodology, Writing – review & editing, Supervision. KC: Writing – review & editing, Project administration, Formal Analysis, Data curation. WL: Visualization, Investigation, Data curation, Funding acquisition, Writing – review & editing. YL: Funding acquisition, Project administration, Methodology, Writing – review & editing. JL: Methodology, Conceptualization, Writing – review & editing, Project administration, Validation. TQ: Writing – review & editing.

Funding

The author(s) declared that financial support was received for this work and/or its publication. This research was supported by Sichuan Province Science and Technology Support Program 2025ZNSFSC1760.

Acknowledgments

We would like to thank the Biomedical Engineering Experimental Teaching Center of Sichuan University for their assistance in the experiments.

Conflict of interest

The authors declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Bello AK, Ronksley PE, Tangri N, Kurzawa J, Osman MA, Singer A, et al. Quality of chronic kidney disease management in canadian primary care. JAMA network Open. (2019) 2:e1910704. doi: 10.1001/jamanetworkopen.2019.10704


2. Rashidi P and Bihorac A. Artificial intelligence approaches to improve kidney care. Nat Rev Nephrol. (2020) 16:71–2. doi: 10.1038/s41581-019-0243-3


3. Kranert PC, Kranert P, Banas MC, Jung EM, Banas B, Putz FJ, et al. Utility of ultrasound-guided attenuation parameter (UGAP) in renal angiomyolipoma (AML): first results. Diagnostics. (2024) 14:2002. doi: 10.3390/diagnostics14182002


4. Dinescu SC, Stoica D, Bita CE, Nicoara AI, Cirstei M, Staiculesc MA, et al. Applications of artificial intelligence in musculoskeletal ultrasound: narrative review. Front Med. (2023) 10:1286085. doi: 10.3389/fmed.2023.1286085


5. Revzin MV, Srivastava B, and Pellerito JS. Ultrasound of the upper urinary tract. Radiologic Clinics North America. (2025) 63:57–82. doi: 10.1016/j.rcl.2024.09.002


6. Gunabushanam G, Chaubal R, and Scoutt LM. Doppler ultrasound of the renal vasculature. J ultrasound Med. (2024) 43:1543–62. doi: 10.1002/jum.16466


7. Xu Y, Luo Y, Chen M, Peng Q, and Niu C. Super-resolution ultrasound imaging of renal microcirculation in a murine model of renal fibrosis. J ultrasound Med. (2025) 44:2229–41. doi: 10.1002/jum.70003


8. Brasseler M, Finkelberg I, Müntjes C, and Cetiner M. Case Report: Renal artery stenosis in children: ultrasound as a decisive diagnostic and therapy-accompanying technique. Front Pediatr. (2023) 11:1251757. doi: 10.3389/fped.2023.1251757


9. Tufano A, Antonelli L, Di Pierro GB, Flammia RS, Minelli R, Anceschi U, et al. Diagnostic performance of contrast-enhanced ultrasound in the evaluation of small renal masses: A systematic review and meta-analysis. Diagnostics. (2022) 12:2310. doi: 10.3390/diagnostics12102310


10. Franke D, Renz DM, and Mentzel HJ. Bildgebung nach Nierentransplantation im Kindes- und Jugendalter. Radiologie (Heidelberg Germany). (2024) 64:45–53. doi: 10.1007/s00117-023-01249-x


11. Langdon J, Sharbidre K, Garner MS, Robbin M, and Scoutt LM. Renal transplant ultrasound: assessment of complications and advanced applications. Abdominal Radiol. (2025) 50:2558–85. doi: 10.1007/s00261-024-04731-9


12. Ng KH, Wong JHD, and Leong SS. Shear wave elastography in chronic kidney disease - the physics and clinical application. Phys Eng Sci Med. (2024) 47:17–29. doi: 10.1007/s13246-023-01358-w


13. Shi LQ, Sun J, Yuan L, Wang XW, and Li W. Diagnostic performance of renal cortical elasticity by supersonic shear wave imaging in pediatric glomerular disease. Eur J Radiol. (2023) 168:111113. doi: 10.1016/j.ejrad.2023.111113


14. Kelly BC, Fung R, and Fung C. Risk stratification framework to improve the utility of renal ultrasound in acute kidney injury. SA J Radiol. (2024) 28:2889. doi: 10.4102/sajr.v28i1.2889


15. De Jesus-Rodriguez HJ, Morgan MA, and Sagreiya H. Deep learning in kidney ultrasound: overview, frontiers, and challenges. Adv chronic Kidney Dis. (2021) 28:262–9. doi: 10.1053/j.ackd.2021.07.004


16. Cai L and Pfob A. Artificial intelligence in abdominal and pelvic ultrasound imaging: current applications. Abdominal Radiol. (2025) 50:1775–89. doi: 10.1007/s00261-024-04640-x


17. McDonald R, Watchorn J, and Hutchings S. New ultrasound techniques for acute kidney injury diagnostics. Curr Opin Crit Care. (2024) 30:571–6. doi: 10.1097/MCC.0000000000001216


18. Jia J, Wang B, Wang Y, and Han Y. Application of ultrasound in early prediction of delayed graft function after renal transplantation. Abdominal Radiol. (2024) 49:3548–58. doi: 10.1007/s00261-024-04353-1


19. Yan L, Li Q, Fu K, Zhou X, and Zhang K. Progress in the application of artificial intelligence in ultrasound-assisted medical diagnosis. Bioengineering. (2025) 12:288. doi: 10.3390/bioengineering12030288


20. Xu T, Zhang XY, Yang N, Jiang F, Chen GQ, Pan X, et al. A narrative review on the application of artificial intelligence in renal ultrasound. Front Oncol. (2024) 13:1252630. doi: 10.3389/fonc.2023.1252630


21. Niyyar VD, Ross DW, and O'Neill WC. Performance and interpretation of sonography in the practice of nephrology: core curriculum 2024. Am J Kidney Dis. (2024) 83:531–45. doi: 10.1053/j.ajkd.2023.09.006


22. Xu M, Guo X, Chen X, Wu Y, Huang X, Li X, et al. Noninvasive assessment of pediatric glomerular disease: multimodal ultrasound. Quantitative Imaging Med Surg. (2025) 15:15–29. doi: 10.21037/qims-24-1126


23. Pak S, Park SG, Park J, Cho ST, Lee YG, Ahn H, et al. Applications of artificial intelligence in urologic oncology. Invest Clin Urol. (2024) 65:202–16. doi: 10.4111/icu.20230435


24. Li J, Xi C, Dai H, Wang J, Lv Y, Zhang P, et al. Enhanced PET imaging using progressive conditional deep image prior. Phys Med Biol. (2023) 68:175047. doi: 10.1088/1361-6560/acf091


25. Bachnas MA, Andonotopo W, Dewantiningrum J, Adi Pramono MB, Stanojevic M, Kurjak A, et al. The utilization of artificial intelligence in enhancing 3D/4D ultrasound analysis of fetal facial profiles. J perinatal Med. (2024) 52:899–913. doi: 10.1515/jpm-2024-0347


26. Sabiri B, Khtira A, El Asri B, and Rhanoui M. Investigating contrastive pair learning’s frontiers in supervised, semisupervised, and self-supervised learning. J Imaging. (2024) 10:196. doi: 10.3390/jimaging10080196


27. Islam U, Al-Atawi A, Alwageed HS, Mehmood G, Khan F, Innab N, et al. Detection of renal cell hydronephrosis in ultrasound kidney images: a study on the efficacy of deep convolutional neural networks. PeerJ. Comput Sci. (2024) 10:e1797. doi: 10.7717/peerj-cs.1797


28. Ma J, Kong D, Wu F, Bao L, Yuan J, and Liu Y. Densely connected convolutional networks for ultrasound image based lesion segmentation. Comput Biol Med. (2024) 168:107725. doi: 10.1016/j.compbiomed.2023.107725


29. Zebari DA. Kidney disease segmentation and classification using firefly sigma seeker and magWeight rank techniques. Bioengineering. (2025) 12:350. doi: 10.3390/bioengineering12040350


30. Shen Z, Tang C, Xu M, and Lei Z. Removal of speckle noises from ultrasound images using parallel convolutional neural network. Circuits Systems Signal Process. (2023) 42:5041–64. doi: 10.1007/s00034-023-02349-8


31. Zhou Z, Siddiquee MMR, Tajbakhsh N, and Liang J. UNet++: A nested U-net architecture for medical image segmentation. Deep Learn Med Image Anal Multimodal Learn Clin Decision Support: 4th Int Workshop. (2018) 11045:3–11. doi: 10.1007/978-3-030-00889-5_1

32. Rajan K, Zielesny A, and Steinbeck C. DECIMER: toward deep learning for chemical image recognition. J Cheminformatics. (2020) 12:65. doi: 10.1186/s13321-020-00469-w

33. Oliveira DA, Bresolin T, Coelho SG, Campos MM, Lage CFA, Leão JM, et al. A polar transformation augmentation approach for enhancing mammary gland segmentation in ultrasound images. Comput Electron Agric. (2024) 220:108825. doi: 10.1016/j.compag.2024.108825

34. Goel P and Ganatra A. Unsupervised domain adaptation for image classification and object detection using guided transfer learning approach and JS divergence. Sensors. (2023) 23:4436. doi: 10.3390/s23094436

35. Guo S, Chen H, Sheng X, Xiong Y, Wu M, Fischer K, et al. Cross-modal transfer learning based on an improved cycleGAN model for accurate kidney segmentation in ultrasound images. Ultrasound Med Biol. (2024) 50:1638–45. doi: 10.1016/j.ultrasmedbio.2024.06.009

36. Liu Y, Zhao Y, Xiao Z, Geng L, and Xiao Z. Multiscale subgraph adversarial contrastive learning. IEEE Trans Neural Networks Learn Syst. (2025) 36:15001–14. doi: 10.1109/TNNLS.2025.3543954

37. Marsousi M, Plataniotis KN, and Stergiopoulos S. Kidney detection in 3-D ultrasound imagery via shape-to-volume registration based on spatially aligned neural network. IEEE J Biomed Health Inf. (2019) 23:227–42. doi: 10.1109/JBHI.2018.2805777

38. Peng T, Gu Y, Ruan SJ, Wu QJ, and Cai J. Novel solution for using neural networks for kidney boundary extraction in 2D ultrasound data. Biomolecules. (2023) 13:1548. doi: 10.3390/biom13101548

39. Khaledyan D, Marini TJ, O'Connell A, Meng S, Kan J, Brennan G, et al. WATUNet: a deep neural network for segmentation of volumetric sweep imaging ultrasound. Mach Learning: Sci Technol. (2024) 5:015042. doi: 10.1088/2632-2153/ad2e15

40. Singla R, Ringstrom C, Hu R, Hu Z, Lessoway V, Reid J, et al. Automatic measurement of kidney dimensions in two-dimensional ultrasonography is comparable to expert sonographers. J Med Imaging. (2023) 10:34003. doi: 10.1117/1.JMI.10.3.034003

41. Khan R, Xiao C, Liu Y, Tian J, Chen Z, Su L, et al. Transformative deep neural network approaches in kidney ultrasound segmentation: empirical validation with an annotated dataset. Interdiscip Sci Comput Life Sci. (2024) 16:439–54. doi: 10.1007/s12539-024-00620-3

42. Guo J, Odu A, and Pedrosa I. Deep learning kidney segmentation with very limited training data using a cascaded convolution neural network. PloS One. (2022) 17:e0267753. doi: 10.1371/journal.pone.0267753

43. Yu Z, Zhao T, Xi Z, Zhang Y, Zhang X, Wang X, et al. Using CT images to assist the segmentation of MR images via generalization: Segmentation of the renal parenchyma of renal carcinoma patients. Med Phys. (2025) 52:951–64. doi: 10.1002/mp.17494

44. Ghaith N, Malaeb B, Itani R, Alnafea M, and Al Faraj A. Correlation of kidney size on computed tomography with GFR, creatinine and hbA1C for an accurate diagnosis of patients with diabetes and/or chronic kidney disease. Diagnostics. (2021) 11:789. doi: 10.3390/diagnostics11050789

45. Fernandez JM, Hernandez-Socorro CR, Robador LO, Rodríguez-Esparragón F, Medina-García D, Quevedo-Reina JC, et al. Ultrasound versus magnetic resonance imaging for calculating total kidney volume in patients with ADPKD: a real-world data analysis. Ultrasound J. (2025) 17:13. doi: 10.1186/s13089-025-00400-0

46. Silva F, Malheiro J, Pestana N, Ribeiro C, Nunes-Carneiro D, Mandanelo M, et al. Lower donated kidney volume is associated with increased risk of lower graft function and acute rejection at 1 year after living donor kidney-a retrospective study. Transplant Int. (2020) 33:1711–22. doi: 10.1111/tri.13740

47. Khosravi M, Mokhtari G, Ramezanzade E, Yazdanipour MA, Monfared A, Haghighi H, et al. Relationship between donated kidney volume determined by ultrasound adjusted for clinical factors and 1-month and 1-year creatinine clearance: A retrospective study. Clin Nephrol. (2023) 99:1–10. doi: 10.5414/CN110964

48. Al Salmi I, Al Hajriy M, and Hannawi S. Ultrasound measurement and kidney development: a mini-review for nephrologists. Saudi J Kidney Dis Transplant. (2021) 32:174–82. doi: 10.4103/1319-2442.318520

49. Cai L, Li Q, Zhang J, Zhang Z, Yang R, Zhang L, et al. Ultrasound image segmentation based on Transformer and U-Net with joint loss. PeerJ Comput Sci. (2023) 9:e1638. doi: 10.7717/peerj-cs.1638

50. Zhang WB, Zhou P, Chen Y, and Zhou GQ. Frequency-phase guided attention complex-valued network for ultrasound image segmentation. IEEE J Biomed Health Inf. (2025) 29:5773–86. doi: 10.1109/JBHI.2025.3565311

51. Xiao X, Zhang J, Shao Y, Liu J, Shi K, He C, et al. Deep learning-based medical ultrasound image and video segmentation methods: overview, frontiers, and challenges. Sensors. (2025) 25:2361. doi: 10.3390/s25082361

52. Song Y, Zheng J, Lei L, Ni Z, Zhao B, Hu Y, et al. CT2US: Cross-modal transfer learning for kidney segmentation in ultrasound images with synthesized data. Ultrasonics. (2022) 122:106706. doi: 10.1016/j.ultras.2022.106706

53. Chen G, Yin J, Dai Y, Zhang J, Yin X, Cui L, et al. A novel convolutional neural network for kidney ultrasound images segmentation. Comput Methods Programs Biomedicine. (2022) 218:106712. doi: 10.1016/j.cmpb.2022.106712

54. Khan R, Zaman A, Chen C, Xiao C, Zhong W, Liu Y, et al. MLAU-Net: Deep supervised attention and hybrid loss strategies for enhanced segmentation of low-resolution kidney ultrasound. Digital Health. (2024) 10:20552076241291306. doi: 10.1177/20552076241291306

55. Alex DM, Abraham Chandy D, Hepzibah Christinal A, Arvinder S, and Pushkaran M. YSegNet: a novel deep learning network for kidney segmentation in 2D ultrasound images. Neural Computing Appl. (2022) 34:22405–16. doi: 10.1007/s00521-022-07624-4

56. Sharifzadeh M, Benali H, and Rivaz H. Investigating shift variance of convolutional neural networks in ultrasound image segmentation. IEEE Trans Ultrasonics Ferroelectrics Frequency Control. (2022) 69:1703–13. doi: 10.1109/tuffc.2022.3162800

57. Weerasinghe NH, Lovell NH, Welsh AW, and Stevenson GN. Multi-parametric fusion of 3D power doppler ultrasound for fetal kidney segmentation using fully convolutional neural networks. IEEE J Biomed Health Inf. (2021) 25:2050–7. doi: 10.1109/JBHI.2020.3027318

58. Guo S, Sheng X, Chen H, Zhang J, Peng Q, Wu M, et al. A novel cross-modal data augmentation method based on contrastive unpaired translation network for kidney segmentation in ultrasound imaging. Med Phys. (2025) 52:3877–87. doi: 10.1002/mp.17663

59. Jagtap JM, Gregory AV, Homes HL, Wright DE, Edwards ME, Akkus Z, et al. Automated measurement of total kidney volume from 3D ultrasound images of patients affected by polycystic kidney disease and comparison to MR measurements. Abdominal Radiol. (2022) 47:2408–19. doi: 10.1007/s00261-022-03521-5

60. Oghli MG, Bagheri SM, Shabanzadeh A, Mehrjardi MZ, Akhavan A, Shiri I, et al. Fully automated kidney image biomarker prediction in ultrasound scans using Fast-Unet+. Sci Rep. (2024) 14:4782. doi: 10.1038/s41598-024-55106-5

61. Kim DW, Ahn HG, Kim J, Yoon CS, Kim JH, and Yang S. Advanced kidney volume measurement method using ultrasonography with artificial intelligence-based hybrid learning in children. Sensors. (2021) 21:6846. doi: 10.3390/s21206846

62. Esser M, Tsiflikas I, Jago JR, Rouet L, Stebner A, and Schäfer JF. Semiautomatic three-dimensional ultrasound renal volume segmentation in pediatric hydronephrosis: interrater agreement and correlation to conventional hydronephrosis grading. Pediatr Radiol. (2025) 55:1298–307. doi: 10.1007/s00247-025-06249-8

63. Dev H, Zhu C, Sharbatdaran A, Raza SI, Wang SJ, Romano DJ, et al. Effect of averaging measurements from multiple MRI pulse sequences on kidney volume reproducibility in autosomal dominant polycystic kidney disease. J Magnetic Resonance Imaging. (2023) 58:1153–60. doi: 10.1002/jmri.28593

64. Akbari P, Nasri F, Deng SX, Khowaja S, Lee SH, Warnica W, et al. Total kidney volume measurements in ADPKD by 3D and ellipsoid ultrasound in comparison with magnetic resonance imaging. Clin J Am Soc Nephrol. (2022) 17:827–34. doi: 10.2215/CJN.14931121

65. Ma F, Sun T, Liu L, and Jing H. Detection and diagnosis of chronic kidney disease using deep learning-based heterogeneous modified artificial neural network. Future Generation Comput Syst. (2020) 111:17–26. doi: 10.1016/j.future.2020.04.036

66. Weaver JK, Milford K, Rickard M, Logan J, Erdman L, Viteri B, et al. Deep learning imaging features derived from kidney ultrasounds predict chronic kidney disease progression in children with posterior urethral valves. Pediatr Nephrol. (2023) 38:839–46. doi: 10.1007/s00467-022-05677-0

67. Lee M, Wei S, Anaokar J, Uzzo R, and Kutikov A. Kidney cancer management 3.0: can artificial intelligence make us better? Curr Opin Urol. (2021) 31:409–15. doi: 10.1097/MOU.0000000000000881

68. Khalid F, Alsadoun L, Khilji F, Mushtaq M, Eze-Odurukwe A, Mushtaq MM, et al. Predicting the progression of chronic kidney disease: A systematic review of artificial intelligence and machine learning approaches. Cureus. (2024) 16:e60145. doi: 10.7759/cureus.60145

69. Jhamb M, Weltman MR, Devaraj SM, Lavenburg LU, Han Z, Alghwiri AA, et al. Electronic health record population health management for chronic kidney disease care: A cluster randomized clinical trial. JAMA Internal Med. (2024) 184:737–47. doi: 10.1001/jamainternmed.2024.0708

70. Inaguma D, Kitagawa A, Yanagiya R, Koseki A, Iwamori T, Kudo M, et al. Increasing tendency of urine protein is a risk factor for rapid eGFR decline in patients with CKD: A machine learning-based prediction model by using a big database. PloS One. (2020) 15:e0239262. doi: 10.1371/journal.pone.0239262

71. Ying F, Chen S, Pan G, et al. Artificial intelligence pulse coupled neural network algorithm in the diagnosis and treatment of severe sepsis complicated with acute kidney injury under ultrasound image. J Healthcare Eng. (2021) 2021:6761364. doi: 10.1155/2021/6761364

72. Alqaissi E, Algarni A, Alshehri M, Alkhaldy H, and Alshehri A. A recursive embedding and clustering technique for unraveling asymptomatic kidney disease using laboratory data and machine learning. Sci Rep. (2025) 15:5820. doi: 10.1038/s41598-025-89499-8

73. Chen Z, Chen J, Ying TC, Chen H, Wu C, Chen X, et al. Development and deployment of a novel diagnostic tool based on conventional ultrasound for fibrosis assessment in chronic kidney disease. Acad Radiol. (2023) 30:S295–304. doi: 10.1016/j.acra.2023.02.018

74. Delrue C, De Bruyne S, and Speeckaert MM. Application of machine learning in chronic kidney disease: current status and future prospects. Biomedicines. (2024) 12:568. doi: 10.3390/biomedicines12030568

75. Molina-Moreno M, Gonzalez-Diaz I, Rivera Gorrin M, Burguera Vion V, and Díaz-de-María F. URI-CADS: A fully automated computer-aided diagnosis system for ultrasound renal imaging. J Imaging Inf Med. (2024) 37:1458–74. doi: 10.1007/s10278-024-01055-4

76. Gogoi P and Valan JA. Machine learning approaches for predicting and diagnosing chronic kidney disease: current trends, challenges, solutions, and future directions. Int Urol Nephrol. (2025) 57:1245–68. doi: 10.1007/s11255-024-04281-5

77. Shi S. A novel hybrid deep learning architecture for predicting acute kidney injury using patient record data and ultrasound kidney images. Appl Artif Intell. (2021) 35:1329–45. doi: 10.1080/08839514.2021.1976908

78. Jeong I, Cho NJ, Ahn SJ, Lee H, and Gil HW. Machine learning approaches toward an understanding of acute kidney injury: current trends and future directions. Korean J Internal Med. (2024) 39:882–97. doi: 10.3904/kjim.2024.098

79. Xu Q, Qiang B, Pan Y, and Li J. Alteration in shear wave elastography is associated with acute kidney injury: a prospective observational pilot study. Shock. (2023) 59:375–84. doi: 10.1097/SHK.0000000000002070

80. Wang Y, Xu F, Han Q, Geng D, Gao X, Xu B, et al. AI-based automatic estimation of single-kidney glomerular filtration rate and split renal function using noncontrast CT. Insights into Imaging. (2025) 16:84. doi: 10.1186/s13244-025-01959-x

81. Chen Z, Ying MTC, Wang Y, Chen J, Wu C, Han X, et al. Ultrasound-based radiomics analysis in the assessment of renal fibrosis in patients with chronic kidney disease. Abdominal Radiol. (2023) 48:2649–57. doi: 10.1007/s00261-023-03965-3

82. Yuan H, Huang Q, Wen J, and Gao Y. Ultrasound viscoelastic imaging in the noninvasive quantitative assessment of chronic kidney disease. Renal Failure. (2024) 46:2407882. doi: 10.1080/0886022X.2024.2407882

83. Huang X, Wei T, Li J, Xu L, Tang Y, Liao JT, et al. Multimodal ultrasound for assessment of renal fibrosis in biopsy-proven patients with chronic kidney disease. Ultraschall der Med. (2025) 31:1–10. doi: 10.1055/a-2559-7743

84. Tang Y and Qin W. Application of multimodal ultrasonography to predicting the acute kidney injury risk of patients with sepsis: artificial intelligence approach. PeerJ Comput Sci. (2024) 10:e2157. doi: 10.7717/peerj-cs.2157

85. Elshewey AM, Selem E, and Abed AH. Improved CKD classification based on explainable artificial intelligence with extra trees and BBFS. Sci Rep. (2025) 15:17861. doi: 10.1038/s41598-025-02355-7

86. Tian S, Yu Y, Shi K, Jiang Y, Song H, Wang Y, et al. Deep learning radiomics based on ultrasound images for the assisted diagnosis of chronic kidney disease. Nephrology. (2024) 29:748–57. doi: 10.1111/nep.14376

87. Zhu M, Ma L, Yang W, Tang L, Li H, Zheng M, et al. Elastography ultrasound with machine learning improves the diagnostic performance of traditional ultrasound in predicting kidney fibrosis. J Formosan Med Assoc. (2022) 121:1062–72. doi: 10.1016/j.jfma.2021.08.011

88. Kuo CC, Chang CM, Liu KT, Lin WK, Chiang HY, Chung CW, et al. Automation of the kidney function prediction and classification through ultrasound-based kidney imaging using deep learning. NPJ Digital Med. (2019) 2:29. doi: 10.1038/s41746-019-0104-2

89. Alnazer I, Bourdon P, Urruty T, Falou O, Khalil M, Shahin A, et al. Recent advances in medical image processing for the evaluation of chronic kidney disease. Med Image Anal. (2021) 69:101960. doi: 10.1016/j.media.2021.101960

90. Lim WTH, Ooi EH, Foo JJ, Ng KH, Wong JHD, Leong SS, et al. Shear wave elastography: A review on the confounding factors and their potential mitigation in detecting chronic kidney disease. Ultrasound Med Biol. (2021) 47:2033–47. doi: 10.1016/j.ultrasmedbio.2021.03.030

91. Qiang B, Xu Q, Pan Y, Wang J, Shen C, Peng X, et al. Shear wave elastography: A noninvasive approach for assessing acute kidney injury in critically ill patients. PloS One. (2024) 19:e0296411. doi: 10.1371/journal.pone.0296411

92. Miller ZA and Dwyer K. Artificial intelligence to predict chronic kidney disease progression to kidney failure: A narrative review. Nephrology. (2025) 30:e14424. doi: 10.1111/nep.14424

93. Puccinelli C, Pelligra T, Lippi I, and Citi S. Diagnostic utility of two-dimensional shear wave elastography in nephropathic dogs and its correlation with renal contrast-enhanced ultrasound in course of acute kidney injury. J Veterinary Med Sci. (2023) 85:1216–25. doi: 10.1292/jvms.23-0065

94. Zaky A, Beck AW, Bae S, Sturdivant A, Liwo A, Zdenek N, et al. The biosonographic index. A novel modality for early detection of acute kidney injury after complex vascular surgery. A protocol for an exploratory prospective study. PloS One. (2020) 15:e0241782. doi: 10.1371/journal.pone.0241782

95. Loftus TJ, Shickel B, Ozrazgat-Baslanti T, Ren Y, Glicksberg BS, Cao J, et al. Artificial intelligence-enabled decision support in nephrology. Nat Rev Nephrol. (2022) 18:452–65. doi: 10.1038/s41581-022-00562-3

96. Ozcan SGG and Erkan M. Reliability and quality of information provided by artificial intelligence chatbots on postcontrast acute kidney injury: an evaluation of diagnostic, preventive, and treatment guidance. Rev da Associacao Med Bras. (2024) 70:e20240891. doi: 10.1590/1806-9282.20240891

97. Samal L, Kilgallon JL, Lipsitz S, Baer HJ, McCoy A, Gannon M, et al. Clinical decision support for hypertension management in chronic kidney disease: A randomized clinical trial. JAMA Internal Med. (2024) 184:484–92. doi: 10.1001/jamainternmed.2023.8315

98. Sudharson S and Kokil P. An ensemble of deep neural networks for kidney ultrasound image classification. Comput Methods Programs Biomedicine. (2020) 197:105709. doi: 10.1016/j.cmpb.2020.105709

99. Weaver JK, Logan J, Broms R, Antony M, Rickard M, Erdman L, et al. Deep learning of renal scans in children with antenatal hydronephrosis. J Pediatr Urol. (2023) 19:514. doi: 10.1016/j.jpurol.2022.12.017

100. Jacq A, Tarris G, Jaugey A, Paindavoine M, Maréchal E, Bard P, et al. Automated evaluation with deep learning of total interstitial inflammation and peritubular capillaritis on kidney biopsies. Nephrol Dial Transplant. (2023) 38:2786–98. doi: 10.1093/ndt/gfad094

101. David N and Horrow MM. Pitfalls in renal ultrasound. Ultrasound Q. (2020) 36:300–13. doi: 10.1097/RUQ.0000000000000519

102. Shehata M, Abouelkheir RT, Gayhart M, Van Bogaert E, Abou El-Ghar M, Dwyer AC, et al. Role of AI and radiomic markers in early diagnosis of renal cancer and clinical outcome prediction: A brief review. Cancers. (2023) 15:2835. doi: 10.3390/cancers15102835

103. Sheikhy A, Dehghani Firouzabadi F, Lay N, Chaudhri S, Chandarana H, and Bagga B. State of the art review of AI in renal imaging. Abdominal Radiol. (2025) 50:5305–23. doi: 10.1007/s00261-025-04963-3

104. Yin S, Peng Q, Li H, Zhang Z, You X, Fischer K, et al. Multi-instance deep learning of ultrasound imaging data for pattern classification of congenital abnormalities of the kidney and urinary tract in children. Urology. (2020) 142:183–9. doi: 10.1016/j.urology.2020.05.019

105. Hai J, Qiao K, Chen J, Liang N, Zhang L, Yan B, et al. Multiview features integrated 2D\3D Net for glomerulopathy histologic types classification using ultrasound images. Comput Methods Programs Biomedicine. (2021) 212:106439. doi: 10.1016/j.cmpb.2021.106439

106. Tsai MC, Lu HHS, Chang YC, Huang YC, and Fu LS. Automatic screening of pediatric renal ultrasound abnormalities: deep learning and transfer learning approach. JMIR Med Inf. (2022) 10:e40878. doi: 10.2196/40878

107. Lanza C, Carriero S, Biondetti P, Angileri SA, Carrafiello G, Ierardi AM, et al. Advances in imaging guidance during percutaneous ablation of renal tumors. Semin Ultrasound CT MR. (2023) 44:162–9. doi: 10.1053/j.sult.2023.03.003

108. Serhal M, Rangwani S, Seedial SM, Thornburg B, Riaz A, Nemcek AA, et al. Safety and diagnostic efficacy of image-guided biopsy of small renal masses. Cancers. (2024) 16:835. doi: 10.3390/cancers16040835

109. Sharma NK and Sarode SC. Evolving Artificial Intelligence (AI) at the Crossroads: Potentiating Productive vs. Declining Disruptive Cancer Research. Cancers. (2024) 16:3646. doi: 10.3390/cancers16213646

110. Aljabri M, AlAmir M, AlGhamdi M, Abdel-Mottaleb M, and Collado-Mesa F. Toward a better understanding of annotation tools for medical imaging: a survey. Multimedia Tools Appl. (2022) 81:25877–911. doi: 10.1007/s11042-022-12100-1

111. Zheng F, XingMing L, JuYing X, MengYing T, BaoJian Y, Yan S, et al. Significant reduction in manual annotation costs in ultrasound medical image database construction through step by step artificial intelligence preannotation. PloS Digital Health. (2025) 4:e0000738. doi: 10.1371/journal.pdig.0000738

112. Wu L, Xia D, Wang J, Chen S, Cui X, Shen L, et al. Deep learning detection and segmentation of facet joints in ultrasound images based on convolutional neural networks and enhanced data annotation. Diagnostics. (2024) 14:755. doi: 10.3390/diagnostics14070755

113. Wang Y, Cheungpasitporn W, Ali H, Qing J, Thongprayoon C, Kaewput W, et al. A practical guide for nephrologist peer reviewers: evaluating artificial intelligence and machine learning research in nephrology. Renal Failure. (2025) 47:2513002. doi: 10.1080/0886022X.2025.2513002

114. Lee S, Kang M, Byeon K, Lee SE, Lee IH, Kim YA, et al. Machine learning-aided chronic kidney disease diagnosis based on ultrasound imaging integrated with computer-extracted measurable features. J Digital Imaging. (2022) 35:1091–100. doi: 10.1007/s10278-022-00625-8

115. Kumar SS, Khandekar N, Dani K, Bhatt SR, Duddalwar V, D'Souza A, et al. A scoping review of population diversity in the common genomic aberrations of clear cell renal cell carcinoma. Oncology. (2025) 103:341–50. doi: 10.1159/000541370

116. Alexa R, Kranz J, Kramann R, Kuppe C, Sanyal R, Hayat S, et al. Harnessing artificial intelligence for enhanced renal analysis: automated detection of hydronephrosis and precise kidney segmentation. Eur Urol Open Sci. (2024) 62:19–25. doi: 10.1016/j.euros.2024.01.017

117. Wang J, Wang K, Yu Y, Lu Y, Xiao W, Sun Z, et al. Self-improving generative foundation model for synthetic medical image generation and clinical applications. Nat Med. (2025) 31:609–17. doi: 10.1038/s41591-024-03359-y

118. Valerio AG, Trufanova K, de Benedictis S, Vessio G, and Castellano G. From segmentation to explanation: Generating textual reports from MRI with LLMs. Comput Methods Programs Biomedicine. (2025) 270:108922. doi: 10.1016/j.cmpb.2025.108922

119. Yao J, Wang Y, Lei Z, Wang K, Feng N, Dong F, et al. Multimodal GPT model for assisting thyroid nodule diagnosis and management. NPJ Digital Med. (2025) 8:245. doi: 10.1038/s41746-025-01652-9

120. Alderden J, Johnny J, Brooks KR, Wilson A, Yap TL, Zhao YL, et al. Explainable artificial intelligence for early prediction of pressure injury risk. Am J Crit Care. (2024) 33:373–81. doi: 10.4037/ajcc2024856

121. Ullah N, Guzman-Aroca F, Martinez-Alvarez F, De Falco I, and Sannino G. A novel explainable AI framework for medical image classification integrating statistical, visual, and rule-based methods. Med Image Anal. (2025) 105:103665. doi: 10.1016/j.media.2025.103665

122. Dixit S, Sharma D, Sharma N, and Shukla VK. A review of software in clinical trials: FDA regulatory frameworks and addressing challenges. Rev Recent Clin Trials. (2025) 29:1–7. doi: 10.2174/0115748871359356250523033831

123. Hassan SU, Abdulkadir SJ, Zahid MSM, and Al-Selwi SM. Local interpretable model-agnostic explanation approach for medical imaging analysis: A systematic literature review. Comput Biol Med. (2025) 185:109569. doi: 10.1016/j.compbiomed.2024.109569

124. Brandenburg JM, Müller-Stich BP, Wagner M, and Schaar M. Can surgeons trust AI? Perspectives on machine learning in surgery and the importance of eXplainable Artificial Intelligence (XAI). Langenbeck’s Arch Surg. (2025) 410:53. doi: 10.1007/s00423-025-03626-7

125. Drukker L, Noble JA, and Papageorghiou AT. Introduction to artificial intelligence in ultrasound imaging in obstetrics and gynecology. Ultrasound Obstet Gynecol. (2020) 56:498–505. doi: 10.1002/uog.22122

126. Xia Q, Du M, Li B, Hou L, and Chen Z. Interdisciplinary collaboration opportunities, challenges, and solutions for artificial intelligence in ultrasound. Curr Med Imaging. (2022) 18:1046–51. doi: 10.2174/1573405618666220321123126

127. Ostrowski DA, Logan JR, Antony M, Broms R, Weiss DA, Van Batavia J, et al. Automated Society of Fetal Urology (SFU) grading of hydronephrosis on ultrasound imaging using a convolutional neural network. J Pediatr Urol. (2023) 19:566. doi: 10.1016/j.jpurol.2023.05.014

128. Liang X, Du M, and Chen Z. Artificial intelligence-aided ultrasound in renal diseases: a systematic review. Quantitative Imaging Med Surg. (2023) 13:3988–4001. doi: 10.21037/qims-22-1428

129. Števík M, Malík M, Vetešková Š, Trabalková Z, Hliboký M, Kolárik M, et al. Hybrid artificial intelligence solution combining convolutional neural network and analytical approach showed higher accuracy in A-lines detection on lung ultrasound in thoracic surgery patients compared with radiology resident. Neuro Endocrinol Lett. (2024) 45:229–37.

130. Bai L, Liu M, and Sun Y. Overview of food preservation and traceability technology in the smart cold chain system. Foods. (2023) 12:2881. doi: 10.3390/foods12152881

131. Shaikh F, Kenny JE, Awan O, Markovic D, Friedman O, He T, et al. Measuring the accuracy of cardiac output using POCUS: the introduction of artificial intelligence into routine care. Ultrasound J. (2022) 14:47. doi: 10.1186/s13089-022-00301-6

132. Thomas J, Ledger GA, and Mamillapalli CK. Use of artificial intelligence and machine learning for estimating malignancy risk of thyroid nodules. Curr Opin Endocrinol Diabetes Obes. (2020) 27:345–50. doi: 10.1097/MED.0000000000000557

133. Xu C, Wang Z, Zhou J, Hu F, Wang Y, Xu Z, et al. Application research of artificial intelligence software in the analysis of thyroid nodule ultrasound image characteristics. PloS One. (2025) 20:e0323343. doi: 10.1371/journal.pone.0323343

134. Muralidharan V, Adewale BA, Huang CJ, Nta MT, Ademiju PO, Pathmarajah P, et al. A scoping review of reporting gaps in FDA-approved AI medical devices. NPJ Digital Med. (2024) 7:273. doi: 10.1038/s41746-024-01270-x

135. Tepe M and Emekli E. Assessing the responses of large language models (ChatGPT-4, gemini, and microsoft copilot) to frequently asked questions in breast imaging: A study on readability and accuracy. Cureus. (2024) 16:e59960. doi: 10.7759/cureus.59960

136. Cacciamani GE, Chen A, Gill IS, and Hung AJ. Artificial intelligence and urology: ethical considerations for urologists and patients. Nat Rev Urol. (2024) 21:50–9. doi: 10.1038/s41585-023-00796-1

137. Jiang L, Wu Z, Xu X, Zhan Y, Jin X, Wang L, et al. Opportunities and challenges of artificial intelligence in the medical field: current application, emerging problems, and problem-solving strategies. J Int Med Res. (2021) 49:3000605211000157. doi: 10.1177/03000605211000157

138. Saber A, Hassan E, Elbedwehy S, Awad WA, and Emara TZ. Leveraging ensemble convolutional neural networks and metaheuristic strategies for advanced kidney disease screening and classification. Sci Rep. (2025) 15:2487. doi: 10.1038/s41598-025-93950-1

139. Zhao D, Wang W, Tang T, Zhang YY, and Yu C. Current progress in artificial intelligence-assisted medical image analysis for chronic kidney disease: A literature review. Comput Struct Biotechnol J. (2023) 21:3315–26. doi: 10.1016/j.csbj.2023.05.029

140. Amin MS, Ahmad S, and Loh WK. Federated learning for Healthcare 5.0: a comprehensive survey, taxonomy, challenges, and solutions. Soft Computing. (2025) 29:673–700. doi: 10.1007/s00500-025-10508-z

141. Almogadwy B and Alqarafi A. Fused federated learning framework for secure and decentralized patient monitoring in Healthcare 5.0 using IoMT. Sci Rep. (2025) 15:24263. doi: 10.1038/s41598-025-06574-w

142. Durant AM, Medero RC, Briggs LG, Choudry MM, Nguyen M, Channar A, et al. The current application and future potential of artificial intelligence in renal cancer. Urology. (2024) 193:157–63. doi: 10.1016/j.urology.2024.07.010

143. Shiraga T, Makimoto H, Kohlmann B, Magnisali CE, Imai Y, Itani Y, et al. Improving valvular pathologies and ventricular dysfunction diagnostic efficiency using combined auscultation and electrocardiography data: A multimodal AI approach. Sensors. (2023) 23:9834. doi: 10.3390/s23249834

144. Han Z, Huang Y, Wang H, and Chu Z. Multimodal ultrasound imaging: A method to improve the accuracy of diagnosing thyroid TI-RADS 4 nodules. J Clin Ultrasound. (2022) 50:1345–52. doi: 10.1002/jcu.23352

145. Jiang J, Chan L, and Nadkarni GN. The promise of artificial intelligence for kidney pathophysiology. Curr Opin Nephrol Hypertens. (2022) 31:380–6. doi: 10.1097/MNH.0000000000000808

146. Huo Y, Deng R, Liu Q, Fogo AB, and Yang H. AI applications in renal pathology. Kidney Int. (2021) 99:1309–20. doi: 10.1016/j.kint.2021.01.015

147. Zhou XJ, Zhong XH, and Duan LX. Integration of artificial intelligence and multi-omics in kidney diseases. Fundam Res. (2022) 3:126–48. doi: 10.1016/j.fmre.2022.01.037

148. Lin Z, Li S, Wang S, Gao Z, Sun Y, Lam CT, et al. An orchestration learning framework for ultrasound imaging: Prompt-Guided Hyper-Perception and Attention-Matching Downstream Synchronization. Med Image Anal. (2025) 104:103639. doi: 10.1016/j.media.2025.103639

Keywords: renal ultrasound, deep learning, chronic kidney disease (CKD), multimodal data, large model technology

Citation: Zhang Y, Hou Y, Qiu T, Zhuang Y, Chen K, Ling W, Luo Y and Lin J (2026) Deep learning in renal ultrasound: applications, challenges, and future outlook. Front. Oncol. 15:1730628. doi: 10.3389/fonc.2025.1730628

Received: 23 October 2025; Revised: 24 November 2025; Accepted: 18 December 2025;
Published: 12 January 2026.

Edited by:

Ronald M. Bukowski, Cleveland Clinic, United States

Reviewed by:

Chen Yu, Tongji University, China
Bartosz Malkiewicz, Wroclaw Medical University, Poland

Copyright © 2026 Zhang, Hou, Qiu, Zhuang, Chen, Ling, Luo and Lin. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Jiangli Lin, linjiangli@scu.edu.cn

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.