
ORIGINAL RESEARCH article

Front. Med., 12 January 2026

Sec. Precision Medicine

Volume 12 - 2025 | https://doi.org/10.3389/fmed.2025.1657123

This article is part of the Research Topic: AI-Driven Smart Sensing and Processing for Personalized Healthcare.

Robust colonoscopy polyp segmentation using dynamic-Nu T-Loss with multi-scale and uncertainty-aware adaptation

Alireza Norouziazad1,2, Mahan Najafpour Ghazvini Fardshad1, Fatemeh Esmaeildoost1,3, Mehrdad Najafpour Ghazvini Fardshad1, Razieh Salahandish1,2*
  • 1Laboratory of Advanced Biotechnologies for Health Assessments (LAB-HA), Lassonde School of Engineering, York University, Toronto, ON, Canada
  • 2Department of Electrical Engineering and Computer Science (EECS), Lassonde School of Engineering, York University, Toronto, ON, Canada
  • 3Department of Mechanical Engineering, Lassonde School of Engineering, York University, Toronto, ON, Canada

Accurate segmentation of polyps in colonoscopy images is essential for early colorectal cancer detection; however, it remains a challenging task due to reflections, occlusions, motion artifacts, inter- and intra-polyp appearance variability, and the presence of noisy or inconsistent ground-truth annotations. In this work, we introduce dynamic-Nu T-Loss (DNA-TLoss), a robust, adaptive loss function based on the heavy-tailed Student’s 𝑡-distribution that incorporates three novel extensions: (1) a per-image learnable degrees-of-freedom parameter ν, predicted by a lightweight NuPredictor network to dynamically adjust robustness to outliers; (2) per-pixel precision weights λ for spatially adaptive error sensitivity; and (3) a multi-scale aggregation scheme that computes and combines loss at multiple spatial resolutions to capture both coarse and fine details. Integrated into a U-Net with a ResNet-34 encoder, DNA-TLoss was evaluated on five public benchmarks: CVC-300, CVC-ClinicDB, ETIS-LaribPolypDB, Kvasir, and CVC-ColonDB. Our method achieves the lowest Hausdorff distance across all datasets, with an average reduction of 14.6% compared to T-Loss; notably, on CVC-300, it yields a significant decrease of 45.96%. It also obtains the lowest false discovery rate on all five datasets, improving over T-Loss by up to 38.7% on CVC-300 and 24.5% on Kvasir. Furthermore, DNA-TLoss provided best-in-class calibration, achieving expected calibration error as low as 0.44% on CVC-300 and outperforming all other baselines on four out of five datasets. These results highlight the promise of joint global and local uncertainty adaptation, coupled with multi-scale optimization, for advancing trustworthy, real-time computer-aided polyp detection in colonoscopy.

1 Introduction

Colonoscopy is widely used for the early detection and prevention of colorectal cancer. It enables direct visualization of the colon and rectum, allowing early identification of lesions or polyps before they become malignant (1). In medical imaging analysis, accurate segmentation plays an important part in diagnosis and treatment planning, as it helps identify anatomical structures and localize abnormalities or lesions precisely (2). This includes the segmentation of polyps, which are clinically significant structures; incomplete segmentation may lead to false negatives and, in turn, to missed or delayed diagnoses (3). Conventional colonoscopy has been reported to miss 17–28% of polyps, especially small and flat lesions (4). When an AI-assisted colonoscopy is performed prior to the traditional examination, the sensitivity of the subsequent procedure decreases by 4.76%, not because the procedure is less effective, but because the AI system has already identified a significant number of lesions (5). Annotation detail also significantly impacts performance: models trained on precise outline masks achieve 99.6% sensitivity, whereas those trained on coarser bounding boxes inflated by 20% drop to 94.84% (5).

Unlike general medical imaging, segmentation of colonoscopy data presents unique challenges, including intense mucosal reflections from the colonoscopy light source, frequent fluid occlusions obscuring the field of view, motion artifacts from peristalsis or camera movement, and substantial variability in polyp size, shape, and texture (6).

Deep learning models are often trained on noisy or poor-quality annotations, owing to human labeling errors and the limited number of experts who can annotate data at high quality (2, 7–9). Label noise is a well-known challenge and can significantly degrade model performance: it introduces uncertainty during training, which in turn can cause poor generalization and unsafe predictions (10). Inter-annotator average Dice coefficients reported in previous studies (e.g., polyp segmentation and MRI) range from 0.72 to 0.88 (11). There are also well-documented instances of low-quality medical imaging annotations arising from inter-expert discrepancies, ambiguous cases, and inconsistent labeling policies (12–14). To address these challenges, we introduce a segmentation pipeline comprising preprocessing, an encoder-decoder architecture, a robust loss function, and a comprehensive set of evaluation metrics.

Loss functions have a significant impact on training machine learning models, particularly in clinical settings where prediction errors can have serious consequences. In classification tasks, commonly applied loss functions include cross-entropy (15), hinge loss (16), and exponential loss (17), while regression tasks often use squared error (18), absolute error, or robust alternatives such as Huber loss (19).

In challenging tasks such as object detection and semantic segmentation, traditional deep learning loss functions may be impacted by class imbalance and noisy labels. Several loss-function variants designed for these conditions have been shown to improve spatial accuracy and feature separability, including focal loss (20), IoU-based losses (21), standard contrastive loss (11), and ArcFace loss (22). While these approaches have demonstrated improved spatial precision, they remain susceptible to the annotation errors commonly found in medical data (19).

Robust loss functions that succeed in other domains do not translate directly to colonoscopy, because they typically assume i.i.d. noise, whereas annotation errors in colonoscopy are structured; systematic errors of this kind are not handled well by standard robust losses (12). Nevertheless, robust loss functions remain essential for reliable model performance in clinical settings (23): by handling noisy annotations and reducing the impact of outliers, they yield more precise and reliable predictions, making diagnoses safer and increasing clinical benefit (24).

Some loss functions, such as those proposed in Patrini et al. (25), incorporate a noise transition matrix to adjust how the loss is applied. Subsequent work (26) further advanced this approach by modeling the transition matrix more accurately, thereby reducing the impact of label noise on the effective loss. Robust loss functions, like Generalized Cross Entropy (27) or Symmetric Cross Entropy (28), do not model label noise explicitly but have demonstrated strong performance in noisy settings. Other strategies for enhancing robustness include outlier-aware loss functions (29) and normalized loss functions (30), which aim to mitigate the impact of mislabeled data. Both Zhang and Sabuncu (27) and Wang et al. (28) address robustness from theoretical and empirical perspectives. The essential distinction between these families of approaches is whether label noise is modeled explicitly, through a transition matrix, or handled implicitly through the design of the loss function itself.

Here, we build upon the Student’s-𝑡-based T-Loss (31), a robust loss function that models label ambiguity using the heavy-tailed Student’s 𝑡-distribution. We extend this framework by proposing a new variant, dynamic-Nu T-Loss (DNA-TLoss), which introduces additional adaptability and robustness to varying data conditions. A key innovation of DNA-TLoss is a learnable degrees-of-freedom parameter that adjusts dynamically to image-level label ambiguity during training; this dynamic modeling improves the model’s behavior on ambiguous annotations.

This paper is organized as follows: Section 2 introduces the datasets used for evaluation, including Kvasir (32), CVC-ClinicDB (33), CVC-ColonDB (34), ETIS-LaribPolypDB (35), and CVC-300 (36). It also details the network architecture, defines the problem setting, outlines the motivation for the proposed DNA-TLoss, and provides its mathematical formulation. Section 3 describes the implementation details and evaluation metrics, namely Dice, Intersection-over-Union (IoU), false discovery rate (FDR), Hausdorff distance (HD), average symmetric surface distance (ASSD), expected calibration error (ECE), and mutual information (MI). Section 4 presents the quantitative and qualitative results across multiple polyp segmentation benchmarks. Finally, Section 5 concludes the study by summarizing our main contributions and discussing the broader implications of our findings for robust medical image segmentation.

2 Methods

2.1 Dataset and preprocessing

We evaluated our method on multiple publicly available colonoscopy polyp segmentation datasets: Kvasir, CVC-ClinicDB, CVC-ColonDB, ETIS-LaribPolypDB, and CVC-300. For training, we used a combined set of 900 randomly selected images from Kvasir and 550 training images from CVC-ClinicDB. Evaluation was performed across all five datasets to assess generalization performance.

CVC-ClinicDB includes 612 high-resolution images; 62 of these were reserved for testing. CVC-ColonDB contains 380 earlier-generation colonoscopy frames, while ETIS-LaribPolypDB comprises 196 challenging test samples with diverse imaging conditions. Kvasir consists of 1,000 annotated frames extracted from real-world colonoscopy videos, showcasing polyps of varying size and morphology. CVC-300 consists of 300 colonoscopy video frames that do not overlap with ClinicDB.

All images were resized to a uniform resolution of 512 × 608 pixels, selected to balance detail preservation with computational efficiency while closely matching the native dimensions of datasets, minimizing interpolation artifacts. To preserve the aspect ratio, zero-padding was applied where needed. During training, we applied comprehensive data augmentation using the Albumentations library to simulate common clinical noise and intra-patient variability. This included horizontal and vertical flipping (p = 0.5), elastic deformation (alpha = 1.0, sigma = 50.0, p = 0.5), grid distortion (num_steps = 5, distort_limit = 0.3, p = 0.5), random adjustments to brightness and contrast within a range of ±20% (p = 0.5), and the addition of Gaussian noise with a variance limit of (10.0, 50.0) (p = 0.5). After augmentation, each image was normalized using the dataset’s channel-wise mean and standard deviation. These preprocessing steps aimed to improve the model’s robustness to clinical variations in polyp appearance and imaging conditions.
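A minimal sketch of this augmentation pipeline using the Albumentations library is shown below. The individual transform parameters follow the text, while the transform ordering, the normalization statistics, the tensor conversion step, and the `image`/`mask` variables in the usage comment are assumptions for illustration rather than the exact training configuration.

```python
import albumentations as A
from albumentations.pytorch import ToTensorV2

# Placeholder statistics; the paper uses dataset-specific channel-wise mean/std.
DATASET_MEAN = (0.485, 0.456, 0.406)
DATASET_STD = (0.229, 0.224, 0.225)

train_transform = A.Compose([
    A.Resize(height=512, width=608),                     # uniform 512 x 608 input
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.ElasticTransform(alpha=1.0, sigma=50.0, p=0.5),    # simulate tissue deformation
    A.GridDistortion(num_steps=5, distort_limit=0.3, p=0.5),
    A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.5),
    A.GaussNoise(var_limit=(10.0, 50.0), p=0.5),         # mimic sensor noise
    A.Normalize(mean=DATASET_MEAN, std=DATASET_STD),     # channel-wise normalization
    ToTensorV2(),                                        # HWC numpy array -> CHW torch tensor
])

# Usage (hypothetical arrays): augmented = train_transform(image=image, mask=mask)
```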

2.2 Network architecture

The segmentation model employed in this study was a standard U-Net (37) with a pretrained ResNet-34 (38) encoder. The network consists of a contracting path (encoder) that captures contextual features and a symmetric expanding path (decoder) that enables precise spatial localization (39); skip connections from the encoder to the decoder preserve fine spatial information. The model takes RGB colonoscopy images as input and outputs a single-channel logit map, which is converted to a foreground probability map p ∈ [0,1]^{H×W} by a sigmoid activation (equivalently, a two-channel output followed by Softmax and selection of the polyp channel). During training, the raw logits are used directly for loss computation; at inference time, the probability map is thresholded at 0.5 to produce a binary segmentation mask. The segmentation network was trained using the Adam optimizer with a polynomial learning-rate schedule (40). The overall architecture and data flow of the proposed DNA-TLoss framework are illustrated in Figure 1, highlighting the integration of the NuPredictor module, per-pixel precision adaptation, and multi-scale loss computation within the segmentation pipeline. The segmentation network generates a probability map supervised by the ground-truth mask. Simultaneously, features extracted from the input are passed through a lightweight NuPredictor that estimates a per-image degrees-of-freedom parameter ν, which controls the global robustness of the loss function. The predicted probability map is then resized to multiple scales and aggregated in a multi-scale processing block, where each scale has its own spatially learnable precision map λ. These λ maps adjust local sensitivity to errors by weighting pixel-wise residuals. The DNA-TLoss combines the adaptive ν and resolution-specific λ values to compute the final loss. All components, including the segmentation parameters, the ν-predictor, and the λ-maps, are jointly optimized via backpropagation, as indicated by the green gradient flow in the figure.


Figure 1. Overview of the DNA-TLoss framework. The model integrates a segmentation network with a NuPredictor module for per-image robustness estimation and per-pixel λ maps for adaptive precision. Multi-scale loss aggregation enables improved learning under label noise and structural variability.

We deliberately adopted a standard U-Net architecture with a ResNet-34 encoder for two key reasons. First, our primary goal was to evaluate the impact of the proposed DNA-TLoss function, rather than to optimize or innovate on the backbone architecture. Using a widely adopted, canonical architecture ensures that improvements can be attributed to the loss function itself, rather than architectural variations. This setup aligns with many prior studies in medical image segmentation, facilitating more direct and meaningful comparisons with existing methods. Second, ResNet-34 offers a balanced trade-off between feature representation capacity and computational efficiency. It is sufficiently deep to extract expressive features while remaining lightweight enough to support real-time inference, which is important in clinical applications such as colonoscopy (38).
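As a concrete illustration, the backbone described above can be instantiated as follows. The segmentation_models_pytorch package is one common implementation of a U-Net with a ResNet-34 encoder; its use here is an assumption, since the paper does not name a specific implementation.

```python
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet34",        # ResNet-34 contracting path
    encoder_weights="imagenet",     # pretrained initialization, as in the paper
    in_channels=3,                  # RGB colonoscopy frames
    classes=1,                      # single-channel polyp logit map
)

x = torch.randn(2, 3, 512, 608)     # a batch of resized frames
logits = model(x)                   # raw logits, used directly for the loss
probs = torch.sigmoid(logits)       # probability map
mask = (probs > 0.5).float()        # binary mask at inference time
```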

2.3 Multi-scale dynamic-Nu T-Loss

The core contribution of this work is the introduction of a novel robust loss function, DNA-TLoss, which extends the Student’s-𝑡 negative log-likelihood (T-Loss NLL) (31) to the context of multi-scale semantic segmentation, incorporating image-wise and spatially adaptive robustness mechanisms. This formulation enables the loss to dynamically adjust to both global (image-level) and local (pixel-level) sources of uncertainty, enhancing the model’s resilience to label noise and annotation ambiguity. We briefly review the Student’s-𝑡 formulation that underpins our proposed loss function:

2.3.1 Problem setting

Given an input image x ∈ ℝ^{3×H×W}, a segmentation model f(x) predicts a probability map p ∈ [0,1]^{1×H×W}, which is compared against a binary ground-truth mask y ∈ {0,1}^{1×H×W}. The training objective is to minimize a loss function ℒ(ŷ, y) that encourages pixel-wise agreement between predictions and labels, even in the presence of noise or ambiguous boundaries (41, 42).

In the T-Loss formulation, the labels are modeled with a multivariate Student’s-𝑡 distribution in which the prediction μ = f(x) is the mean and Σ is the covariance matrix. Let δ = y − μ; then ‖δ‖² = Σᵢ(yᵢ − μᵢ)² corresponds to the mean squared error.

2.3.2 Multi-scale dynamic-Nu

In our DNA-TLoss, we introduce three key extensions to this basic T-Loss:

Adaptive 𝜈. Rather than using a fixed degrees-of-freedom parameter, our model predicts a single scalar ν for each input sample. This global adaptation allows the loss to modulate its robustness based on overall image characteristics: low ν produces heavy-tailed behavior (robust to outliers), while high ν yields Gaussian-like behavior (sensitive to errors). The NuPredictor extracts high-level features through three convolutional layers (kernel size = 3 × 3, stride = 1, padding = 1) with ReLU activations, applies adaptive average pooling to obtain a fixed-length feature vector per image, and passes this vector through a two-layer MLP with a hidden dimension of 32 and a Softplus output activation to ensure positivity. The hidden dimension of 32 was chosen to balance representational capacity and computational cost: large enough to model relevant variations, yet compact enough to avoid overfitting. Softplus guarantees positive, non-zero ν values, since extremely small ν leads to numerical instability. The predicted 𝜈 thus varies across samples, enabling the loss function to modulate robustness based on image content, such as polyp size or image complexity. Importantly, while our architecture processes each image through the NuPredictor during training, empirical analysis reveals that the predicted ν values converge to a highly consistent dataset-specific optimum; after training, this optimized ν value becomes effectively fixed for the entire dataset during inference.
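A minimal sketch of such a NuPredictor module is given below. The kernel size, pooling, hidden dimension, and Softplus output follow the text, whereas the channel widths and the additive floor on ν are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NuPredictor(nn.Module):
    def __init__(self, in_channels: int = 3, hidden: int = 32, nu_floor: float = 1.0):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)       # fixed-length feature vector per image
        self.mlp = nn.Sequential(
            nn.Linear(64, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 1), nn.Softplus(),  # positive degrees of freedom
        )
        self.nu_floor = nu_floor                  # keeps nu away from zero for stability (assumed)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.pool(self.features(x)).flatten(1)
        return self.mlp(f).squeeze(-1) + self.nu_floor   # one nu per image, shape (batch,)

# nu = NuPredictor()(images)
```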

Per-pixel precision map. We introduce a learnable precision (inverse-variance) parameter λᵢ for each pixel. Equivalently, we define Λ = diag(e^{λ₁}, …, e^{λ_D}) so that Σ⁻¹ = Λ. These parameters are initialized to zero, corresponding to unit precision before training. During optimization, λ_{s,i} > 0 increases the loss penalty for errors at pixel i, while λ_{s,i} < 0 reduces it, allowing the model to learn spatially adaptive error sensitivity. These λ parameters are trained alongside the network. When λ = 0, the precision is 1 (unit variance). The log-determinant term becomes −(1/2) Σᵢ λᵢ, and the quadratic term becomes Σᵢ δᵢ² e^{λᵢ}. In practice, we add a small constant inside the exponential and logarithmic terms for stability (omitted below for clarity).

Multi-scale aggregation. To effectively capture both coarse and fine details, the proposed loss is computed at multiple spatial resolutions. Specifically, let the original prediction and ground-truth mask be of size (H, W). For each scale s ∈ {1.0, 0.5, 0.25}, with associated weight w_s ∈ {1.0, 0.5, 0.3}, we resize the predicted probability map p and ground-truth mask y to (h_s, w_s) = (sH, sW). Bilinear interpolation is used for resizing the probability map p, while nearest-neighbor interpolation is applied to the ground-truth mask y to preserve categorical labels. At each resolution s, we maintain a separate λ-parameter matrix Λ_s ∈ ℝ^{h_s×w_s}, as illustrated in Figure 1. This allows each scale to have its own spatial precision map.

The specific scale factors (1.0, 0.5, 0.25) and weights (1.0, 0.5, 0.3) were determined through empirical evaluation on polyp segmentation datasets. We tested various configurations and found this combination optimal for capturing both fine polyp details and contextual information while maintaining computational efficiency suitable for real-time clinical applications.
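The resizing step and the per-scale λ maps can be sketched as follows. The interpolation modes, scale factors, and zero initialization follow the text, while the use of torch.nn.functional.interpolate and a ParameterList is an implementation assumption.

```python
import torch
import torch.nn.functional as F

SCALES = (1.0, 0.5, 0.25)
WEIGHTS = (1.0, 0.5, 0.3)
H, W = 512, 608  # training resolution from Section 2.1

def resize_pair(p: torch.Tensor, y: torch.Tensor, s: float):
    """Resize prediction p and mask y (both N x 1 x H x W) to scale s."""
    if s == 1.0:
        return p, y
    size = (int(p.shape[-2] * s), int(p.shape[-1] * s))
    p_s = F.interpolate(p, size=size, mode="bilinear", align_corners=False)  # probabilities
    y_s = F.interpolate(y, size=size, mode="nearest")                        # keeps labels binary
    return p_s, y_s

# One learnable log-precision map per scale, initialized to zero (unit precision),
# ordered to match SCALES.
lambdas = torch.nn.ParameterList(
    [torch.nn.Parameter(torch.zeros(int(H * s), int(W * s))) for s in SCALES]
)
```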

Combining these components, for one scaled output, let D_s = h_s·w_s denote the number of pixels at scale s, δᵢ = p_{s,i} − y_{s,i} the pixel-wise residual, and λ_{s,i} the corresponding log-precision value. The scale-specific loss is then defined as shown in Equation 3:

$$\mathcal{L}_s = -\log\Gamma\!\left(\tfrac{\nu + D_s}{2}\right) + \log\Gamma\!\left(\tfrac{\nu}{2}\right) - \tfrac{1}{2}\sum_{i=1}^{D_s}\lambda_{s,i} + \tfrac{D_s}{2}\log(\pi) + \tfrac{D_s}{2}\log(\nu) + \tfrac{\nu + D_s}{2}\log\!\left(1 + \tfrac{1}{\nu}\sum_{i=1}^{D_s}\delta_i^2\, e^{\lambda_{s,i}}\right) \quad (3)$$

This matches the Student’s-𝑡 NLL with Σ⁻¹ = diag(e^{λ_{s,i}}), except for a small constant (ε = 10⁻⁶) added inside the logarithmic and exponential terms when implementing the loss. Finally, the total loss is a weighted sum over scales, as defined in Equation 4:

$$\mathcal{L}_{\text{Total}} = \sum_{s \in \{1.0,\,0.5,\,0.25\}} w_s\, \mathcal{L}_s \quad (4)$$

All loss parameters (ν and λs,i) are updated by backpropagation along with the network weights.

The feature extractor and ν-predictor network, as well as the λ parameters, are trained jointly with the segmentation model. In summary, our dynamic-Nu T-Loss automatically learns a per-image scale parameter ν and spatial precision weights λ, while aggregating loss at multiple scales.
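A hedged sketch of Equations 3 and 4 is given below. It assumes the per-scale predictions, masks, and λ maps have already been resized as described above, and details such as batch handling and the exact placement of the stabilizing constant ε are assumptions.

```python
import math
import torch

EPS = 1e-6  # stabilizing constant; its exact placement in the official implementation is assumed

def scale_loss(p_s: torch.Tensor, y_s: torch.Tensor, lam_s: torch.Tensor, nu: torch.Tensor) -> torch.Tensor:
    """Equation 3 for one image at one scale.

    p_s, y_s : flattened prediction and mask at scale s, shape (D_s,)
    lam_s    : flattened log-precision values for this scale, shape (D_s,)
    nu       : scalar tensor of degrees of freedom predicted for this image
    """
    D = p_s.numel()
    delta2 = (p_s - y_s) ** 2
    quad = torch.sum(delta2 * torch.exp(lam_s)) / nu      # delta^T Sigma^{-1} delta / nu
    return (
        -torch.lgamma((nu + D) / 2)
        + torch.lgamma(nu / 2)
        - 0.5 * torch.sum(lam_s)
        + (D / 2) * math.log(math.pi)
        + (D / 2) * torch.log(nu)
        + ((nu + D) / 2) * torch.log1p(quad + EPS)
    )

def dna_tloss(preds, masks, lambdas, nu, weights=(1.0, 0.5, 0.3)) -> torch.Tensor:
    """Equation 4: weighted sum of scale-specific losses over scales 1.0, 0.5, 0.25."""
    total = torch.zeros((), dtype=preds[0].dtype, device=preds[0].device)
    for w, p_s, y_s, lam_s in zip(weights, preds, masks, lambdas):
        total = total + w * scale_loss(p_s.flatten(), y_s.flatten(), lam_s.flatten(), nu)
    return total
```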

3 Experiments

3.1 Implementation details

All experiments are conducted using the PyTorch framework (v1.X) on an NVIDIA A100 GPU. Both the training and evaluation pipelines utilized standard PyTorch modules and custom components for the proposed loss function. The UNet architecture with a ResNet-34 encoder pre-trained on ImageNet (43) was used as the base segmentation model.

The model was trained using the Adam optimizer with an initial learning rate of 0.001, weight decay of 1e−4, and no warm-up schedule (40). A polynomial decay strategy was applied to reduce the learning rate over time, defined as η_t = η_0 (1 − t/T)^p, where η_0 is the initial learning rate, t is the current training step, T is the total number of training steps, and p = 0.9 is the polynomial power.
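This schedule can be reproduced, for example, with a LambdaLR wrapper around Adam. Whether the decay steps per epoch or per iteration is not stated, so the per-epoch variant below is an assumption, and the one-layer stand-in model is only there to keep the snippet self-contained.

```python
import torch

# Stand-in for the U-Net with ResNet-34 encoder described above.
model = torch.nn.Conv2d(3, 1, kernel_size=1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

T_TOTAL = 300   # total number of training steps (epochs, under this assumption)
POWER = 0.9     # polynomial power p

scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer,
    lr_lambda=lambda t: (1.0 - t / T_TOTAL) ** POWER,  # multiplier eta_t / eta_0
)

# for epoch in range(T_TOTAL):
#     ...train one epoch...
#     scheduler.step()
```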

The model was trained for 300 epochs with a batch size of 16. The batch size was chosen based on GPU memory limits while maintaining stable gradient estimates, and 300 epochs were sufficient to ensure convergence without overfitting. All models were trained on pre-separated training and validation splits without cross-validation or ensemble methods. Cross-validation and ensembling were omitted to maintain computational feasibility and isolate the impact of the proposed loss function. Final probability maps from the sigmoid output were thresholded at 0.5 to obtain binary segmentation masks. Performance was evaluated using the Dice coefficient (Dice, higher values indicate better overlap, Equation 5), intersection-over-union (IoU, higher values show better segmentation accuracy, Equation 6), false discovery rate (FDR, lower values indicate fewer false positives, Equation 7), Hausdorff distance (HD, lower values reflect better boundary alignment, Equation 8), average symmetric surface distance (ASSD, lower values demonstrate superior boundary precision, Equation 9), expected calibration error (ECE, lower values represent better model calibration, Equation 10), and mutual information (MI, higher values reveal stronger statistical dependence between predictions and ground truth, Equation 11). These metrics collectively assess the model’s segmentation accuracy, boundary detection reliability, and predictive consistency.

$$\text{Dice} = \frac{2\,|P \cap G|}{|P| + |G|} \quad (5)$$
$$\text{IoU} = \frac{|P \cap G|}{|P \cup G|} \quad (6)$$
$$\text{FDR} = \frac{FP}{TP + FP} \quad (7)$$
$$\text{HD} = \max\left\{\sup_{p \in \partial P}\inf_{g \in \partial G} d(p,g),\ \sup_{g \in \partial G}\inf_{p \in \partial P} d(g,p)\right\} \quad (8)$$
$$\text{ASSD} = \frac{1}{|\partial P| + |\partial G|}\left(\sum_{p \in \partial P} d(p, \partial G) + \sum_{g \in \partial G} d(g, \partial P)\right) \quad (9)$$
$$\text{ECE} = \sum_{i=1}^{B}\frac{|b_i|}{N}\left|\text{acc}(b_i) - \text{conf}(b_i)\right| \quad (10)$$
$$\text{MI} = \sum_{p \in \{0,1\}}\sum_{g \in \{0,1\}} p(p,g)\,\log\frac{p(p,g)}{p(p)\,p(g)} \quad (11)$$

where P and G denote the predicted and ground-truth masks, and ∂P and ∂G their corresponding boundaries. TP (true positives) is the number of correctly predicted foreground pixels, FP (false positives) is the number of pixels incorrectly predicted as foreground, and FN (false negatives) is the number of foreground pixels missed by the prediction. d(·,·) denotes the Euclidean distance. For ECE, we used B = 10 confidence bins, where |b_i| is the cardinality of bin i, acc(b_i) and conf(b_i) are the accuracy and mean confidence within that bin, and p(·) denotes empirical probability. HD and ASSD were computed using boundary point clouds, while ECE and MI were computed using pixel-wise predicted probabilities and ground-truth labels. All metrics, except ECE, were averaged per image; ECE was aggregated across all test pixels.
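For illustration, the overlap, error-rate, and calibration metrics can be computed as in the sketch below. The boundary metrics (HD, ASSD) require surface extraction and are omitted, and the exact binning convention for ECE is an assumption, since only the number of bins is specified in the text.

```python
import numpy as np

def dice_iou_fdr(pred: np.ndarray, gt: np.ndarray):
    """pred, gt: binary masks of identical shape (Eqs. 5-7)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2 * tp / (pred.sum() + gt.sum() + 1e-8)
    iou = tp / (union + 1e-8)
    fdr = fp / (tp + fp + 1e-8)
    return dice, iou, fdr

def expected_calibration_error(prob: np.ndarray, gt: np.ndarray, bins: int = 10):
    """Eq. 10 with B = 10 bins over the predicted foreground probability (assumed convention)."""
    prob, gt = prob.ravel(), gt.ravel().astype(float)
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.digitize(prob, edges[1:-1])            # bin index in [0, bins - 1]
    ece = 0.0
    for b in range(bins):
        in_bin = idx == b
        if in_bin.any():
            # |accuracy - confidence| weighted by the fraction of pixels in the bin
            ece += in_bin.sum() / prob.size * abs(gt[in_bin].mean() - prob[in_bin].mean())
    return ece
```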

4 Results and discussion

We evaluated our method on five public polyp segmentation benchmarks: CVC-300, CVC-ClinicDB, ETIS-LaribPolypDB, Kvasir, and CVC-ColonDB. Table 1 presents a comprehensive comparison between our approach and established baselines. The results indicate that our loss function demonstrates consistently competitive and balanced performance across a range of evaluation metrics.


Table 1. Comparative performance of different loss functions across five polyp segmentation datasets.

On CVC-300, our proposed loss function outperformed all baseline methods with a Dice of 89.84% (vs. 86.90% for T-Loss) and IoU of 83.54% (vs. 80.06%), delivering a clear boost in overall overlap. In terms of boundary precision, the HD decreased to 19.09 pixels (a 46% relative reduction from 35.33 pixels) and ASSD to 4.27 pixels (vs. 9.67 pixels), reflecting tighter contour adherence. Moreover, ECE dropped to 0.44%, and mutual information increased to 0.1175 bits, reflecting enhanced calibration and confidence reliability. The method also maintained an FDR of 9.60%, highlighting improved suppression of spurious segmentations.

On the high-quality CVC-ClinicDB images, our loss secured Dice of 88.78% and IoU of 83.78%, marginally ahead of T-Loss by 0.17 and 0.86%, respectively. Its boundary delineation is among the best, with an HD of 32.82 pixels (vs. 32.85 pixels) and an ASSD of 4.39 pixels, demonstrating consistency even in well-contrasted scenes. The ECE of 1.43% matched top-performing losses, ensuring trustworthy probability outputs. Notably, the FDR of 9.43% was the lowest across all methods, underscoring its precise suppression of false positives in varied lighting and texture conditions.

ETIS-LaribPolypDB presents a significant challenge due to its small, low-contrast polyps. Despite this, the proposed method demonstrated improved performance across multiple metrics. It achieved a Dice score of 68.60% (vs. 64.46% for T-Loss) and IoU of 62.20% (vs. 58.91%), marking a > 4% absolute improvement in overlap. Boundary metrics also improved, with the HD reduced to 50.79 pixels (vs. 60.03 pixels) and the ASSD to 16.96 pixels (vs. 25.78 pixels), which helps capture fine, faint edges. With ECE = 1.47%, our model maintained more reliable confidence estimates, while the reduced FDR of 30.06% indicates an enhanced robustness to noise and imaging artefacts relative to baseline methods.

On the diverse Kvasir collection, our loss function produced a Dice of 89.56% and IoU of 83.59%, outperforming T-Loss by ∼0.8% in both metrics. It refined boundary localization to HD = 73.95 pixels (vs. 79.98 pixels) and ASSD = 11.55 pixels (vs. 12.68 pixels), crucial for accurately segmenting varied polyp shapes and sizes. While its ECE of 3.01% paralleled the best methods, the standout achievement was an FDR of only 5.12%, the lowest among all losses, demonstrating superior precision in suppressing false positives across a heterogeneous dataset.

Even on CVC-ColonDB’s noisy, artifact-prone frames, our loss achieved the highest Dice of 74.74% (vs. 70.19% for T-Loss) and IoU of 66.82% (vs. 63.33%). It delivered the smallest HD of 60.07 pixels and the lowest FDR of 20.36%, reflecting robust detection under adverse conditions. The ASSD of 19.15 pixels also edged out T-Loss’s 19.71 pixels, and the overall gains in overlap, boundary error, and calibration (ECE = 3.18%) confirm its resilience and reliability for challenging real-world colonoscopy data.

DNA-TLoss demonstrated unparalleled, cross-dataset performance: it secured the highest Dice and IoU on all five benchmarks, achieved the lowest FDR in every dataset, and delivered the smallest HD across the board. Moreover, it provided the most reliable probability calibration, registering the lowest ECE on CVC-300, ETIS-LaribPolypDB, and Kvasir, while remaining highly competitive on the remaining two. These consistent gains in segmentation accuracy, over-segmentation control, boundary precision, and confidence estimation render our method ideally suited for real-time, high-stakes polyp delineation in colorectal cancer screening, where both precision and trustworthiness are paramount. To facilitate a clearer understanding and visual comparison of the performance differences among the loss functions, we present the results in radar chart format in Figure 2. This graphical representation allows intuitive evaluation of multiple metrics simultaneously, showing the strengths and weaknesses of each method.


Figure 2. Normalized performance radar charts for different loss functions on five polyp segmentation datasets, including (a) CVC-300, (b) CVC-ClinicDB, (c) ETIS-LaribPolypDB, (d) Kvasir, and (e) CVC-ColonDB. Each axis represents a performance metric normalized to the [0,1] range, where higher values indicate better performance. Metrics where higher values indicate better performance (Dice, IoU, MI) are plotted directly, while metrics where lower values are preferred (FDR, HD, ASSD, ECE) are inverted during normalization. This normalization scheme ensures that larger values uniformly reflect better performance across all metrics, facilitating intuitive visual comparison of the overall effectiveness of each loss function.

Figure 3 also presents a visual comparison of predicted segmentation masks generated using different loss functions, including DNA-TLoss, alongside the corresponding ground truth annotations. For each of the five polyp segmentation datasets, two representative samples are shown to illustrate qualitative differences in model performance.


Figure 3. Qualitative comparison of predicted segmentation masks obtained using different loss functions across five colon polyp segmentation datasets: (a) CVC-300, (b) CVC-ClinicDB, (c) ETIS-LaribPolypDB, (d) Kvasir, and (e) CVC-ColonDB. For each dataset, two representative samples are shown. Ground truth boundaries are indicated in green. DNA-TLoss exhibits improved boundary adherence and more accurate polyp delineation compared to competing loss functions.

Finally, to systematically compare the optimization dynamics of all loss functions, Figure 4 plots training loss trajectories with Z-score normalization applied, Z = (x − μ)/σ, where x is the original training loss and μ and σ are the mean and standard deviation of the loss values. This normalization eliminates inherent scale disparities between loss functions by centering each curve at zero and normalizing its variance, thereby allowing a fair and direct comparison of convergence behavior. Notably, DNA-TLoss not only reaches the lowest final normalized loss value but also exhibits the smoothest, most stably decreasing profile.


Figure 4. Z-score normalized training loss over epochs for the different loss functions. A zoomed-in inset covers epochs 200–300, highlighting final-phase stability and convergence variance between methods.

4.1 Computational efficiency and deployment feasibility analysis

Table 2 provides a comprehensive analysis of computational resource utilization across loss functions. The results suggest that DNA-TLoss offers competitive segmentation performance while maintaining reasonable computational efficiency. Three key observations can be noted from the analysis:

1. Minimal memory overhead. DNA-TLoss requires only 0.102 gigabytes (GB) total video RAM (VRAM), comparable to lightweight GCE (0.092 GB) and 10 × lower than MAE (1.021 GB). The peak VRAM consumption (4.613 GB) remains below all baselines except T-Loss (4.593 GB), confirming efficient memory management despite adaptive components.

2. Real-time inference capability. With an inference speed of 46.7 frames per second (FPS) (21.40 ms per image), DNA-TLoss incurs only a 10.1% speed reduction compared to the fastest baseline, MAE, which achieves 51.9 FPS.

3. Optimized training efficiency. The average epoch duration of 4.7 s for DNA-TLoss represents a 33.8% improvement over T-Loss (7.1 s) and remains within 11.9% of the most efficient baselines (GCE/MAE: 4.2 s).


Table 2. Computational resource utilization during training and inference.

These results confirm that DNA-TLoss is computationally feasible for real-world deployment. The minimal increase in model size (0.026 million parameters) enables robust uncertainty modeling without compromising clinical usability, satisfying both accuracy and efficiency requirements for computer-aided diagnosis systems.

4.2 Statistical validation of performance improvements

To confirm that the performance gains of our proposed loss are statistically robust and not attributable to random variation, we conducted 20 independent runs (using different random seeds) on each dataset. For every run, we recorded Dice, IoU, HD, and ASSD for our method and the best-performing baseline (T-Loss). After verifying normality of the paired differences with the Shapiro–Wilk test, we conducted paired two-tailed t-tests between methods for each metric on each dataset, using a family-wise significance threshold of α = 0.05, with a Bonferroni correction for five comparisons (adjusted α = 0.01).
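This protocol can be expressed, for example, with SciPy as sketched below; the arrays are placeholders for the 20 per-seed metric values, and the helper name is illustrative.

```python
import numpy as np
from scipy import stats

ALPHA_FAMILY = 0.05
N_COMPARISONS = 5                                   # five datasets
ALPHA_CORRECTED = ALPHA_FAMILY / N_COMPARISONS      # Bonferroni-adjusted threshold (0.01)

def compare_runs(scores_ours: np.ndarray, scores_baseline: np.ndarray) -> dict:
    """Paired comparison of one metric (e.g., Dice) over 20 matched random seeds."""
    diffs = scores_ours - scores_baseline
    _, shapiro_p = stats.shapiro(diffs)             # normality check on paired differences
    t_stat, p_value = stats.ttest_rel(scores_ours, scores_baseline)  # two-tailed by default
    return {
        "shapiro_p": shapiro_p,
        "t": t_stat,
        "p": p_value,
        "significant": p_value < ALPHA_CORRECTED,
    }
```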

As shown in Table 3, our proposed loss significantly outperforms T-Loss on Dice, IoU, HD, and ASSD across all five benchmarks (p < 0.01 after correction), confirming that the observed improvements in overlap and boundary metrics are statistically robust.


Table 3. Paired t-test results over 20 randomized training runs comparing DNA-TLoss vs. T-Loss, with Bonferroni-corrected significance (α = 0.01).

4.3 Ablation study

To rigorously assess the impact of each component within our proposed DNA-TLoss, we conducted a comprehensive ablation study across all five datasets, averaging performance over 5 independent runs with fixed hyperparameters. This study systematically evaluated the contributions of three core elements: (1) adaptive degrees-of-freedom prediction (ν), (2) per-pixel precision mapping (λ), and (3) multi-scale loss aggregation. As summarized in Table 4, while each individual component provided measurable improvements, their combination demonstrated synergistic effects that significantly enhanced segmentation performance.


Table 4. Ablation study results across five polyp segmentation datasets evaluating the impact of key components in the proposed DNA-TLoss.

The adaptive ν component consistently improved robustness across all datasets, particularly on the challenging ETIS dataset, where it reduced HD by 11.2%. The per-pixel λ mapping showed the strongest benefits for boundary refinement, reducing ASSD by up to 23.1% on CVC-300. Multi-scale aggregation contributed notably to overall accuracy, with the most substantial gains observed on ColonDB and ETIS datasets.

The full DNA-TLoss framework, integrating all three components, achieved the best performance across all evaluation metrics and datasets. Most notably, it achieved 2.94% Dice improvement on CVC-300, 4.14% on ETIS, and 4.55% on ColonDB compared to the baseline T-Loss, while simultaneously reducing boundary error metrics by 30–46% and false detection rates by 19–40%. These consistent improvements across diverse datasets demonstrate that our components work complementarily to enhance segmentation accuracy, boundary precision, and clinical reliability in various colonoscopic imaging conditions.

5 Conclusion

This work introduced DNA-TLoss, an adaptive loss function that fundamentally advances polyp segmentation by systematically addressing three persistent challenges in colonoscopy imaging: annotation noise, domain variability across clinical settings, and boundary uncertainty exacerbated by reflections, motion artifacts, and fluid occlusions. DNA-TLoss integrates three synergistic components: (1) a lightweight NuPredictor module that dynamically tunes per-image robustness using a heavy-tailed Student’s 𝑡-distribution; (2) learnable per-pixel precision weights enabling spatial adaptation to ambiguous regions; and (3) multi-scale loss aggregation capturing both structural contours and fine-grained detail. Collectively, DNA-TLoss established new state-of-the-art performance across five public benchmarks, CVC-300, CVC-ClinicDB, ETIS-Larib, Kvasir, and CVC-ColonDB. Quantitatively, our method achieved the highest Dice and IoU on all five datasets; it reduced the Hausdorff distance by an average of 14.2% versus T-Loss, peaking at a 45.96% reduction on CVC-300, demonstrating unprecedented boundary precision critical for polyp size estimation, a key factor in colorectal cancer risk stratification. It simultaneously lowered false discovery rates by up to 38.8% (CVC-300) and 24.4% (Kvasir), and achieved best-in-class calibration, with expected calibration error as low as 0.44% on CVC-300, ensuring reliable probabilistic outputs under label noise and domain shifts. Crucially, these accuracy gains came without compromising clinical deployability: DNA-TLoss maintains real-time inference at 46.7 FPS (exceeding the 30 FPS clinical threshold) with minimal computational overhead, adding only 0.026 M parameters (≈0.12% of U-Net’s footprint) and 0.102 GB VRAM during training. These advancements directly addressed the 17–28% polyp miss rates in conventional colonoscopy by providing endoscopists with real-time, trustworthy segmentation guidance. DNA-TLoss establishes a new paradigm for robust, real-time AI assistance in gastrointestinal endoscopy, potentially transforming early cancer detection while serving as a blueprint for adaptive loss design in other noisy-label medical imaging domains.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary material, further inquiries can be directed to the corresponding author.

Author contributions

AN: Investigation, Writing – original draft, Conceptualization, Visualization, Validation, Formal analysis, Writing – review & editing, Methodology, Data curation. MaF: Writing – original draft, Visualization, Validation, Formal analysis, Conceptualization, Methodology, Data curation, Writing – review & editing. FE: Formal analysis, Writing – original draft, Visualization, Writing – review & editing. MeF: Formal analysis, Writing – review & editing, Writing – original draft. RS: Writing – review & editing, Supervision, Funding acquisition, Investigation, Conceptualization, Resources, Data curation.

Funding

The author(s) declared that financial support was received for this work and/or its publication. This work was supported by York University, the Mitacs Accelerate Awards (No. IT34882, and No. IT29450), The Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant (No. RGPIN-2023-05091 and No. DGECR-2023-00314), The Ontario Centre of Innovation (OCI) Collaborate 2 Commercialize (C2C) Grant (No. 36591), and Mitacs Business Strategy Internship (BSI) (No. IT47085 and No. IT47072).

Acknowledgments

The authors acknowledge York University, the Natural Sciences and Engineering Research Council of Canada (NSERC), Connected Minds, Lassonde Innovation Fund (LIF), Ontario Centre of Innovation_Collaborate 2 Commercialize (OCI_C2C), and MITACS Accelerate for supporting this research.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that Generative AI was used in the creation of this manuscript. The authors used ChatGPT solely for grammatical editing of the manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fmed.2025.1657123/full#supplementary-material

References

1. Winawer, SJ, Zauber, AG, Ho, MN, O'Brien, MJ, Gottlieb, LS, Sternberg, SS, et al. Prevention of colorectal cancer by colonoscopic polypectomy. N Engl J Med. (1993) 329:1977–81. doi: 10.1056/NEJM199312303292701,

2. Ma, J, He, Y, Li, F, Han, L, You, C, and Wang, B. Segment anything in medical images. Nat Commun. (2024) 15:654. doi: 10.1038/s41467-024-44824-z,

3. Qayoom, A, Xie, J, and Ali, H. Polyp segmentation in medical imaging: challenges, approaches and future directions. Artif Intell Rev. (2025) 58:169. doi: 10.1007/s10462-025-11173-2

4. Kim, NH, Jung, YS, Jeong, WS, Yang, H-J, Park, S-K, Choi, K, et al. Miss rate of colorectal neoplastic polyps and risk factors for missed polyps in consecutive colonoscopies. Intest Res. (2017) 15:411. doi: 10.5217/ir.2017.15.3.411,

5. Wang, Y-P, Jheng, Y-C, Hou, M-C, and Lu, C-L. The optimal labelling method for artificial intelligence-assisted polyp detection in colonoscopy. J Formos Med Assoc. (2024). doi: 10.1016/j.jfma.2024.12.022,

6. Zhou, J-X, Yang, Z, Xi, D-H, Dai, S-J, Feng, Z-Q, Li, J-Y, et al. Enhanced segmentation of gastrointestinal polyps from capsule endoscopy images with artifacts using ensemble learning. World J Gastroenterol. (2022) 28:5931–43. doi: 10.3748/wjg.v28.i41.5931,

7. Wang, S, Li, C, Wang, R, Liu, Z, Wang, M, Tan, H, et al. Annotation-efficient deep learning for automatic medical image segmentation. Nat Commun. (2021) 12:5915. doi: 10.1038/s41467-021-26216-9,

8. Maydanchi, M, Ziaei, A, Basiri, M, Azad, AN, Pouya, S, Ziaei, M, et al. eds. Comparative study of decision tree, adaboost, random forest, naïve bayes, KNN, and perceptron for heart disease prediction In: SoutheastCon 2023. Orlando, FL, USA: IEEE (2023)

9. Singh, O, and Sengar, SS. (2024). BetterNet: an efficient CNN architecture with residual learning and attention for precision polyp segmentation. arXiv [Preprint] arXiv:240504288

10. Lin, T, Wang, M, Lin, A, Mai, X, Liang, H, Tham, Y-C, et al. Efficiency and safety of automated label cleaning on multimodal retinal images. NPJ Digit Med. (2025) 8:10. doi: 10.1038/s41746-024-01424-x,

11. Zhao, X, Vemulapalli, R, Mansfield, PA, Gong, B, Green, B, Shapira, L, et al., editors. Contrastive learning for label efficient semantic segmentation. Proceedings of the IEEE/CVF international conference on computer vision; 2021.

12. Karimi, D, Dou, H, Warfield, SK, and Gholipour, A. Deep learning with noisy labels: exploring techniques and remedies in medical image analysis. Med Image Anal. (2020) 65:101759. doi: 10.1016/j.media.2020.101759,

13. Rädsch, T, Reinke, A, Weru, V, Tizabi, MD, Schreck, N, Kavur, AE, et al. Labelling instructions matter in biomedical image analysis. Nat Mach Intell. (2023) 5:273–83. doi: 10.1038/s42256-023-00625-5

14. Boreiri, Z, Azad, AN, and Ghodousian, A, editors. A convolutional neuro-fuzzy network using fuzzy image segmentation for acute leukemia classification. 2022 27th international computer conference, Computer Society of Iran (CSICC); 2022: IEEE.

15. Fang, C, He, H, Long, Q, and Su, WJ. Exploring deep neural networks via layer-peeled model: minority collapse in imbalanced training. Proc Natl Acad Sci. (2021) 118:e2103091118. doi: 10.1073/pnas.2103091118,

16. Siemers, FM, and Bajorath, J. Differences in learning characteristics between support vector machine and random forest models for compound classification revealed by Shapley value analysis. Sci Rep. (2023) 13:5983. doi: 10.1038/s41598-023-33215-x,

17. Wang, Y-Q. An analysis of the Viola-Jones face detection algorithm. Image Process On Line. (2014) 4:128–48. doi: 10.5201/ipol.2014.104

18. Kim, J, Lee, JK, and Lee, KM, editors. Accurate image super-resolution using very deep convolutional networks. Proceedings of the IEEE conference on computer vision and pattern recognition; 2016.

19. Yeung, M, Sala, E, Schönlieb, C-B, and Rundo, L. Unified focal loss: generalising dice and cross entropy-based losses to handle class imbalanced medical image segmentation. Comput Med Imaging Graph. (2022) 95:102026. doi: 10.1016/j.compmedimag.2021.102026,

20. Lin, T-Y, Goyal, P, Girshick, R, He, K, and Dollár, P, editors. Focal loss for dense object detection. Proceedings of the IEEE international conference on computer vision; 2017.

21. Rezatofighi, H, Tsoi, N, Gwak, J, Sadeghian, A, Reid, I, and Savarese, S, editors. Generalized intersection over union: a metric and a loss for bounding box regression. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition; 2019.

22. Deng, J, Guo, J, Xue, N, and Zafeiriou, S, editors. Arcface: additive angular margin loss for deep face recognition. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition; 2019.

23. Azizi, S, Culp, L, Freyberg, J, Mustafa, B, Baur, S, Kornblith, S, et al. Robust and data-efficient generalization of self-supervised machine learning for diagnostic imaging. Nat Biomed Eng. (2023) 7:756–79. doi: 10.1038/s41551-023-01049-7,

24. Han, B, Yao, Q, Yu, X, Niu, G, Xu, M, Hu, W, et al. Co-teaching: robust training of deep neural networks with extremely noisy labels. Adv Neural Inf Process Syst. (2018) 31:8535–8545.

25. Patrini, G, Rozza, A, Krishna Menon, A, Nock, R, and Qu, L, editors. Making deep neural networks robust to label noise: a loss correction approach. Proceedings of the IEEE conference on computer vision and pattern recognition; 2017.

26. Yao, Y, Liu, T, Han, B, Gong, M, Deng, J, Niu, G, et al. Dual t: reducing estimation error for transition matrix in label-noise learning. Adv Neural Inf Process Syst. (2020) 33:7260–71. doi: 10.48550/arXiv.2006.07805

27. Zhang, Z, and Sabuncu, M. Generalized cross entropy loss for training deep neural networks with noisy labels. Adv Neural Inf Process Syst. (2018) 31:8792–802. doi: 10.48550/arXiv.1805.07836

28. Wang, Y, Ma, X, Chen, Z, Luo, Y, Yi, J, and Bailey, J, editors. Symmetric cross entropy for robust learning with noisy labels. Proceedings of the IEEE/CVF international conference on computer vision; 2019.

29. Barron, JT, editor A general and adaptive robust loss function. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition; 2019.

30. Ma, X, Huang, H, Wang, Y, Romano, S, Erfani, S, and Bailey, J, editors. Normalized loss functions for deep learning with noisy labels. International conference on machine learning; 2020: PMLR.

31. Gonzalez-Jimenez, A, Lionetti, S, Gottfrois, P, Gröger, F, Pouly, M, and Navarini, AA, editors. Robust T-loss for medical image segmentation. International conference on medical image computing and computer-assisted intervention; 2023: Springer.

32. Jha, D, Smedsrud, PH, Riegler, MA, Halvorsen, P, De Lange, T, Johansen, D, et al., editors. Kvasir-seg: a segmented polyp dataset. International conference on multimedia modeling; 2019: Springer.

33. Bernal, J, Sánchez, FJ, Fernández-Esparrach, G, Gil, D, Rodríguez, C, and Vilariño, F. WM-DOVA maps for accurate polyp highlighting in colonoscopy: validation vs. saliency maps from physicians. Comput Med Imaging Graph. (2015) 43:99–111. doi: 10.1016/j.compmedimag.2015.02.007,

34. Tajbakhsh, N, Gurudu, SR, and Liang, J. Automated polyp detection in colonoscopy videos using shape and context information. IEEE Trans Med Imaging. (2015) 35:630–44. doi: 10.1109/TMI.2015.2487997

35. Silva, J, Histace, A, Romain, O, Dray, X, and Granado, B. Toward embedded detection of polyps in wce images for early diagnosis of colorectal cancer. Int J Comput Assist Radiol Surg. (2014) 9:283–93. doi: 10.1007/s11548-013-0926-3,

36. Vázquez, D, Bernal, J, Sánchez, FJ, Fernández-Esparrach, G, López, AM, Romero, A, et al. A benchmark for endoluminal scene segmentation of colonoscopy images. J Healthc Eng. (2017) 2017:1–9. doi: 10.1155/2017/4037190,

37. Ronneberger, O, Fischer, P, and Brox, T, editors. U-net: convolutional networks for biomedical image segmentation. Medical image computing and computer-assisted intervention–MICCAI 2015: 18th international conference, Munich, Germany, October 5–9, 2015, proceedings, part III 18; 2015: Springer.

38. He, K, Zhang, X, Ren, S, and Sun, J, editors. Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition; 2016.

39. Ding, H, Jiang, X, Shuai, B, Liu, AQ, and Wang, G. Semantic segmentation with context encoding and multi-path decoding. IEEE Trans Image Process. (2020) 29:3520–33. doi: 10.1109/TIP.2019.2962685,

40. Kingma, DP, and Ba, J. (2014). Adam: a method for stochastic optimization. arXiv [Preprint] arXiv:14126980.

41. Ghodousian, A, Amiri, H, and Azad, AN. (2022). On the resolution and linear programming problems subjected by Aczel-Alsina fuzzy relational equations. arXiv [Preprint] arXiv:220411273.

42. Ghodousian, A, Azad, AN, and Amiri, H. (2022). Log-sum-exp optimization problem subjected to Lukasiewicz fuzzy relational inequalities. arXiv [Preprint] arXiv:220609716.

43. Deng, J, Dong, W, Socher, R, Li, L-J, Li, K, and Fei-Fei, L, editors. Imagenet: a large-scale hierarchical image database. 2009 IEEE conference on computer vision and pattern recognition; 2009: IEEE.

Keywords: colonoscopy image analysis, deep learning in medical imaging, dynamic-Nu T-Loss, multi-scale aggregation, polyp segmentation, robust loss function

Citation: Norouziazad A, Fardshad MNG, Esmaeildoost F, Fardshad MNG and Salahandish R (2026) Robust colonoscopy polyp segmentation using dynamic-Nu T-Loss with multi-scale and uncertainty-aware adaptation. Front. Med. 12:1657123. doi: 10.3389/fmed.2025.1657123

Received: 30 June 2025; Revised: 17 December 2025; Accepted: 18 December 2025;
Published: 12 January 2026.

Edited by:

Lewei Zhao, MedStar Georgetown University Hospital, United States

Reviewed by:

Sandeep Singh Sengar, Cardiff Metropolitan University, United Kingdom
Ruohan Wang, Stanford University, United States

Copyright © 2026 Norouziazad, Fardshad, Esmaeildoost, Fardshad and Salahandish. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Razieh Salahandish, raziehs@yorku.ca
