- School of Computer Science, Vellore Institute of Technology, Chennai, India
Low-dose Computed Tomography (CT) imaging minimizes radiation exposure but often results in degraded image quality, making diagnosis challenging. Image Quality Assessment (IQA) is the process of quantitatively evaluating the visual quality of images and plays a crucial role in determining whether these CT scans meet the necessary standards for accurate diagnosis. IQA methods help identify issues such as noise, blurriness, or artifacts that may compromise the diagnostic value of the scans. Traditional quality assessment measures how closely an image matches an ideal or reference image. Since obtaining a high-quality reference image is often challenging, an automated quality assessment framework (diagnosis-based IQA) using No-Reference Image Quality Assessment (NRIQA) techniques is proposed, enabling quality evaluation without the need for a high-quality reference image. In this approach, various statistical and structural features are extracted from low-dose CT scans and mapped to radiologist-assigned quality scores (subjective evaluations given by experts) to train and compare various predictive models. The framework undergoes 100-fold cross-validation to ensure the reliability of the proposed model. CT images with predicted quality scores of 2 and below undergo spatial-domain enhancement to improve their diagnostic value. These enhanced images are then reassessed using the diagnosis-based IQA (trained Support Vector Regression) model, demonstrating an improvement in predicted quality scores. In addition, the enhanced images were verified by a radiologist, confirming the effectiveness of the enhancement process. This two-stage approach, combining automated NRIQA-based quality prediction with selective enhancement, provides a reliable and objective method for assessing and improving low-dose CT image quality.
1 Introduction
Computed Tomography (CT) imaging is a critical tool in modern medicine, offering high-resolution, cross-sectional images that aid in disease diagnosis and monitoring. However, radiation exposure associated with conventional high-dose CT (HDCT) scans has raised concerns, particularly for patients requiring frequent imaging. To address this, Low-Dose CT (LDCT) imaging has gained prominence, utilizing reduced radiation doses to minimize health risks. While LDCT significantly lowers radiation exposure, at times it comes at the cost of reduced image quality, manifesting as increased noise and artifacts. These limitations can affect the accuracy of diagnoses from LDCT images, making it essential to develop methods to objectively evaluate and improve their quality, ensuring they remain useful for medical diagnosis. To ensure the diagnostic reliability of LDCT images, an objective image quality assessment (IQA) system is required. Traditionally, radiologists assign quality scores based on subjective evaluation, but this method is human-dependent, prone to bias and lacks standardization. As a result, there is a growing need for automated and objective assessment methods. This has led to the development of various IQA techniques, many of which rely on reference-based methods that compare an image against a high-quality reference. However, in practical medical imaging scenarios, a perfect reference is often unavailable, making such approaches unsuitable for LDCT evaluation. To overcome this limitation, this research employs a No-Reference Image Quality Assessment (NRIQA) approach, in which image quality is assessed without needing a pristine reference. NRIQA plays a crucial role in medical image quality assessment, particularly in modalities like Magnetic Resonance Imaging (MRI), CT, and ultrasonic imaging. 
This method provides an objective evaluation of image quality, which is critical for ensuring that images are of sufficient quality for accurate diagnosis and treatment planning. The NRIQA approach adapts well to variability in imaging conditions, such as differences in patient size, movement artifacts, or imaging protocols, by assessing image quality based on inherent features. With the increasing use of advanced imaging techniques and machine learning algorithms in medical imaging, NRIQA is vital for evaluating their performance without relying on traditional reference-based metrics. Ultimately, high-quality images are essential for clinical decision-making, and NRIQA helps ensure that images used in clinical settings are evaluated consistently, thereby supporting better patient care. This study proposes a diagnosis-based NRIQA model for LDCT imaging, leveraging statistical and structural features to predict quality scores without the need for a high-quality reference image. The model integrates feature selection, machine learning, and targeted image enhancement, providing an automated, consistent, and reliable solution for LDCT quality assessment, reducing human dependency and enhancing diagnostic reliability. The rest of the article is organised as follows. Section 2 describes the literature survey, and the design of the proposed model is discussed in Section 3. Experimental results are discussed in Section 4. Section 5 discusses the performance of the proposed model in clinical evaluation. Finally, Section 6 presents the conclusion and future work.
2 Related works
No-reference image quality assessment (NR-IQA) has been widely explored in medical and non-medical imaging. Several studies focus on NR-IQA for MRI and CT scans, deep learning-based assessments, feature extraction, and medical image enhancement. Enhancement techniques improve contrast, sharpness, and noise reduction, complementing NR-IQA for better diagnostics.
The research by Wang et al. (1) and Stepień and Oszust (2) developed NR-IQA methods for MRI scans by incorporating automated distortion recognition and deep learning-based fusion of multiple architectures, respectively. The study by Wang et al. (1) employed grayscale normalization, gamma correction, and Gaussian filtering, followed by feature extraction using MSCN coefficients and AGGD analysis, achieving a 79.6x improvement in Quality Index (QI) over Contrast-to-Noise Ratio (CNR). Meanwhile, Stepień and Oszust (2) utilized ResNet18 and ResNet50 for feature extraction and applied PCA and Support Vector Regression (SVR), outperforming state-of-the-art techniques with high correlation to radiologists’ assessments. Similarly, the work by Shen et al. (3) introduced an NR-IQA method for low-dose CT (LDCT) images using sparse representation and a Just Noticeable Distortion (JND) model. This method combined JND maps and sparse features into a regression model for quality prediction, demonstrating high correlation with radiologists’ subjective scores.
Deep learning techniques have played a significant role in advancing NR-IQA. The research by Golestaneh et al. (4) introduced TReS, a hybrid NR-IQA model integrating Convolutional Neural Networks (CNNs) and Transformers to extract both local and non-local features. Their approach improved accuracy using relative ranking and self-consistency mechanisms. Likewise, Yang et al. (5) presented MANIQA, which leveraged Vision Transformers (ViT) with attention mechanisms to enhance GAN-based distortion assessment. This method outperformed existing techniques across four datasets and won the NTIRE 2022 Perceptual Image Quality Assessment Challenge. The work by Smith et al. (6) proposed an NR-IQA guided cut-off point selection strategy to refine dataset selection for fine-grained classification, improving model accuracy by filtering out low-quality images. Additionally, the research by Varga et al. (7) proposed a decision fusion approach using multiple pre-trained CNNs, such as VGG16, ResNet50, and InceptionV3, to enhance perceptual image quality estimation.
Efforts to reduce computational complexity in NR-IQA have been explored by Huang and Fang (8), which introduced a lightweight parallel framework (LPF) utilizing a pre-trained VGG16 network, a Feature Embedding Network (FEN), and a Distortion-Aware Quality Regression Network (DaQRN). The model’s parallel training improved convergence speed while maintaining superior performance compared to traditional BIQA methods. Similarly, the approach by Bagade et al. (9) proposed a machine learning-based NR-IQA metric for JPEG images, extracting Image Gradient (IG), Spatial Frequency Measure (SFM), and Spectral Activity Measure (SAM) features, and employing Artificial Neural Networks (ANN) and Neuro Fuzzy Classifiers (NFC). ANN demonstrated higher accuracy than NFC, highlighting its computational efficiency.
The research by Shi et al. (10) proposed BMEFIQA, a blind quality assessment method for multi-exposure fused (MEF) images, evaluating structural integrity, naturalness, and colorfulness using gradient similarity, exposure weighting, and entropy-based features. Random Forest regression was used for quality prediction, outperforming several existing methods on public MEF databases.
Several studies have integrated feature extraction techniques with classification models for disease detection in medical imaging. The study by Chowdhary and Acharjya (11) reviewed various segmentation and feature extraction techniques for medical images, highlighting the effectiveness of clustering-based segmentation and Haralick texture features. Sharma et al. (12) developed an automated bone cancer detection system using Histogram of Oriented Gradients (HOG) and Gray-Level Co-occurrence Matrix (GLCM) features, achieving an F1-score of 92.68% with Support Vector Machine (SVM). Likewise, (13) combined handcrafted Local Binary Pattern (LBP) and deep learning features from pre-trained CNNs to detect COVID-19 in CT scans. Their hybrid approach using VGG19+LBP achieved a classification accuracy of 99.4%, demonstrating the effectiveness of feature fusion.
COVID-19 detection using machine learning was explored by Imani (14), where contextual features were extracted using morphological filters, Gabor filter banks, and attribute filters. A shallow CNN was used for feature reduction, with SVM and Random Forest achieving 94% accuracy for X-ray images and 76% for CT scans. In lung cancer detection, the work by Alzubaidi et al. (15) compared global and local feature extraction approaches using CT scans. The best results were achieved with local feature extraction, where SVM combined with Gabor Filter features obtained 97% accuracy, 96% sensitivity, and 97% specificity, demonstrating the superiority of localized learning.
The research by Salem et al. (16) explores various histogram-based contrast enhancement techniques for medical images. The researchers implemented the methods using MATLAB and evaluated their performance on different medical images based on Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), and Standard Deviation (SD). Their results demonstrated that Quadratic Dynamic Histogram Equalization and Contrast-Limited Adaptive Histogram Equalization were the most effective methods.
The sharpening enhancement technique for MR images proposed by Jeevakala (17) uses Laplacian Pyramid (LP) and Singular Value Decomposition (SVD) to improve edge visibility and segmentation. The LP decomposition preserves shape and texture information, while SVD enhances contrast at object boundaries. The method applies one level of LP decomposition to extract edge details, followed by SVD-based contrast enhancement.
The reviewed studies highlight advancements in NR-IQA, deep learning-based assessments, feature extraction, and enhancement techniques. Hybrid CNN-Transformer models, lightweight frameworks, and contrast enhancement methods have significantly improved image quality assessment and disease detection. These findings support developing an NR-IQA approach for LDCT scans by integrating feature extraction, quality estimation, and enhancement for optimal diagnostics.
3 Methodology
The process of assessing and enhancing the quality of LDCT images plays a vital role in ensuring accurate diagnostics while maintaining reduced radiation exposure. This study presents an integrated framework that combines NRIQA and targeted enhancement for LDCT imaging. Utilizing a robust feature extraction mechanism, including texture, edge, and frequency-based metrics, the framework employs SVR to predict quality scores and identify low-quality images. To address diagnostic usability, Laplacian sharpening is applied to images with low predicted quality scores, verified through collaboration with radiologists. Figure 1 shows the workflow of the methodology. This approach not only bridges the gap between quality assessment and enhancement but also aims to improve the reliability and applicability of LDCT imaging in clinical practice.
3.1 Dataset description
The dataset used in this study consists of 1,000 LDCT scan images, obtained from Lee et al. (18), each assigned a continuous quality score from 0 to 4 by experienced radiologists. Sourced from the LDCTIQAC2023 repository, the dataset contains real-world annotations, making it highly relevant for clinical applications. The images, stored in .tif format, were acquired using an abdominal soft-tissue window (width/level: 350/40) to ensure consistency. Figure 2 illustrates a sample of the raw dataset used for this study. To introduce variability, the dataset includes four noise levels, four artifact levels, and 16 combinations of noise and artifacts, generated by adjusting noise levels, projection stack size, and angular increments. The quality assessment was performed by five radiologists, each with over 10 years of experience, from Seoul Metropolitan Government Seoul National University Boramae Medical Center and Veterans Health Service Medical Center in South Korea. A five-point Likert scale was used to evaluate image noise, anatomical structure clarity, and diagnostic interpretability. The final subjective quality score was obtained by averaging the individual ratings to minimize bias.
This dataset provides a robust foundation for training models to assess image quality and predict scores for unseen CT scans. Table 1 presents the scoring criteria for the CT images in the dataset. To prepare the dataset for analysis, a series of pre-processing steps is performed to ensure the images are in a suitable format for machine learning model training. All images are resized to a standard size of 224 × 224 pixels to maintain consistency and ensure compatibility with the chosen models. The images are then normalized to a range of [0, 1], ensuring uniform intensity values across the entire dataset. Additionally, each image is converted to an 8-bit format to optimize processing. Finally, after processing, the images are converted back into an appropriate format for further analysis, typically a PIL image format, allowing for seamless integration with Python-based libraries. Figure 3 depicts a sample of the dataset after pre-processing.
Figure 3. Sample images after preprocessing (corresponding to the images shown in Figure 2).
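The pre-processing steps described above can be sketched as follows. This is a minimal illustration, not the exact implementation; in particular, the bilinear resampling filter is an assumption, since the paper does not specify one.

```python
import numpy as np
from PIL import Image

def preprocess_ct(image: Image.Image, size: int = 224) -> Image.Image:
    """Resize, normalize to [0, 1], convert to 8-bit, return a PIL image."""
    # Resize to a standard 224 x 224 resolution (bilinear is an assumption).
    image = image.resize((size, size), Image.BILINEAR)
    # Normalize pixel intensities to the [0, 1] range.
    arr = np.asarray(image, dtype=np.float64)
    rng = arr.max() - arr.min()
    arr = (arr - arr.min()) / rng if rng > 0 else np.zeros_like(arr)
    # Convert to 8-bit and back to PIL for downstream Python libraries.
    return Image.fromarray((arr * 255).astype(np.uint8))
```

Min-max normalization is used here as one plausible reading of "normalized to a range of [0, 1]"; a fixed divide-by-255 would be an equally valid interpretation.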
3.2 Feature extraction
To effectively capture the various characteristics of the CT scans, a set of features is extracted from each image. These features include intensity-based metrics, histogram-based metrics, texture-based metrics using the Gray-Level Co-occurrence Matrix (GLCM) and Local Binary Pattern (LBP), frequency-based features obtained through the Fast Fourier Transform, and edge-based and shape-based features.
The intensity-based features, namely the minimum, maximum, mean, median, standard deviation, range, skewness, kurtosis, and 10th and 90th percentile values, are extracted from the image. From the distribution of the pixel values, the entropy, energy, and mode are calculated. From the edge map generated from the image, the mean, variance, edge count, area, perimeter, circularity, and compactness values are computed. The sharpness score, tissue density, texture heterogeneity, signal-to-noise ratio, and contrast-to-noise ratio are calculated because they play a major role in visualisation. From the frequency domain of an image, the sum, maximum, mean, power spectrum flatness, and sum of frequencies higher than the mean are calculated. Finally, to analyze the texture properties of the CT image, features are extracted from the gray-level co-occurrence matrix (with a distance of 1 and an angle of 0 degrees) and the local binary pattern (with a radius of 1 and a circular neighborhood of 8 points). The mean and variance are computed from the local binary pattern of an image. The contrast, correlation, energy, homogeneity, dissimilarity, entropy, and maximum probability are computed from the gray-level co-occurrence matrix. After extracting these features, they are mapped to the corresponding quality scores (provided in the dataset) and stored in a CSV file for further analysis.
3.3 Model training and comparative study
Several machine learning models are evaluated to determine the most effective approach for predicting the quality scores of CT images. The models tested include Random Forest, Multi-Layer Perceptron (MLP) Regressor, XGBoost, and SVR. The dataset is split into 80% training and 20% testing to ensure reliable performance evaluation. Further, each model is trained using the extracted features to learn the relationship between image characteristics and quality scores. Hyperparameter tuning is performed using Random Search and Grid Search for the SVR model to fine-tune model parameters. The predicted quality scores are compared against the ground truth values using error metrics and correlation coefficients. Ultimately, SVR tuned with Grid Search yields the best results.
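The SVR tuning step can be sketched with scikit-learn's `GridSearchCV`. The grid values below are illustrative assumptions, not the grid used in the paper, and the feature scaler is added because SVR is scale-sensitive.

```python
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def tune_svr(X, y):
    """80/20 split, then grid-search an SVR pipeline on the training set."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=42)
    pipe = make_pipeline(StandardScaler(), SVR())
    grid = {"svr__C": [1, 10, 100],            # illustrative values
            "svr__gamma": ["scale", 0.01, 0.1],
            "svr__epsilon": [0.01, 0.1]}
    search = GridSearchCV(pipe, grid, cv=5,
                          scoring="neg_mean_absolute_error")
    search.fit(X_tr, y_tr)
    # Return the refit best model and its held-out (negative MAE) score.
    return search.best_estimator_, search.score(X_te, y_te)
```

Swapping `GridSearchCV` for `RandomizedSearchCV` gives the Random Search variant compared in the study.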
3.4 Feature selection and validation
To improve the model’s performance, feature selection is performed using SHAP analysis as shown in Figure 4 and permutation importance as shown in Figure 5. From the extracted features, the 25 most relevant ones are selected, and the SVR model is then retrained using this optimized feature subset. A 100-fold cross-validation approach is employed on SVR with Grid Search (SVR-GS) to evaluate the model’s performance, ensuring model robustness and minimizing overfitting.
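A minimal sketch of the selection-and-validation step, using only permutation importance for ranking (the SHAP ranking shown in Figure 4 is omitted here): keep the top-ranked features, then score an SVR with k-fold cross-validation, where k = 100 reproduces the paper's setting.

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

def select_and_validate(X, y, n_keep=25, n_folds=100):
    """Rank features by permutation importance, keep the top `n_keep`,
    then cross-validate an SVR on the reduced feature set."""
    model = SVR().fit(X, y)
    imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
    top = np.argsort(imp.importances_mean)[::-1][:n_keep]
    scores = cross_val_score(SVR(), X[:, top], y, cv=n_folds,
                             scoring="neg_mean_absolute_error")
    return top, -scores.mean()  # indices kept, mean cross-validated MAE
```

Note that 100-fold cross-validation requires at least 100 samples; the 1,000-image dataset satisfies this comfortably.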
3.5 Image enhancement for low-quality images
This study applies a Laplacian-based sharpening technique to enhance low-quality abdominal CT images, specifically those with predicted quality scores of 2 and below. Laplacian sharpening is implemented by applying the Laplacian operator to detect edges and enhance them by subtracting a fraction of the Laplacian response from the original image. The enhanced images as depicted in Figure 6 are validated by the radiologist and re-evaluated using the trained SVR model, confirming improved quality scores.
3.6 Experimental set-up
The entire methodology is implemented using Python and several key libraries, including Scikit-learn, OpenCV, and PIL. Model training and evaluation are conducted on high-performance computing resources with GPU acceleration. Google Cloud Platform (GCP) was utilized with a T4 GPU core for training, providing robust computational power for handling extensive image processing tasks. This implementation ensures scalability and efficiency for future applications. This methodology provides a systematic, data-driven approach to evaluating and improving the quality of LDCT images, minimizing human dependence on traditional quality assessment methods and enhancing diagnostic reliability.
4 Results and discussion
Several machine learning algorithms, namely Random Forest, XGBoost, MLP Regressor, SVR with Random Search (SVR-RS), SVR-GS, and SVR with 100-fold cross-validation (SVR-100CV), are evaluated on various error and correlation metrics. Table 2 depicts the comparative study between the models. Among these, SVR-GS achieves the best results, demonstrating superior accuracy in predicting image quality. This model shows lower error values and higher correlation scores, making it more reliable for assessing image quality.
4.1 Evaluation of correlation metrics
Pearson Linear Correlation Coefficient (PLCC) measures the linear relationship between predicted and actual scores. A PLCC value close to 1 indicates a strong linear correlation. It is computed as:

$$\mathrm{PLCC} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}}$$

where $x_i$ and $y_i$ are the predicted and actual scores, and $\bar{x}$ and $\bar{y}$ are their respective means. For the SVR-100CV model, the PLCC score is 0.9671.

Spearman Rank-Order Correlation Coefficient (SROCC) is a non-parametric measure that evaluates the monotonic relationship between predicted and actual scores. A SROCC value close to 1 indicates a stronger ordinal relationship. It is given by:

$$\mathrm{SROCC} = 1 - \frac{6 \sum_{i=1}^{n} d_i^2}{n(n^2 - 1)}$$

where $d_i$ is the rank difference between corresponding pairs, and $n$ is the total number of samples. For the SVR-100CV model, the SROCC score is 0.9394.

Kendall's Tau-b (KROCC) assesses the strength and direction of association between two ranked variables while adjusting for ties. A KROCC value closer to 1 signifies better agreement between rankings. It is defined as:

$$\tau_b = \frac{n_c - n_d}{\sqrt{(n_0 - n_1)(n_0 - n_2)}}, \qquad n_0 = \frac{n(n-1)}{2}$$

where $n_c$ and $n_d$ represent the number of concordant and discordant pairs, respectively, while $n_1$ and $n_2$ account for tied ranks in each variable. For the SVR-100CV model, the KROCC score is 0.8671.
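The three correlation metrics above can be computed directly with SciPy; note that `kendalltau` defaults to the tau-b variant, which adjusts for ties as described.

```python
import numpy as np
from scipy.stats import kendalltau, pearsonr, spearmanr

def correlation_metrics(actual, predicted):
    """Return (PLCC, SROCC, KROCC) between predicted and actual scores."""
    plcc, _ = pearsonr(predicted, actual)
    srocc, _ = spearmanr(predicted, actual)
    krocc, _ = kendalltau(predicted, actual)  # tau-b by default
    return plcc, srocc, krocc
```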
These correlation metrics provide a robust assessment of the model performance by evaluating both linear and rank-based relationships between predicted and actual quality scores. The similarity between the actual and predicted scores can be seen in Figure 7. This figure represents the scores of 25% of the test images for better visualization.
Figure 7. Actual (plotted as blue circle) and Predicted (plotted as red circle) quality scores for the sample test images.
4.2 Evaluation of error metrics
To assess the predictive accuracy of the models, we evaluated six error-based metrics:
Mean Squared Error (MSE) calculates the average squared difference between actual values ($y_i$) and predicted values ($\hat{y}_i$). A lower MSE indicates better model performance:

$$\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$$

where $n$ represents the total number of samples, and $y_i$ and $\hat{y}_i$ denote the actual and predicted values for the $i$-th sample. For SVR-100CV, the MSE value is 0.0651.

R-Squared ($R^2$) represents the proportion of variance in the dependent variable ($y$) that is predictable from the independent variables. A higher $R^2$ value (closer to 1) indicates a better model fit:

$$R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}$$

where $\bar{y}$ represents the mean of the actual values, and the summations compute the total variance explained by the model. For SVR-100CV, the $R^2$ value is 0.9221.

Mean Absolute Error (MAE) calculates the average absolute difference between actual values ($y_i$) and predicted values ($\hat{y}_i$), making it less sensitive to outliers than MSE:

$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i|$$

where $n$ is the total number of samples, and absolute differences are computed for each prediction. For SVR-100CV, the MAE value is 0.1996.

Root Mean Squared Error (RMSE) measures the standard deviation of residuals (prediction errors), providing a more interpretable error metric than MSE:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}$$

where $n$ is the number of samples, and residuals represent the differences between actual and predicted values. For SVR-100CV, the RMSE value is 0.2475.

Explained Variance Score (EVS) evaluates how well a model captures variance in the target variable ($y$), with a value closer to 1 indicating a better model:

$$\mathrm{EVS} = 1 - \frac{\mathrm{Var}(y - \hat{y})}{\mathrm{Var}(y)}$$

where $\mathrm{Var}(y)$ represents the variance of the actual values, while $\mathrm{Var}(y - \hat{y})$ measures the variance of the residuals. For SVR-100CV, the EVS value is 0.9284.

Median Absolute Error (MedAE) computes the median of absolute differences between actual ($y_i$) and predicted values ($\hat{y}_i$), offering robustness to outliers:

$$\mathrm{MedAE} = \mathrm{median}\left(|y_1 - \hat{y}_1|, \ldots, |y_n - \hat{y}_n|\right)$$

For SVR-100CV, the MedAE value is 0.1692.
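All six error metrics above are available in scikit-learn (RMSE taken as the square root of MSE):

```python
import numpy as np
from sklearn.metrics import (explained_variance_score, mean_absolute_error,
                             mean_squared_error, median_absolute_error,
                             r2_score)

def error_metrics(actual, predicted):
    """Return the six error metrics used to compare the regression models."""
    mse = mean_squared_error(actual, predicted)
    return {
        "MSE": mse,
        "R2": r2_score(actual, predicted),
        "MAE": mean_absolute_error(actual, predicted),
        "RMSE": np.sqrt(mse),
        "EVS": explained_variance_score(actual, predicted),
        "MedAE": median_absolute_error(actual, predicted),
    }
```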
Among the evaluated models, SVR-GS achieved the best results across all error metrics, as depicted in Table 2. However, to mitigate overfitting and ensure generalizability when comparing results with algorithms from other studies, SVR-100CV scores are employed as the primary benchmark for comparison.
Table 3 presents a comparative analysis of various models used for quality assessment, highlighting the superior performance of the proposed SVR-100CV model. The Just Noticeable Difference (JND)-based models from Shen et al. (3), including (19–23), use sparse representation to predict image quality and follow an 80-20 data split. These models achieve high correlation metrics, with PLCC values ranging from 0.923 to 0.955, SROCC between 0.919 and 0.953, and KLCC between 0.771 and 0.827. The No-Reference Image Quality Assessment (NRIQA) algorithms, such as SNR, CHO, NPWE, BRISQUE, and NIQE, operate with a 70-30 data split and exhibit lower correlation values, with PLCC scores between 0.7278 and 0.8226.
In contrast, the proposed SVR-100 CV model, which utilizes feature-based extraction and follows an 80-20 data split, outperforms both the mentioned JND models as well as the approaches proposed in (18), achieving the highest correlation metrics in PLCC at 0.9671 and KLCC at 0.8671. This demonstrates the effectiveness of the feature-based extraction approach in accurately predicting image quality scores.
4.3 Evaluation of enhancement
The effectiveness of Laplacian sharpening was evaluated by re-testing the enhanced images using the trained SVR-GS model. The results showed an improvement in predicted quality scores, as shown in Figure 8, indicating a measurable enhancement in image quality. The graph demonstrates a significant improvement in predicted quality scores for abdominal CT images after enhancement, as indicated by the consistently higher values in red compared to the fluctuating lower scores before enhancement in blue. By incorporating objective re-evaluation and expert validation, this enhancement approach not only improves perceptual image quality but also ensures that low-quality LDCT images become more reliable for clinical interpretation.
Figure 8. Predicted scores before (plotted as blue circle) and after enhancement (plotted as red circle).
5 Clinical evaluation
To ensure an unbiased assessment, the dataset was randomly shuffled before being presented to radiologist Dr. Inthulan Thiraviaraj, a Consultant and Head at Athipathi Scan Centre with extensive experience in neuroradiology, diagnostic imaging, and fetal medicine. The dataset consisted of three categories: unenhanced low-quality images (score less than 2), the same low-quality images enhanced using Laplacian sharpening, and unenhanced high-quality images. Without prior knowledge of each image's category, the radiologist evaluated 130 images on a screen with a native display resolution of 2,880 × 1,864 pixels. Dr. Inthulan assigned scores ranging from 0 to 4 based on visual clarity and diagnostic usability. This blinded evaluation method ensured that any observed improvements in image quality were solely attributed to the enhancement process rather than preconceived biases.
During the assessment, the radiologist carefully examined anatomical structures, edge definition, and noise levels to determine the diagnostic usability of each image. Among the enhanced images, 96% were correctly identified as enhanced, demonstrating a clear visual distinction from the unenhanced low-quality images. Furthermore, 88% of the enhanced images were assigned a high-quality score of 3, confirming that Laplacian sharpening significantly improved image clarity. These findings underscore the effectiveness of the enhancement process in refining the perceptual quality of low-dose CT scans, making them more interpretable for clinical diagnosis.
The enhanced images exhibit variations in enhancement based on the tissue type. Some images display improvements in the soft tissue CT window as shown in Figure 9, while others show enhancements in the bone tissue window as shown in Figure 10. Soft tissues, including muscles, organs, and fat, require contrast enhancement for better visualization, whereas bone tissues, such as the skeletal structure, benefit from sharper edge definition. The radiologist confirms that the enhancement process effectively improves the clarity of soft tissues in some cases and bone structures in others, depending on the image characteristics.
6 Conclusion and future work
This study systematically evaluates multiple models for automated image quality assessment in LDCT imaging. While models such as Random Forest and MLP Regressor demonstrated reasonable performance, they exhibited higher error rates and lower correlation scores compared to SVR. Random Forest tended to overfit due to its reliance on complex decision boundaries, while MLP Regressor showed inconsistent performance across different validation folds. SVR, optimized using Grid Search, was selected due to its superior generalization ability, achieving the lowest error values and highest correlation scores across all metrics.
To further enhance diagnostic reliability, Laplacian sharpening is applied to images with predicted scores of 2 and below. The enhanced images were retested using the trained SVR model, showing improved quality scores, confirming the effectiveness of this enhancement approach. Additionally, radiologist Dr. Inthulan Thiraviaraj verified the enhanced images, further reinforcing their improved diagnostic value. Despite the promising results, a small percentage of enhanced images were not correctly identified as enhanced, highlighting the need for further refinement in the enhancement process.
This suggests that while Laplacian sharpening significantly improves image clarity, there may be cases where the enhancement is insufficiently pronounced or indistinguishable from unenhanced images. Future work will focus on addressing these limitations by exploring advanced enhancement techniques and optimization strategies to ensure consistent and universally recognizable improvements in image quality, thereby further enhancing the diagnostic utility of low-dose CT scans.
Data availability statement
Publicly available datasets were analyzed in this study. This data can be found here: https://ldctiqac2023.grand-challenge.org/.
Author contributions
NB: Methodology, Software, Visualization, Writing – original draft. DS: Formal analysis, Validation, Writing – original draft. SU: Data curation, Methodology, Software, Writing – original draft. JJ: Investigation, Project administration, Resources, Writing – review & editing. KS: Conceptualization, Investigation, Methodology, Validation, Writing – review & editing.
Funding
The author(s) declared that financial support was not received for this work and/or its publication.
Acknowledgments
We extend our sincere gratitude to Dr. Inthulan Thiraviaraj for his invaluable contribution to this study. His extensive experience in neuroradiology, diagnostic imaging, and fetal medicine was instrumental in the clinical evaluation of the dataset. Dr. Thiraviaraj has previously served as a Consultant Neuro-radiologist at University Hospital Coventry and Warwickshire, a Consultant Radiologist at Athipathi Hospital, Chennai, and a Visiting Sonologist at K.S. Hospital and Bloom Hospital. Holding an FRCR from the Royal College of Radiologists and an MD in Radiodiagnosis from King Edward Memorial Hospital, he has received multiple accolades, including gold and silver medals in medical education. We deeply appreciate his time and expertise in ensuring a thorough and unbiased assessment of the imaging enhancement process.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Keywords: automated quality assessment, feature extraction, image enhancement, Laplacian sharpening, low-dose CT, medical image analysis, no-reference image quality assessment (NRIQA), quality score prediction
Citation: Nirupama B, Dhevdharsan BS, Shreya Reddy U, Joshan Athanesious J and Kiruthika S (2025) Diagnosis based image quality assessment and enhancement for low dose CT image. Front. Radiol. 5:1704113. doi: 10.3389/fradi.2025.1704113
Received: 16 September 2025; Revised: 26 November 2025;
Accepted: 28 November 2025;
Published: 17 December 2025.
Edited by:
Maria Evelina Fantacci, University of Pisa, Italy
Reviewed by:
Shuozhen Bao, Yale University, United States
Jing Zhang, Stanford University, United States
Copyright: © 2025 Nirupama, Dhevdharsan, Shreya Reddy, Joshan Athanesious and Kiruthika. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: S. Kiruthika, kiruthika.s@vit.ac.in
B. Nirupama