
EDITORIAL article

Front. Med., 31 January 2022
Sec. Nuclear Medicine
Volume 9 - 2022 | https://doi.org/10.3389/fmed.2022.848336

Editorial: Artificial Intelligence in Positron Emission Tomography

  • 1Department of Nuclear Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
  • 2Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
  • 3Department of Nuclear Medicine, University of Bern, Bern, Switzerland
  • 4Department of Informatics, Technical University of Munich, Munich, Germany
  • 5School of Computer Science, The University of Sydney, Sydney, NSW, Australia
  • 6PET Center and National Clinical Research Center for Aging and Medicine, Huashan Hospital, Fudan University, Shanghai, China

Smartphones, smart homes, and intelligent navigation are all familiar examples of artificial intelligence (AI) in daily life. The concept of AI was introduced in the 1950s and has been refined and redefined since. AI is currently defined as a technological science that studies and develops the theories, methods, technologies, and application systems used to simulate, extend, and enhance human intelligence (1).

We have witnessed the rapid advancement of AI, and its research and application in medical care, especially in processing and analyzing medical images, are flourishing. In comparison with computed tomography (CT) and magnetic resonance imaging (MRI), which are more accessible and whose acquisition processes are easier to standardize, positron emission tomography (PET) is more expensive and less widely available, and its more complicated technical workflow makes image acquisition harder to standardize. Although research on and application of AI in PET has progressed relatively slowly, PET is an essential field of molecular imaging, so AI in PET imaging is attracting substantial research attention and becoming a research hotspot. At the technical level, image post-processing steps, including image standardization, normalization, wavelet transformation, Gaussian transformation, and feature preprocessing, have been studied to address the variations in parameters and image quality that arise when imaging with PET scanners from different manufacturers, instrument models, and imaging technologies. AI-empowered segmentation techniques have further improved the stability of AI features and the repeatability of AI research (2, 3). To address clinical needs, by mining image features deeply, combining population and clinical evidence, and constructing machine learning models, AI in PET has been developed for lesion detection and boundary delineation, diagnosis and differential diagnosis, risk prediction and prognostic evaluation, and even the prediction of clinical gene or molecular subtypes (1, 4–7).
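As a minimal illustration of the standardization and feature-preprocessing steps mentioned above, the sketch below applies z-score intensity normalization to an image and column-wise standardization to a radiomic feature matrix. The function names (`zscore_normalize`, `standardize_features`) are illustrative only and do not come from any of the cited studies.

```python
import numpy as np

def zscore_normalize(image: np.ndarray) -> np.ndarray:
    """Z-score intensity normalization: rescale voxel values to zero
    mean and unit variance, a common preprocessing step when pooling
    images acquired on different scanners."""
    std = image.std()
    if std == 0:
        raise ValueError("image has zero variance")
    return (image - image.mean()) / std

def standardize_features(features: np.ndarray) -> np.ndarray:
    """Column-wise z-score standardization of a feature matrix
    (rows = patients, columns = radiomic features)."""
    mu = features.mean(axis=0)
    sigma = features.std(axis=0)
    sigma[sigma == 0] = 1.0  # leave constant features unchanged
    return (features - mu) / sigma
```

In multicenter radiomics work, such standardization is usually fitted on the training cohort and then applied unchanged to validation cohorts, so that center-specific intensity offsets do not leak into the model.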

This Research Topic comprises 11 publications that highlight how AI supports PET image processing and analysis. Recently, numerous research groups have focused on the use of AI in PET image interpretation, such as lesion detection. Kawakami et al. applied a deep learning (DL) object-detection model, You Only Look Once version 2 (YOLOv2), to detect physiological and abnormal uptake in 18F-FDG PET. The physiological uptake on maximum-intensity-projection (MIP) images was recognized quickly and precisely, and the abnormal uptake detected by YOLOv2 covered most of the uptake identified manually (Kawakami et al.). Such precise detection with fast response could become a useful tool in disease diagnosis. The maximum standardized uptake value (SUVmax) is the most commonly used parameter for interpreting images and evaluating lesions in daily diagnostic reports. In a preliminary study, Hirata et al. sought to define a precise SUVmax as an identifier to locate lesions. Although it is difficult to identify lesions with SUVmax < 2 on 18F-FDG PET scans, this approach could help in constructing AI training datasets (Hirata et al.).
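The SUVmax mentioned above can be sketched in a few lines. Body-weight SUV rescales a decay-corrected activity-concentration image by injected dose and patient weight (SUV = concentration × weight / dose), and SUVmax is simply the hottest voxel within a lesion mask. The helper names below are hypothetical and not taken from Hirata et al.'s pipeline.

```python
import numpy as np

def suv_image(activity_bqml: np.ndarray, injected_dose_bq: float,
              body_weight_g: float) -> np.ndarray:
    """Convert a decay-corrected activity-concentration image (Bq/mL)
    to body-weight SUV: SUV = concentration * weight / dose."""
    return activity_bqml * body_weight_g / injected_dose_bq

def suv_max(suv: np.ndarray, mask: np.ndarray) -> float:
    """SUVmax: the value of the hottest voxel inside a boolean lesion mask."""
    return float(suv[mask].max())
```

Thresholding such a map (e.g., keeping voxels with SUV above some cutoff) is one simple way to seed candidate lesions for annotating an AI training dataset.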

In oncology, AI has been used for diagnosis, differential diagnosis, and cancer staging. In a retrospective study, Satoh et al. used texture analysis to compare the diagnostic ability of high-resolution dedicated breast PET (dbPET) and whole-body PET/CT for breast cancer, and demonstrated that PET-based texture analysis of the two modalities had comparable classification power (Satoh et al.). Fan et al. evaluated the value of texture analysis of 18F-FDG PET/CT in the differential diagnosis of spinal metastases and showed that combining machine learning with texture parameters was more accurate than manual diagnosis. Zheng et al. applied a radiomics model to 18F-FDG PET/CT to predict pathological mediastinal lymph node (pN) staging in patients with non-small cell lung cancer (NSCLC), reaching the encouraging conclusion that pN staging prediction has the potential to aid therapeutic planning. Apart from PET/CT, another nuclear medicine imaging method, single-photon emission computed tomography/computed tomography (SPECT/CT), also plays a role in differentiating benign from malignant tumors. Jin et al. investigated the feasibility of radiomics based on SPECT/CT images for differentiating bone metastases from benign bone lesions in patients with tumors. Both the SPECT and SPECT/CT models showed better diagnostic accuracy than manual classification in the training and validation groups of patients with vertebral bone metastases or benign bone lesions, pointing to a new approach to disease staging and treatment planning (Jin et al.).

In neurology, AI has also been applied to aid the differentiation and mechanistic study of neurodegenerative diseases. Xie et al. revealed a pattern of changes in β-amyloid (Aβ) deposition in cognitively normal healthy aging, which could be used to distinguish physiological from pathophysiological changes and help investigate the mechanism of Alzheimer's disease (AD). Zhou et al. presented a new deep learning model based on rate-distortion theory and an extreme learning machine to distinguish AD, mild cognitive impairment (MCI), and normal controls (NC) on 18F-AV45 PET/MR. This model achieved higher accuracy, sensitivity, specificity, and area under the curve (AUC) in separating the AD, MCI, and NC groups than previously assessed models (Zhou et al.).

The applications of AI go beyond image processing, analysis, and interpretation; they also embrace searching electronic health records, laboratory tests, and other patient-related information, which could help physicians make optimal and personalized medical decisions. With such abilities, AI provides more opportunities for assessing therapeutic responses and predicting survival. Tang et al. investigated extratemporal metabolic profiles in drug-resistant temporal lobe epilepsy (TLE) and efficiently predicted surgical failure in TLE patients from PET, which could serve as a predictive model for epilepsy surgery. Pinochet et al. evaluated the performance of a research prototype called the PET Assisted Reporting System (PARS), based on a convolutional neural network, in clinical research. The total metabolic tumor volumes (TMTVs) determined by PARS on 18F-FDG PET were predictive of prognosis in patients with diffuse large B-cell lymphoma (DLBCL), although the evaluation efficacy still needs to be improved and validated across diverse cancers (Pinochet et al.). Yang et al. developed a radiomics score by applying least absolute shrinkage and selection operator (LASSO) regression to 18F-FDG PET/CT-derived radiomic features, which proved a useful tool for predicting overall survival (OS) in adult hemophagocytic lymphohistiocytosis (HLH); combining the radiomics score with clinical parameters performed even better for predicting 6-month survival (Yang et al.). Beyond these studies, AI can also help evaluate therapeutic responses and predict prognosis in other emerging treatments, including immunotherapy (8) and peptide receptor radionuclide therapy (PRRT) (9).
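A LASSO-based radiomics score of the kind described above can be sketched on synthetic data. This is not a reproduction of Yang et al.'s pipeline (survival endpoints are typically modeled with a LASSO Cox model rather than plain regression); it only illustrates how L1 regularization selects a sparse subset of standardized radiomic features and combines them into a per-patient score.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 20))  # 80 patients x 20 synthetic radiomic features
# Synthetic outcome driven by features 0 and 3 plus noise.
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=80)

Xs = StandardScaler().fit_transform(X)
lasso = Lasso(alpha=0.1).fit(Xs, y)

# Features with non-zero coefficients survive the L1 penalty.
selected = np.flatnonzero(lasso.coef_)
# The radiomics score is the sparse linear combination per patient.
rad_score = Xs @ lasso.coef_ + lasso.intercept_
```

The single `alpha` hyperparameter controls sparsity; in practice it is chosen by cross-validation, and the resulting score is then tested against clinical parameters (or combined with them) in an independent cohort.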

Furthermore, with its powerful search capability, AI can optimize the workflow and provide more detailed and better-organized patient information, making it easier for doctors to assess patient status efficiently and precisely. Importantly, AI plays a supportive role in relieving physicians of labor-intensive but less cognitively demanding routine tasks, allowing them to focus on more demanding work such as patient care and image interpretation (1). Another step inseparable from AI in PET is imaging data processing, especially standardizing image acquisition and reconstruction procedures. The repeatability of AI analysis in multicenter settings is essential for clinical translation. Zwanenburg performed a meta-analysis to evaluate the repeatability of PET imaging biomarkers (10); the results showed that variations in image acquisition, reconstruction, segmentation, and processing strongly affect the reliability of image biomarkers in models from different PET centers (10).

Although AI has tremendous potential in PET, it is critically important to be aware of its limitations. First, the reproducibility and reliability of AI algorithms must be ensured. As models become more and more complicated, the "black box" nature of AI makes it difficult to understand and explain the results of many AI models, especially some DL models (11). Current work on trustworthy AI for health care emphasizes explainability and stability on diverse and unseen data (12). Second, AI needs massive annotated data for learning, which makes it less reliable on small datasets. Publicly available standard datasets, such as the Alzheimer's Disease Neuroimaging Initiative (ADNI), are still relatively rare and in high demand; more open standard datasets would promote the development of AI. Last but not least, many ethical issues remain to be discussed, such as who bears legal and ethical responsibility if an AI diagnosis turns out to be wrong.

Despite these limitations, AI clearly plays a unique role in every step of PET, including patient information management, drug synthesis and administration, image acquisition and processing, and report interpretation. AI can be used not only to help make clinical decisions, but also as a research aid for discovering and investigating novel molecular biomarkers and their mechanisms in various diseases (13). Given the powerful impact of AI in medical imaging, many physicians, especially radiologists, are concerned about being replaced by AI in the future. In fact, as Nensa et al. put it, we should perceive the change as an opportunity rather than a threat (1). In the near future, or even now, medical imaging physicians should not simply describe what images show; they also need to attend to all available patient information and data, and conduct comprehensive analysis and interpretation for disease diagnosis and for assessment or prediction of therapeutic efficacy, which will greatly assist in providing more precise and personalized medical care for every patient.

Author Contributions

HF wrote the manuscript. XL, KS, XW, and CZ edited the manuscript. All authors have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Funding

This work was supported by the Opening Foundation of Hubei Key Laboratory of Molecular Imaging (2020fzyx003).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Nensa F, Demircioglu A, Rischpler C. Artificial intelligence in nuclear medicine. J Nucl Med. (2019) 60:29S−37S. doi: 10.2967/jnumed.118.220590


2. Presotto L, Bettinardi V, De Bernardi E, Belli ML, Cattaneo GM, Broggi S, et al. PET textural features stability and pattern discrimination power for radiomics analysis: an “ad-hoc” phantoms study. Phys Med. (2018) 50:66–74. doi: 10.1016/j.ejmp.2018.05.024


3. Xu H, Lv W, Zhang H, Ma J, Zhao P, Lu L. Evaluation and optimization of radiomics features stability to respiratory motion in 18F-FDG 3D PET imaging. Med Phys. (2021) 48:5165–78. doi: 10.1002/mp.15022


4. Seifert R, Weber M, Kocakavuk E, Rischpler C, Kersting D. Artificial intelligence and machine learning in nuclear medicine: future perspectives. Semin Nucl Med. (2021) 51:170–7. doi: 10.1053/j.semnuclmed.2020.08.003


5. Lee LIT, Kanthasamy S, Ayyalaraju RS, Ganatra R. The current state of artificial intelligence in medical imaging and nuclear medicine. BJR Open. (2019) 1:20190037. doi: 10.1259/bjro.20190037


6. Ninatti G, Kirienko M, Neri E, Sollini M, Chiti A. Imaging-based prediction of molecular therapy targets in NSCLC by radiogenomics and AI approaches: a systematic review. Diagnostics. (2020) 10:359. doi: 10.3390/diagnostics10060359


7. Lv Z, Fan J, Xu J, Wu F, Huang Q, Guo M, et al. Value of 18F-FDG PET/CT for predicting EGFR mutations and positive ALK expression in patients with non-small cell lung cancer: a retrospective analysis of 849 Chinese patients. Eur J Nucl Med Mol Imaging. (2018) 45:735–50. doi: 10.1007/s00259-017-3885-z


8. Oriuchi N, Sugawara S, Shiga T. Positron emission tomography for response evaluation in microenvironment-targeted anti-cancer therapy. Biomedicines. (2020) 8:371. doi: 10.3390/biomedicines8090371


9. Roll W, Weckesser M, Seifert R, Bodei L, Rahbar K. Imaging and liquid biopsy in the prediction and evaluation of response to PRRT in neuroendocrine tumors: implications for patient management. Eur J Nucl Med Mol Imaging. (2021) 48:4016–27. doi: 10.1007/s00259-021-05359-3


10. Zwanenburg A. Radiomics in nuclear medicine: robustness, reproducibility, standardization, and how to avoid data analysis traps and replication crisis. Eur J Nucl Med Mol Imaging. (2019) 46:2638–55. doi: 10.1007/s00259-019-04391-8


11. Rosenfeld A, Zemel R, Tsotsos JK. The Elephant in the Room. (2018). arXiv:1808.03305 [cs.CV].


12. Markus AF, Kors JA, Rijnbeek PR. The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies. J Biomed Inform. (2021) 113:103655. doi: 10.1016/j.jbi.2020.103655


13. Mayerhoefer ME, Materka A, Langs G, Haggstrom I, Szczypinski P, Gibbs P, et al. Introduction to radiomics. J Nucl Med. (2020) 61:488–95. doi: 10.2967/jnumed.118.222893


Keywords: artificial intelligence, molecular imaging, positron emission tomography, oncology, neurology

Citation: Fang H, Shi K, Wang X, Zuo C and Lan X (2022) Editorial: Artificial Intelligence in Positron Emission Tomography. Front. Med. 9:848336. doi: 10.3389/fmed.2022.848336

Received: 04 January 2022; Accepted: 07 January 2022;
Published: 31 January 2022.

Edited and reviewed by: Giorgio Treglia, Ente Ospedaliero Cantonale (EOC), Switzerland

Copyright © 2022 Fang, Shi, Wang, Zuo and Lan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Xiaoli Lan, xiaoli_lan@hust.edu.cn
