<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="editorial">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Med. Technol.</journal-id>
<journal-title>Frontiers in Medical Technology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Med. Technol.</abbrev-journal-title>
<issn pub-type="epub">2673-3129</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fmedt.2022.989983</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Medical Technology</subject>
<subj-group>
<subject>Editorial</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Editorial: Artificial intelligence technology and the application in medical imaging</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Liu</surname> <given-names>Shuaiqi</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1201903/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Chen</surname> <given-names>Yewang</given-names></name>
<xref ref-type="aff" rid="aff4"><sup>4</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1197734/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Zhang</surname> <given-names>Yu-Dong</given-names></name>
<xref ref-type="aff" rid="aff5"><sup>5</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/212513/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>College of Electronic Information Engineering, Hebei University</institution>, <addr-line>Baoding</addr-line>, <country>China</country></aff>
<aff id="aff2"><sup>2</sup><institution>Machine Vision Technology Innovation Center of Hebei Province</institution>, <addr-line>Baoding</addr-line>, <country>China</country></aff>
<aff id="aff3"><sup>3</sup><institution>National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences</institution>, <addr-line>Beijing</addr-line>, <country>China</country></aff>
<aff id="aff4"><sup>4</sup><institution>School of Computer Science and Technology, Huaqiao University</institution>, <addr-line>Xiamen</addr-line>, <country>China</country></aff>
<aff id="aff5"><sup>5</sup><institution>School of Computing and Mathematical Sciences, University of Leicester</institution>, <addr-line>Leicester</addr-line>, <country>United Kingdom</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited and reviewed by: Nianyin Zeng, Xiamen University, China</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Shuaiqi Liu <email>shdkj-1918&#x00040;163.com</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Medtech Data Analytics, a section of the journal Frontiers in Medical Technology</p></fn></author-notes>
<pub-date pub-type="epub">
<day>29</day>
<month>07</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>4</volume>
<elocation-id>989983</elocation-id>
<history>
<date date-type="received">
<day>09</day>
<month>07</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>13</day>
<month>07</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2022 Liu, Chen and Zhang.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Liu, Chen and Zhang</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license> </permissions>
<related-article id="RA1" related-article-type="commentary-article" xlink:href="https://www.frontiersin.org/research-topics/19080/artificial-intelligence-technology-and-the-application-in-medical-imaging" ext-link-type="uri">Editorial on the Research Topic <article-title>Artificial intelligence technology and the application in medical imaging</article-title></related-article>
<kwd-group>
<kwd>artificial intelligence technology</kwd>
<kwd>medical imaging</kwd>
<kwd>deep learning</kwd>
<kwd>disease screening</kwd>
<kwd>pathological analysis</kwd>
</kwd-group>
<counts>
<fig-count count="0"/>
<table-count count="0"/>
<equation-count count="0"/>
<ref-count count="0"/>
<page-count count="0"/>
<word-count count="1070"/>
</counts>
</article-meta>
</front>
<body>
<p>In recent years, artificial intelligence (AI) has become a research hotspot in both academia and industry, and it has been successfully applied in the medical and health fields. AI outperforms human beings at processing large, complex, and uncertain data and at mining the latent information they contain. Medical data contain rich information about human health and are an important basis for doctors to make medical diagnoses. Faced with complex medical information and the growing demand for medical diagnosis, manual interpretation by doctors exposes many shortcomings: it is easily affected by subjective cognition, inefficient, and prone to misdiagnosis. With the continuous development of AI in lesion segmentation, early disease diagnosis, anatomical structure analysis, lesion detection, and other tasks, intelligent medical care has gradually become possible.</p>
<p>From a technical point of view, medical imaging diagnosis relies mainly on image recognition and deep learning. Through AI-based segmentation, feature extraction, quantitative analysis, and comparison of medical images, automated image-based diagnosis and treatment become achievable. AI can help interpret medical image data and address the low diagnostic efficiency and high misdiagnosis rate of manual reading, thereby improving diagnostic efficiency and accuracy and reducing doctors&#x00027; workload. In the past few years, as the technology has matured, major AI medical imaging companies have been expanding their business scope; breast cancer, stroke, and bone-age assessment of bones and joints have become the focus of market participants. During the COVID-19 pandemic, AI medical imaging has been involved in the quantitative analysis of lesions and the evaluation of treatment efficacy, becoming a key force in improving diagnostic efficiency and quality.</p>
<p>Magnetic resonance imaging (MRI) is a common and widely used medical imaging modality. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fncom.2021.738885">Lu et al.</ext-link> proposed a new cerebral microbleed (CMB) detection approach based on brain magnetic resonance images. Their automated microbleed detection approach, which combines a convolutional neural network (CNN), an extreme learning machine (ELM), and the bat algorithm (BA), achieved state-of-the-art performance.</p>
<p>Electroencephalography (EEG) is another effective non-invasive tool, one that directly reflects changes in human brain activity. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fncom.2021.743426">An et al.</ext-link> constructed an EEG emotion recognition algorithm based on 3D feature fusion and a convolutional autoencoder (CAE). Specifically, the differential entropy (DE) features of EEG signals are fused to construct 3D features, which are then fed into the CAE for emotion recognition.</p>
<p>Considering that the various modalities of medical imaging are complementary, medical image fusion is of indispensable value. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fncom.2021.803724">Liu S. et al.</ext-link> proposed a structure-preservation-based two-scale multimodal medical image fusion algorithm that takes advantage of structure-preserving filters and deep learning.</p>
<p>The articles mentioned above are based on CNNs; spiking convolutional neural networks (SCNNs), however, are more energy-efficient than CNNs owing to the sparsity of their data flow and their event-driven working mechanism. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fncom.2021.697469">Chen et al.</ext-link> proposed four stopping criteria to reduce the latency of SCNNs during the inference phase. Simulation results demonstrated that these stopping criteria significantly reduce total inference latency without obvious accuracy loss. Moreover, this work opens a new direction for medical image processing.</p>
<p>There are two review papers in this Research Topic. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fncom.2021.758212">Liu H. et al.</ext-link> describe the common steps of an EEG-based emotion recognition algorithm, from data acquisition, preprocessing, feature extraction, and feature selection to classification, and review existing EEG-based emotion recognition methods and their performance. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fmedt.2021.767836">Luo et al.</ext-link> systematically reviewed the recent literature on deep learning-based segmentation methods for stomatological images and their clinical applications. Their review shows that deep learning has great potential and can further promote the transformation of clinical practice from experientialism to digitization, precision, and individuation.</p>
<p>The ultimate goal of this Research Topic is to promote research and development of deep learning for multimodal biomedical images by publishing high-quality research articles, reviews, or perspectives, among other article types, in this rapidly growing interdisciplinary field. We thank the authors of the papers published in this Research Topic for their valuable contributions and the referees for their rigorous review.</p>
<sec id="s1">
<title>Author contributions</title>
<p>All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.</p></sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p></sec>
<sec sec-type="disclaimer" id="s2">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
</article>