<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="editorial">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neuroinform.</journal-id>
<journal-title>Frontiers in Neuroinformatics</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neuroinform.</abbrev-journal-title>
<issn pub-type="epub">1662-5196</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fninf.2022.1079240</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Editorial</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Editorial: Weakly supervised deep learning-based methods for brain image analysis</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Zhu</surname> <given-names>Hancan</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/683857/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Liu</surname> <given-names>Mingxia</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/696936/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Tang</surname> <given-names>Zhenyu</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1136822/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Wang</surname> <given-names>Shuai</given-names></name>
<xref ref-type="aff" rid="aff4"><sup>4</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1478195/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>School of Mathematics, Physics and Information, Shaoxing University</institution>, <addr-line>Shaoxing</addr-line>, <country>China</country></aff>
<aff id="aff2"><sup>2</sup><institution>Department of Radiology and BRIC, University of North Carolina at Chapel Hill</institution>, <addr-line>Chapel Hill, NC</addr-line>, <country>United States</country></aff>
<aff id="aff3"><sup>3</sup><institution>School of Computer Science and Engineering, Beihang University</institution>, <addr-line>Beijing</addr-line>, <country>China</country></aff>
<aff id="aff4"><sup>4</sup><institution>School of Mechanical, Electrical and Information Engineering, Shandong University</institution>, <addr-line>Weihai</addr-line>, <country>China</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited and reviewed by: Sean L. Hill, Krembil Centre for Neuroinformatics, CAMH, Canada</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Shuai Wang <email>shuaiwang&#x00040;sdu.edu.cn</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>11</day>
<month>11</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>16</volume>
<elocation-id>1079240</elocation-id>
<history>
<date date-type="received">
<day>25</day>
<month>10</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>01</day>
<month>11</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2022 Zhu, Liu, Tang and Wang.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Zhu, Liu, Tang and Wang</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license> </permissions>
<related-article id="RA1" related-article-type="commentary-article" xlink:href="https://www.frontiersin.org/research-topics/25984/weakly-supervised-deep-learning-based-methods-for-brain-image-analysis" ext-link-type="uri">Editorial on the Research Topic <article-title>Weakly supervised deep learning-based methods for brain image analysis</article-title></related-article>
<kwd-group>
<kwd>brain image analysis</kwd>
<kwd>deep learning</kwd>
<kwd>weakly supervised learning</kwd>
<kwd>semi-supervised learning</kwd>
<kwd>image segmentation</kwd>
<kwd>image reconstruction</kwd>
</kwd-group>
<counts>
<fig-count count="0"/>
<table-count count="0"/>
<equation-count count="0"/>
<ref-count count="3"/>
<page-count count="2"/>
<word-count count="1240"/>
</counts>
</article-meta>
</front>
<body>
<p>In recent years, deep learning-based methods have been widely used in brain image analysis and have achieved excellent performance in many tasks, including image segmentation, image reconstruction, and disease classification (Shen et al., <xref ref-type="bibr" rid="B3">2017</xref>). Most existing deep learning-based methods rely on large-scale datasets with high-quality full annotations. However, acquiring such data is usually time-consuming and requires rich expert experience. Moreover, because of individual differences in observer experience and understanding, large-scale, fully annotated datasets may suffer from substantial intra- and inter-observer variability, which can hinder their application in brain image analysis. In contrast, weak yet low-cost annotations (such as coarse annotations, partial annotations, or small-sample annotations) are much easier to collect than high-quality, fully detailed annotations. As a result, there is a strong demand for innovative deep learning-based methods that can efficiently learn from weakly-annotated data and achieve performance competitive with that obtained from fully annotated data (Campanella et al., <xref ref-type="bibr" rid="B1">2019</xref>).</p>
<p>The aim of this Research Topic is to develop effective deep learning-based methods and techniques supervised by different forms of weakly-annotated data across different brain image analysis tasks. When dealing with weakly-annotated data, the biggest challenge is &#x0201C;label ambiguity,&#x0201D; which introduces substantial noise into model learning. Although considerable research has been devoted to this problem, it remains insufficiently studied. Fortunately, the rapid development of advanced deep learning techniques continues to give rise to new solutions (Laleh et al., <xref ref-type="bibr" rid="B2">2022</xref>).</p>
<p>The present Research Topic includes 10 original research papers on machine learning-based algorithms (most of them deep learning methods) and applications that use limited data for brain image analysis. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fninf.2021.782262">Liu, Zhang, et al.</ext-link> proposed a novel co-optimization learning network (COL-Net) that trains a segmentation model using limited labeled samples together with unlabeled samples. Experimental results on the segmentation of penumbra tissues show that COL-Net outperforms most supervised segmentation methods. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fninf.2021.789295">Shao et al.</ext-link> compared two radiomic models in predicting the progression of white matter hyperintensity (WMH) and its speed of progression using limited conventional magnetic resonance images. For resting-state fMRI-based autism spectrum disorder (ASD) diagnosis with limited data, <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fninf.2021.802305">Chu et al.</ext-link> developed a multi-scale graph representation learning (MGRL) framework to capture the potentially complementary topological information of functional connectivity networks at different spatial scales. Based on limited functional and structural MRIs, <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fninf.2022.856175">Wang et al.</ext-link> proposed an adaptive multimodal neuroimage integration (AMNI) framework for automated major depressive disorder detection, evaluated on 533 subjects with resting-state functional and T1-weighted magnetic resonance imaging (MRI) data. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fninf.2022.859973">Liu, Wang, et al.</ext-link> developed a local-long-range hybrid features network (LLRHNet) for medical image segmentation, which inherits the merits of the iterative aggregation mechanism and transformer technology.</p>
<p>Limited by hardware conditions and other factors, high-resolution (HR) images are usually hard to obtain. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fninf.2022.880301">Jia et al.</ext-link> proposed a novel super-resolution model for single 3D medical images that uses non-local low-rank tensor Tucker decomposition to exploit the non-local self-similarity prior of the data. Alzheimer&#x00027;s disease (AD) is one of the most prevalent health threats to the elderly, and early, accurate diagnosis is essential for its effective prevention and treatment. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fninf.2022.886365">Li et al.</ext-link> proposed building a brain network for each subject for AD prediction and analysis, assembling several commonly used neuroimaging modalities in a simple and principled way, including structural MRI, diffusion-weighted imaging (DWI), and amyloid positron emission tomography (PET). Accurate labeling is essential for supervised deep learning methods. However, it is almost impossible to manually annotate thousands of images without error, so most datasets contain many labeling errors. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fninf.2022.895290">Liu, Fan, et al.</ext-link> proposed a local label point correction (LLPC) method to improve annotation quality for edge detection and image segmentation tasks. U-Net has been widely used in medical image segmentation in recent years. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fninf.2022.911679">Lu et al.</ext-link> analyzed the contribution of each part of the U-Net to image segmentation performance and proposed a more efficient architecture, called Half-UNet. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fninf.2022.973698">Shi et al.</ext-link> proposed a deep learning network with subregion partition for predicting metastatic origins and EGFR/HER2 status in patients with brain metastasis.</p>
<p>In conclusion, the 10 papers included in this Research Topic provide new ideas on how to effectively use limited data to build reliable machine learning models for brain image analysis.</p>
<sec id="s1">
<title>Author contributions</title>
<p>All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p></sec>
<sec sec-type="disclaimer" id="s2">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p></sec>
</body>
<back>
<ack><p>We thank all the authors who participated in this Research Topic and contributed their manuscripts, and all the reviewers who spent their valuable time improving the quality of the manuscripts. We would also like to thank the Frontiers team, who helped us organize this topic.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Campanella</surname> <given-names>G.</given-names></name> <name><surname>Hanna</surname> <given-names>M. G.</given-names></name> <name><surname>Geneslaw</surname> <given-names>L.</given-names></name> <name><surname>Miraflor</surname> <given-names>A.</given-names></name> <name><surname>Werneck Krauss Silva</surname> <given-names>V.</given-names></name> <name><surname>Busam</surname> <given-names>K. J.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Clinical-grade computational pathology using weakly supervised deep learning on whole slide images</article-title>. <source>Nat. Med</source>. <volume>25</volume>, <fpage>1301</fpage>&#x02013;<lpage>1309</lpage>. <pub-id pub-id-type="doi">10.1038/s41591-019-0508-1</pub-id><pub-id pub-id-type="pmid">31308507</pub-id></citation></ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Laleh</surname> <given-names>N. G.</given-names></name> <name><surname>Muti</surname> <given-names>H. S.</given-names></name> <name><surname>Loeffler</surname> <given-names>C. M.</given-names></name> <name><surname>Echle</surname> <given-names>A.</given-names></name> <name><surname>Saldanha</surname> <given-names>O. L.</given-names></name> <name><surname>Mahmood</surname> <given-names>F.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>Benchmarking weakly-supervised deep learning pipelines for whole slide classification in computational pathology</article-title>. <source>Med. Image Anal</source>. <volume>79</volume>, <fpage>102474</fpage>. <pub-id pub-id-type="doi">10.1016/j.media.2022.102474</pub-id><pub-id pub-id-type="pmid">36130464</pub-id></citation></ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shen</surname> <given-names>D.</given-names></name> <name><surname>Wu</surname> <given-names>G.</given-names></name> <name><surname>Suk</surname> <given-names>H. I.</given-names></name></person-group> (<year>2017</year>). <article-title>Deep learning in medical image analysis</article-title>. <source>Ann. Rev. Biomed. Eng</source>. <volume>19</volume>, <fpage>221</fpage>&#x02013;<lpage>248</lpage>. <pub-id pub-id-type="doi">10.1146/annurev-bioeng-071516-044442</pub-id><pub-id pub-id-type="pmid">28301734</pub-id></citation></ref>
</ref-list> 
</back>
</article>