<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="editorial">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Med.</journal-id>
<journal-title>Frontiers in Medicine</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Med.</abbrev-journal-title>
<issn pub-type="epub">2296-858X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fmed.2023.1342374</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Medicine</subject>
<subj-group>
<subject>Editorial</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Editorial: Multi-modal learning and its application for biomedical data</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Liu</surname> <given-names>Jin</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/803405/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Zhang</surname> <given-names>Yu-Dong</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/212513/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Cai</surname> <given-names>Hongming</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/142613/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>School of Computer Science and Engineering, Central South University</institution>, <addr-line>Changsha</addr-line>, <country>China</country></aff>
<aff id="aff2"><sup>2</sup><institution>University of Leicester</institution>, <addr-line>Leicester</addr-line>, <country>United Kingdom</country></aff>
<aff id="aff3"><sup>3</sup><institution>South China University of Technology</institution>, <addr-line>Guangzhou</addr-line>, <country>China</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited and reviewed by: Alice Chen, Consultant, Potomac, MD, United States</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Jin Liu <email>liujin06&#x00040;csu.edu.cn</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>16</day>
<month>01</month>
<year>2024</year>
</pub-date>
<pub-date pub-type="collection">
<year>2023</year>
</pub-date>
<volume>10</volume>
<elocation-id>1342374</elocation-id>
<history>
<date date-type="received">
<day>21</day>
<month>11</month>
<year>2023</year>
</date>
<date date-type="accepted">
<day>26</day>
<month>12</month>
<year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2024 Liu, Zhang and Cai.</copyright-statement>
<copyright-year>2024</copyright-year>
<copyright-holder>Liu, Zhang and Cai</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<related-article id="RA1" related-article-type="commentary-article" xlink:href="https://www.frontiersin.org/research-topics/33861/multi-modal-learning-and-its-application-for-biomedical-data" ext-link-type="uri">Editorial on the Research Topic <article-title>Multi-modal learning and its application for biomedical data</article-title></related-article>
<kwd-group>
<kwd>multi-modal learning</kwd>
<kwd>biomedical data</kwd>
<kwd>machine learning</kwd>
<kwd>deep learning</kwd>
<kwd>precision medicine</kwd>
</kwd-group>
<counts>
<fig-count count="0"/>
<table-count count="0"/>
<equation-count count="0"/>
<ref-count count="0"/>
<page-count count="2"/>
<word-count count="1098"/>
</counts>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Precision Medicine</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<p>With the rapid development of biomedical testing methods and the explosive growth of biomedical data, multimodal data can better support the precise diagnosis of diseases; for example, medical images and histological information together can reflect a patient's condition more comprehensively. This provides a rare opportunity for researchers to carry out multimodal learning on biomedical data, to deeply mine and fuse these data, and to make new discoveries in medical research.</p>
<p>Among the articles received, <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fmed.2022.1025887">Asim et al.</ext-link> use multimodal learning to predict key miRNAs from miRNA sequences, and <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fmed.2022.1015278">Yan et al.</ext-link> improve the prediction of host-virus protein-protein interactions. These articles demonstrate the broad prospects of multimodal learning in molecular biology research. Meanwhile, the analysis of medical images also plays an important role in clinical applications. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fmed.2022.915243">Refaee et al.</ext-link> differentiate idiopathic pulmonary fibrosis (IPF) from non-IPF interstitial lung diseases (ILDs) through multimodal analysis. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fmed.2023.1038348">Sato et al.</ext-link> use multimodal learning to improve the quality of dose range prediction for proton therapy. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fmed.2021.771607">Jovel and Greiner</ext-link> discuss the application of machine learning methods in biomedical research. Together, these articles indicate that the development of multimodal learning techniques will greatly benefit biomedical data analysis and demonstrate the broad prospects of artificial intelligence techniques such as multimodal learning, deep learning, and machine learning in the biomedical field.</p>
<p>Although multimodal learning shows promise for biomedical data, many challenges remain when dealing with multimodal medical datasets; for example, <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fmed.2022.1036974">Park et al.</ext-link> improve algorithmic robustness to different imaging devices through generative adversarial networks. How to exploit the advantageous features of each modality, capture the intrinsic correlations between different data sources, avoid over-reliance on any single modality, and address model interpretability and robustness are problems that still require the attention of a wide range of researchers.</p>
<p>In summary, these articles explore the rapidly growing role of artificial intelligence (AI) in biomedical research. They utilize multimodal learning techniques to exploit the wealth of information available from different biomedical data sources, including molecular sequences, medical images, and clinical information. The breadth of applications, from predicting molecular interactions to improving diagnostic accuracy and treatment planning, highlights the potential of AI in healthcare.</p>
<p>The articles by <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fmed.2022.1025887">Asim et al.</ext-link> and <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fmed.2022.1015278">Yan et al.</ext-link> demonstrate the broad promise of multimodal learning in molecular biology, showing its ability to predict key miRNAs and to improve our understanding of host-virus protein interactions. These studies help unravel molecular complexity and open new avenues for treatment and intervention.</p>
<p><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fmed.2022.915243">Refaee et al.</ext-link> distinguished between lung diseases through multimodal analysis. By combining medical imaging with data from other modalities, this study supports accurate diagnosis, with potential implications for intervention and treatment options.</p>
<p><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fmed.2023.1038348">Sato et al.</ext-link> used multimodal learning to improve the quality of dose range prediction for proton therapy. This study shows the potential of multimodal learning in practical applications of AI-driven medicine, especially in radiation oncology.</p>
<p><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fmed.2021.771607">Jovel and Greiner</ext-link> comprehensively discussed the application of machine learning methods to biomedical research; their review is a valuable resource that highlights the diversity of AI approaches and offers many ideas for researchers exploring the integration of AI into different biomedical fields.</p>
<p>However, numerous challenges remain, as illustrated by <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fmed.2022.1036974">Park et al.</ext-link>, who improve the robustness of algorithms to different imaging devices through generative adversarial networks. Issues that need to be addressed include exploiting the strengths of each data modality, understanding intrinsic cross-modal correlations, avoiding over-reliance on specific modalities, and improving model interpretability and robustness, all of which highlight the sustained effort required to fully realize the potential of multimodal learning in biomedical data analysis.</p>
<p>These articles advance the field of AI in biomedical data analysis, not only demonstrating direct applications in biomedical research but also outlining the challenges that require continued innovation. As multimodal learning evolves, methods that integrate advanced technologies with clinical applications will drive precision medicine forward and take biomedical research to the next level.</p>
<sec sec-type="author-contributions" id="s1">
<title>Author contributions</title>
<p>JL: Writing&#x02014;original draft, Writing&#x02014;review &#x00026; editing. Y-DZ: Writing&#x02014;review &#x00026; editing. HC: Writing&#x02014;review &#x00026; editing.</p></sec>
</body>
<back>
<sec sec-type="funding-information" id="s2">
<title>Funding</title>
<p>The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This work was supported in part by the National Natural Science Foundation of China under Grant 62172444, in part by the Natural Science Foundation of Hunan Province under Grant 2022JJ30753, and in part by the Central South University Innovation-Driven Research Programme under Grant 2023CXQD018. This article was also supported by the Scientific Research Fund of Hunan Provincial Education Department (23A0020).</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s3">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</back>
</article>