
ORIGINAL RESEARCH article

Front. Syst. Biol.

Sec. Data and Model Integration

This article is part of the Research Topic: Curated Articles in Systems Biology Research.

BioMedKG: Multimodal Contrastive Representation Learning in Augmented BioMedical Knowledge Graphs

Provisionally accepted
  • 1The University of Alabama at Birmingham, Birmingham, United States
  • 2Washington University in St Louis, St. Louis, United States

The final, formatted version of the article will be published soon.

Abstract: Biomedical Knowledge Graphs (BKGs) integrate diverse datasets to elucidate complex relationships within the biomedical field. Effective link prediction on these graphs can uncover valuable connections, such as potential new drug-disease relations. We introduce a novel multimodal approach that unifies embeddings from specialized Language Models (LMs) with Graph Contrastive Learning (GCL) to enhance intra-entity relationships, while employing a Knowledge Graph Embedding (KGE) model to capture inter-entity relationships for effective link prediction. To address limitations in existing BKGs, we present PrimeKG++, an enriched knowledge graph incorporating multimodal data, including biological sequences and textual descriptions for each entity type. By combining semantic and relational information in a unified representation, our approach demonstrates strong generalizability, enabling accurate link prediction even for unseen nodes. Experimental results on PrimeKG++ and the DrugBank drug-target interaction dataset demonstrate the effectiveness and robustness of our method across diverse biomedical datasets. Our source code, pre-trained models, and data are publicly available at https://github.com/HySonLab/BioMedKG.
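To make the described pipeline concrete, the sketch below illustrates the general pattern the abstract outlines: node features from a language model are passed through a graph encoder, trained with an InfoNCE-style contrastive objective, and triples are scored with a standard KGE function (DistMult here). This is a minimal illustration under assumed names, dimensions, and toy data; it is not the authors' implementation, which is available at the repository linked above.

    # Minimal sketch (not the authors' code): fuse LM node features with a graph
    # encoder, add an InfoNCE contrastive term, and score links with DistMult.
    # All module names, dimensions, and toy data are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GraphEncoder(nn.Module):
        """One-layer mean-aggregation GNN over precomputed LM embeddings."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.lin = nn.Linear(in_dim, out_dim)

        def forward(self, x, adj):
            # adj: dense [N, N] row-normalised adjacency; x: [N, in_dim] LM features
            h = adj @ x                      # aggregate neighbour LM features
            return F.normalize(self.lin(h), dim=-1)

    def info_nce(z1, z2, tau=0.1):
        """Contrastive loss between two augmented views of the same nodes."""
        logits = z1 @ z2.t() / tau           # positives lie on the diagonal
        labels = torch.arange(z1.size(0))
        return F.cross_entropy(logits, labels)

    def distmult_score(h, r, t):
        """DistMult plausibility score for a (head, relation, tail) triple."""
        return (h * r * t).sum(-1)

    # Toy example: 4 nodes with 8-dim "LM" features and 2 relation types.
    x = torch.randn(4, 8)
    adj = torch.eye(4)                       # placeholder graph structure
    rel = nn.Embedding(2, 16)
    enc = GraphEncoder(8, 16)

    z = enc(x, adj)
    z_aug = enc(x + 0.01 * torch.randn_like(x), adj)   # lightly perturbed view
    loss = info_nce(z, z_aug)                           # intra-entity objective
    score = distmult_score(z[0], rel(torch.tensor(0)), z[1])  # link prediction

In a full system the contrastive and KGE objectives would be trained jointly over the knowledge graph's triples; the toy adjacency and random features above only stand in for PrimeKG++'s multimodal inputs.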

Keywords: Biomedical knowledge graphs, multimodal, graph representation learning, graph contrastive learning, medical language models, data augmentation, link prediction, drug repurposing

Received: 22 Jun 2025; Accepted: 13 Nov 2025.

Copyright: © 2025 Dang, Nguyen, Le and Hy. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Truong-Son Hy, sonpascal93@gmail.com

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.