ORIGINAL RESEARCH article
Front. Pharmacol.
Sec. Translational Pharmacology
Volume 16 - 2025 | doi: 10.3389/fphar.2025.1589788
This article is part of the Research Topic: Artificial Intelligence in Traditional Medicine Research and Application
Improving Drug-Drug Interaction Prediction via In-Context Learning and Judging with Large Language Models
Provisionally accepted
- 1Center for Drug Evaluation and Inspection for Heilongjiang Province, Harbin, Heilongjiang Province, China
- 2School of Medicine and Health, Harbin Institute of Technology, Harbin, Heilongjiang Province, China
- 3Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences (CAS), Suzhou, Jiangsu Province, China
- 4Faculty of Computing, Harbin Institute of Technology, Harbin, Heilongjiang Province, China
- 5Harbin Institute of Technology Zhengzhou Research Institute, Zhengzhou, China
Large Language Models (LLMs), recognized for their advanced natural language processing capabilities, have been successfully employed across various domains. However, their effectiveness in addressing challenges in drug discovery has yet to be fully elucidated. In this paper, we propose a novel LLM-based method for drug-drug interaction (DDI) prediction, named DDI-JUDGE, which integrates in-context learning (ICL) prompts with an LLM-based judging step. We introduce an ICL prompt paradigm that selects high-similarity samples as positive and negative examples, enabling the model to learn effectively and generalize knowledge. We further present an ICL-based prompt template that structures the input, the prediction task, relevant factors, and examples, leveraging the pre-trained knowledge and contextual understanding of LLMs to enhance DDI prediction. To refine the predictions, we employ GPT-4 as a discriminator that assesses the relevance of the predictions generated by multiple LLMs; the individual results are then combined with a weighted fusion scheme to improve overall accuracy. We compared DDI-JUDGE with five state-of-the-art LLMs in both zero-shot and few-shot scenarios and achieved the best performance in both settings. These results demonstrate the potential of LLMs for predicting DDIs and, more broadly, for advancing drug discovery. The code for DDI-JUDGE is available at https://github.com/zcc1203/ddi-judge.
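To make the pipeline described above concrete, the short Python sketch below illustrates, under our own simplifying assumptions, the two ideas the abstract names: an ICL prompt built from high-similarity positive and negative examples, and a weighted fusion of several LLMs' predictions using judge-assigned relevance scores. It is not the authors' released implementation; all identifiers (Example, build_icl_prompt, fuse_predictions) and the toy numbers are hypothetical, and real runs would replace the stand-in values with calls to the candidate LLMs and to GPT-4 as the judge.

    # Illustrative sketch of the DDI-JUDGE idea (hypothetical names, not the released code).
    from dataclasses import dataclass
    from typing import List, Sequence

    @dataclass
    class Example:
        drug_a: str
        drug_b: str
        interacts: bool  # known label; used as a positive or negative in-context example

    def build_icl_prompt(drug_a: str, drug_b: str, examples: Sequence[Example]) -> str:
        """Structure the task, relevant factors, similar labelled examples, and the query
        into one prompt, mirroring the ICL template described in the abstract."""
        lines = [
            "Task: predict whether the two drugs interact (yes/no).",
            "Relevant factors: mechanism of action, metabolism, targets.",
            "Examples:",
        ]
        for ex in examples:
            label = "yes" if ex.interacts else "no"
            lines.append(f"- {ex.drug_a} + {ex.drug_b} -> {label}")
        lines.append(f"Query: {drug_a} + {drug_b} -> ?")
        return "\n".join(lines)

    def fuse_predictions(predictions: List[float], judge_scores: List[float]) -> float:
        """Weighted fusion: normalise the judge's relevance scores and use them as weights
        over each LLM's predicted DDI probability."""
        total = sum(judge_scores)
        if total == 0:
            return sum(predictions) / len(predictions)  # fall back to a plain average
        weights = [s / total for s in judge_scores]
        return sum(w * p for w, p in zip(weights, predictions))

    # Toy usage with stand-in numbers:
    examples = [Example("warfarin", "aspirin", True), Example("metformin", "vitamin C", False)]
    prompt = build_icl_prompt("simvastatin", "clarithromycin", examples)
    fused = fuse_predictions(predictions=[0.9, 0.7, 0.55], judge_scores=[0.8, 0.6, 0.2])
    print(prompt)
    print(f"Fused DDI probability: {fused:.2f}")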
Keywords: large language models, drug-drug interactions, in-context learning, zero-shot, few-shot
Received: 08 Mar 2025; Accepted: 20 May 2025.
Copyright: © 2025 Qi, Li, Zhang and Zhao. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Chengcheng Zhang, Faculty of Computing, Harbin Institute of Technology, Harbin, 150001, Heilongjiang Province, China
Tianyi Zhao, School of Medicine and Health, Harbin Institute of Technology, Harbin, 150000, Heilongjiang Province, China
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.