METHODS article
Front. Big Data
Sec. Machine Learning and Artificial Intelligence
Volume 8 - 2025 | doi: 10.3389/fdata.2025.1505877
This article is part of the Research Topic: Interdisciplinary Approaches to Complex Systems: Highlights from FRCCS 2023/24
Fine-Tuning or Prompting on LLMs: Evaluating the Knowledge Graph Construction Task
Provisionally accepted
UMR6303 Laboratoire Interdisciplinaire Carnot de Bourgogne (ICB), Dijon, France
This paper explores Text-to-Knowledge Graph (T2KG) construction, assessing Zero-Shot Prompting, Few-Shot Prompting, and Fine-Tuning (FT) methods with Large Language Models. Through comprehensive experimentation with Llama2, Mistral, and Starling, we highlight the strengths of FT, emphasize the role of dataset size, and introduce nuanced evaluation metrics. Promising perspectives include synonym-aware metric refinement and data augmentation with Large Language Models. The study contributes valuable insights to KG construction methodologies, setting the stage for further advancements.
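To illustrate the Zero-Shot Prompting approach named in the abstract, the sketch below shows one plausible setup for T2KG extraction: a prompt template asking an LLM to emit (head, relation, tail) triples, and a parser for the model's text output. The template wording and the `parse_triples` helper are illustrative assumptions, not the authors' actual implementation; the model call itself is mocked so the snippet stays self-contained.

```python
# Hypothetical zero-shot T2KG sketch: prompt construction + triple parsing.
# The template and output format ("head | relation | tail") are assumptions
# for illustration, not the paper's exact prompt.

def build_zero_shot_prompt(text: str) -> str:
    """Build a zero-shot instruction asking an LLM for knowledge-graph triples."""
    return (
        "Extract knowledge-graph triples from the text below.\n"
        "Output one triple per line as: head | relation | tail\n\n"
        f"Text: {text}\n"
        "Triples:"
    )

def parse_triples(llm_output: str) -> list[tuple[str, str, str]]:
    """Parse 'head | relation | tail' lines from model output into triples."""
    triples = []
    for line in llm_output.strip().splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:
            triples.append((parts[0], parts[1], parts[2]))
    return triples

# Mocked model response (no API call) to show the round trip:
mock_response = "Marie Curie | born_in | Warsaw\nMarie Curie | field | physics"
print(parse_triples(mock_response))
```

A Few-Shot variant would simply prepend worked text-to-triple examples to the same template, while Fine-Tuning trains the model on such pairs directly.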
Keywords: Text-to-Knowledge Graph, Large language models, Zero-Shot Prompting, Few-Shot Prompting, Fine-tuning
Received: 03 Oct 2024; Accepted: 03 Jun 2025.
Copyright: © 2025 Ghanem and Cruz. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Hussam Ghanem, UMR6303 Laboratoire Interdisciplinaire Carnot de Bourgogne (ICB), Dijon, France
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.