Advancing Pre-training and Fine-tuning in Foundation Models

About this Research Topic

Submission deadlines

  • Manuscript Submission Deadline: 30 April 2026

This Research Topic is currently accepting articles.

Background

Foundation models have emerged as a transformative paradigm in artificial intelligence, achieving state-of-the-art performance across natural language processing, computer vision, speech, and multimodal tasks. Their success rests on a two-stage lifecycle: pre-training, where models learn general, high-capacity representations from massive and diverse datasets, and fine-tuning, where these representations are adapted to specific tasks, domains, or modalities. While pre-training captures broad patterns and foundational knowledge, fine-tuning ensures task-specific effectiveness, efficiency, and adaptability. Despite rapid advances, critical challenges remain, including high computational costs, limited cross-domain generalization, inefficient adaptation methods, and underexplored ethical and societal considerations such as fairness, transparency, and governance. Addressing these issues is essential to realizing foundation models that are practically impactful and can be deployed responsibly across diverse real-world applications.
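To make the two-stage lifecycle concrete, the following is a minimal, self-contained sketch in PyTorch: a toy encoder is first "pre-trained" with a generic reconstruction objective on unlabeled synthetic data, then "fine-tuned" with a task head on a small labeled set. All shapes, objectives, and hyperparameters here are illustrative assumptions, not a prescribed recipe.

```python
# Minimal sketch of the pre-train / fine-tune lifecycle on toy synthetic data.
# Every dimension, objective, and learning rate is an illustrative assumption.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stage 1: "pre-training" -- learn general representations with a generic
# objective (here, denoising reconstruction on unlabeled data).
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
decoder = nn.Linear(16, 32)
pretrain_opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

unlabeled = torch.randn(1024, 32)  # stands in for a large, diverse corpus
for _ in range(200):
    noisy = unlabeled + 0.1 * torch.randn_like(unlabeled)
    loss = nn.functional.mse_loss(decoder(encoder(noisy)), unlabeled)
    pretrain_opt.zero_grad(); loss.backward(); pretrain_opt.step()

# Stage 2: "fine-tuning" -- adapt the pretrained representations to a specific
# task by attaching a task head and training on a much smaller labeled set.
head = nn.Linear(16, 2)  # hypothetical two-class downstream task
finetune_opt = torch.optim.Adam(
    list(encoder.parameters()) + list(head.parameters()), lr=1e-4)

task_x = torch.randn(128, 32)
task_y = torch.randint(0, 2, (128,))
for _ in range(100):
    loss = nn.functional.cross_entropy(head(encoder(task_x)), task_y)
    finetune_opt.zero_grad(); loss.backward(); finetune_opt.step()
```

The same pattern scales up in practice: the pre-training corpus and objective become far larger and more general, while the fine-tuning stage may update all weights or, as discussed below, only a small fraction of them.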

Foundation models have revolutionized AI, yet their full potential is limited by several pressing challenges across both pre-training and fine-tuning stages. Pre-training large models is computationally intensive and environmentally costly, while fine-tuning often struggles with efficiency, cross-domain generalization, and ethical concerns such as bias, fairness, and transparency. This Research Topic aims to address these challenges by advancing methods that make foundation models more adaptable, efficient, and responsible.

We invite contributions that span the complete lifecycle of foundation models—from scalable and sustainable pre-training strategies to parameter-efficient, robust fine-tuning approaches. Research that bridges theory and practice, improves interpretability, or demonstrates impact in real-world applications (e.g., precision healthcare, autonomous robotics, personalized education, and scientific discovery) is particularly encouraged. By fostering methodological innovation and ethical awareness, this collection seeks to enable foundation models that are versatile, transparent, and aligned with human values.

This article collection invites original research, reviews, and perspectives that advance both the theoretical foundations and practical methodologies of pre-training and fine-tuning in foundation models. We particularly encourage work emphasizing parameter-efficient fine-tuning, sustainable and energy-conscious training, and robust cross-domain adaptation. Relevant topics include novel architectures, optimization strategies, multimodal learning, interpretability, fairness, governance, and transparency.
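As one concrete illustration of the parameter-efficient fine-tuning theme mentioned above, the sketch below implements a LoRA-style low-rank adapter in PyTorch: the base weights are frozen and only two small matrices are trained. The layer sizes, rank, and scaling factor are illustrative assumptions; real systems typically apply such adapters to the attention and MLP projections of a large pretrained model.

```python
# Minimal sketch of a LoRA-style low-rank adapter for parameter-efficient
# fine-tuning. Sizes, rank, and scaling are illustrative assumptions.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update: W + (alpha/r) * B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # base weights stay frozen
            p.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero-init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: only the adapter parameters are passed to the optimizer, so the
# trainable footprint is a small fraction of the full layer.
layer = LoRALinear(nn.Linear(512, 512))
trainable = [p for p in layer.parameters() if p.requires_grad]
opt = torch.optim.AdamW(trainable, lr=1e-4)

x, target = torch.randn(4, 512), torch.randn(4, 512)
loss = nn.functional.mse_loss(layer(x), target)
loss.backward(); opt.step()
```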

Authors are encouraged to highlight how their work connects pre-training and fine-tuning, addresses real-world challenges, and demonstrates impact through concrete applications, such as precision healthcare, safe autonomous robotics, personalized education, and accelerated scientific discovery. Submissions should clearly articulate methodological contributions, practical significance, and ethical considerations. All manuscripts will undergo rigorous peer review according to the journal’s standard policies, ensuring the highest quality and relevance to the Research Topic’s objectives.

Article types and fees

This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:

  • Brief Research Report
  • Clinical Trial
  • Community Case Study
  • Conceptual Analysis
  • Data Report
  • Editorial
  • FAIR² Data
  • FAIR² DATA Direct Submission
  • General Commentary

Articles that are accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to Authors, institutions, or funders.

Keywords: foundation models, parameter-efficient adaptation, pre-training and fine-tuning, cross-domain generalization, sustainable AI, interpretability and fairness

Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

Participating journals

Manuscripts can be submitted to this Research Topic via the main journal or any other participating journal.
