CORRECTION article

Front. Artif. Intell., 16 December 2025

Sec. Machine Learning and Artificial Intelligence

Volume 8 - 2025 | https://doi.org/10.3389/frai.2025.1758660

Correction: Synchronizing LLM-based semantic knowledge bases via secure federated fine-tuning in semantic communication

  • 1. Shanghai Key Laboratory of Integrated Administration Technologies for Information Security, School of Computer Science, Shanghai Jiao Tong University, Shanghai, China

  • 2. National Key Laboratory of Security Communication, Chengdu, China


The funder National Key Laboratory of Security Communication Foundation [2023, 6142103042310] for Jianhua Li was erroneously omitted from the original article.

The correct Funding statement appears below.

The author(s) declared that financial support was received for this work and/or its publication. This work was funded by the National Key Laboratory of Security Communication Foundation [2023, 6142103042310] for Jianhua Li.

The original version of this article has been updated.

Statements

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Keywords

semantic communication, large language model, semantic knowledge bases, homomorphic encryption, federated fine-tuning

Citation

Li L, He Y, Xu R, Chen B, Han B, Zhao Y and Li J (2025) Correction: Synchronizing LLM-based semantic knowledge bases via secure federated fine-tuning in semantic communication. Front. Artif. Intell. 8:1758660. doi: 10.3389/frai.2025.1758660

Received

02 December 2025

Revised

03 December 2025

Accepted

03 December 2025

Published

16 December 2025

Approved by

Frontiers Editorial Office, Frontiers Media SA, Switzerland

Volume

8 - 2025

Copyright
*Correspondence: Rui Xu,
