
CORRECTION article

Front. Artif. Intell., 16 December 2025

Sec. Machine Learning and Artificial Intelligence

Volume 8 - 2025 | https://doi.org/10.3389/frai.2025.1758660

This article is part of the Research Topic: AI-Driven Architectures and Algorithms for Secure and Scalable Big Data Systems

Correction: Synchronizing LLM-based semantic knowledge bases via secure federated fine-tuning in semantic communication


Long Li1, Yuanhang He2, Rui Xu1*, Bei Chen1, Boyu Han1, Yuanyuan Zhao1 and Jianhua Li1
  • 1Shanghai Key Laboratory of Integrated Administration Technologies for Information Security, School of Computer Science, Shanghai Jiao Tong University, Shanghai, China
  • 2National Key Laboratory of Security Communication, Chengdu, China

A Correction on
Synchronizing LLM-based semantic knowledge bases via secure federated fine-tuning in semantic communication

by Li, L., He, Y., Xu, R., Chen, B., Han, B., Zhao, Y., and Li, J. (2025). Front. Artif. Intell. 8:1690950. doi: 10.3389/frai.2025.1690950

The funder National Key Laboratory of Security Communication Foundation [2023, 6142103042310], which supported Jianhua Li, was erroneously omitted from the original article.

The correct Funding statement appears below.

The author(s) declared that financial support was received for this work and/or its publication. This work was funded by National Key Laboratory of Security Communication Foundation [2023, 6142103042310] for Jianhua Li.

The original version of this article has been updated.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Keywords: semantic communication, large language model, semantic knowledge bases, homomorphic encryption, federated fine-tuning

Citation: Li L, He Y, Xu R, Chen B, Han B, Zhao Y and Li J (2025) Correction: Synchronizing LLM-based semantic knowledge bases via secure federated fine-tuning in semantic communication. Front. Artif. Intell. 8:1758660. doi: 10.3389/frai.2025.1758660

Received: 02 December 2025; Revised: 03 December 2025;
Accepted: 03 December 2025; Published: 16 December 2025.

Approved by:

Frontiers Editorial Office, Frontiers Media SA, Switzerland

Copyright © 2025 Li, He, Xu, Chen, Han, Zhao and Li. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Rui Xu, Diego1998@sjtu.edu.cn
