
ORIGINAL RESEARCH article

Front. Phys.

Sec. Social Physics

This article is part of the Research Topic: Security, Governance, and Challenges of the New Generation of Cyber-Physical-Social Systems, Volume II

Federated semi-supervised learning based on feature alignment and knowledge distillation

Provisionally accepted
Zhe Ding1,2, Hao Yi3,4*, Wenrui Xie3,4, Yuxuan Xiao1, Qixu Wang1,2, Qing Chen5, Zhiguang Qin1, Dajiang Chen1
  • 1University of Electronic Science and Technology of China, Chengdu, China
  • 2Chengdu University of Information Technology, Chengdu, China
  • 3China Electronic Products Reliability and Environmental Testing Research Institute, Guangzhou, China
  • 4Key Laboratory of the Ministry of Industry and Information Technology for Performance and Reliability Evaluation of Software and Hardware for Information Technology Application Innovation Foundation, Guangzhou, China
  • 5Accelink Technologies Co., Ltd., Wuhan, China

The final, formatted version of the article will be published soon.

Federated learning (FL) has been successfully applied in fields related to cyber-physical-social systems (CPSS), owing to its ability to harness decentralized clients for training a global model while ensuring data privacy. Existing methods encounter two main obstacles: statistical distribution heterogeneity (non-IID data) among clients and the scarcity of labeled data. In this paper, we propose a federated semi-supervised learning (FSSL) model under the labels-at-server scenario, denoted FedAlign, which is tailored for distributed CPSS. FedAlign adopts a dual knowledge distillation framework to train the global model. On the client side, FedAlign integrates contrastive learning, knowledge distillation, and pseudo-labeling to train local models, ensuring that global knowledge is not overlooked while enabling clients to learn local knowledge. Meanwhile, on the server side, FedAlign utilizes maximum mean discrepancy (MMD) to generate a global feature space. Based on this feature space, FedAlign employs a knowledge distillation mechanism and supervised learning to aggregate local knowledge and update the global model. Finally, two classic datasets are used to evaluate the performance of FedAlign. The experimental results demonstrate that FedAlign outperforms traditional FSSL models.
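The abstract names maximum mean discrepancy (MMD) as the tool FedAlign uses to align client feature distributions into a global feature space. The paper's exact kernel and estimator are not given here, so the following is only a minimal sketch of the standard biased MMD² estimate with an RBF kernel (the kernel choice, bandwidth `sigma`, and function names are assumptions, not the authors' implementation):

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    """Pairwise RBF (Gaussian) kernel matrix between rows of x and y."""
    sq_dists = (
        np.sum(x**2, axis=1)[:, None]
        + np.sum(y**2, axis=1)[None, :]
        - 2.0 * x @ y.T
    )
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between samples x and y.

    Small values suggest the two feature batches come from similar
    distributions; a server could minimize this to pull client
    feature spaces toward a shared global one.
    """
    return (
        rbf_kernel(x, x, sigma).mean()
        + rbf_kernel(y, y, sigma).mean()
        - 2.0 * rbf_kernel(x, y, sigma).mean()
    )
```

As a sanity check, two batches drawn from the same distribution yield an MMD² near zero, while a shifted batch yields a clearly larger value; in an FSSL setting the inputs would be feature embeddings rather than raw data.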

Keywords: CPSS, Feature space alignment, Federated semi-supervised Learning, Knowledge distillation, Maximum mean discrepancy

Received: 14 Oct 2025; Accepted: 29 Nov 2025.

Copyright: © 2025 Ding, Yi, Xie, Xiao, Wang, Chen, Qin and Chen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Hao Yi

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.