ORIGINAL RESEARCH article

Front. Plant Sci.

Sec. Sustainable and Intelligent Phytoprotection

This article is part of the Research Topic "Advanced Imaging and Phenotyping for Sustainable Plant Science and Precision Agriculture 4.0".

A Federated Learning with Large-Small Kernel Attention Network for Image Classification

Provisionally accepted
Tianzhe Liu1, Jing Xie2, Heng Dong3*
  • 1Fujian Police College, Fuzhou, China
  • 2Administration Logistics Management Center of Fuzhou Customs District, Fuzhou, China
  • 3Fuzhou Institute of Technology, Fuzhou, China

The final, formatted version of the article will be published soon.

Image data acquisition often involves cross-platform, cross-device, and multi-source heterogeneous data, posing challenges for data security and privacy protection in collaborative learning. Traditional centralized learning paradigms struggle to balance multi-institutional collaboration needs with stringent data security requirements, while existing Federated Learning (FL) frameworks frequently exhibit significant performance degradation when handling the complex features inherent in images. To address these gaps, this study introduces FL-LSNet, a novel federated learning framework integrated with a lightweight Large-Small Network (LSNet). Built upon a robust client-server architecture, FL-LSNet safeguards local data privacy through decentralized preprocessing while addressing the challenge of long-tailed data via dynamic weight adjustment mechanisms within the server-side aggregator. The core of the framework, LSNet, implements a "See Large, Focus Small" strategy: (1) Large Kernel Perceptrons (LKP) capture global contextual dependencies, and (2) Small Kernel Attention (SKA) facilitates fine-grained local feature fusion. Empirical results demonstrate that LSNet reduces computational overhead by 7% compared with Swin Transformer, while enhancing feature representation capability by 19% relative to the baseline model. Extensive evaluations across three diverse datasets reveal that FL-LSNet consistently outperforms state-of-the-art federated algorithms, including FedAvg and MOON, achieving accuracies ranging from 84.32% to 98.92%. Ablation studies further validate the efficacy of the FedAvg-LSNet integration, which surpasses the baseline by 6.15% and achieves performance metrics exceeding 98%. This research establishes a scalable paradigm for multi-stakeholder data collaboration and offers new insights into the lightweight vertical adaptation of federated learning in public safety, dynamic monitoring, risk early warning, intelligent agriculture, and medical diagnosis.
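The server-side aggregation the abstract describes follows the general FedAvg pattern: client updates are combined into a global model, weighted by each client's contribution. The sketch below illustrates only that baseline weighted-averaging step, using local sample counts as weights; the paper's specific dynamic weight adjustment for long-tailed data is not detailed in the abstract, and all function and variable names here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of FedAvg-style server-side aggregation (hypothetical names).
# Each client's parameter vector is averaged into the global model,
# weighted by that client's local training-set size.

def fedavg_aggregate(client_params, client_sizes):
    """Return the sample-size-weighted average of client parameter vectors.

    client_params: list of parameter lists, one per client
    client_sizes:  number of local training samples per client
    """
    total = sum(client_sizes)
    n_params = len(client_params[0])
    global_params = [0.0] * n_params
    for params, size in zip(client_params, client_sizes):
        weight = size / total  # clients with more data contribute more
        for i, p in enumerate(params):
            global_params[i] += weight * p
    return global_params

# Two clients with unequal data volumes: weights 0.25 and 0.75
clients = [[1.0, 2.0], [3.0, 4.0]]
sizes = [1, 3]
print(fedavg_aggregate(clients, sizes))  # [2.5, 3.5]
```

In a long-tailed setting, a dynamic scheme such as the one the abstract mentions would replace the static `size / total` weights with values that are re-estimated each round, for example based on per-class representation at each client.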

Keywords: Attention network, Federated learning, image classification, Large-Small Kernel Attention, Lightweight

Received: 08 Jan 2026; Accepted: 30 Jan 2026.

Copyright: © 2026 Liu, Xie and Dong. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Heng Dong

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.