ORIGINAL RESEARCH article

Front. Neurorobot.

Volume 19 - 2025 | doi: 10.3389/fnbot.2025.1705970

This article is part of the Research Topic: Towards a Novel Paradigm in Brain-Inspired Computer Vision, Volume II.

Effective and Efficient Self-supervised Masked Model Based on Mixed Feature Training

Provisionally accepted
  • 1 Guangzhou University, Guangzhou, China
  • 2 Scientific Research Department, Guangzhou Preschool Teachers College, Guangzhou, China
  • 3 Library, Guangdong University of Foreign Studies, Guangzhou, China

The final, formatted version of the article will be published soon.

Inspired by Masked Language Modeling (MLM), Masked Image Modeling (MIM) uses an attention mechanism to perform masked training on images. However, reconstructing the masked regions of a single image requires many iterations and substantial computational resources, resulting in high computational complexity and significant time costs. To address this issue, we propose an Effective and Efficient self-supervised Masked model based on Mixed feature training (EESMM). First, we stack two images for encoding and feed the fused features into the network, which not only reduces computational complexity but also enables the learning of richer features. Second, during decoding, we derive the features corresponding to each original image from the decoded features of the two input images and their mixture, and then construct a corresponding loss function to strengthen the feature representation. EESMM significantly reduces pre-training time without sacrificing accuracy, reaching 83% accuracy on ImageNet in just 363 hours on four V100 GPUs, about one-tenth of the training time required by SimMIM. This validates that the method can substantially accelerate pre-training without noticeable performance degradation.
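The core idea described above lends itself to a compact sketch. The following is a minimal, hypothetical PyTorch illustration of mixed-feature masked training: two images are mixed into one input, a fraction of the resulting tokens is masked, and the decoded patches are supervised against both originals so that a single forward pass carries learning signal for each image. All module names, the mixing scheme (a simple average), the masking strategy, and the loss are illustrative assumptions, not the authors' implementation; the paper's actual EESMM model builds on a Swin Transformer backbone.

```python
# Hypothetical sketch of mixed-feature masked training (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EESMMSketch(nn.Module):
    """Toy stand-in: encode a mix of two images, decode, and supervise the
    decoded patches against both originals in one pass."""

    def __init__(self, dim=96, patch=4):
        super().__init__()
        # A patch embedding plus a tiny Transformer stands in for the
        # Swin-style backbone mentioned in the keywords.
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.decoder = nn.Linear(dim, patch * patch * 3)  # pixel head
        self.patch = patch

    def patchify(self, x):
        # B,C,H,W -> B,N,patch*patch*C (pixel targets per token)
        p = self.patch
        b, c, _, _ = x.shape
        x = x.unfold(2, p, p).unfold(3, p, p)            # B,C,H/p,W/p,p,p
        return x.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p)

    def forward(self, img_a, img_b, mask_ratio=0.6):
        # 1) Mix the two images so one forward pass carries both.
        #    (A plain average; the paper's exact mixing scheme may differ.)
        mixed = 0.5 * (img_a + img_b)
        tokens = self.embed(mixed).flatten(2).transpose(1, 2)  # B,N,dim

        # 2) Randomly mask a fraction of the tokens (zeroed, for brevity).
        mask = torch.rand(tokens.shape[:2], device=tokens.device) < mask_ratio
        tokens = tokens.masked_fill(mask.unsqueeze(-1), 0.0)

        # 3) Encode the mixed tokens and decode pixels for every position.
        decoded = self.decoder(self.encoder(tokens))           # B,N,p*p*3

        # 4) Supervise decoded patches against BOTH source images, restricted
        #    to masked positions (a real loss would normalize by mask count).
        ta, tb = self.patchify(img_a), self.patchify(img_b)
        m = mask.unsqueeze(-1).float()
        loss_a = (F.l1_loss(decoded, ta, reduction="none") * m).mean()
        loss_b = (F.l1_loss(decoded, tb, reduction="none") * m).mean()
        return loss_a + loss_b

# Usage: two random tensors stand in for a real image batch.
model = EESMMSketch()
a, b = torch.randn(2, 3, 32, 32), torch.randn(2, 3, 32, 32)
loss = model(a, b)
loss.backward()
```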

Keywords: masked image modeling, self-supervised learning, neuromorphic computing, Swin Transformer, attention mechanism

Received: 15 Sep 2025; Accepted: 11 Oct 2025.

Copyright: © 2025 Kang, Liu and Cai. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Chunliu Cai, hoopallen@126.com

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.