Machine Unlearning in Brain-Inspired Neural Network Paradigms
Provisionally Accepted
- 1 City University of Macau, Macao SAR, China
Machine unlearning, essential for data privacy and regulatory compliance, involves selectively removing specific information from a machine learning model. This study introduces an innovative method for machine unlearning in Spiking Neuron Models (SNMs), which more accurately replicate biological neural network behaviors. We adopt a hybrid approach that integrates selective synaptic retraining, synaptic pruning, and adaptive neuron thresholding. This method effectively eliminates targeted information while maintaining the neural network's overall integrity and performance. We conducted extensive experiments on various computer vision datasets to evaluate the impact of machine unlearning on critical performance metrics such as accuracy, precision, recall, and ROC AUC. Our findings confirm the practicality and efficiency of our approach, underscoring its applicability in real-world AI systems. This research contributes significantly to understanding machine unlearning in intricate neural architectures and paves the way for further developments in creating flexible and ethical AI models.
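The abstract describes the hybrid method only at a high level. The following is a minimal illustrative sketch, not the authors' implementation: it applies the three named steps (synaptic pruning, adaptive neuron thresholding, selective retraining) to a toy one-layer integrate-and-fire model. All function names, hyperparameters, and the Hebbian-style retraining rule are assumptions introduced here for illustration.

```python
import numpy as np

# Toy one-layer spiking model. Everything below is an illustrative
# assumption, not the paper's actual architecture or hyperparameters.
rng = np.random.default_rng(0)
n_in, n_out = 20, 5
W = rng.normal(0.0, 0.5, (n_out, n_in))   # synaptic weights
theta = np.ones(n_out)                    # per-neuron firing thresholds

def spikes(x, W, theta):
    """Integrate-and-fire step: a neuron spikes when its input
    current exceeds its (adaptive) threshold."""
    return (W @ x > theta).astype(float)

# Toy "forget" and "retain" inputs standing in for the two data splits.
forget_x = rng.random((10, n_in))
retain_x = rng.random((40, n_in))

# 1) Synaptic pruning: zero the synapses that contribute most to
#    activity on the forget set (largest mean |w_ij * x_j|).
contrib = np.abs(W[:, None, :] * forget_x[None, :, :]).mean(axis=1)
prune_mask = contrib > np.quantile(contrib, 0.9)  # prune top 10%
W[prune_mask] = 0.0

# 2) Adaptive thresholding: raise thresholds of neurons that still
#    fire on forget-set inputs, suppressing the unlearned response.
forget_rate = np.mean([spikes(x, W, theta) for x in forget_x], axis=0)
theta += 0.5 * forget_rate

# 3) Selective retraining: a simple Hebbian-style update on the
#    retain set only, recovering performance without re-exposing
#    the forgotten data.
lr = 0.01
for x in retain_x:
    s = spikes(x, W, theta)
    W += lr * np.outer(s, x)   # strengthen synapses active on retain data
    W[prune_mask] = 0.0        # keep pruned synapses removed
```

In this sketch the pruned synapses are re-zeroed after every retraining step so the removal is permanent, and thresholds are only ever raised, which trades some retained-data recall for stronger suppression of the forget set; the actual balance among the three steps would be tuned per dataset.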
Keywords: machine learning, spiking neural networks, data security, privacy protection, computer vision, brain-inspired ANNs
Received: 26 Dec 2023;
Accepted: 11 Mar 2024.
Copyright: © 2024 Wang, Ying and Pan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Dr. Zuobin Ying, City University of Macau, Macao SAR, China
Mr. Zijie Pan, City University of Macau, Macao SAR, China