AUTHOR=Konishi Masataka, Igarashi Kei M., Miura Keiji
TITLE=Biologically plausible local synaptic learning rules robustly implement deep supervised learning
JOURNAL=Frontiers in Neuroscience
VOLUME=17
YEAR=2023
URL=https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2023.1160899
DOI=10.3389/fnins.2023.1160899
ISSN=1662-453X
ABSTRACT=In deep neural networks, representational learning in the middle layers is essential for efficient learning. However, the currently prevailing backpropagation (BP) learning rule is not necessarily biologically plausible and cannot be implemented in the brain in its current form. Therefore, to elucidate the learning rules that may be used in the brain, it is critical to establish biologically plausible learning rules that can learn realistic memory tasks. For example, learning rules that achieve performances worse than those of animals observed in experimental studies are unlikely to be the computations used in real brains and should be ruled out. Using numerical simulations, we here developed biologically plausible learning rules that solve a task mimicking a laboratory experiment in which mice learn to predict the correct reward amount. While the extreme learning machine (ELM) and weight perturbation (WP) learning rules performed worse than mice, the feedback alignment (FA) rule achieved a performance equal to that of BP. To obtain still more biologically plausible models, we developed a variant of FA termed FA_Ex-100%, which implements direct dopamine inputs that provide error signals locally in the layer of focus, as found in the mouse entorhinal cortex (Lee et al., 2021). The performance of FA_Ex-100% was also comparable to that of conventional BP. Finally, we tested whether FA_Ex-100% is robust against rule perturbations and biologically inevitable noise. FA_Ex-100% worked even under perturbation, presumably because the correct prediction error (e.g., dopaminergic signals) serves as a teaching signal that can recalibrate the network in the next step whenever a perturbation creates a deviation. These results suggest that simplified and biologically plausible learning rules like FA_Ex-100% can robustly realize deep supervised learning when the error signal, possibly conveyed by dopaminergic neurons, is accurate.
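The abstract contrasts backpropagation with feedback alignment, in which the transposed forward weights of the backward pass are replaced by fixed random feedback matrices, and FA_Ex-100% further restricts the error signal to direct, layer-local dopamine-like inputs. The sketch below illustrates only the basic FA-versus-BP update for a two-layer regression network; the network sizes, learning rate, and toy reward-prediction data are illustrative assumptions, not the authors' task or implementation.

```python
# Minimal sketch of feedback alignment (FA) vs. backpropagation (BP) for a
# two-layer regression network. All sizes, data, and hyperparameters are
# illustrative assumptions, not the paper's actual setup.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 10, 20, 1
W1 = rng.normal(0, 0.1, (n_hid, n_in))   # input -> hidden weights
W2 = rng.normal(0, 0.1, (n_out, n_hid))  # hidden -> output weights
B = rng.normal(0, 0.1, (n_hid, n_out))   # fixed random feedback matrix (FA)

lr = 0.01
use_fa = True  # set to False for standard backpropagation

# Toy "reward prediction" data: the target reward is a fixed linear
# function of the cue vector (purely illustrative).
w_true = rng.normal(0, 0.2, n_in)
X = rng.normal(0, 1, (500, n_in))
y = X @ w_true

for epoch in range(200):
    for x, target in zip(X, y):
        h = np.tanh(W1 @ x)      # hidden activity
        out = W2 @ h             # predicted reward (linear readout)
        err = out - target       # prediction error

        # Backward pass: FA routes the error to the hidden layer through the
        # fixed random matrix B; BP uses the transpose of W2 instead.
        feedback = B if use_fa else W2.T
        delta_h = (feedback @ err) * (1 - h ** 2)  # tanh derivative

        # Local updates: presynaptic activity times the postsynaptic error term.
        W2 -= lr * np.outer(err, h)
        W1 -= lr * np.outer(delta_h, x)

pred = np.array([W2 @ np.tanh(W1 @ x) for x in X]).ravel()
print(f"final MSE ({'FA' if use_fa else 'BP'}): {np.mean((pred - y) ** 2):.4f}")
```

The point of the sketch is that the backward weights B need not mirror W2: during learning the forward weights come to align with the fixed feedback well enough for the routed error to remain a useful teaching signal, which is the alignment effect that the FA-based rules in the abstract rely on.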