Attention deficit hyperactivity disorder (ADHD) is the most common neurodevelopmental disorder in children. Although the involvement of dopamine in this disorder seems well established, the nature of the dopaminergic dysfunction remains controversial. The purpose of this study was to test whether the key response characteristics of ADHD could be simulated by a mechanistic model that combines a decrease in tonic dopaminergic activity with an increase in phasic responses in cortico-striatal loops during reinforcement learning. To this end, we combined a dynamic model of tonic and phasic dopamine release and its regulation with a neurocomputational model of the basal ganglia comprising multiple action channels, together with a learning procedure driven by tonic and phasic dopamine levels. In the model, the dopamine imbalance results from impaired presynaptic regulation of dopamine at the terminal level. Using this model, virtual individuals from a dopamine-imbalance group and a control group were trained to associate four stimuli with four actions under fully informative reinforcement feedback. In a second phase, they were tested without feedback. Subjects in the dopamine-imbalance group showed poorer performance with more variable reaction times due to the presence of both fast and very slow responses, difficulty in choosing between stimuli even when they were of high intensity, and greater sensitivity to noise. Learning history was also significantly more variable in the dopamine-imbalance group, explaining 75% of the variability in reaction time in a quadratic regression. The response profile of the virtual subjects varied as a function of the learning-history variability index, producing increasingly severe impairment: first an increase in response variability alone, then an additional decrease in performance, and finally a learning deficit. Although ADHD is certainly a heterogeneous disorder, these results suggest that typical features of ADHD can be explained by a phasic/tonic imbalance in dopaminergic activity alone.
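A minimal sketch of the general idea, not the authors' actual model: Q-learning over four stimuli and four actions in which the reward prediction error is scaled asymmetrically to mimic a phasic/tonic dopamine imbalance. All names, parameters, and values (tonic_da, phasic_gain, alpha, beta) are illustrative assumptions.

```python
# Toy sketch (not the paper's basal ganglia model): 4 stimuli x 4 actions,
# fully informative feedback, with an assumed asymmetric dopamine gain on
# the reward prediction error to caricature a phasic/tonic imbalance.
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_act = 4, 4
Q = np.zeros((n_stim, n_act))

tonic_da = 0.4      # hypothetical reduced tonic level (a control might use 1.0)
phasic_gain = 1.6   # hypothetical exaggerated phasic burst response
alpha, beta = 0.1, 3.0  # learning rate, softmax inverse temperature

def softmax(q, beta):
    p = np.exp(beta * q - np.max(beta * q))
    return p / p.sum()

for trial in range(2000):
    s = rng.integers(n_stim)                      # present one of 4 stimuli
    a = rng.choice(n_act, p=softmax(Q[s], beta))  # choose an action
    r = 1.0 if a == s else 0.0                    # fully informative feedback
    rpe = r - Q[s, a]                             # reward prediction error
    # positive RPEs ride on exaggerated phasic bursts, negative RPEs on a
    # reduced tonic baseline -- a crude stand-in for the imbalance described above
    gain = phasic_gain if rpe > 0 else tonic_da
    Q[s, a] += alpha * gain * rpe

print("accuracy of greedy policy:",
      np.mean(Q.argmax(axis=1) == np.arange(n_stim)))
```

Comparing runs with balanced versus imbalanced gains would illustrate, in caricature, how an imbalance can leave learning intact while making its history and the resulting choices more variable.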
The design of modern convolutional artificial neural networks (ANNs) composed of formal neurons copies the architecture of the visual cortex. Signals proceed through a hierarchy in which receptive fields become increasingly complex and coding becomes sparser. Nowadays, ANNs outperform humans in controlled pattern-recognition tasks yet remain far behind in cognition. In part, this is due to limited knowledge of the higher echelons of the brain hierarchy, where neurons actively generate predictions about what will happen next, i.e., information processing jumps from reflex to reflection. In this study, we forecast that spiking neural networks (SNNs) can achieve the next qualitative leap. Reflective SNNs may take advantage of their intrinsic dynamics and mimic complex, not reflex-based, brain actions. They also enable a significant reduction in energy consumption. However, training SNNs is a challenging problem that strongly limits their deployment. We then briefly overview new insights provided by the concept of a high-dimensional brain, which has been put forward to explain the potential power of single neurons in higher brain stations and deep SNN layers. Finally, we discuss the prospects of implementing neural networks in memristive systems. Such systems can densely pack 2D or 3D arrays of plastic synaptic contacts on a chip and process analog information directly, making memristive devices good candidates for in-memory and in-sensor computing. Memristive SNNs can thus diverge from the development of ANNs and build their own niche of cognitive, or reflective, computations.
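For readers unfamiliar with the dynamics that distinguish SNNs from formal-neuron ANNs, here is a minimal leaky integrate-and-fire (LIF) layer. It is an illustrative toy, not any specific architecture from the text; the time constants, threshold, and input statistics are assumptions.

```python
# Minimal LIF layer: membrane potentials integrate spiking input over time and
# emit spikes when a threshold is crossed. Illustrative only; all constants
# are assumptions, not values from the work discussed above.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, T = 100, 10, 200          # input neurons, LIF neurons, time steps
dt, tau, v_th, v_reset = 1.0, 20.0, 1.0, 0.0

W = rng.normal(0.0, 0.1, size=(n_out, n_in))   # fixed random synaptic weights
v = np.zeros(n_out)                            # membrane potentials
spike_counts = np.zeros(n_out)

for t in range(T):
    in_spikes = (rng.random(n_in) < 0.05).astype(float)  # Poisson-like input
    i_syn = W @ in_spikes                                # synaptic current
    v += dt / tau * (-v) + i_syn                         # leaky integration
    fired = v >= v_th                                    # threshold crossing
    spike_counts += fired
    v[fired] = v_reset                                   # reset after a spike

print("output firing rates (Hz, assuming dt = 1 ms):",
      spike_counts / (T * dt) * 1000.0)
```

The same update rule maps naturally onto a memristive crossbar: the matrix-vector product W @ in_spikes is exactly the analog, in-memory operation such arrays perform.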
Computational neuroscience has come a long way from its humble origins in the pioneering work of Hodgkin and Huxley. Contemporary computational models of the brain span multiple spatiotemporal scales, from single neuronal compartments to models of social cognition. Each spatial scale comes with its own unique set of promises and challenges. Here, we review models of large-scale neural communication facilitated by white matter tracts, also known as whole-brain models (WBMs). Whole-brain approaches employ inputs from neuroimaging data and insights from graph theory and non-linear systems theory to model brain-wide dynamics. Over the years, WBMs have shown promise in providing predictive insights into various facets of neuropathologies such as Alzheimer's disease, schizophrenia, epilepsy, and traumatic brain injury, while also offering mechanistic insights into large-scale cortical communication. First, we briefly trace the history of WBMs, leading up to the state of the art. Next, we discuss methodological considerations for implementing a whole-brain modeling pipeline, such as the choice of node dynamics, model fitting, and appropriate parcellations. We then demonstrate the applicability of WBMs to understanding various neuropathologies. We conclude by discussing ways of augmenting the biological and clinical validity of whole-brain models.
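A minimal sketch of the kind of pipeline described above, under simplifying assumptions: Kuramoto phase oscillators as the node dynamics, coupled through a structural connectivity matrix. Here the connectome is random for illustration; in practice it would come from diffusion MRI tractography on a chosen parcellation, and the coupling strength would be fitted to empirical functional data. All names and values are assumptions.

```python
# Toy whole-brain model: Kuramoto oscillators on a (placeholder) connectome.
# Node dynamics, parcellation size, and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_nodes, T, dt = 68, 2000, 0.01       # e.g. a 68-region cortical parcellation
K = 0.5                               # global coupling strength (free parameter)

C = rng.random((n_nodes, n_nodes))    # placeholder structural connectivity
C = (C + C.T) / 2                     # symmetrize
np.fill_diagonal(C, 0.0)

omega = rng.normal(2 * np.pi * 10, 1.0, n_nodes)  # ~10 Hz natural frequencies
theta = rng.uniform(0, 2 * np.pi, n_nodes)        # initial phases

order = []
for _ in range(T):
    # each node is pulled toward the phases of its structurally connected neighbors
    coupling = (C * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + K * coupling)
    order.append(np.abs(np.mean(np.exp(1j * theta))))  # global synchrony

print("mean order parameter (second half):", np.mean(order[T // 2:]))
```

Model fitting typically amounts to sweeping parameters such as K and comparing the simulated functional connectivity or synchrony against empirical neuroimaging data.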