- 1 International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo Institutes for Advanced Study (UTIAS), The University of Tokyo, Tokyo, Japan
- 2 Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
Dissociated neuronal cultures provide a powerful, simplified model for investigating self-organized prediction and information processing in neural networks. This review synthesizes and critically examines research demonstrating their fundamental computational abilities, including predictive coding, adaptive learning, goal-directed behavior, and deviance detection. A unique contribution of this work is the integration of findings on network self-organization, such as the development of critical dynamics optimized for information processing, with emergent predictive capabilities, the mechanisms of learning and memory, and the relevance of the free energy principle within these systems. Building on this, we discuss how insights from these cultures inform the design of neuromorphic and reservoir computing architectures, aiming to enhance energy efficiency and adaptive functionality in artificial intelligence. Finally, this review outlines promising future directions, including advancements in three-dimensional cultures, multi-compartment models, and brain organoids, to deepen our understanding of hierarchical predictive processes in both biological and artificial systems, thereby paving the way for novel, biologically inspired computing solutions.
1 Introduction
1.1 The challenge of neural computation and in vitro models
The brain's remarkable ability to process information, learn from experience, and adapt to changing environments emerges from the dynamic interactions of billions of neurons. Understanding how these capabilities arise from neural network organization represents a fundamental challenge in neuroscience (Friston et al., 2006; Friston, 2010; Bastos et al., 2012; Keller and Mrsic-Flogel, 2018). Dissociated neuronal cultures—simplified systems where neurons are isolated from their native environment and allowed to self-organize—provide a powerful experimental platform for investigating these processes. These cultures retain core capabilities for network formation, information processing, and adaptation while offering unprecedented access for manipulation and observation (Maeda et al., 1995; Kamioka et al., 1996; Potter and DeMarse, 2001; Marom and Shahaf, 2002).
1.2 Unique contributions of this review
Despite significant advances in understanding isolated aspects of neuronal culture function, a comprehensive synthesis that specifically focuses on self-organized prediction and its implications has been lacking. This review makes several unique contributions by: (1) integrating findings across previously disconnected research domains spanning network development, learning, prediction, and goal-directed behavior in dissociated cultures; (2) providing a critical framework for understanding how predictive capabilities emerge from self-organization in the absence of explicit design; and (3) establishing conceptual bridges between fundamental neuroscience findings in these simplified systems and their applications for neuromorphic computing and artificial intelligence.
1.3 Evolution of methodologies: from early cultures to advanced MEAs
The study of neuronal cultures has evolved dramatically since Ross Granville Harrison first demonstrated nerve fiber growth in vitro in 1910 (Harrison, 1910). Harrison's pioneering work established the foundation for modern neurobiology by enabling direct observation of neural development. A transformative advance came with the introduction of microelectrode array (MEA) technology (Figure 1A; Thomas et al., 1972; Gross et al., 1977; Pine, 1980). MEAs revolutionized the field by enabling long-term, non-invasive recording from multiple neurons simultaneously, providing unprecedented insight into network dynamics and development (Pine, 2006; Bakkum et al., 2013; Müller et al., 2015; Obien et al., 2015).

Figure 1. Evolution of microelectrode array (MEA) technology for studying neuronal networks. (A) The MEA system, featuring a transparent glass substrate with 60 microelectrodes spaced at 200 μm. This design provides sufficient spatial resolution for capturing network-level neuronal activity and allows for optical imaging of the culture. The system is capable of both extracellular recording and stimulation for long-term culture studies. (B) Bright-field microscopy of dissociated neuronal cultures grown on the MEA platform. The electrode array beneath the neuronal layer supports the self-organization of functional networks while enabling the simultaneous observation of culture morphology and recording of extracellular signals. Scale bar = 100 μm. (C) Extracellular spike recordings from the MEA, demonstrating its capacity to capture neuronal activity from multiple electrodes simultaneously. The recording resolution and electrode layout enable the analysis of network activity patterns and dynamic behaviors. (D) High-density CMOS-based MEA system (MaxOne), incorporating 26,400 platinum electrodes with a 17.5 μm pitch. This CMOS-MEA provides subcellular spatial resolution for recording and stimulation, enabling the detailed investigation of localized neuronal activity and network interactions. (E) Schematic overlay of a neuron (green) interacting with electrodes (red) on a CMOS-MEA. The figure illustrates how neuronal somas and processes align with the electrode array. The red electrodes in close proximity to the soma demonstrate the ability of high-density CMOS arrays to monitor and stimulate activity at single-cell resolution. The scale bar indicates the high spatial resolution provided by this system, with electrodes spaced at ~17.5 μm. (F) CMOS-MEA monitoring of an action potential generated at the soma. An immunostaining image of a neuron on the CMOS-MEA is overlaid with spatially localized extracellular spike sources. The high-density electrode array resolves neuronal activity with subcellular precision, revealing fine-scale functional properties of single neurons and their interactions with the network. Scale bar = 30 μm.
Early MEA platforms allowed researchers to monitor network formation in dissociated cultures, revealing spontaneous activity and plasticity (Figures 1B, C). High-density CMOS microelectrode arrays now enable recording from thousands of neurons with unprecedented spatial and temporal resolution (Berdondini et al., 2005; Frey et al., 2007; Ballini et al., 2014; Müller et al., 2015). These systems (Figure 1D) facilitate detailed investigations of both localized interactions and long-range network dynamics. They provide subcellular resolution, as illustrated by the precise alignment of neurons with individual electrodes (Figure 1E), and allow spatial mapping of extracellular spikes overlaid on neuronal morphology to track activity sources and connectivity (Figure 1F).
1.4 Insights from calcium imaging
Complementary to MEA technology, fluorescence calcium imaging provides another powerful lens for observing neuronal activity. Early studies, such as that of Murphy et al. (1992) using Fura-2, demonstrated the utility of this approach by revealing spontaneous synchronous calcium transients in cultured cortical neurons, linking these network events to synaptic mechanisms. Building on such foundational work, the technique now employs a range of fluorescent indicators—from chemical dyes to advanced genetically encoded calcium indicators (GECIs) like GCaMP6—to visualize the transient intracellular calcium increases that accompany action potentials. Calcium imaging offers distinct advantages, notably the capacity to monitor large neuronal populations (often thousands of cells) with single-cell resolution and to target specific cell types through genetic strategies (Montalà-Flaquer et al., 2022; Soriano, 2023). It is particularly valuable for investigating the spatial organization of network activity and how structural features, like engineered anisotropies, shape functional dynamics. Furthermore, the evolution of GECIs has enabled long-term tracking of network development and plasticity over weeks (Estévez-Priego et al., 2023). Thus, MEAs and calcium imaging offer synergistic insights: MEAs provide superior temporal resolution for direct electrical events, while calcium imaging excels in spatial coverage and cellular-level detail, with ongoing advancements continually improving its temporal capabilities.
1.5 Strengths and limitations of in vitro models
While these in vitro systems offer unparalleled control, accessibility for high-resolution recording and stimulation, and a simplified environment to study fundamental principles of self-organization and computation, it is crucial to acknowledge their inherent limitations. These include the absence of native brain architecture, the lack of structured sensory input experienced in vivo, and patterns of spontaneous activity that can differ from those in intact brains. A careful consideration of these factors is essential when translating findings from dissociated cultures to more complex biological systems, a theme that will be revisited throughout this review.
1.6 Observed capabilities of neuronal cultures
Research using these systems has revealed several fundamental properties of neural network organization and function. As cultures develop, they demonstrate a remarkable capacity for self-organization, evolving from random collections of cells into functional networks that exhibit critical dynamics optimized for information processing (Beggs and Plenz, 2003; Levina et al., 2007; Millman et al., 2010; Friedman et al., 2012; Yada et al., 2017; Kossio et al., 2018). These critical phenomena have been observed using both electrophysiological approaches and calcium imaging techniques (Yaghoubi et al., 2024), with the latter providing complementary evidence through optical measurements of population activity, though careful consideration of data processing methods is necessary to avoid potential artifacts in inferring spike dynamics from calcium signals (Soriano, 2023). These networks show robust capabilities for learning and memory formation, as demonstrated through studies of synaptic plasticity and adaptive responses to electrical stimulation (Jimbo et al., 1998, 1999; Shahaf and Marom, 2001; Le Feber et al., 2010, 2014, 2015; Dranias et al., 2013; Dias et al., 2021).
Neuronal cultures have proven effective for studying goal-directed behavior in closed-loop systems. Potter et al. (1997) introduced the “Animat in a Petri Dish” concept, establishing a paradigm where network activity controlled a simulated animal (“animat”) while receiving sensory feedback through electrical stimulation. This foundational work led to numerous studies demonstrating that cultured networks can adapt to control external devices (DeMarse et al., 2001; Potter et al., 2003; Bakkum et al., 2008b; Chao et al., 2008; Tessadori et al., 2012; Masumori et al., 2020; Yada et al., 2021; Kagan et al., 2022), advancing our understanding of neural adaptation and control while suggesting new approaches for brain-machine interfaces and neuroprosthetics.
The computational capabilities of neuronal cultures extend to more sophisticated information processing tasks. These networks exhibit predictive coding and deviance detection, supporting theoretical frameworks such as the free energy principle (Rao and Ballard, 1999; Friston, 2010; Huang and Rao, 2011; Isomura et al., 2015; Isomura and Friston, 2018; Lamberti et al., 2023). Their ability to perform complex computations while maintaining remarkable energy efficiency has important implications for neuromorphic computing and artificial intelligence (Marković et al., 2020; Smirnova et al., 2023). Insights from neuronal cultures can influence the development of new computing architectures, particularly in areas such as reservoir computing and adaptive neural networks (Dockendorf et al., 2009; Kubota et al., 2019; Tanaka et al., 2019; Kubota et al., 2021b; Subramoney et al., 2021; Cai et al., 2023; Sumi et al., 2023).
1.7 Emerging frontiers
The development of three-dimensional culture techniques and brain organoids offers new opportunities to study neural organization in more physiologically relevant contexts (Hogberg et al., 2013; Lancaster et al., 2013; Clevers, 2016; Smirnova and Hartung, 2024). These advances, combined with sophisticated analysis techniques and theoretical frameworks, provide new insights into how neural networks self-organize for efficient information processing and adaptation.
1.8 Scope and structure of this review
This review synthesizes current research on dissociated neuronal cultures, examining their contributions to our understanding of neural network organization and function. Specifically, it aims to: (i) trace the development of these cultures toward complex, critical dynamics suitable for information processing; (ii) detail their capacity for adaptive learning, memory formation, and predictive processing; (iii) explore their utility in modeling goal-directed behavior within embodied systems; (iv) connect these empirical findings to overarching theoretical frameworks like the free energy principle; and (v) discuss the implications of this research for designing next-generation neuromorphic computing systems.
We begin by exploring network development and the emergence of critical dynamics (Chapter 2), followed by detailed analysis of learning and memory formation in these systems (Chapter 3). We then examine how neuronal cultures exhibit deviance detection and predictive processing, including the relevance of theoretical frameworks like the free energy principle (Chapter 4), and their remarkable capacity for goal-directed behavior when coupled with external systems (Chapter 5). Chapter 6 discusses how insights from neuronal cultures inform the development of artificial neural networks and neuromorphic computing systems. Finally, we consider future directions for the field, including advances in three-dimensional culture techniques, brain organoids, and their implications for both neuroscience and artificial intelligence (Chapter 7).
By examining how these simplified neural systems self-organize for prediction and adaptation, we aim to illuminate fundamental principles of neural computation while highlighting their practical applications in bio-inspired computing and neuroprosthetics. This understanding may ultimately guide the development of more efficient and adaptive artificial systems while deepening our knowledge of biological neural network function.
2 Network development and self-organized criticality
The transformation of dissociated neuronal cultures from random collections of neurons into sophisticated, functionally organized systems is a remarkable feat of biological self-organization. This chapter explores the key processes and principles underlying network development in these cultures, followed by an introduction to the concept of Self-Organized Criticality (SOC) and its relevance to understanding network maturation.
2.1 Early network development and activity patterns
Network development in neuronal cultures progresses through several distinct stages, each characterized by increasingly complex patterns of activity. In the earliest stages, neurons exhibit seemingly chaotic, independent firing patterns. Kamioka et al. (1996) observed that this apparent randomness quickly gives way to more organized activity as the culture matures. The transition from independent firing to coordinated activity is heavily dependent on NMDA receptor activation and is influenced by external factors such as calcium concentrations (Segev et al., 2001). As development continues, the network establishes stable, recurring patterns of synchronized activity. Van Pelt et al. (2004) documented the emergence of network bursting as a hallmark of culture maturation. Early calcium imaging studies, such as those by Murphy et al. (1992) using Fura-2 and Opitz et al. (2002) using Fluo-3, provided crucial visualizations of these emerging spontaneous synchronous calcium transients and their developmental timeline, linking them to underlying synaptic mechanisms and the developmental GABA shift. Further refinement of connections leads to more sophisticated firing patterns, including what Wagenaar et al. (2006) termed “superbursts”—periods of intense, coordinated activity that reflect the increasing complexity of network interactions.
As shown by Yada et al. (2017), these patterns exhibit state-dependent properties, with different spatiotemporal patterns appearing successively and periodically, suggesting organized fluctuations in neural activity propagation. Figure 2 illustrates these developmental transitions using data from high-density CMOS microelectrode arrays. Figure 2A displays spatial maps of action potential amplitudes recorded at different developmental stages, while Figure 2B highlights changes in spike waveforms at selected electrodes over time. Figure 2C depicts the progression of spontaneous spiking activity, showcasing the emergence of synchronized bursts. Figure 2D visualizes the shift in neuronal avalanche size distributions, from exponential at early stages (4 DIV) to power-law distributions indicative of SOC by 16 DIV. Lastly, Figure 2E presents the integration-fragmentation model explaining SOC emergence, highlighting the role of synaptic pruning and balanced excitation-inhibition dynamics in this transition.

Figure 2. Developmental transition toward self-organized criticality (SOC) in dissociated neuronal cultures. (A) Spatial maps of action potential amplitudes recorded using high-density CMOS MEAs at different developmental stages: 4 days in vitro (DIV), 7 DIV, and 16 DIV. Black circles mark recording sites, and the heatmap represents voltage amplitudes (color scale: −400 to 100 μV). Scale bar = 200 μm. (B) Representative spike waveforms recorded at selected electrodes [indicated by black circles in (A)] across developmental stages. Gray lines depict raw spike traces, while red lines indicate averaged spike waveforms. Scale bars = 1 ms, 100 μV. (C) Raster plots of spontaneous spiking activity from 120 s of recorded data for the same cultures at 4, 7, and 16 DIV, illustrating the emergence of synchronized bursts over time. (D) Log-log plots of neuronal avalanche size distributions at 4, 7, and 16 DIV. Exponential distributions dominate early development (4 DIV), while bimodal distributions emerge at 7 DIV, and power-law distributions characteristic of SOC appear by 16 DIV. Fitted red lines represent power-law distributions, and blue lines indicate exponential fits. (E) Schematic representation of the integration-fragmentation model for SOC emergence. Initially, neurons form weak excitatory connections, generating exponential distributions. Large-scale avalanches emerge as connectivity strengthens, leading to a bimodal distribution. Finally, synaptic pruning and the balance of excitation and inhibition result in diverse avalanche sizes distributed according to a power-law. Figure reproduced from Yada et al. (2017).
2.2 Emergence of structural and functional organization
The structural and functional organization of the network evolves in parallel with these changes in activity patterns. Over time, synaptic connections become more stable, as evidenced by metrics like conditional firing probabilities (Le Feber et al., 2007). Early studies on network development, such as Soriano et al. (2008), used percolation theory to track the formation of global connectivity in cultures, showing the emergence of a “giant connected component” that integrates the network as it matures, a process paralleled by the development of spontaneous network-wide bursting. The importance of more structured architectures, such as modularity, has since been highlighted.
Engineered in vitro systems have shown that modular organization, achieved through topographical patterning or microfabrication, can lead to richer dynamical repertoires and a balance between functional segregation and integration (Yamamoto et al., 2018; Montalà-Flaquer et al., 2022). Further work by Yamamoto et al. (2018, 2023) demonstrated that an optimal level of sparse coupling between modules enhances dynamical richness and allows asynchronous noise to effectively desynchronize network activity. Theoretical work also supports that modularity, often alongside interconnected hub structures forming “rich-clubs,” is fundamental for enabling high “functional complexity” in neural networks (Zamora-López et al., 2016). Indeed, such rich-club organization emerges early in developing hippocampal cultures, where hub neurons broker activity flow (Schroeter et al., 2015), and in cortical slice cultures “information-rich” hub neurons form similar rich-clubs that dominate information transfer (Nigam et al., 2016). Baruchi et al. (2008) characterized how mutual synchronization emerges between coupled networks, demonstrating that despite engineered similarity, spontaneous asymmetries emerge in both activity propagation and functional organization.
The emergence of coherent, network-wide activity from seemingly random spontaneous neuronal firing (intrinsic noise) has been explained by mechanisms such as “noise focusing.” Orlandi et al. (2013) proposed that this effect arises from a combination of dynamical and topological amplification of spontaneous activity, where metric correlations in the network structure play a key role in concentrating noise to specific nucleation sites, thereby triggering global bursts without requiring external pacemakers. Building on this, Hernández-Navarro et al. (2021) further detailed how noise-driven amplification mechanisms, dependent on network topology forming “amplifying cores,” govern the emergence of such coherent events. The interplay between spontaneous activity and network formation is also crucial; Okujeni and Egert (2019) demonstrated that activity-dependent neuronal migration and neurite outgrowth can lead to self-organized modular architectures (clustering), which in turn shape the characteristics of spontaneous network bursts. Earlier work by Okujeni et al. (2017) showed that such mesoscale architectures, like neuronal clustering, significantly influence the initiation sites and richness of spontaneous activity patterns.
2.3 Developmental considerations and in vitro limitations
While dissociated cultures provide invaluable insights into self-organization, it is important to consider factors that differentiate their development and activity from in vivo brain circuits. For instance, the level of external input significantly shapes network dynamics. Zierenberg et al. (2018) proposed that the prevalent bursting in standard in vitro cultures results from their low-input environment, which contrasts with the continuous afferent drive in vivo that promotes more stable, reverberating activity. Homeostatic plasticity mechanisms adapt the network to these input levels, suggesting that the “default” state of cultures can be tuned by providing appropriate weak, long-term external stimulation. Furthermore, species-specific developmental trajectories are evident. Studies comparing human iPSC-derived cortical networks to rodent primary cultures have revealed that while general developmental stages are similar, human-derived networks often exhibit more gradual maturation, more variable bursting patterns, and different levels of synchrony (Hyvärinen et al., 2019; Estévez-Priego et al., 2023). These differences underscore the importance of model selection based on the specific research question and highlight considerations for translating findings across species or to the in vivo context.
2.4 Self-organized criticality (SOC) in developing networks
As researchers sought to understand the principles governing these complex developmental dynamics, the concept of Self-Organized Criticality (SOC) emerged as a powerful explanatory framework. Introduced by Bak et al. (1987) and Bak (1996) in the context of physical systems, SOC describes how complex systems naturally evolve toward a critical state characterized by scale-invariant behavior. Further theoretical work has expanded our understanding of SOC in neural systems, highlighting its ubiquity across different scales of brain organization and its functional implications (Muñoz, 2018; Plenz et al., 2021). Networks at criticality exhibit maximized dynamic range, optimally responding to the broadest range of stimulus intensities (Shew et al., 2009). This concept has been widely applied to neural systems, offering insights into network development and function (Chialvo, 2010; Beggs and Timme, 2012; Shew and Plenz, 2013; Bilder and Knudsen, 2014).
In the context of neuronal networks, SOC is most notably manifested in the phenomenon of neuronal avalanches—cascades of spontaneous activity that follow power-law size distributions. Beggs and Plenz (2003) were among the first to observe and characterize these avalanches in neuronal cultures using MEAs, followed by confirmations in various neural systems (Mazzoni et al., 2007; Pasquale et al., 2008; Petermann et al., 2009; Friedman et al., 2012).
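The avalanche definition used in these studies (spikes pooled across electrodes are divided into time bins, and each run of consecutive non-empty bins counts as one avalanche whose size is its total spike count) can be sketched in a few lines. The function and toy spike train below are purely illustrative, not taken from any cited study; in practice the bin width is commonly chosen near the mean inter-event interval of the recording.

```python
import numpy as np

def extract_avalanche_sizes(spike_times, bin_width):
    """Pool spikes into time bins; an avalanche is a run of consecutive
    non-empty bins, and its size is the total spike count in that run."""
    if len(spike_times) == 0:
        return []
    edges = np.arange(0.0, spike_times.max() + bin_width, bin_width)
    counts, _ = np.histogram(spike_times, bins=edges)
    sizes, current = [], 0
    for c in counts:
        if c > 0:
            current += int(c)
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return sizes

# Toy pooled spike train (seconds): two small bursts and one late doublet
spikes = np.array([0.001, 0.002, 0.003, 0.010, 0.011, 0.050, 0.051])
print(extract_avalanche_sizes(spikes, bin_width=0.004))  # → [3, 2, 2]
```

The resulting size list is then histogrammed on log-log axes to test for the power-law signature described above; as Section 2.5 discusses, the choice of `bin_width` itself can bias that distribution.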
Calcium imaging has also been employed to investigate criticality across different network activity states. For instance, Yaghoubi et al. (2018) found that critical exponents for neuronal avalanches are not universal and can be shaped by culture conditions altering network topology. More recently, Yaghoubi et al. (2024) showed that by adjusting temporal binning according to the intrinsic timescales of network “up” and “down” states, scale-free avalanche statistics could be observed in both activity regimes in cultures monitored with calcium imaging. Indeed, interpreting such optical data requires careful consideration of spike inference challenges due to slow indicator kinetics (Soriano, 2023), often necessitating deconvolution algorithms like OASIS (Friedrich et al., 2017), which have their own limitations regarding temporal precision and algorithmic assumptions. Consequently, alternative analytical approaches such as state-dependent Transfer Entropy for connectivity reconstruction (Stetter et al., 2012; Tibau et al., 2020) or spectral analysis of population signals (Tibau et al., 2013) are also employed to characterize network properties from calcium imaging, underscoring the need for context-aware analysis.
Experimental evidence for the development of SOC in cultures has been provided by studies using advanced recording techniques. Yada et al. (2017) used high-density CMOS microelectrode arrays to capture the progression of avalanche dynamics across three distinct phases: an initial exponential distribution, a transitional bimodal distribution, and a final power-law distribution characteristic of a critical state. This observed sequence supports a gradual expansion model of network development, where neural connections are extended incrementally over time. Kayama et al. (2019) revealed the formation of functional clusters within maturing cultures, showing how these clusters exhibit diverse and repeatable patterns of synchronized firing, indicating the development of specialized subnetworks within the larger network structure. These findings complement earlier observations (Baruchi et al., 2008) about the emergence of mutual synchronization in coupled networks, demonstrating how spontaneous asymmetries arise in both activity propagation and functional organization.
2.5 Methodological considerations in assessing criticality
While power-law distributions of neuronal avalanche sizes are a key signature of SOC, it is crucial to consider potential methodological artifacts in their detection and interpretation. Neto et al. (2022) have demonstrated that aspects of data acquisition and analysis, such as “measurement overlap” in spatially coarse recordings (where signals from multiple underlying neurons might contribute to a single electrode or ROI) and the choice of parameters like signal thresholding for event definition and temporal binning, can significantly bias avalanche statistics. Such factors can even lead to the appearance of power-law distributions in systems that are not genuinely critical, or mask true differences between dynamic states. These findings emphasize the need for careful methodological choices and critical interpretation of avalanche data when assessing evidence for SOC in neuronal cultures and other neural systems.
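A related caveat is that a maximum-likelihood fit always returns an exponent, whether or not the data are genuinely power-law distributed. The sketch below is a minimal illustration using the standard continuous estimator α = 1 + n [Σ ln(s_i / s_min)]⁻¹; the function name and synthetic data are ours. It recovers the exponent of samples drawn from a known power law, but establishing that a power law fits better than, say, an exponential still requires explicit goodness-of-fit and model-comparison tests.

```python
import numpy as np

def powerlaw_alpha(sizes, s_min):
    """Continuous maximum-likelihood estimate of a power-law exponent:
    alpha = 1 + n / sum(ln(s / s_min)), over samples s >= s_min."""
    s = np.asarray(sizes, dtype=float)
    s = s[s >= s_min]
    return 1.0 + len(s) / np.sum(np.log(s / s_min))

# Synthetic avalanche sizes drawn from P(s) ~ s^(-1.5) by inverse transform
rng = np.random.default_rng(0)
u = rng.random(50_000)
sizes = (1.0 - u) ** -2.0  # exponent alpha = 1.5, s_min = 1
print(powerlaw_alpha(sizes, s_min=1.0))  # recovers a value close to 1.5
```

Because the same formula applied to exponentially distributed sizes would also produce a number, exponent estimates alone are weak evidence for criticality, reinforcing the methodological cautions above.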
2.6 Mechanisms underlying self-organized criticality
The emergence of SOC in neuronal cultures involves multiple mechanisms developing over time. Van Vreeswijk and Sompolinsky (1996) demonstrated the importance of balanced excitation and inhibition in neural networks for achieving stable yet complex dynamics. Abbott and Rohrkemper (2007) proposed a growth-based mechanism where neurons add or remove synapses based on their activity levels. Both short-term and long-term plasticity contribute to the network's evolution toward criticality (Levina et al., 2007; Millman et al., 2010). Vogels et al. (2011) showed how inhibitory plasticity maintains excitation-inhibition balance in memory networks, and Hennequin et al. (2017) synthesized how inhibitory synaptic plasticity acts as a crucial control mechanism for network stability and computation.
The interaction between plasticity mechanisms is particularly important: excitatory STDP with an asymmetric time window destabilizes the network toward a bursty state, while inhibitory STDP with a symmetric time window stabilizes the network toward a critical state (Sadeh and Clopath, 2020). Structural changes, such as axonal elongation and synaptic pruning, also shape the network's critical dynamics (Tetzlaff et al., 2010; Kossio et al., 2018). Kuśmierz et al. (2020) demonstrated that networks with power-law distributed synaptic strengths exhibit a continuous transition to chaos.
The relationship between criticality and the edge of chaos represents another important regulatory point in neural networks, associated with the balance between excitation and inhibition. SOC, the edge of chaos, and excitation-inhibition balance serve as complementary homeostatic set points in well-tuned networks, each contributing to the optimization of computation and memory formation. Ikeda et al. (2023) have shown how the interplay between environmental noise and spike-timing-dependent plasticity can drive networks toward criticality, emphasizing the importance of optimal noise levels in this process. Theoretical modeling by Kern et al. (2024) has emphasized the crucial role of inhibitory circuitry, demonstrating how the density and range of inhibitory synaptic connections significantly influence the development of critical dynamics.
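The branching picture underlying these mechanisms can be made concrete with a toy Galton-Watson model: each active neuron triggers a Poisson-distributed number of successors, and the process is critical when the branching ratio σ equals 1. The sketch below is illustrative only, with names and parameters of our choosing rather than from the cited studies: a subcritical network produces uniformly small avalanches, while a critical one yields the heavy-tailed sizes associated with power-law statistics.

```python
import numpy as np

def avalanche_size(sigma, rng, max_size=100_000):
    """Total size of one avalanche in a Galton-Watson branching process
    where each active unit triggers Poisson(sigma) successors."""
    active, size = 1, 1
    while active > 0 and size < max_size:
        active = int(rng.poisson(sigma, size=active).sum())
        size += active
    return size

rng = np.random.default_rng(1)
subcritical = [avalanche_size(0.7, rng) for _ in range(5_000)]
critical = [avalanche_size(1.0, rng) for _ in range(5_000)]

# Mean subcritical size stays finite (~1 / (1 - sigma)), whereas critical
# avalanche sizes span several orders of magnitude (scale-free cascades).
print(np.mean(subcritical), max(subcritical), max(critical))
```

In this simplified picture, the plasticity mechanisms reviewed above can be read as homeostatic controllers that tune the effective branching ratio toward 1, the regime where avalanche statistics become scale-invariant.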
The study of network development through the lens of SOC has provided valuable insights into the fundamental principles governing the maturation of neuronal systems. It offers a framework for understanding how complex, functional network structures emerge from initially disordered collections of neurons, and how these networks maintain a balance between stability and flexibility as they mature. This self-organized development toward criticality, supported by various plasticity mechanisms and carefully regulated by inhibitory circuits, enables neuronal networks to achieve optimal information processing capabilities while maintaining adaptability. The resulting networks exhibit a rich repertoire of dynamics that supports their computational functions while preserving the ability to respond to changing environmental demands.
3 Adaptive learning and memory formation
Dissociated neuronal cultures offer a simplified system for studying learning and memory, providing insight into how neural networks adapt in response to external stimuli. This chapter reviews key findings demonstrating that these cultures exhibit learning behaviors and explores the mechanisms that enable memory formation and adaptation in these systems. While learning in these reduced preparations may not fully recapitulate the complexity of in vivo cognition, the observed phenomena provide valuable insights into fundamental cellular and network-level adaptive processes.
3.1 Foundational studies on learning and activity-dependent plasticity
Early studies laid the foundation for understanding learning in dissociated cultures. Jimbo et al. (1999) showed that localized tetanic stimulation could induce potentiation and depression in specific pathways, highlighting the network's capacity to modify connections based on stimuli. Shahaf and Marom (2001) demonstrated that networks could be trained to produce specific responses through low-frequency electrical stimulation, without the need for external reward mechanisms, suggesting that learning can emerge from simple, self-organizing principles. Ruaro et al. (2005) further established the computational capabilities of these cultures, showing they could perform pattern recognition tasks through targeted electrical stimulation. Their work demonstrated how biological neurons could be trained to recognize specific spatial patterns, with responses enhanced through long-term potentiation mechanisms.
Later work explored how network dynamics could be controlled and shaped through stimulation. Wagenaar et al. (2005) demonstrated that closed-loop, distributed electrical stimulation could effectively transform burst-dominated activity into dispersed spiking patterns more characteristic of in vivo activity. Le Feber et al. (2010) showed that adaptive electrical stimulation—where stimulation is adjusted based on network feedback—was more effective at inducing long-lasting connectivity changes compared to random stimulation. This highlighted the role of feedback in shaping the learning process.
3.2 Memory mechanisms: from lasting traces to temporal processing
Memory formation in dissociated cultures was further investigated by Le Feber et al. (2015), who found that repeated stimulation could create multiple parallel memory traces. This indicated that these cultures could handle complex memory storage tasks, with distinct stimuli producing stable patterns of connectivity. Additionally, Bakkum et al. (2008a) demonstrated that even when synaptic transmission was blocked, changes in action potential propagation still occurred, suggesting that non-synaptic mechanisms contribute to network adaptation.
Short-term memory processes were explored by Dranias et al. (2013), who identified two forms of short-term memory (STM) in these networks: “fading memory,” reliant on reverberating neural activity, and “hidden memory,” which persists through changes in synaptic strength even after neural activity has ceased. Ju et al. (2015) expanded on these findings, demonstrating that dissociated networks possess an intrinsic capacity for spatiotemporal memory lasting several seconds and can classify complex temporal patterns. Their work highlighted the importance of short-term synaptic plasticity and recurrent connections in enabling these computational capabilities. Further elucidating temporal processing capacity, Ferdous and Berdichevsky (2024) demonstrated that dissociated cortical cultures can reliably distinguish spatiotemporal sequences of electrical stimuli, with optimal discrimination at 50–200 ms intervals, explicitly linking this behavior to reservoir computing principles whereby recurrent dynamics create a “fading memory” that enables temporal pattern classification and serves as a foundation for predictive tasks.
3.3 Molecular, network state, and broader contextual influences on learning
Further studies have provided more detail on the molecular and network dynamics underlying memory and learning. Dias et al. (2021) found that memory consolidation in these cultures was influenced by network state, with low cholinergic tone enhancing memory formation. Ikeda and Takahashi (2021) demonstrated the flexibility of dissociated networks, showing that low-frequency stimulation could initially induce depression but later lead to potentiation, revealing the dynamic nature of learning.
While dissociated cultures provide insights into fundamental learning mechanisms, related ex vivo preparations with preserved microarchitecture offer additional context. Liu and Buonomano (2025) showed that organotypic cortical slices could learn to predict stimulus timing, generate prediction errors upon omission, and spontaneously replay learned temporal patterns, suggesting that sophisticated temporal prediction and replay are fundamental computational primitives inherent to local cortical microcircuits that help interpret the adaptive capabilities observed in dissociated cultures.
These findings demonstrate that dissociated neuronal cultures are capable of both learning and memory formation through various mechanisms, including synaptic plasticity, non-synaptic adaptations, and network state-dependent processes. They can learn to associate stimuli, encode temporal patterns, and form lasting memory traces. However, it is important to recognize the context: the learning observed is often tied to specific stimulation paradigms and may reflect fundamental associative capacities rather than the complex, context-rich learning seen in vivo. The absence of a developed organismal framework means that “goals” and “rewards” are externally imposed or emergent from very basic self-organizing principles. Understanding these adaptive behaviors, even in their simplified form, provides an essential foundation for exploring how neural networks manage information and anticipate future events, particularly in the context of predictive processing discussed in the subsequent chapter.
4 Prediction, deviance detection, and the free energy principle
The free energy principle and predictive coding framework propose that neural systems maintain internal models to minimize prediction errors about their sensory inputs. Under this framework, neural responses represent prediction errors—the difference between expected and actual inputs. Organisms actively minimize prediction errors through two complementary processes: updating internal models to better predict sensory inputs and selecting actions that confirm these predictions. This principle helps explain phenomena like mismatch negativity (MMN), where the brain produces enhanced responses to stimuli that violate statistical regularities, representing prediction error signals in sensory processing hierarchies.
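The core of this framework can be made concrete in a few lines of Python. The sketch below is a deliberately minimal toy model, not a model from any study reviewed here; the names `mu`, `lr`, and `precision` are our own illustrative choices. An internal estimate is nudged toward each observation by the precision-weighted prediction error, so errors shrink as the internal model improves.

```python
import numpy as np

def predictive_coding_update(mu, inputs, lr=0.1, precision=1.0):
    """Toy predictive-coding loop: the internal estimate `mu` is nudged
    toward each observation by the precision-weighted prediction error."""
    errors = []
    for x in inputs:
        err = x - mu                 # prediction error: actual minus expected
        mu += lr * precision * err   # update the internal model
        errors.append(abs(err))
    return mu, errors

rng = np.random.default_rng(0)
inputs = 2.0 + 0.1 * rng.standard_normal(200)   # noisy but stationary input
mu, errors = predictive_coding_update(0.0, inputs)
# mu converges near the input mean, and prediction errors shrink accordingly
```

In this caricature, "updating the internal model" is the only route to minimizing error; in the full free energy framework, the complementary route is acting on the world to make inputs match predictions.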
4.1 Deviance detection in dissociated cultures: from adaptation to prediction error
In dissociated neuronal cultures, evidence for predictive processing comes from multiple experimental approaches. Early evidence for differential processing of frequent and rare stimuli came from Eytan et al. (2003), who used multi-electrode arrays to show that cortical networks can selectively adapt to different stimulation patterns. Their work demonstrated that neurons attenuated responses to frequent stimuli while enhancing responses to rare events. Through careful pharmacological manipulations, they showed this selective adaptation depended on both excitatory synaptic depression and GABAergic inhibition, though their findings likely primarily reflect stimulus-specific adaptation (SSA) mechanisms rather than true prediction error signaling.
The distinction between SSA and genuine deviance detection became clearer through subsequent work. While SSA reflects a passive reduction in responses to repeated stimuli through synaptic depression, true deviance detection requires an active comparison between predicted and actual inputs. Kubota et al. (2021a) provided preliminary evidence for genuine prediction error detection using high-density CMOS arrays. By implementing both oddball paradigms and many-standards control conditions, they demonstrated that deviant responses were enhanced beyond what SSA alone would predict. These paradigms and their results are summarized in Figure 3, which illustrates the experimental setup and neuronal responses. Figure 3A shows the electrode map of the high-density CMOS microelectrode array, highlighting the spatial distribution of stimulating and recording sites. Figure 3B details the stimulation protocols used in the oddball and many-standards control paradigms, demonstrating how the alternation of standards and deviants elicits differential responses. Figure 3C compares neural responses to standard and deviant stimuli, with raster plots and population peristimulus time histograms (p-PSTHs) revealing that deviant stimuli elicit stronger and more widespread responses than standards, particularly in the late response phase.

Figure 3. Experimental paradigm and deviance detection responses in neuronal networks. (A) Electrode map from a high-density CMOS microelectrode array showing the spatial distribution of stimulating electrodes (red, blue, and green dots for Stim A, Stim B, and Stim C, respectively) and recording sites (light blue dots). Stimuli were delivered at specific locations to investigate network responses. (B) Stimulation protocols used in the oddball and many standards control (MSC) paradigms. In the oddball paradigm, Stim A and Stim B were alternated as standard (std) and deviant (dev) stimuli. In the MSC paradigm, multiple stimuli (Stim A, Stim B, Stim C, etc.) were presented in random order to eliminate expectations of repetition. (C) Top: Raster plots showing neural responses to standard (top) and deviant (bottom) stimuli. Each row corresponds to a recording site, and black dots indicate spike times relative to the stimulus onset. Deviant stimuli elicited stronger and more widespread responses compared to standards. Bottom: Population peristimulus time histograms (p-PSTHs) comparing the number of spikes per time bin across conditions. Deviant stimuli (red line) evoke higher firing rates and longer-lasting responses than standard (black line) and MSC (blue line) conditions, particularly in the late response phase (30–100 ms). Figure modified from Kubota et al. (2021a).
Recent work (Zhang et al., 2025) has solidified these findings using additional controls and larger sample sizes to confirm that the enhanced mismatch responses are not artifacts of simpler mechanisms like stimulus-specific adaptation. These findings were particularly robust in demonstrating mismatch responses dependent on NMDA receptor function, mirroring their role in MMN generation in intact brains and highlighting the critical role of synaptic plasticity in neural prediction. Additionally, this study showed that cultured networks can detect violations of complex statistical regularities, providing further evidence for their sophisticated mismatch responses and sensitivity to sequence predictability, similar to capabilities previously observed only in intact cortex (Yaron et al., 2012). The findings suggest these basic networks possess intrinsic capabilities for statistical learning and prediction.
4.2 Mechanistic insights into deviance detection
The mechanistic basis for deviance detection has been illuminated through computational modeling. Kern and Chao (2023) demonstrated that the interaction between two forms of short-term plasticity—synaptic short-term depression (STD) and threshold adaptation (TA)—can explain how neural networks achieve deviance detection. Their work showed that threshold adaptation alone enables basic deviance detection by reducing responses to frequent stimuli while maintaining sensitivity to unexpected inputs. However, the combination of TA with synaptic short-term depression produces enhanced deviance detection through synergistic effects: local synaptic fatigue from STD amplifies the global recovery mediated by TA. This mechanism allows networks to effectively encode predictable patterns while maintaining heightened sensitivity to novel stimuli, providing a computational foundation for understanding how neural circuits implement prediction error detection.
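The interplay of the two mechanisms can be illustrated with a minimal rate-model sketch. This is our own simplified illustration of the general STD-plus-TA idea, not the actual model of Kern and Chao (2023), and all parameter values are arbitrary. Each stimulus channel has a private synaptic resource (input-specific short-term depression), while a single adaptive threshold tracks overall activity (global threshold adaptation).

```python
def run_oddball(sequence, dep=0.3, rec=0.1, ta_gain=0.05, ta_rec=0.02):
    """Toy deviance-detection model: each stimulus channel has its own
    synaptic resource (input-specific STD), while one adaptive threshold
    (TA) tracks overall network activity."""
    resources = {s: 1.0 for s in set(sequence)}
    threshold = 0.0
    responses = {s: [] for s in set(sequence)}
    for stim in sequence:
        resp = max(resources[stim] - threshold, 0.0)  # thresholded drive
        responses[stim].append(resp)
        resources[stim] *= 1.0 - dep                  # deplete the used input
        for s in resources:                           # slow recovery of all inputs
            resources[s] += rec * (1.0 - resources[s])
        threshold += ta_gain * resp - ta_rec * threshold  # global adaptation
    return responses

seq = (["A"] * 9 + ["B"]) * 10                 # 90% standards, 10% deviants
r = run_oddball(seq)
mean_std = sum(r["A"]) / len(r["A"])
mean_dev = sum(r["B"]) / len(r["B"])
```

Because the rare stimulus arrives with nearly full synaptic resources while the shared threshold is calibrated to the depressed standard, its response is disproportionately large, qualitatively reproducing the deviance-detection effect.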
4.3 Bayesian inference and free energy minimization in cultures
Strong evidence for predictive processing in cultured networks comes from studies demonstrating Bayesian inference capabilities. Isomura et al. (2015) showed that cortical neurons in culture could perform blind source separation using a microelectrode array (MEA) system. By delivering mixed stimuli containing distinct patterns, they demonstrated that rat cortical neurons could develop selective responses to specific stimulus aspects through Hebbian plasticity, distinguishing individual sources within the mixed inputs. This work provided initial support for free energy minimization in simplified neural circuits. Building on this foundation, Isomura and Friston (2018) explored how neuronal cultures perform inference about hidden causes in their sensory environment. By stimulating cortical neurons with probabilistic input patterns, they observed neurons developing functional specialization—selectively responding to certain hidden sources within mixed stimuli. This selective response pattern aligned with Bayesian inference under the free energy principle, as neurons refined their responses based on accumulated evidence regarding the sources generating their inputs. Recent work by Isomura et al. (2023) provided the most direct evidence yet by demonstrating that dissociated neuronal networks perform variational Bayesian inference. Using an MEA to deliver structured stimuli composed of two hidden sources, they observed that neuronal networks adapted their responses through synaptic adjustments, functioning as probabilistic beliefs about the sources. Notably, pharmacological manipulation of network excitability altered these “prior beliefs,” offering direct evidence for variational free energy minimization in simplified neural systems.
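The quantity at stake in these studies can be stated compactly. In standard formulations of the free energy principle, the variational free energy $F$ of a recognition density $q(s)$ over hidden states $s$, given observations $o$, upper-bounds sensory surprise and decomposes into complexity and accuracy terms:

```latex
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = \underbrace{D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s)\right]}_{\text{complexity}}
  \;-\; \underbrace{\mathbb{E}_{q(s)}\!\left[\ln p(o \mid s)\right]}_{\text{accuracy}}
  \;\geq\; -\ln p(o).
```

Minimizing $F$ with respect to $q(s)$ implements approximate Bayesian inference, which is the sense in which the synaptic adjustments observed by Isomura and colleagues can be read as updates to probabilistic beliefs about hidden sources.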
4.4 Linking prediction and memory formation
The relationship between prediction and memory formation has been illuminated by Lamberti et al. (2023), who demonstrated that focal electrical stimulation generates more effective long-term memory traces compared to global stimulation. Using detailed analysis of network responses, they showed that spatially specific activation patterns enhance the network's ability to predict future inputs. This suggests that localized stimulation allows networks to build more accurate predictive models through targeted synaptic modifications. Their follow-up study (Lamberti et al., 2024) provided mechanistic insights by revealing that NMDA receptor activity is crucial for stabilizing these memory traces and improving prediction, demonstrating how synaptic plasticity enables networks to build and refine their predictive models.
These findings demonstrate that even simplified neuronal networks can implement core aspects of predictive processing—from basic prediction error detection to sophisticated Bayesian inference. While the exact mechanisms may differ from intact brains, the evidence suggests that prediction is a fundamental feature of neural computation that can be studied effectively in reduced preparations. Understanding how these basic circuits implement prediction may inform both theories of brain function and development of artificial systems incorporating similar principles.
5 Goal-directed behavior
Dissociated neuronal cultures, when integrated with embodied systems, provide a powerful model for studying goal-directed behavior. These paradigms typically rely on closed-loop interactions, where the network's activity influences a virtual or physical environment, and feedback from that environment, in turn, shapes network activity and learning.
5.1 Pioneering embodied systems: the animat concept
Potter et al. (1997) pioneered this field by introducing the “Animat in a Petri Dish” concept, combining cultured neural networks with real-time computing environments. Using multi-electrode arrays (MEAs) and advanced imaging techniques, they established a paradigm where network activity controlled a simulated animal (“animat”) while receiving sensory feedback through electrical stimulation. This groundbreaking work demonstrated the potential for studying learning and memory in simplified neural networks through feedback-driven interaction with their environment.
DeMarse et al. (2001) built upon this foundation by demonstrating that cultured networks could control a simulated aircraft's pitch and roll in a virtual environment, showing that these cultures could learn to maintain flight stability over time. Potter et al. (2004) further advanced the field by introducing “Hybrots” (hybrid neural-robotic systems), where cultured networks served as “brains” for robotic systems. This approach addressed limitations of traditional in vitro systems by providing sensory inputs and motor outputs through closed-loop interaction.
5.2 Systematic training and analysis of goal-directed behavior
A systematic investigation of these systems emerged through a series of complementary studies. Chao et al. (2005) demonstrated that random background stimulation could stabilize synaptic weights after tetanization in both simulated and living networks, preventing spontaneous bursts from disrupting learned patterns. They developed novel analytical tools, further refined in Chao et al. (2007), including the Center of Activity Trajectory (CAT) to better detect and analyze network plasticity. This work provided the methodological foundation for more complex behavioral studies.
Chao et al. (2008) demonstrated how simulated neural networks could be shaped for adaptive, goal-directed behavior. Using leaky integrate-and-fire neurons inspired by cortical cultures, they created a closed-loop system where an animat learned to move and remain within specific target areas. Their work revealed several key principles: random background stimulation was crucial for maintaining network stability, successful adaptation required stimuli that evoked distinct network responses, and long-term plasticity through STDP was essential for learning. Building on these insights, Bakkum et al. (2008b) made the crucial advance of implementing these principles in living neural networks. Using multi-electrode arrays, they showed how real biological networks could be trained to perform goal-directed behavior through a structured combination of context-control probing sequences (CPS), patterned training stimulation (PTS), and random background stimulation (RBS). Their success in training cultures to guide an animat toward predefined areas demonstrated that biological neural circuits could be shaped for adaptive control in real-world applications, establishing a foundation for developing neuroprosthetics and therapeutic interventions. This work provided definitive evidence that living neuronal networks could be systematically trained to perform specific behaviors through carefully designed stimulation protocols.
5.3 Exploring network architecture and advanced computational paradigms
Tessadori et al. (2012) further explored modular network architectures, showing that hippocampal neurons divided into distinct compartments could enhance goal-directed behavior. Their virtual robot avoided obstacles in an arena by interfacing with the neuronal culture, with tetanic stimulation applied to reinforce successful movements. Modular networks exhibited more structured and selective neural activity, improving the robot's performance compared to random networks.
Recent advances have explored new computational paradigms in these systems. Masumori et al. (2020) introduced the concept of “neural autopoiesis,” showing how networks can regulate self-boundaries through stimulus avoidance behaviors. Their work revealed how networks adaptively distinguish between controllable and uncontrollable inputs, providing insights into neural self-organization and adaptation. Yada et al. (2021) demonstrated physical reservoir computing with FORCE learning in living neuronal cultures. Figure 4 illustrates this closed-loop system, where cortical neurons cultured on a microelectrode array (MEA) generate spiking activity processed via FORCE learning to create coherent signals. Figure 4A shows the system's design, including optical stimulation using a digital micromirror device (DMD) for feedback. Figure 4B demonstrates the robot navigation task, where neuronal activity controls a robot navigating through a maze toward a goal (highlighted in yellow), with electrical stimulation applied when obstacles are encountered. Feedback from the environment guides the robot's trajectory, highlighting how intrinsic neural dynamics, coupled with real-time learning algorithms, enable adaptive task performance. This work underscores the potential of embodied neuronal networks for solving goal-directed tasks without additional external learning mechanisms.

Figure 4. Closed-loop system for goal-directed behavior using a living neuronal culture. (A) Schematic representation of the closed-loop system. Cortical neurons cultured on a microelectrode array (MEA) generate spiking activity, which is recorded and processed via FORCE learning to create a coherent signal. For FORCE learning, the feedback to the neuronal network is provided via optical stimulation (using a digital micromirror device, DMD). (B) Robot navigation task. Representative trajectories of a robot in a maze with obstacles toward a designated goal (target zone highlighted in yellow) are shown. The robot's movements are controlled by neuronal activity, with FORCE learning enabling adaptive task performance. Electrical stimulation is applied when the robot hits an obstacle. Feedback from the environment—through optical and electrical stimulation—guides the robot's trajectory toward the goal. Figure adapted from Yada et al. (2021).
5.4 Recent advances in adaptive learning and complex task performance
The sophistication of tasks that in vitro networks can learn continues to advance, particularly with the implementation of more adaptive closed-loop feedback and the use of more complex culture systems. Kagan et al. (2022) made a significant advance by demonstrating that dissociated neuronal cultures could rapidly adapt to controlling a paddle in a simplified “Pong” game (often referred to as DishBrain). Using a high-density multi-electrode array (HD-MEA) with 26,400 electrodes, the system provided real-time feedback to neurons, which were able to adjust their firing patterns within minutes. Their latest work (Khajehnejad et al., 2024) compared the learning efficiency of biological neurons with deep reinforcement learning (RL) algorithms, revealing that neurons could learn faster in environments with limited training data, highlighting their unique adaptability. Building on this, Habibollahi et al. (2023), using a similar DishBrain setup, found that networks consistently tuned themselves closer to criticality during active gameplay with structured input compared to rest conditions, and importantly, that task-relevant feedback was crucial for learning, even when near-critical dynamics were present.
Recent preprints further demonstrate the potential for complex adaptive learning in in vitro systems. Chen et al. (2025) developed the “Multi-scale Adaptive In-vitro Sandbox” (MAIS) platform and successfully trained cortical cultures to exhibit strategic behaviors like “tit-for-tat” in simulated games through adaptive stimulation, while Robbins et al. (2024) showed that mouse cortical organoids embodied in closed-loop systems could learn goal-directed control in the “Cartpole” task. Together, these studies demonstrate that in vitro neural systems can acquire complex adaptive strategies when embedded in sufficiently interactive environments.
Moving forward, these works open new opportunities for exploring more complex tasks in embodied neural systems, though questions about the intelligence or sentience of these behaviors remain (Balci et al., 2023). Further research could involve more intricate feedback systems and multi-compartment setups to deepen our understanding of neuronal plasticity and prediction in embodied systems, with potential applications in neuroprosthetics, robotics, and bio-hybrid systems.
6 Insights for artificial neural networks and neuromorphic systems
6.1 The imperative for bio-inspired computing
Research into dissociated neuronal cultures has become increasingly relevant for designing neuromorphic computing systems that address traditional computing limitations. The scale of this challenge is striking: Marković et al. (2020) highlight that training a single state-of-the-art natural language processing model on conventional hardware consumes energy equivalent to running a human brain for 6 years. In contrast, biological neural networks perform complex computations with remarkable energy efficiency, requiring ~20 W for the entire human brain. Beyond energy savings, neuronal cultures offer a paradigm where computation and memory coexist within the same substrate, which may interface directly with biological systems (Gentili et al., 2024). The computational properties of neuronal cultures, detailed in earlier chapters—from their self-organization toward critical states optimizing information flow (Chapter 2), to their demonstrations of adaptability and learning (Chapter 3), deviance detection, and predictive coding (Chapter 4)—display capabilities crucial for efficient information processing, adaptation, and prediction, suggesting principles for artificial system design.
6.2 Temporal processing and reservoir computing principles in neural systems
Early studies revealed fundamental aspects of temporal processing in neural systems. Buonomano and Maass (2009) demonstrated how cortical networks process spatiotemporal information by encoding temporal sequences through transient activity patterns, highlighting how recurrent connections and short-term synaptic plasticity enable sequence recognition and prediction. Nikolić et al. (2009) revealed that neurons in the visual cortex retain fading memories of stimuli for several hundred milliseconds, supporting sequential processing. Later work by Enel et al. (2016) extended this by demonstrating reservoir computing properties in the prefrontal cortex, showing how high-dimensional dynamics allow adaptive decision-making through mixed selectivity, while Seoane (2019) examined reservoir computing from an evolutionary perspective.
Reservoir computing applications in neuronal cultures have revealed increasing sophistication in computational capabilities. Dockendorf et al. (2009) demonstrated that cultured networks could act as liquid state machines, effectively separating input patterns with high-frequency stimulation. Kubota et al. (2019) identified the echo state property in cultured networks, which is crucial for maintaining short-term memory and processing temporal information. Using high-density multielectrode arrays, they systematically tested various inter-pulse intervals (IPIs) and found that the optimal range, particularly between 20 and 30 ms, maximized reproducibility and differentiation of neural responses. Kubota et al. (2021b) expanded on this work by quantifying the networks' information processing capacity (IPC), a comprehensive metric capturing their computational versatility. Suwa et al. (2022) demonstrated that dissociated cortical cultures possess both first-order (linear memory of past inputs) and second-order (interactions of past inputs) IPC, enabling them to perform arithmetic and logical operations on previous stimuli. Ikeda et al. (2023) further refined these insights by investigating the dynamic interaction between evoked and spontaneous activities. These findings collectively underscore the potential of cultured networks to act as robust and adaptable computational substrates, providing critical benchmarks for designing bio-inspired computing architectures.
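The echo state property central to these studies can be demonstrated with a minimal software reservoir. The sketch below is a conventional artificial echo state network of our own construction, not the biological stimulation protocol: two runs started from different random states but driven by the same input sequence converge onto the same trajectory, which is precisely the input-dependent fading memory that makes temporal computation possible.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100                                          # reservoir size
W = rng.standard_normal((n, n))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius to 0.9
w_in = rng.standard_normal(n)                    # input weights

def run_reservoir(inputs, x0):
    """Drive the tanh reservoir with a scalar input sequence from state x0."""
    x = x0.copy()
    states = []
    for u in inputs:
        x = np.tanh(W @ x + w_in * u)            # recurrent state update
        states.append(x.copy())
    return np.array(states)

u = rng.standard_normal(300)                     # shared input sequence
s1 = run_reservoir(u, rng.standard_normal(n))    # two different initial states,
s2 = run_reservoir(u, rng.standard_normal(n))    # identical input drive
gap_start = np.linalg.norm(s1[0] - s2[0])
gap_end = np.linalg.norm(s1[-1] - s2[-1])        # initial-state difference washes out
```

In reservoir computing, only a linear readout of such states is trained; the biological analog treats the culture's evoked spiking as the high-dimensional state.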
The capacity of dissociated cultures to act as physical reservoirs for computation has been further solidified by recent work. Ferdous and Berdichevsky (2024) showed that these networks can reliably distinguish different spatiotemporal sequences of electrical stimuli, with this ability being dependent on recurrent dynamics creating a “fading memory,” explicitly linking this to reservoir computing principles. Iannello et al. (2025) introduced a “Biological Reservoir Computing” (BRC) paradigm where cultured hippocampal neurons successfully performed temporal pattern recognition tasks, including classifying spatiotemporal spike patterns and handwritten digits (N-MNIST) with high accuracy. These studies underscore the potential of harnessing living neuronal networks as computational substrates, leveraging their self-organized complexity for time-series data processing.
6.3 Learning rules and neuromorphic hardware design
Various approaches have emerged for implementing neural computation in artificial systems. Abbott et al. (2016) tackled challenges in building functional spiking networks, emphasizing stable excitation-inhibition balance and scalable training mechanisms. Learning strategies in artificial systems have also drawn from these findings: Diehl and Cook (2015) demonstrated unsupervised learning in spiking networks with STDP to classify MNIST digits with competitive accuracy, while Nicola and Clopath (2017) introduced FORCE training, stabilizing chaotic network dynamics to reproduce complex temporal sequences like oscillations and trajectories. Subramoney et al. (2021) proposed the “Learning-to-Learn” framework, enabling spiking neural networks to adapt rapidly to new tasks by leveraging meta-learning strategies. Ishikawa et al. (2024) integrated predictive coding principles with reservoir computing in spiking neural networks, advancing the capacity for dynamic temporal processing.
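Several of these approaches build on pair-based spike-timing-dependent plasticity, which can be written in a few lines. The sketch below is the generic textbook rule with arbitrary parameter values, not any specific paper's implementation: a synapse is potentiated when the presynaptic spike precedes the postsynaptic spike and depressed otherwise, with exponentially decaying temporal windows.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for a spike pair with
    dt = t_post - t_pre (in ms). Pre-before-post potentiates,
    post-before-pre depresses, with exponential time windows."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # long-term potentiation
    return -a_minus * math.exp(dt / tau)       # long-term depression

dw_ltp = stdp_dw(5.0)    # pre fires 5 ms before post: weight increases
dw_ltd = stdp_dw(-5.0)   # post fires 5 ms before pre: weight decreases
```

The slight asymmetry (`a_minus` larger than `a_plus`) is a common stabilizing choice, biasing uncorrelated activity toward depression so that weights do not grow without bound.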
A critical aspect of developing bio-inspired neuromorphic systems is the validation of computational models and hardware emulations against the complex dynamics observed in living neuronal networks. Pani et al. (2017) developed an FPGA-based platform capable of real-time simulation of large-scale spiking neural networks (Izhikevich models), successfully reproducing key electrophysiological features of in vitro cortical cultures, such as spontaneous bursting and stimulus responses. Such real-time hardware emulations are vital for Hardware-in-the-Loop (HIL) applications, potentially interfacing artificial networks with biological preparations. Furthering this comparative approach, Vallejo-Mancero et al. (2024) presented a three-way comparison of in vitro recordings, in silico simulations, and real-time FPGA-based (in duris silico) emulations, demonstrating that computational approaches can be tuned to faithfully replicate biological dynamics. These developments are crucial for creating robust neuromorphic hardware and bio-hybrid systems.
6.4 The constructive role of noise in neural computation
The presence of noise in biological neural systems represents not merely a challenge but often a crucial computational resource that enables energy-efficient processing. Unlike digital computers, biological networks can harness noise for computation. Early studies revealed fundamental principles: Matsumoto and Tsuda (1983) showed that noise can stabilize chaotic systems, while Kirkpatrick et al. (1983) introduced simulated annealing, showing that noise-driven optimization can solve complex combinatorial problems. Gassmann (1997) demonstrated noise-induced transitions between chaos and order, and Gammaitoni et al. (1998) showed how stochastic resonance could enhance weak signal detection. Anderson et al. (2000) revealed noise's role in maintaining visual contrast invariance. A comprehensive review by Faisal et al. (2008) documented noise's pervasive and often beneficial role throughout nervous systems. Subsequent work demonstrated specific computational advantages: Habenschuss et al. (2013) showed how cortical circuits harness noise for stochastic computation, and Maass (2014) established noise as a resource for learning in spiking networks. This noise-harnessing style of computation appears to be an evolutionary adaptation rather than a defect to be engineered away.
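Stochastic resonance, in particular, is easy to demonstrate. The sketch below is a textbook threshold-detector illustration, not taken from the cited studies, and all parameter values are arbitrary: a subthreshold sine wave never crosses the detection threshold on its own, too much noise makes detections nearly random, and an intermediate noise level makes the detector's output most informative about the signal.

```python
import numpy as np

def signal_correlation(noise_sd, threshold=1.0, n=20000, seed=0):
    """Correlation between a subthreshold sine wave and the binary output
    of a simple threshold detector, at a given additive-noise level."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    signal = 0.5 * np.sin(2 * np.pi * t / 100)        # peak 0.5 < threshold 1.0
    detected = (signal + rng.normal(0.0, noise_sd, n) > threshold).astype(float)
    if detected.std() == 0.0:
        return 0.0                                    # no crossings: no information
    return float(np.corrcoef(signal, detected)[0, 1])

corrs = {sd: signal_correlation(sd) for sd in (0.05, 0.4, 3.0)}
# too little noise: the signal is never detected; too much: detections are
# nearly random; an intermediate level yields the most informative output
```

The non-monotonic dependence on noise level is the signature of stochastic resonance, and it is this regime that the studies above propose biological circuits exploit.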
Recent studies have revealed specific mechanisms by which noise shapes neural computation. Ikeda et al. (2023) demonstrated that noise interacts with STDP to drive self-organized criticality in spiking neural networks. Ikeda et al. (2025) further revealed how noise-driven spontaneous activity serves broader computational functions, such as maintaining criticality and supporting memory consolidation. These findings suggest that incorporating controlled noise in neuromorphic systems might improve their adaptability and computational efficiency.
7 Conclusions and future directions
7.1 Recap: the power and utility of dissociated neuronal cultures
Dissociated neuronal cultures serve as powerful, simplified model systems for examining fundamental neural processes. As detailed in this review, these cultures exhibit complex dynamics characteristic of self-organized criticality and adaptive computation (Section 2), demonstrate learning and memory formation through various plasticity mechanisms (Section 3), and show predictive processing and deviance detection capabilities consistent with theoretical frameworks like the free energy principle (Section 4). They have also demonstrated the capacity for goal-directed behaviors in controlled, closed-loop environments, further illustrating their potential as computational models (Section 5). These discoveries not only enhance our understanding of biological neuronal function but also provide insights that could influence the design of future artificial neural networks and computational architectures, owing to the unique blend of simplicity, adaptability, and controllability these in vitro systems offer (Section 6).
7.2 Advancements in interfacing technologies
While our current understanding of dissociated neuronal cultures is robust, several avenues remain open for deepening our knowledge and refining the practical applications of these systems. Continued advancements in microelectrode array (MEA) technology are expected to enable more precise recordings and manipulations of neuronal activity. Improvements in the spatial and temporal resolution of MEAs may further clarify how specific patterns of connectivity and synaptic plasticity underlie adaptive computations and dynamic behavior in neuronal networks. Novel fabrication techniques are also emerging for MEAs designed to interface with more complex, three-dimensional in vitro systems. For instance, electrohydrodynamic inkjet printing allows for the rapid prototyping of 3D microelectrode arrays with micrometer-scale resolution, offering new possibilities for extracellular recording from within 3D cell cultures or organoids (Grob et al., 2021). Furthermore, the combination of high-density MEAs with cell-type-specific or single-neuron resolution optogenetics (Kobayashi et al., 2024) offers unprecedented capabilities to dissect the contributions of individual neurons to network-level phenomena and to understand how network states modulate single-cell information processing.
7.3 Engineered network architectures: compartmentalization and modularity
A significant direction involves engineering more structured in vitro networks to better mimic specific brain circuits and investigate inter-regional communication. Early microfabricated compartmentalized culture systems, exemplified by Taylor et al. (2003) and Ravula et al. (2007), demonstrated key capabilities such as achieving fluidic isolation for targeted drug application while allowing guided axonal growth, often integrated with MEAs. Bisio et al. (2014), for instance, demonstrated how modular networks grown on polydimethylsiloxane (PDMS) structures can exhibit higher firing rates during early development and display unique synchronization properties compared to uniform networks, shedding light on hierarchical organization. Building on this foundation, Joo and Nam (2019) introduced an agarose-based microwell patterning method, enabling the recording of slow-wave activity from micro-sized neural clusters while preserving high-frequency spiking information. Negri et al. (2020) refined protocols for multi-well MEA experiments—providing a spike-sorting pipeline and statistical methodologies to improve reproducibility—while also highlighting the importance of proper experimental design.
More recent work by Gladkov et al. (2017, 2021) and Duru et al. (2022) has further extended the engineering of biological neural networks by integrating microstructures with high-density CMOS arrays. These approaches confine axonal outgrowth to specific channels, creating reproducible unidirectional connectivity; for instance, Dupuit et al. (2023) showed that hippocampal neurons in dual-compartment microfluidic devices exhibited enhanced electrical activity and accelerated maturation. To achieve even more precise control, Ming et al. (2021) developed a device enabling unidirectional “en passant” synapses between micro 3D (μ3D) neuronal cultures. Brofiga et al. (2023) utilized removable polymeric masks to create MEA-based models of multiple interacting neuron clusters, and Brofiga et al. (2025) successfully co-cultured cortical, striatal, and thalamic neurons in a three-compartment system, demonstrating self-organization into a functionally connected Cortical-Striatal-Thalamic (CST) circuit with enhanced dynamic richness and memory properties. Finally, Sumi et al. (2023) revealed how increasing network modularity enhances reservoir computing performance in biological neuronal networks, enabling improved classification accuracy in both spatial and temporal tasks. These approaches allow for the investigation of how defined multi-cluster topologies and inter-regional communication influence emergent network dynamics.
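The reservoir-computing readout principle that Sumi et al. exploited in biological networks can be sketched with a conventional in silico echo state network (an illustrative stand-in, not their experimental pipeline): the recurrent network is left untrained and only a linear readout is fitted, here on a delayed-recall task. A modular variant could be obtained by masking the recurrent matrix into blocks; all parameters below are illustrative.

```python
import numpy as np

def esn_delay_task(n=200, delay=5, T=2000, seed=0):
    """Train a ridge readout on an untrained random reservoir to recall
    the input from `delay` steps earlier; returns test-set correlation."""
    rng = np.random.default_rng(seed)
    # Sparse random recurrent weights, rescaled to spectral radius 0.9
    W = rng.normal(0.0, 1.0, (n, n)) * (rng.random((n, n)) < 0.1)
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))
    w_in = rng.normal(0.0, 0.5, n)
    u = rng.uniform(-1.0, 1.0, T)            # random input stream
    x, X = np.zeros(n), np.empty((T, n))
    for t in range(T):
        x = np.tanh(W @ x + w_in * u[t])     # reservoir state update
        X[t] = x
    # Ridge regression readout: target at time t is u[t - delay]
    Xtr, ytr = X[delay:1500], u[:1500 - delay]
    w = np.linalg.solve(Xtr.T @ Xtr + 1e-4 * np.eye(n), Xtr.T @ ytr)
    ypred = X[1500:] @ w
    ytrue = u[1500 - delay:T - delay]
    return float(np.corrcoef(ypred, ytrue)[0, 1])

score = esn_delay_task()
```

Only the readout weights are learned; the reservoir itself, like a cultured network on an MEA, is taken as a fixed nonlinear dynamical system whose state transiently encodes recent inputs.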
7.4 Bridging to complexity: 3D cultures and brain organoids
Looking ahead, research into three-dimensional neuronal culture systems and brain organoids offers new opportunities to study how increased complexity within these in vitro models affects network organization and computation. By introducing additional layers of structural and functional complexity, researchers can investigate how hierarchical connectivity and layered processing influence predictive coding, learning, and memory. Such 3D cultures and organoids more closely mimic the architecture of in vivo brain tissue, potentially providing deeper insights into complex cognitive functions and developmental processes (Hogberg et al., 2013; Lancaster et al., 2013; Clevers, 2016; Smirnova and Hartung, 2022, 2024). Osaki et al. (2024) demonstrated a system in which two human cerebral organoids formed reciprocal axon bundles, developed more complex oscillatory activity and short-term plasticity than single or fused organoids, and matured toward critical dynamics. Hernandez et al. (2025) combined HD-MEAs with spatial transcriptomics to reveal how human organoids autonomously develop functional modules and hub-like structures. Furthermore, the integration of such advanced 3D models with closed-loop electrophysiology is enabling new paradigms; Robbins et al. (2024) demonstrated goal-directed learning in mouse cortical organoids performing a dynamic control task using reinforcement learning-guided training.
7.5 The synergy of experimentation and computational modeling
Future studies will likely explore how the principles uncovered in dissociated neuronal cultures generalize to more complex neural systems. While introducing 3D structures and organoids adds realism, it is the balance between complexity and controllability that makes these models so valuable. Researchers will need to maintain the simplicity that allows for precise control and manipulation, ensuring that the systems remain tractable for in-depth investigations of network function. By carefully scaling complexity, it is possible to examine how additional layers of organization and connectivity influence predictive processing and adaptive computation without losing the crucial benefits of simplicity.
There is also significant potential for an increased synergy between experimental neuroscience and computational modeling. As our ability to record and manipulate neuronal activity improves, so does our capacity to develop and refine computational models that can predict network behavior. These models can, in turn, guide experimental interventions, allowing researchers to probe network function more systematically. The development of dedicated hardware platforms, like FPGA-based systems for real-time simulation of spiking neural networks that can replicate in vitro dynamics (Pani et al., 2017; Vallejo-Mancero et al., 2024), will be vital for testing theoretical models rapidly and for creating future bio-hybrid systems. This iterative process between experimentation and modeling may help identify the principles underpinning self-organization, learning, and prediction in neural networks and aid in translating these insights into artificial systems. The development of integrated “sandbox” environments, such as the MAIS platform (Chen et al., 2025), which merge high-resolution interfaces with microfluidics and real-time adaptive closed-loop control, and embodied platforms like DishBrain (Kagan et al., 2022; Habibollahi et al., 2023; Khajehnejad et al., 2024), are pushing the boundaries of in vitro neuroscience, allowing for the study of learning algorithms and computational capacities in living networks.
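The closed-loop logic shared by these platforms (record, decode, stimulate, repeat) can be sketched with a toy firing-rate "culture" under integral feedback control. This is entirely illustrative and claims no resemblance to the MAIS or DishBrain implementations; the plant model, gain, and noise level are all assumptions.

```python
import numpy as np

def closed_loop_rate_control(target=5.0, steps=200, seed=0):
    """Toy closed loop: adjust a stimulation level so that a noisy
    firing rate converges to a target value. All parameters illustrative."""
    rng = np.random.default_rng(seed)
    rate, stim, gain = 0.0, 0.0, 0.2
    history = []
    for _ in range(steps):
        # toy 'culture': rate relaxes toward the stimulation level, plus noise
        rate += 0.5 * (stim - rate) + rng.normal(0.0, 0.2)
        # controller: integrate the error between target and recorded rate
        stim += gain * (target - rate)
        history.append(rate)
    return np.array(history)

h = closed_loop_rate_control()
```

Despite the ongoing noise, the integral action drives the recorded rate to the target and holds it there, the same principle that adaptive closed-loop neuromodulation applies to living networks.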
In summary, dissociated neuronal cultures remain an invaluable model system for exploring fundamental aspects of neuronal function and computation, particularly the mechanisms underlying self-organized prediction. They have proven essential in examining how networks self-organize, learn, and adapt, providing a simplified and controllable environment to study complex neural phenomena that underlie predictive processing. As researchers continue to balance the simplicity of these systems with increasing complexity—and leverage advanced interfacing and analytical techniques, including those inspired by Biological Reservoir Computing (Iannello et al., 2025)—our understanding will deepen further. These insights not only elucidate how biological brains function through prediction and adaptation but also inspire the next generation of computational architectures and neurotechnological applications, moving toward systems that may operate synergistically with living neural tissue.
Author contributions
AY: Conceptualization, Writing – original draft, Investigation, Methodology. ZZ: Investigation, Methodology, Writing – review & editing. DA: Investigation, Methodology, Writing – review & editing. TS: Investigation, Methodology, Project administration, Writing – review & editing, Funding acquisition. ZC: Writing – review & editing, Investigation, Methodology. HT: Conceptualization, Funding acquisition, Project administration, Supervision, Validation, Visualization, Writing – review & editing.
Funding
The author(s) declare that financial support was received for the research and/or publication of this article. This work was partly supported by the World Premier International Research Center Initiative (WPI) of MEXT, Japan, JSPS KAKENHI (23H03465, 23H03023, 23H04336, 23K14298, 24H01544, 24K20854, and 25H02600), AMED (24wm0625401h0001), JST (JPMJMS2296, JPMJPR22S8), the Asahi Glass Foundation, and the Secom Science and Technology Foundation.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declare that Gen AI was used in the creation of this manuscript. The author(s) acknowledge the use of ChatGPT (OpenAI, version 4o) for assistance in language editing.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Abbott, L. F., DePasquale, B., and Memmesheimer, R. M. (2016). Building functional networks of spiking model neurons. Nat. Neurosci. 19, 350–355. doi: 10.1038/nn.4241
Abbott, L. F., and Rohrkemper, R. (2007). A simple growth model constructs critical avalanche networks. Prog. Brain Res. 165, 13–19. doi: 10.1016/S0079-6123(06)65002-4
Anderson, J. S., Lampl, I., Gillespie, D. C., and Ferster, D. (2000). The contribution of noise to contrast invariance of orientation tuning in cat visual cortex. Science 290, 1968–1972. doi: 10.1126/science.290.5498.1968
Bak, P. (1996). How Nature Works: The Science of Self-Organized Criticality. New York, NY: Springer. doi: 10.1007/978-1-4757-5426-1
Bak, P., Tang, C., and Wiesenfeld, K. (1987). Self-organized criticality: an explanation of the 1/f noise. Phys. Rev. Lett. 59, 381–384. doi: 10.1103/PhysRevLett.59.381
Bakkum, D. J., Chao, Z. C., and Potter, S. M. (2008a). Long-term activity-dependent plasticity of action potential propagation delay and amplitude in cortical networks. PLoS ONE 3:e2088. doi: 10.1371/journal.pone.0002088
Bakkum, D. J., Chao, Z. C., and Potter, S. M. (2008b). Spatio-temporal electrical stimuli shape behavior of an embodied cortical network in a goal-directed learning task. J. Neural Eng. 5, 310–323. doi: 10.1088/1741-2560/5/3/004
Bakkum, D. J., Frey, U., Radivojevic, M., Russell, T. L., Müller, J., Fiscella, M., et al. (2013). Tracking axonal action potential propagation on a high-density microelectrode array across hundreds of sites. Nat. Commun. 4:2181. doi: 10.1038/ncomms3181
Balci, F., Ben Hamed, S., Boraud, T., Bouret, S., Brochier, T., Brun, C., et al. (2023). A response to claims of emergent intelligence and sentience in a dish. Neuron 111, 604–605. doi: 10.1016/j.neuron.2023.02.009
Ballini, M., Muller, J., Livi, P., Chen, Y., Frey, U., Stettler, A., et al. (2014). A 1024-channel CMOS microelectrode array with 26,400 electrodes for recording and stimulation of electrogenic cells in vitro. IEEE J. Solid State Circuits 49, 2705–2719. doi: 10.1109/JSSC.2014.2359219
Baruchi, I., Volman, V., Raichman, N., Shein, M., and Ben-Jacob, E. (2008). The emergence and properties of mutual synchronization in in vitro coupled cortical networks. Euro. J. Neurosci. 28, 1825–1835. doi: 10.1111/j.1460-9568.2008.06487.x
Bastos, A. M., Usrey, W. M., Adams, R. A., Mangun, G. R., Fries, P., and Friston, K. J. (2012). Canonical microcircuits for predictive coding. Neuron 76, 695–711. doi: 10.1016/j.neuron.2012.10.038
Beggs, J. M., and Plenz, D. (2003). Neuronal avalanches in neocortical circuits. J. Neurosci. 23, 11167–11177. doi: 10.1523/JNEUROSCI.23-35-11167.2003
Beggs, J. M., and Timme, N. (2012). Being critical of criticality in the brain. Front. Physiol. 3:163. doi: 10.3389/fphys.2012.00163
Berdondini, L., Van Der Wal, P. D., Guenat, O., De Rooij, N. F., Koudelka-Hep, M., Seitz, P., et al. (2005). High-density electrode array for imaging in vitro electrophysiological activity. Biosens. Bioelectron. 21, 167–174. doi: 10.1016/j.bios.2004.08.011
Bilder, R. M., and Knudsen, K. S. (2014). Creative cognition and systems biology on the edge of chaos. Front. Psychol. 5:1104. doi: 10.3389/fpsyg.2014.01104
Bisio, M., Bosca, A., Pasquale, V., Berdondini, L., and Chiappalone, M. (2014). Emergence of bursting activity in connected neuronal sub-populations. PLoS ONE 9:e107400. doi: 10.1371/journal.pone.0107400
Brofiga, M., Callegari, F., Cerutti, L., Tedesco, M., and Massobrio, P. (2025). Cortical, striatal, and thalamic populations self-organize into a functionally connected circuit with long-term memory properties. Biosens. Bioelectron. 267:116840. doi: 10.1016/j.bios.2024.116840
Brofiga, M., Losacco, S., Poggio, F., Zerbo, R. A., Milanese, M., Massobrio, P., et al. (2023). Multiple neuron clusters on micro-electrode arrays as an in vitro model of brain network. Sci. Rep. 13, 1–15. doi: 10.1038/s41598-023-42168-0
Buonomano, D. V., and Maass, W. (2009). State-dependent computations: spatiotemporal processing in cortical networks. Nat. Rev. Neurosci. 10, 113–125. doi: 10.1038/nrn2558
Cai, H., Ao, Z., Tian, C., Wu, Z., Liu, H., Tchieu, J., et al. (2023). Brain organoid reservoir computing for artificial intelligence. Nat. Electron. 6, 1032–1039. doi: 10.1038/s41928-023-01069-w
Chao, Z. C., Bakkum, D. J., and Potter, S. M. (2007). Region-specific network plasticity in simulated and living cortical networks: comparison of the center of activity trajectory (CAT) with other statistics. J. Neural Eng. 4, 294–308. doi: 10.1088/1741-2560/4/3/015
Chao, Z. C., Bakkum, D. J., and Potter, S. M. (2008). Shaping embodied neural networks for adaptive goal-directed behavior. PLoS Comput. Biol. 4:e1000042. doi: 10.1371/journal.pcbi.1000042
Chao, Z. C., Bakkum, D. J., Wagenaar, D. A., and Potter, S. M. (2005). Effects of random external background stimulation on network synaptic stability after tetanization: a modeling study. Neuroinformatics 3:263. doi: 10.1385/NI:3:3:263
Chen, H., Chen, F., Chen, X., Liu, Y., Xu, J., Li, J., et al. (2025). MAIS: an in-vitro sandbox enables adaptive neuromodulation via scalable neural interfaces. bioRxiv [Preprint]. doi: 10.1101/2025.03.15.641656
Chialvo, D. R. (2010). Emergent complex neural dynamics. Nat. Phys. 6, 744–750. doi: 10.1038/nphys1803
Clevers, H. (2016). Modeling development and disease with organoids. Cell 165, 1586–1597. doi: 10.1016/j.cell.2016.05.082
DeMarse, T. B., Wagenaar, D. A., Blau, A. W., and Potter, S. M. (2001). The neurally controlled animat: biological brains acting with simulated bodies. Auton. Robots 11, 305–310. doi: 10.1023/A:1012407611130
Dias, I., Levers, M. R., Lamberti, M., Hassink, G. C., Van Wezel, R., and Le Feber, J. (2021). Consolidation of memory traces in cultured cortical networks requires low cholinergic tone, synchronized activity and high network excitability. J. Neural Eng. 18:046051. doi: 10.1088/1741-2552/abfb3f
Diehl, P. U., and Cook, M. (2015). Unsupervised learning of digit recognition using spike-timing-dependent plasticity. Front. Comput. Neurosci. 9:149773. doi: 10.3389/fncom.2015.00099
Dockendorf, K. P., Park, I., He, P., Príncipe, J. C., and DeMarse, T. B. (2009). Liquid state machines and cultured cortical networks: the separation property. BioSystems 95, 90–97. doi: 10.1016/j.biosystems.2008.08.001
Dranias, M. R., Ju, H., Rajaram, E., and VanDongen, A. M. J. (2013). Short-term memory in networks of dissociated cortical neurons. J. Neurosci. 33, 1940–1953. doi: 10.1523/JNEUROSCI.2718-12.2013
Dupuit, V., Briançon-Marjollet, A., and Delacour, C. (2023). Portrait of intense communications within microfluidic neural networks. Sci. Rep. 13, 1–13. doi: 10.1038/s41598-023-39477-9
Duru, J., Küchler, J., Ihle, S. J., Forró, C., Bernardi, A., Girardin, S., et al. (2022). Engineered biological neural networks on high density CMOS microelectrode arrays. Front. Neurosci. 16:829884. doi: 10.3389/fnins.2022.829884
Enel, P., Procyk, E., Quilodran, R., and Dominey, P. F. (2016). Reservoir computing properties of neural dynamics in prefrontal cortex. PLoS Comput. Biol. 12:e1004967. doi: 10.1371/journal.pcbi.1004967
Estévez-Priego, E., Moreno-Fina, M., Monni, E., Kokaia, Z., Soriano, J., and Tornero, D. (2023). Long-term calcium imaging reveals functional development in hiPSC-derived cultures comparable to human but not rat primary cultures. Stem Cell Reports 18, 205–219. doi: 10.1016/j.stemcr.2022.11.014
Eytan, D., Brenner, N., and Marom, S. (2003). Selective adaptation in networks of cortical neurons. J. Neurosci. 23, 9349–9356. doi: 10.1523/JNEUROSCI.23-28-09349.2003
Faisal, A. A., Selen, L. P. J., and Wolpert, D. M. (2008). Noise in the nervous system. Nat. Rev. Neurosci. 9, 292–303. doi: 10.1038/nrn2258
Ferdous, Z. I., and Berdichevsky, Y. (2024). Temporal information encoding in isolated cortical networks. bioRxiv [Preprint]. doi: 10.1101/2024.09.25.614992
Frey, U., Sanchez-Bustamante, C. D., Ugniwenko, T., Heer, F., Sedivy, J., Hafizovic, S., et al. (2007). “Cell recordings with a CMOS high-density microelectrode array,” in Annual International Conference of the IEEE Engineering in Medicine and Biology – Proceedings (Piscataway, NJ: IEEE), 167–170. doi: 10.1109/IEMBS.2007.4352249
Friedman, N., Ito, S., Brinkman, B. A. W., Shimono, M., Deville, R. E. L., Dahmen, K. A., et al. (2012). Universal critical dynamics in high resolution neuronal avalanche data. Phys. Rev. Lett. 108:208102. doi: 10.1103/PhysRevLett.108.208102
Friedrich, J., Zhou, P., and Paninski, L. (2017). Fast online deconvolution of calcium imaging data. PLoS Comput. Biol. 13:e1005423. doi: 10.1371/journal.pcbi.1005423
Friston, K. (2010). The free-energy principle: a unified brain theory? Nat. Rev. Neurosci. 11, 127–138. doi: 10.1038/nrn2787
Friston, K., Kilner, J., and Harrison, L. (2006). A free energy principle for the brain. J. Physiol. Paris 100, 70–87. doi: 10.1016/j.jphysparis.2006.10.001
Gammaitoni, L., Hänggi, P., Jung, P., and Marchesoni, F. (1998). Stochastic resonance. Rev. Mod. Phys. 70:223. doi: 10.1103/RevModPhys.70.223
Gassmann, F. (1997). Noise-induced chaos-order transitions. Phys. Rev. E 55:2215. doi: 10.1103/PhysRevE.55.2215
Gentili, P. L., Zurlo, M. P., and Stano, P. (2024). Neuromorphic engineering in wetware: the state of the art and its perspectives. Front. Neurosci. 18:1443121. doi: 10.3389/fnins.2024.1443121
Gladkov, A., Pigareva, Y., Kolpakov, V., Mukhina, I., Bukatin, A., Kazantsev, V., et al. (2021). “Bursting activity interplay in modular neural networks in vitro,” in Proceedings - 3rd International Conference “Neurotechnologies and Neurointerfaces”, CNN 2021 (Piscataway, NJ: IEEE), 23–25. doi: 10.1109/CNN53494.2021.9580356
Gladkov, A., Pigareva, Y., Kutyina, D., Kolpakov, V., Bukatin, A., Mukhina, I., et al. (2017). Design of cultured neuron networks in vitro with predefined connectivity using asymmetric microfluidic channels. Sci. Rep. 7, 1–14. doi: 10.1038/s41598-017-15506-2
Grob, L., Rinklin, P., Zips, S., Mayer, D., Weidlich, S., Terkan, K., et al. (2021). Inkjet-printed and electroplated 3d electrodes for recording extracellular signals in cell culture. Sensors 21:3981. doi: 10.3390/s21123981
Gross, G. W., Rieske, E., Kreutzberg, G. W., and Meyer, A. (1977). A new fixed-array multi-microelectrode system designed for long-term monitoring of extracellular single unit neuronal activity in vitro. Neurosci. Lett. 6, 101–105. doi: 10.1016/0304-3940(77)90003-9
Habenschuss, S., Jonke, Z., and Maass, W. (2013). Stochastic computations in cortical microcircuit models. PLoS Comput. Biol. 9:e1003311. doi: 10.1371/journal.pcbi.1003311
Habibollahi, F., Kagan, B. J., Burkitt, A. N., and French, C. (2023). Critical dynamics arise during structured information presentation within embodied in vitro neuronal networks. Nat. Commun. 14, 1–13. doi: 10.1038/s41467-023-41020-3
Harrison, R. G. (1910). The outgrowth of the nerve fiber as a mode of protoplasmic movement. J. Exp. Zool. 9, 787–846. doi: 10.1002/jez.1400090405
Hennequin, G., Agnes, E. J., and Vogels, T. P. (2017). Inhibitory plasticity: balance, control, and codependence. Annu. Rev. Neurosci. 40, 557–579. doi: 10.1146/annurev-neuro-072116-031005
Hernandez, S., Schweiger, H. E., Cline, I., Kaurala, G. A., Robbins, A., Solis, D., et al. (2025). Self-organizing neural networks in organoids reveal principles of forebrain circuit assembly. bioRxiv [Preprint]. doi: 10.1101/2025.05.01.651773
Hernández-Navarro, L., Hermoso-Mendizabal, A., Duque, D., de la Rocha, J., and Hyafil, A. (2021). Proactive and reactive accumulation-to-bound processes compete during perceptual decisions. Nat. Commun. 12, 1–15. doi: 10.1038/s41467-021-27302-8
Hogberg, H. T., Bressler, J., Christian, K. M., Harris, G., Makri, G., O'Driscoll, C., et al. (2013). Toward a 3D model of human brain development for studying gene/environment interactions. Stem Cell Res. Ther. 4, 1–7. doi: 10.1186/scrt365
Huang, Y., and Rao, R. P. N. (2011). Predictive coding. Wiley Interdiscip. Rev. Cogn. Sci. 2, 580–593. doi: 10.1002/wcs.142
Hyvärinen, T., Hyysalo, A., Kapucu, F. E., Aarnos, L., Vinogradov, A., Eglen, S. J., et al. (2019). Functional characterization of human pluripotent stem cell-derived cortical networks differentiated on laminin-521 substrate: comparison to rat cortical cultures. Sci. Rep. 9, 1–15. doi: 10.1038/s41598-019-53647-8
Iannello, L., Ciampi, L., Lagani, G., Tonelli, F., Crocco, E., Calcagnile, L. M., et al. (2025). From neurons to computation: biological reservoir computing for pattern recognition. arXiv [Preprint]. arXiv:2505.03510. doi: 10.48550/arXiv.2505.03510
Ikeda, M., Akita, D., and Takahashi, H. (2025). Emergent functions of noise-driven spontaneous activity: homeostatic maintenance of criticality and memory consolidation. arXiv [Preprint]. arXiv:2502.10946. doi: 10.48550/arXiv.2502.10946
Ikeda, N., Akita, D., and Takahashi, H. (2023). Noise and spike-time-dependent plasticity drive self-organized criticality in spiking neural network: toward neuromorphic computing. Appl. Phys. Lett. 123:023701. doi: 10.1063/5.0152633
Ikeda, N., and Takahashi, H. (2021). Learning in dissociated neuronal cultures by low-frequency stimulation. IEEJ Trans. Electron. Inf. Syst. 141, 654–660. doi: 10.1541/ieejeiss.141.654
Ishikawa, Y., Shinkawa, A. T., Sumi, T., Kato, A. H., Yamamoto, H., and Katori, Y. (2024). Integrating predictive coding with reservoir computing in spiking neural network model of cultured neurons. Nonlinear Theory Appl. IEICE 15, 432–442. doi: 10.1587/nolta.15.432
Isomura, T., and Friston, K. (2018). In vitro neural networks minimise variational free energy. Sci. Rep. 8:16926. doi: 10.1038/s41598-018-35221-w
Isomura, T., Kotani, K., and Jimbo, Y. (2015). Cultured cortical neurons can perform blind source separation according to the free-energy principle. PLoS Comput. Biol. 11:e1004643. doi: 10.1371/journal.pcbi.1004643
Isomura, T., Kotani, K., Jimbo, Y., and Friston, K. J. (2023). Experimental validation of the free-energy principle with in vitro neural networks. Nat. Commun. 14, 1–15. doi: 10.1038/s41467-023-40141-z
Jimbo, Y., Robinson, H., and Kawana, A. (1998). Strengthening of synchronized activity by tetanic stimulation in cortical cultures: application of planar electrode arrays. IEEE Trans. Biomed. Eng. 45, 1297–1304. doi: 10.1109/10.725326
Jimbo, Y., Tateno, T., and Robinson, H. P. C. (1999). Simultaneous induction of pathway-specific potentiation and depression in networks of cortical neurons. Biophys. J. 76, 670–678. doi: 10.1016/S0006-3495(99)77234-6
Joo, S., and Nam, Y. (2019). Slow-wave recordings from micro-sized neural clusters using multiwell type microelectrode arrays. IEEE Trans. Biomed. Eng. 66, 403–410. doi: 10.1109/TBME.2018.2843793
Ju, H., Dranias, M. R., Banumurthy, G., and Vandongen, A. M. J. (2015). Spatiotemporal memory is an intrinsic property of networks of dissociated cortical neurons. J. Neurosci. 35, 4040–4051. doi: 10.1523/JNEUROSCI.3793-14.2015
Kagan, B. J., Kitchen, A. C., Tran, N. T., Habibollahi, F., Khajehnejad, M., Parker, B. J., et al. (2022). In vitro neurons learn and exhibit sentience when embodied in a simulated game-world. Neuron 110, 3952–3969.e8. doi: 10.1016/j.neuron.2022.09.001
Kamioka, H., Maeda, E., Jimbo, Y., Robinson, H. P. C., and Kawana, A. (1996). Spontaneous periodic synchronized bursting during formation of mature patterns of connections in cortical cultures. Neurosci. Lett. 206, 109–112. doi: 10.1016/S0304-3940(96)12448-4
Kayama, A., Yada, Y., and Takahashi, H. (2019). Development of network structure and synchronized firing patterns in dissociated culture of neurons. Electron. Commun. Japan 102, 3–11. doi: 10.1002/ecj.12199
Keller, G. B., and Mrsic-Flogel, T. D. (2018). Predictive processing: a canonical cortical computation. Neuron 100, 424–435. doi: 10.1016/j.neuron.2018.10.003
Kern, F. B., and Chao, Z. C. (2023). Short-term neuronal and synaptic plasticity act in synergy for deviance detection in spiking networks. PLoS Comput. Biol. 19:e1011554. doi: 10.1371/journal.pcbi.1011554
Kern, F. B., Date, T., and Chao, Z. C. (2024). Effects of spatial constraints of inhibitory connectivity on the dynamical development of criticality in spiking networks. bioRxiv [Preprint]. doi: 10.1101/2024.12.04.626902
Khajehnejad, M., Habibollahi, F., Paul, A., Razi, A., and Kagan, B. J. (2024). Biological neurons compete with deep reinforcement learning in sample efficiency in a simulated gameworld. arXiv [Preprint]. arXiv:2405.16946v1. doi: 10.48550/arXiv.2405.16946
Kirkpatrick, S., Gelatt, C. D., and Vecchi, M. P. (1983). Optimization by simulated annealing. Science 220, 671–680. doi: 10.1126/science.220.4598.671
Kobayashi, T., Shimba, K., Narumi, T., Asahina, T., Kotani, K., and Jimbo, Y. (2024). Revealing single-neuron and network-activity interaction by combining high-density microelectrode array and optogenetics. Nat. Commun. 15, 1–13. doi: 10.1038/s41467-024-53505-w
Kossio, F. Y. K., Goedeke, S., Van Den Akker, B., Ibarz, B., and Memmesheimer, R. M. (2018). Growing critical: self-organized criticality in a developing neural system. Phys. Rev. Lett. 121:058301. doi: 10.1103/PhysRevLett.121.058301
Kubota, T., Nakajima, K., and Takahashi, H. (2019). “Echo state property of neuronal cell cultures,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 11731 LNCS (Cham: Springer), 137–148. doi: 10.1007/978-3-030-30493-5_13
Kubota, T., Sakurayama, K., Shiramatsu, T. I., and Takahashi, H. (2021a). Deviance detection property in dissociated cultures of neurons. IEEJ Trans. Electron. Inf. Syst. 141, 661–667. doi: 10.1541/ieejeiss.141.661
Kubota, T., Takahashi, H., and Nakajima, K. (2021b). Unifying framework for information processing in stochastically driven dynamical systems. Phys. Rev. Res. 3:043135. doi: 10.1103/PhysRevResearch.3.043135
Kuśmierz, Ł., Ogawa, S., and Toyoizumi, T. (2020). Edge of chaos and avalanches in neural networks with heavy-tailed synaptic weight distribution. Phys. Rev. Lett. 125:028101. doi: 10.1103/PhysRevLett.125.028101
Lamberti, M., Tripathi, S., van Putten, M. J. A. M., Marzen, S., and le Feber, J. (2023). Prediction in cultured cortical neural networks. PNAS Nexus 2:pgad188. doi: 10.1093/pnasnexus/pgad188
Lamberti, M., van Putten, M. J. A. M., Marzen, S., and le Feber, J. (2024). The role of NMDA receptors in memory and prediction in cultured neural networks. bioRxiv [Preprint]. doi: 10.1101/2024.02.01.578348
Lancaster, M. A., Renner, M., Martin, C. A., Wenzel, D., Bicknell, L. S., Hurles, M. E., et al. (2013). Cerebral organoids model human brain development and microcephaly. Nature 501, 373–379. doi: 10.1038/nature12517
Le Feber, J., Postma, W., de Weerd, E., Weusthof, M., and Rutten, W. L. C. (2015). Barbed channels enhance unidirectional connectivity between neuronal networks cultured on multi electrode arrays. Front. Neurosci. 9:159060. doi: 10.3389/fnins.2015.00412
Le Feber, J., Rutten, W. L. C., Stegenga, J., Wolters, P. S., Ramakers, G. J. A., and Van Pelt, J. (2007). Conditional firing probabilities in cultured neuronal networks: a stable underlying structure in widely varying spontaneous activity patterns. J. Neural Eng. 4, 54–67. doi: 10.1088/1741-2560/4/2/006
Le Feber, J., Stegenga, J., and Rutten, W. L. C. (2010). The effect of slow electrical stimuli to achieve learning in cultured networks of rat cortical neurons. PLoS ONE 5:e8871. doi: 10.1371/journal.pone.0008871
Le Feber, J., Stoyanova, I. I., and Chiappalone, M. (2014). Connectivity, excitability and activity patterns in neuronal networks. Phys. Biol. 11:036005. doi: 10.1088/1478-3975/11/3/036005
Levina, A., Herrmann, J. M., and Geisel, T. (2007). Dynamical synapses causing self-organized criticality in neural networks. Nat. Phys. 3, 857–860. doi: 10.1038/nphys758
Liu, B., and Buonomano, D. V. (2025). Ex vivo cortical circuits learn to predict and spontaneously replay temporal patterns. Nat. Commun. 16, 1–13. doi: 10.1038/s41467-025-58013-z
Maass, W. (2014). Noise as a resource for computation and learning in networks of spiking neurons. Proc. IEEE 102, 860–880. doi: 10.1109/JPROC.2014.2310593
Maeda, E., Robinson, H. P. C., and Kawana, A. (1995). The mechanisms of generation and propagation of synchronized bursting in developing networks of cortical neurons. J. Neurosci. 15, 6834–6845. doi: 10.1523/JNEUROSCI.15-10-06834.1995
Marković, D., Mizrahi, A., Querlioz, D., and Grollier, J. (2020). Physics for neuromorphic computing. Nat. Rev. Phys. 2, 499–510. doi: 10.1038/s42254-020-0208-2
Marom, S., and Shahaf, G. (2002). Development, learning and memory in large random networks of cortical neurons: lessons beyond anatomy. Q. Rev. Biophys. 35, 63–87. doi: 10.1017/S0033583501003742
Masumori, A., Sinapayen, L., Maruyama, N., Mita, T., Bakkum, D., Frey, U., et al. (2020). Neural autopoiesis: organizing self-boundaries by stimulus avoidance in biological and artificial neural networks. Artif. Life 26, 130–151. doi: 10.1162/artl_a_00314
Matsumoto, K., and Tsuda, I. (1983). Noise-induced order. J. Stat. Phys. 31, 87–106. doi: 10.1007/BF01010923
Mazzoni, A., Broccard, F. D., Garcia-Perez, E., Bonifazi, P., Ruaro, M. E., and Torre, V. (2007). On the dynamics of the spontaneous activity in neuronal networks. PLoS ONE 2:e439. doi: 10.1371/journal.pone.0000439
Millman, D., Mihalas, S., Kirkwood, A., and Niebur, E. (2010). Self-organized criticality occurs in non-conservative neuronal networks during ‘up' states. Nat. Phys. 6, 801–805. doi: 10.1038/nphys1757
Ming, Y., Abedin, M. J., Tatic-Lucic, S., and Berdichevsky, Y. (2021). Microdevice for directional axodendritic connectivity between micro 3D neuronal cultures. Microsyst. Nanoeng. 7, 1–13. doi: 10.1038/s41378-021-00292-9
Montalà-Flaquer, M., López-León, C. F., Tornero, D., Houben, A. M., Fardet, T., Monceau, P., et al. (2022). Rich dynamics and functional organization on topographically designed neuronal networks in vitro. iScience 25:105680. doi: 10.1016/j.isci.2022.105680
Müller, J., Ballini, M., Livi, P., Chen, Y., Radivojevic, M., Shadmani, A., et al. (2015). High-resolution CMOS MEA platform to study neurons at subcellular, cellular, and network levels. Lab Chip 15, 2767–2780. doi: 10.1039/C5LC00133A
Muñoz, M. A. (2018). Colloquium: criticality and dynamical scaling in living systems. Rev. Mod. Phys. 90:031001. doi: 10.1103/RevModPhys.90.031001
Murphy, T. H., Blatter, L. A., Wier, W. G., and Baraban, J. M. (1992). Spontaneous synchronous synaptic calcium transients in cultured cortical neurons. J. Neurosci. 12:4834. doi: 10.1523/JNEUROSCI.12-12-04834.1992
Negri, J., Menon, V., and Young-Pearse, T. L. (2020). Assessment of spontaneous neuronal activity in vitro using multi-well multi-electrode arrays: implications for assay development. eNeuro 7:ENEURO.0080-19.2019. doi: 10.1523/ENEURO.0080-19.2019
Neto, J. P., Paul Spitzner, F., and Priesemann, V. (2022). Sampling effects and measurement overlap can bias the inference of neuronal avalanches. PLoS Comput. Biol. 18:e1010678. doi: 10.1371/journal.pcbi.1010678
Nicola, W., and Clopath, C. (2017). Supervised learning in spiking neural networks with FORCE training. Nat. Commun. 8, 1–15. doi: 10.1038/s41467-017-01827-3
Nigam, S., Shimono, M., Ito, S., Yeh, F. C., Timme, N., Myroshnychenko, M., et al. (2016). Rich-club organization in effective connectivity among cortical neurons. J. Neurosci. 36, 670–684. doi: 10.1523/JNEUROSCI.2177-15.2016
Nikolić, D., Häusler, S., Singer, W., and Maass, W. (2009). Distributed fading memory for stimulus properties in the primary visual cortex. PLoS Biol. 7:e1000260. doi: 10.1371/journal.pbio.1000260
Obien, M. E. J., Deligkaris, K., Bullmann, T., Bakkum, D. J., and Frey, U. (2015). Revealing neuronal function through microelectrode array recordings. Front. Neurosci. 9:423. doi: 10.3389/fnins.2014.00423
Okujeni, S., and Egert, U. (2019). Self-organization of modular network architecture by activity-dependent neuronal migration and outgrowth. Elife 8:e47996. doi: 10.7554/eLife.47996.031
Okujeni, S., Kandler, S., and Egert, U. (2017). Mesoscale architecture shapes initiation and richness of spontaneous network activity. J. Neurosci. 37, 3972–3987. doi: 10.1523/JNEUROSCI.2552-16.2017
Opitz, T., De Lima, A. D., and Voigt, T. (2002). Spontaneous development of synchronous oscillatory activity during maturation of cortical networks in vitro. J. Neurophysiol. 88, 2196–2206. doi: 10.1152/jn.00316.2002
Orlandi, J. G., Soriano, J., Alvarez-Lacalle, E., Teller, S., and Casademunt, J. (2013). Noise focusing and the emergence of coherent activity in neuronal cultures. Nat. Phys. 9, 582–590. doi: 10.1038/nphys2686
Osaki, T., Duenki, T., Chow, S. Y. A., Ikegami, Y., Beaubois, R., Levi, T., et al. (2024). Complex activity and short-term plasticity of human cerebral organoids reciprocally connected with axons. Nat. Commun. 15, 1–13. doi: 10.1038/s41467-024-46787-7
Pani, D., Meloni, P., Tuveri, G., Palumbo, F., Massobrio, P., and Raffo, L. (2017). An FPGA platform for real-time simulation of spiking neuronal networks. Front. Neurosci. 11:90. doi: 10.3389/fnins.2017.00090
Pasquale, V., Massobrio, P., Bologna, L. L., Chiappalone, M., and Martinoia, S. (2008). Self-organization and neuronal avalanches in networks of dissociated cortical neurons. Neuroscience 153, 1354–1369. doi: 10.1016/j.neuroscience.2008.03.050
Petermann, T., Thiagarajan, T. C., Lebedev, M. A., Nicolelis, M. A. L., Chialvo, D. R., and Plenz, D. (2009). Spontaneous cortical activity in awake monkeys composed of neuronal avalanches. Proc. Natl. Acad. Sci. U.S.A. 106, 15921–15926. doi: 10.1073/pnas.0904089106
Pine, J. (1980). Recording action potentials from cultured neurons with extracellular microcircuit electrodes. J. Neurosci. Methods 2, 19–31. doi: 10.1016/0165-0270(80)90042-4
Pine, J. (2006). “A history of MEA development,” in Advances in Network Electrophysiology: Using Multi-Electrode Arrays, eds. M. Taketani and M. Baudry (Boston, MA: Springer), 3–23. doi: 10.1007/0-387-25858-2_1
Plenz, D., Ribeiro, T. L., Miller, S. R., Kells, P. A., Vakili, A., and Capek, E. L. (2021). Self-organized criticality in the brain. Front. Phys. 9:639389. doi: 10.3389/fphy.2021.639389
Potter, S. M., DeMarse, T., Bakkum, D., Booth, M. C., Brumfield, J., Chao, Z. C., et al. (2004). “Hybrots: hybrids of living neurons and robots for studying neural computation,” in Proceedings of Brain Inspired Cognitive Systems (Stirling: University of Stirling), 1–15.
Potter, S. M., and DeMarse, T. B. (2001). A new approach to neural cell culture for long-term studies. J. Neurosci. Methods 110, 17–24. doi: 10.1016/S0165-0270(01)00412-5
Potter, S. M., Fraser, S. E., and Pine, J. (1997). “Animat in a petri dish: cultured neural networks for studying neural computation,” in Proceedings of 4th Joint Symposium on Neural Computation, UCSD (San Diego, CA: University of California), 167–174.
Potter, S. M., Wagenaar, D. A., Madhavan, R., and DeMarse, T. B. (2003). Long-term bidirectional neuron interfaces for robotic control, and in vitro learning studies. Annu. Int. Confer. IEEE Eng. Med. Biol. Proc. 4, 3690–3693. doi: 10.1109/IEMBS.2003.1280959
Rao, R. P. N., and Ballard, D. H. (1999). Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nat. Neurosci. 2, 79–87. doi: 10.1038/4580
Ravula, S. K., Wang, M. S., Asress, S. A., Glass, J. D., and Bruno Frazier, A. (2007). A compartmented neuronal culture system in microdevice format. J. Neurosci. Methods 159, 78–85. doi: 10.1016/j.jneumeth.2006.06.022
Robbins, A., Schweiger, H. E., Hernandez, S., Spaeth, A., Voitiuk, K., Parks, D. F., et al. (2024). Goal-directed learning in cortical organoids. bioRxiv [Preprint]. doi: 10.1101/2024.12.07.627350
Ruaro, M. E., Bonifazi, P., and Torre, V. (2005). Toward the neurocomputer: image processing and pattern recognition with neuronal cultures. IEEE Trans. Biomed. Eng. 52, 371–383. doi: 10.1109/TBME.2004.842975
Sadeh, S., and Clopath, C. (2020). Inhibitory stabilization and cortical computation. Nat. Rev. Neurosci. 22, 21–37. doi: 10.1038/s41583-020-00390-z
Schroeter, M. S., Charlesworth, P., Kitzbichler, M. G., Paulsen, O., and Bullmore, E. T. (2015). Emergence of rich-club topology and coordinated dynamics in development of hippocampal functional networks in vitro. J. Neurosci. 35, 5459–5470. doi: 10.1523/JNEUROSCI.4259-14.2015
Segev, R., Shapira, Y., Benveniste, M., and Ben-Jacob, E. (2001). Observations and modeling of synchronized bursting in two-dimensional neural networks. Phys. Rev. E 64:011920. doi: 10.1103/PhysRevE.64.011920
Seoane, L. F. (2019). Evolutionary aspects of reservoir computing. Philos. Trans. Royal Soc. B 374:20180377. doi: 10.1098/rstb.2018.0377
Shahaf, G., and Marom, S. (2001). Learning in networks of cortical neurons. J. Neurosci. 21, 8782–8788. doi: 10.1523/JNEUROSCI.21-22-08782.2001
Shew, W. L., and Plenz, D. (2013). The functional benefits of criticality in the cortex. Neuroscientist 19, 88–100. doi: 10.1177/1073858412445487
Shew, W. L., Yang, H., Petermann, T., Roy, R., and Plenz, D. (2009). Neuronal avalanches imply maximum dynamic range in cortical networks at criticality. J. Neurosci. 29, 15595–15600. doi: 10.1523/JNEUROSCI.3864-09.2009
Smirnova, L., Caffo, B. S., Gracias, D. H., Huang, Q., Morales Pantoja, I. E., Tang, B., et al. (2023). Organoid intelligence (OI): the new frontier in biocomputing and intelligence-in-a-dish. Front. Sci. 1:1017235. doi: 10.3389/fsci.2023.1017235
Smirnova, L., and Hartung, T. (2022). Neuronal cultures playing Pong: first steps toward advanced screening and biological computing. Neuron 110, 3855–3856. doi: 10.1016/j.neuron.2022.11.010
Smirnova, L., and Hartung, T. (2024). The promise and potential of brain organoids. Adv. Healthc. Mater. 13:2302745. doi: 10.1002/adhm.202302745
Soriano, J. (2023). Neuronal cultures: exploring biophysics, complex systems, and medicine in a dish. Biophysica 3, 181–202. doi: 10.3390/biophysica3010012
Soriano, J., Martínez, M. R., Tlusty, T., and Moses, E. (2008). Development of input connections in neural cultures. Proc. Natl. Acad. Sci. U.S.A. 105, 13758–13763. doi: 10.1073/pnas.0707492105
Stetter, O., Battaglia, D., Soriano, J., and Geisel, T. (2012). Model-free reconstruction of excitatory neuronal connectivity from calcium imaging signals. PLoS Comput. Biol. 8:e1002653. doi: 10.1371/journal.pcbi.1002653
Subramoney, A., Scherr, F., and Maass, W. (2021). “Reservoirs learn to learn,” in Reservoir Computing: Theory, Physical Implementations, and Applications, Natural Computing Series, eds. K. Nakajima and I. Fischer (Singapore: Springer), 59–76. doi: 10.1007/978-981-13-1687-6_3
Sumi, T., Yamamoto, H., Katori, Y., Ito, K., Moriya, S., Konno, T., et al. (2023). Biological neurons act as generalization filters in reservoir computing. Proc. Natl. Acad. Sci. U.S.A. 120:e2217008120. doi: 10.1073/pnas.2217008120
Suwa, E., Kubota, T., Ishida, N., and Takahashi, H. (2022). Information processing capacity of dissociated culture of cortical neurons. IEEJ Trans. Electron. Inf. Syst. 142, 578–585. doi: 10.1541/ieejeiss.142.578
Tanaka, G., Yamane, T., Héroux, J. B., Nakane, R., Kanazawa, N., Takeda, S., et al. (2019). Recent advances in physical reservoir computing: a review. Neural Netw. 115, 100–123. doi: 10.1016/j.neunet.2019.03.005
Taylor, A. M., Rhee, S. W., Tu, C. H., Cribbs, D. H., Cotman, C. W., and Jeon, N. L. (2003). Microfluidic multicompartment device for neuroscience research. Langmuir 19, 1551–1556. doi: 10.1021/la026417v
Tessadori, J., Bisio, M., Martinoia, S., and Chiappalone, M. (2012). Modular neuronal assemblies embodied in a closed-loop environment: toward future integration of brains and machines. Front. Neural Circuits 6:99. doi: 10.3389/fncir.2012.00099
Tetzlaff, C., Okujeni, S., Egert, U., Wörgötter, F., and Butz, M. (2010). Self-organized criticality in developing neuronal networks. PLoS Comput. Biol. 6:e1001013. doi: 10.1371/journal.pcbi.1001013
Thomas, C. A., Springer, P. A., Loeb, G. E., Berwald-Netter, Y., and Okun, L. M. (1972). A miniature microelectrode array to monitor the bioelectric activity of cultured cells. Exp. Cell Res. 74, 61–66. doi: 10.1016/0014-4827(72)90481-8
Tibau, E., Ludl, A. A., Rudiger, S., Orlandi, J. G., and Soriano, J. (2020). Neuronal spatial arrangement shapes effective connectivity traits of in vitro cortical networks. IEEE Trans. Netw. Sci. Eng. 7, 435–448. doi: 10.1109/TNSE.2018.2862919
Tibau, E., Valencia, M., and Soriano, J. (2013). Identification of neuronal network properties from the spectral analysis of calcium imaging signals in neuronal cultures. Front. Neural Circuits 7:199. doi: 10.3389/fncir.2013.00199
Vallejo-Mancero, B., Faci-Lázaro, S., Zapata, M., Soriano, J., and Madrenas, J. (2024). Real-time hardware emulation of neural cultures: a comparative study of in vitro, in silico and in duris silico models. Neural Netw. 179:106593. doi: 10.1016/j.neunet.2024.106593
Van Pelt, J., Wolters, P., Corner, M., Rutten, W., and Ramakers, G. (2004). Long-term characterization of firing dynamics of spontaneous bursts in cultured neural networks. IEEE Trans. Biomed. Eng. 51, 2051–2062. doi: 10.1109/TBME.2004.827936
Van Vreeswijk, C., and Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 274, 1724–1726. doi: 10.1126/science.274.5293.1724
Vogels, T. P., Sprekeler, H., Zenke, F., Clopath, C., and Gerstner, W. (2011). Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science 334, 1569–1573. doi: 10.1126/science.1211095
Wagenaar, D. A., Madhavan, R., Pine, J., and Potter, S. M. (2005). Controlling bursting in cortical cultures with closed-loop multi-electrode stimulation. J. Neurosci. 25, 680–688. doi: 10.1523/JNEUROSCI.4209-04.2005
Wagenaar, D. A., Pine, J., and Potter, S. M. (2006). An extremely rich repertoire of bursting patterns during the development of cortical cultures. BMC Neurosci. 7:11. doi: 10.1186/1471-2202-7-11
Yada, Y., Mita, T., Sanada, A., Yano, R., Kanzaki, R., Bakkum, D. J., et al. (2017). Development of neural population activity toward self-organized criticality. Neuroscience 343, 55–65. doi: 10.1016/j.neuroscience.2016.11.031
Yada, Y., Yasuda, S., and Takahashi, H. (2021). Physical reservoir computing with FORCE learning in a living neuronal culture. Appl. Phys. Lett. 119:180501. doi: 10.1063/5.0064771
Yaghoubi, M., De Graaf, T., Orlandi, J. G., Girotto, F., Colicos, M. A., and Davidsen, J. (2018). Neuronal avalanche dynamics indicates different universality classes in neuronal cultures. Sci. Rep. 8, 1–11. doi: 10.1038/s41598-018-21730-1
Yaghoubi, M., Orlandi, J. G., Colicos, M. A., and Davidsen, J. (2024). Criticality and universality in neuronal cultures during “up” and “down” states. Front. Neural Circuits 18:1456558. doi: 10.3389/fncir.2024.1456558
Yamamoto, H., Moriya, S., Ide, K., Hayakawa, T., Akima, H., Sato, S., et al. (2018). Impact of modular organization on dynamical richness in cortical networks. Sci. Adv. 4:eaau4914. doi: 10.1126/sciadv.aau4914
Yamamoto, H., Spitzner, F. P., Takemuro, T., Buendía, V., Murota, H., Morante, C., et al. (2023). Modular architecture facilitates noise-driven control of synchrony in neuronal networks. Sci. Adv. 9:eade1755. doi: 10.1126/sciadv.ade1755
Yaron, A., Hershenhoren, I., and Nelken, I. (2012). Sensitivity to complex statistical regularities in rat auditory cortex. Neuron 76, 603–615. doi: 10.1016/j.neuron.2012.08.025
Zamora-López, G., Chen, Y., Deco, G., Kringelbach, M. L., and Zhou, C. (2016). Functional complexity emerging from anatomical constraints in the brain: the significance of network modularity and rich-clubs. Sci. Rep. 6, 1–18. doi: 10.1038/srep38424
Zhang, Z., Yaron, A., Akita, D., Shiramatsu, T. I., Chao, Z. C., and Takahashi, H. (2025). Deviance detection and regularity sensitivity in dissociated neuronal cultures. arXiv [Preprint]. arXiv:2502.20753. doi: 10.48550/arXiv.2502.20753
Keywords: dissociated neuronal cultures, predictive coding, self-organized criticality, neuromorphic computing, goal-directed behavior, free energy principle
Citation: Yaron A, Zhang Z, Akita D, Shiramatsu TI, Chao ZC and Takahashi H (2025) Dissociated neuronal cultures as model systems for self-organized prediction. Front. Neural Circuits 19:1568652. doi: 10.3389/fncir.2025.1568652
Received: 30 January 2025; Accepted: 09 June 2025;
Published: 25 June 2025.
Edited by: Takao K. Hensch, Harvard University, United States

Reviewed by: Bin Zhi Li, Chinese Academy of Sciences (CAS), China; Jordi Soriano-Fradera, University of Barcelona, Spain
Copyright © 2025 Yaron, Zhang, Akita, Shiramatsu, Chao and Takahashi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Hirokazu Takahashi, takahashi@i.u-tokyo.ac.jp