Your new experience awaits. Try the new design now and help us make it even better

ORIGINAL RESEARCH article

Front. Artif. Intell., 19 December 2025

Sec. Machine Learning and Artificial Intelligence

Volume 8 - 2025 | https://doi.org/10.3389/frai.2025.1689727

Self-evolving cognitive substrates through metabolic data processing and recursive self-representation with autonomous memory prioritization mechanisms

  • VMC MAR COM Inc. DBA Axiomera, Knoxville, TN, United States

Introduction: Conventional artificial intelligence (AI) systems are limited by static architectures that require periodic retraining and fail to adapt efficiently to continuously changing data environments. To address this limitation, this research introduces a novel biologically inspired computing paradigm that supports perpetual learning through continuous data assimilation and autonomous structural evolution. The proposed system aims to emulate biological cognition, enabling lifelong learning, self-repair, and adaptive evolution without human intervention.

Methods: The system is built upon dynamic cognitive substrates that continuously absorb and map real-time information streams. These substrates eliminate the traditional distinction between training and inference phases, supporting uninterrupted learning. Quantum-inspired uncertainty management ensures computational robustness, while biomimetic self-healing protocols maintain structural integrity during adaptive changes. Additionally, micro-optimization via fractal propagation enhances mathematical specialization across hierarchical computational levels. Recursive learning mechanisms allow the architecture to refine its functionality based on its own outputs.

Results: Experimental validation demonstrates that the proposed architecture sustains effective learning across diverse, heterogeneous data domains. The system autonomously restructures itself, maintaining stability while improving performance in dynamic environments. Specialized cognitive processing units, analogous to biological organs, perform distinct functions and collectively enhance adaptive intelligence. Notably, the system prioritizes and retains valuable information through evolution, reflecting biological memory consolidation patterns.

Discussion: The findings reveal that continuous, self-modifying AI architectures can outperform traditional models in non-stationary conditions. By integrating quantum uncertainty control, biomimetic repair mechanisms, and fractal-based optimization, the system achieves resilient, autonomous learning over time. This approach has far-reaching implications for developing lifelong-learning machines capable of dynamic adaptation, self-maintenance, and evolution paving the way toward fully autonomous, continuously learning artificial organisms.

Highlights

• Cognitive Substrate: The dynamic computational foundation supporting information processing and structural evolution

• Metabolic Processing: Computation modeled as biochemical pathways with concentration gradients

• Recursive Self-Representation: Internal models enabling autonomous self-modification

• Fractal Propagation: Self-similar optimization across architectural scales

• Quantum-Inspired Uncertainty Management: Probabilistic state handling during structural transitions

• Autonomous Memory Prioritization: Dynamic information valuation without centralized control

1 Introduction

Current AI systems are like static blueprints—they learn once and then stop, requiring complete retraining when new information arrives. Our research introduces AI that works more like a living brain: it continuously learns from new experiences, repairs itself when errors occur, and reorganizes its own structure to become more efficient. We achieve this by mimicking three biological principles: (1) metabolic processing, where computation follows energy-efficient pathways like chemical reactions in cells, (2) self-awareness, where the system monitors and improves its own performance, and (3) smart memory, where important information is automatically preserved while less useful data fades away. Testing across multiple domains shows this approach learns continuously without forgetting old knowledge, uses 67% less energy than traditional systems, and can adapt to new tasks without human intervention. This represents a step toward truly autonomous artificial intelligence.

Contemporary AI systems have fundamental constraints that limit their deployment in adaptive real-world settings. Advances in deep learning architectures have been remarkable. However, existing AI paradigms with static computation structures require retraining from time to time. Subsequently, they have limited adaptability to newly changing distributions. Moreover, they do not have autonomous self-modification capabilities to remain sustained in complex domains (Thompson et al., 2025; Kumar et al., 2025). The limitations are especially visible in applications that need to learn continuously, adapt in real time, and operate autonomously without human intervention. Artificial systems seek to emulate biological intelligence, which shows significant self-organization, metabolic efficiency and recursive self-improvement that is not characteristic of current-day machine learning systems (Yang et al., 2025; Fotowat et al., 2025a). Natural cognitive systems undergo dynamic reconfigurations of their substrates, top-down selection of memories, and bottom-up emergence of specialization via metabolism. These suggest fundamentally different computational forms from those used in AI currently. Furthermore, recently proposed self-organizing neural architectures have shown promise toward autonomous cognitive systems. The adjustment of the growing self-assembling neural networks made by Plantec et al. (2024) is capable of performing structural and synaptic plasticity through changes in activity. On the other hand, Fotowat et al. (2025b) revealed a self-organizing neural network within a new biological assembly with emergent cognitive properties in operations not previously contemplated in evolution. In a remarkable study, Harrison et al. (2025) developed deep reinforcement learning controllers for postural control systems that self-organize. Additionally, ran hybrid simulations using self-organizing principles and graph neural networks for adaptive manipulation of an object. Researches demonstrate that autonomous behavior can be exhibited for structural adaptation to achieve desired performance. However, existing works are either specific to certain domains or lack a unifying theoretical framework for evolving a general-purpose cognitive substrate. Research on continual learning is motivated by the challenge posed by catastrophic interference to neural networks. The work of Wang et al. (2025) describes hybrid neural networks that are inspired by corticohippocampal circuits with enhanced continual learning based on two representation systems. In 2025, Qu et al. (2025) provide a thorough analysis of recent advances in continual learning for computer vision. Meanwhile, Almeida Silva et al. (2024) provided a survey on continuous deep learning for incremental learning scenario in 2024. In recent years, biological computing techniques have emerged as credible alternatives to neural computation. Pandi et al. (2019) were the pioneers of metabolic perceptrons for the purpose of neural computing in biological systems and thereby proved that biological circuits, also known as metabolic circuits, can do analog computation. Researchers Oyarzún et al. devised a model that may enhance the predictions of genome-scale metabolic models. In addition, Halužan Vasle and Moškon (2024) outlined synthetic biological neural networks’ future perspectives. Self-evolving systems incorporate various technologies to reduce human interaction. The work of Xu et al. (2025) describes agentic memory systems for large language models that organize their memories using graphs dynamically. Hong and He (2025) devised cross-attention networks for improved memory recall in generative agents, while Bhan (2025) studied autonomous memory management techniques that strike a balance between retention and forgetting mechanisms. Spens and Burgess (2024) provided generative models about construction and consolidation of memories explaining hippocampal-neocortical interaction. Neural computation with quantum-inspired practices allows for new levels of uncertainty management. The research work of Michielsen et al. (2025) quantum-cognitive neural networks to assess confidence levels in choices. Neuromorphic computation attempts to emulate the complexity and efficiency of the brain using silicon-based devices, and it is used in robotics and other applications. Beer et al. (2020) showed how to train deep quantum neural networks. These quantum-inspired approaches handle uncertainty smartly, but they still connect them with the organism’s metabolic processes and the adjustments it makes of its own structure.

Recursive self-improvement is a promising candidate for an approach to artificial general intelligence. The paper Self-Iterating AI (2025) examines the mechanisms of recursively self-improving AI. Systems-theoretical approaches to agentic AI are presented by Abel et al. (2025). Chen et al. (2025) reviewed recent technological innovations in autonomous systems and strategic implementation issues. The present recursive improvement approaches emphasize parameter modifications instead of the evolution of the substrate and metabolic integration. Fractal patterns can help build efficient neural networks. Gagnon (2024) presents a fractal-based connectivity in spherical spiking neural networks which allows for better initialization and resource efficiency. In 2025, Abdulla and Mahipal Reddy devised evolutionary optimization approaches for fractal neural approximation. Sayan (2025) studied the architectures of fractal neural network. Zhang et al., 2025 study showcased that fractal complex networks can be reconstructed using model-based techniques. Fractal designs produce efficient structures but fail to integrate metabolic processes and autonomous cognitive evolution.

Thorough reading of all the research pertaining to Artificial Intelligence done till date makes it clear there is no integrated system which could be assembled which 1 day has the capacity to process metabolic data, form recursive self-representation and autonomously prioritize memory within the dynamically changing cognitive substrates. Although individual elements—self-organization, continuous learning, metabolic computation, memory management, quantum-inspired processing, recursive improvement, and fractal—have been well understood, no current framework combines these components into a unified cognitive architecture that can evolve at the substrate level autonomously. The existing solutions suffer from several major limitations: (1) static architecture restrictions that hinder structural shifts. (2) separation of training and inference phases that prevents continuous adjustment. (3) lack of metabolic integration for optimal resource use and self-healing. (4) absence of autonomous memory prioritization. (5) no recursive self-representation. And (6) limited uncertainty management in dynamic environments.

The gap identified is being addressed in this research using self-evolving cognitive substrates which process metabolic data through recursive self-representation and mechanism of autonomous memory prioritization. We will implement cognitive architectures that evolve at the substrate level as used in biological intelligence. Essentially, we are moving beyond the traditional data-centric approach in machine learning. The quantum inspired uncertainty management along with the continuous evolution of biomechanical protocols provides a framework to model varying systems. The growth mechanisms of fractals allow for an optimization at the micro-level that enhances specialization at the macro-level. Moreover, the use of recursive learning (on the part of the fractal) enables autonomous functional modification based on system output. The design features one-of-a-kind processors that work similarly to our body parts. It has developed in a way that it stores what is important. It does not give much importance to data that is not crucial. This study contributes to autonomous artificial intelligence in many major ways.

1. We develop the first framework that integrates the processing of a neuro-metabolic signal, self-representation through recursive architectures, and vehicle-specific memory prioritization in a dynamic cognitive substrate.

2. The metabolic concept is often considered beneath modern computing and AI. However, corporate systems contain metabolic functions. The project will develop new methods to integrate biological metabolic principles into neural computation, enabling resource-efficient autonomous self-healing.

3. We build methods that allow learning to happen continuously without a fixed training-inference separate phase with which systems have to learn and give an output.

4. We employ quantum-driven protocols for managing uncertainty which preserve the stability of the computations throughout the continuous evolving process of the structure.

5. Introducing fractal-based optimization mechanisms that allow micro-level adaptations with a view to optimizing system performance at the macro level.

6. We create self-regulating memory systems that autonomously prioritize information and optimize retention strategies.

7. We devise mechanisms whereby systems might amend their own functional parameters via recursive analysis of their own outputs (recursive self-representation).

Our framework for self-evolving cognitive substrates is outlined into five sections in this paper. Part two will give a theoretical background to metabolic data processing and recursive self-representation mechanisms. We will provide the mathematical equations here as well as the biological assumptions. The proposed approach will be discussed in the section 3. It will describe the architectural design and implementation of our cognitive substrate framework by understanding quantum-inspired uncertainty management, biomimetic self-healing protocol, fractal propagation optimizations, and autonomous memory prioritization. The experimental results are shown in section 4 and extensive validation is presented in heterogeneous data domains. The models effectiveness for sustaining learning performance, computational stability and performance analysis against traditional architectures are shown in this section. Section 5 concludes with a comprehensive discussion of the implications for autonomous system development, limitations of the current approach, and future research directions in advancing artificial general intelligence (Li et al., 2025; Maksymov, 2025; Mompó Alepuz et al., 2024).

2 Theoretical foundations

The biological principles, quantum computational principles, and recursive self-modification principles have given rise to the various theoretical bases of self-evolving cognitive substrates. In contrast to traditional AI systems, which function with fixed architectural constraints, we create dynamic computational substrates that evolve over time by processing metabolic data and self-modifying their architecture.

According to Pandi et al. (2019), biological cognitive systems display significant efficiency via metabolic integration. Neural computation is fundamentally intertwined with energetic optimization processes. This ability allows biological systems to sustain computational robustness even when the structure is continually changing, something not found in artificial neural networks. The computational core of our minds rests on a biological foundation. Thus, our brains compute through leveraging cheap metabolic pathways, not costly matrix multiplications. Recent advances in continual learning architectures (Parisi et al., 2019) and neuromorphic computing paradigms (Roy et al., 2019) provide additional context for understanding adaptive cognitive systems. Hierarchical cognitive models (Drigǎs and Bakola, 2021; Drigǎs et al., 2017) offer complementary perspectives on multilayer information processing that inform our substrate design principles.

The mathematics behind our approach is based on equations for dynamic substrate evolution that governs continually evolving cognitive architectures. Let S(t) be the cognitive substrate state at time t that has structural parameters Θ(t), metabolic state M(t) and memories’ configuration μ(t). The substrate evolution follows the differential Equation (1).

dS ( t ) / dt = F ( S ( t ) , X ( t ) , R ( t ) )     (1)

where X(t) the input data stream. R(t) the recursive self-representation feedback. F the metabolic transformation function integrates structural adaptation with information processing. The function F of a metabolically transformed system implements quantum-inspired uncertainty principles that control escalating computational instability in continuous evolution. Following the ideas of quantum neural networks (Beer et al.) we realize probabilistic transitions between states, all while retaining quantum coherence and allowing for structural plasticity. The uncertainty management mechanism uses principles of entropy minimization that enable substrate evolution toward configurations that optimize computational efficiency and learning capacity. The key process of autonomous cognitive evolution is recursive self-representation. The system uses internal models of its own computations to evaluate and change its own functional characteristics based on the evaluation of its performance. This self-reference ability allows you to change structure first rather than only changing parameter values as done in meta-learning methods. The recursive representation, R(t), contains information about the current cognitive state and the changes made in the past, which enables the present system to make more informed decisions about future evolution. Mechanisms of fractal propagation provide the math of micro-optimizations improving macro-performance. Using self-similar structure principles, the system uses fractal algorithms that locally optimize computational units at several distinct scales simultaneously (Gagnon, 2024). The hierarchical optimization ensures improved local changes do not affect the entire cognitive architecture and global status coherence remains stable. Self-similar transformation applies the same optimization rule across all architectural scales. For example, a rule R such as “reduce redundant connections” transforms into scaled versions R’ at the module level and R” at the unit level, ensuring that micro-level optimizations coherently enhance macro-level performance without inter-scale conflicts. The organization of the memory framework allows for the calculation of information value and retention. Our technique implements adaptive prioritization algorithms which evaluate the usefulness of information for cognitive performance and future learning on a continual basis, unlike fixed memory allocation schemes (Xu et al., 2025). The memory prioritization function combines temporal decay models with a relevance-weighted retention mechanism so that useful information is retained while less useful information is forgotten.

Probabilistic state superposition operates as follows: when the system considers a structural modification, it maintains multiple candidate configurations simultaneously, each weighted by its probability of success. For instance, if four potential modifications are evaluated with probabilities 0.2, 0.4, 0.3, and 0.1, the system explores all pathways in parallel before collapsing to the highest-performing configuration after validation.

3 Proposed method

The self-evolving cognitive substrate architecture we propose combines the processing of metabolic information, recursive and self-representational memory, as well as autonomous memory prioritization in a unified computational architecture. The system functions not through training but by continuously modifying its substrate, enabling the system to adapt continuously to changing data environments, while ensuring that the computational process remains stable, and the learning process continues to be effective. The cognitive substrate consists of processing units organized in a metabolically-integrated hierarchy that mimics biological neurons. The metabolic computation pathways that each processing unit implements link information processing with energy optimization, in line with the biological computing shown by Oyarzún et al. (2023). The substrate preserves dynamic connectivity patterns which evolve through structural plasticity mechanisms that optimize the substrate for particular computational demands. The architectural design utilizes quantum-inspired management protocols that preserve computational coherence despite continuous structural modifications. The protocols implement state transitions based on quantum superposition principles, allowing the system to explore several evolutionary paths simultaneously while keeping stable primary channels (Vallverdú and Rius, 2025). The uncertainty management framework allows major evolutionary adjustments at a meaningful cognitive gain, while simultaneously preventing serious structural failures (Figure 1).

Multi-scale data management.

Input: Data stream X(t), substrate state S(t)
Output: Updated substrate S(t + 1)
1. Micro-level processing:
 - For each processing unit i:
 * Compute metabolic gradient: g_i = ∇M_i(X(t))
 * Apply local optimization: θ_i ← θ_i + <italic>α</italic>·g_i* Update unit state: u_i(t + 1) = f_metabolic(u_i(t), g_i)
2. Meso-level coordination:
 - Evaluate fractal propagation: P = FractalPropagate(Δθ)
 - Apply self-similar transformations across scales
 - Synchronize inter-unit connections
3. Macro-level integration:
 - Compute global performance: <italic>Φ</italic> = Performance(S(t))- Generate recursive feedback: R(t) = SelfRepresent(Φ, S(t))
 - Update architectural parameters: <italic>Θ</italic>(t + 1) = Evolve(Θ(t), R(t))
4. Memory prioritization:
 - Compute value scores: v_j = PriorityScore(memory_j, Φ)
 - Update retention: <italic>μ</italic>(t + 1) = AdaptiveRetain(μ(t), v)
Return: S(t + 1) = {Θ(t + 1), M(t + 1), μ(t + 1)}

Figure 1
Flowchart illustrating a complex process with multiple components: Input Data Stream, Metabolic Processing Units, Recursive Analysis, Self-Representation, Quantum Uncertainty Management, Memory Prioritization, Structural Evolution, Fractal Propagation, and Performance Feedback. Arrows and symbols show relationships, with a central Integration Hub. A legend clarifies elements like data flow and integration links.

Figure 1. Cognitive substrate architecture overview.

Symbol definitions for Algorithm 1:

- gi: Metabolic gradient for processing unit i.

- Mi: Metabolic state function of unit i.

- θi: Local parameters of unit i.

- α: Learning rate for local optimization.

- ui(t): State of unit i at time t.

- fmetabolic: Metabolic transformation function.

- P: Fractal propagation matrix.

- Δθ: Parameter changes to propagate.

- Φ: Global performance metric.

- R(t): Recursive self-representation feedback.

- Θ(t): Global architectural parameters.

- vj: Value score for memory j.

- μ(t): Memory configuration at time t.

Our implementation combines five core methodologies: (1) Metabolic data processing transforms information through concentration gradient mechanisms rather than matrix operations, reducing computational costs by modeling neural computation as biochemical pathways. (2) Quantum-inspired uncertainty management uses probabilistic state superposition to evaluate modifications before implementation, preventing catastrophic changes. (3) Biomimetic self-healing employs redundancy protocols that detect and repair structural degradation. (4) Fractal propagation optimization applies self-similar transformation rules to propagate local improvements across architectural scales. (5) Autonomous memory prioritization dynamically computes information value using temporal relevance and predictive utility models, enabling adaptive retention without centralized control. These methods operate in concert through the integrated architecture described below.

The system employs hybrid management combining local autonomy with global coordination. Local management operates at the processing unit level, where metabolic pathways make autonomous decisions about resource allocation and structural modifications based on immediate computational demands. Global management functions through the recursive self-representation mechanism, which monitors overall cognitive performance and establishes architectural constraints that guide local evolution. This hybrid approach ensures local units can rapidly adapt to changing data patterns while maintaining global coherence through constraint propagation. The fractal optimization framework serves as the bridge, translating global objectives into local transformation rules that preserve self-similarity across scales.

The multi-scale architecture operates through hierarchical data management spanning three distinct levels. At the micro-level, metabolic processing units handle local computations using concentration gradient mechanisms. The meso-level fractal propagation layer coordinates transformations across scales through self-similar patterns, ensuring local optimizations propagate efficiently. The macro-level manages global architectural evolution through recursive feedback R(t) and structural parameters Θ(t). Bidirectional information flow enables bottom-up aggregation of improvements and top-down propagation of constraints. The autonomous memory prioritization system operates across all three levels, dynamically computing information value and influencing retention strategies at each scale. This hierarchical organization ensures coordinated evolution while maintaining local autonomy and computational stability. Figure 2 illustrates this integrated multi-scale management framework.

Figure 2
Diagram titled

Figure 2. Multi-scale data management architecture.

Implementation Framework: The proposed architecture comprises four functional modules operating in concert: (1) Metabolic Processing Module implements concentration gradient-based computation using differential equations modeling biochemical pathways (Equations 2–4 in section 2), (2) Recursive Self-Representation Module maintains internal performance models through meta-cognitive monitoring (Algorithm 1), (3) Quantum-Inspired Uncertainty Management Module handles probabilistic state transitions during structural modifications, and (4) Autonomous Memory Prioritization Module computes information value dynamically using temporal-relevance scoring. These modules interact through the unified substrate evolution framework (Equation 1), enabling coordinated adaptation. While this work presents the theoretical foundation and architectural design, full technical implementation details including hyperparameter specifications and training protocols are reserved for subsequent engineering-focused publications.

The metabolic framework for data processing rewrites neural computation as metabolic pathways that integrate information processing and resource optimization. Each metabolic unit embeds analog computation mechanisms that are similar to what Pandi et al. (2019) described, where transformation of information occurs through reaction rates and gradients of concentrations rather than using matrix multiplications. This helps to improve computational efficiency considerably and enable continuous adaptations. The metabolic processing units work through dynamic enzyme-like functions that modulate the conversion of the data set based on current substrate requirements and past performance patterns. The functions put into effect mechanisms for the adaptability of the activated functional units. This means that they are capable of reacting both to local features of the information that are subject to processing and to global resources that are required to implement other various cognitive talents on demand. To keep the system computationally efficient while structurally evolving endlessly, the metabolic integration in the system helps deal with the fundamental adaptation versus stability trade-off. The metabolic processing framework allows for the flow of data through it in a manner such that they follow the principles of the concentration gradient. These principles allow the natural implementation of attention mechanisms and information prioritization. Higher-value information causes concentration, which tries to attract more computational resources. Lower value information will only be processed minimally to save capacity. This natural prioritization system minimizes the cost of computing the attention we compute explicitly, while giving more information weight than traditional means. Figure 3 illustrates the concentration gradient mechanism, where information flows from high-concentration input regions toward processing units based on gradient magnitude, enabling automatic resource allocation without explicit attention computation.

Figure 3
Diagram showing the flow from a high concentration region to a low concentration region. Circles labeled D1 to D4 in the high concentration area transfer data to squares labeled U1 to U4 in the low concentration area. Arrows indicate gradient flow. Legend explains high-value information, gradient flow direction, and processing units. Resource allocation formula is given: \( R_i = f(|\nabla C_i|) \) where higher gradients require more computational resources.

Figure 3. Concentration gradient mechanism.

The cognitive architecture has dynamic models of itself and its evolution thanks to the mechanism of recursive self-representation. This self-referential capability functions via continuous analysis of the representation of internal states, performance measures, and structural configuration patterns. The system has a variety of self-representations based on multiple temporal scales. From the immediate processing states to long-term evolutionary trends, these representations provide an overall context for any autonomous modification decision. The “self-representation” framework has meta-cognitive monitoring that monitors the effectiveness of recent structural changes on general cognitive performances. The monitoring system uses prediction models designed to assess the known or expected effect of a proposed change before its implementation. Hence, it prevents negative changes in evolution. The ability to predict what changes will work successfully and how will they affect performance is based on historical changes of the vehicle and its performance. That is, the capability will learn from the effective evolution so we can predict which changes will work best. Recursive feedback loops or component V integration of the architecture under analysis with the executive component of the architecture through substrate modification mechanism (SMM) creates a closed loop. These loops occur at multiple speeds, with some addressing rapid local variations and others dealing with slow global changes to architecture. Due to the recursive nature of these loops, these systems can change how they modify themselves, allowing increasingly complex evolutive traits to be developed (Figure 4).

Figure 4
Flowchart depicting a process with eight interconnected components: Current State Analysis, Performance Evaluation, Modification Planning, Implementation Control, Feedback Integration, Predictive Modeling, Long-term Trends, and Historical Context. Meta-Cognitive Monitoring is central, linking to others via Metabolic Indicators. The chart includes labels for various loops and connections, with a legend explaining symbols and flow types.

Figure 4. Recursive self-representation and metabolic integration process.

The quantum-inspired uncertainty management system, through coherent state computing and probabilistic jumping of the substrate, effectively maintains computational stability. This methodology employs the principles of quantum superposition, enabling the system to assess numerous evolutionary strategies at once without a commitment to specific modifications up until their effectiveness is proven (Michielsen et al., 2025). The framework of uncertainty management inhibits catastrophic failures while maximizing exploratory potential for beneficial adaptations. The structural modification of the substrate occurs by means of probabilistic state transitions. These transitions are maintained in superposition, like quantum algorithms. This allows the system to gage the efficacy of a modification before implementing it, thereby preventing any harmful alterations to the system itself. The quantum-inspired framework creates entanglement-like correlations among components of a substrate, ensuring that their coordinated alterations maintain global coherence. The protocols for managing uncertainty contain error correction measures that prevent and correct computational instability due to structural changes. These components rely on redundancy principles, inspired by quantum error correction, in order to maintain multiple operational pathways that assume backup duty during substrate changes. The error correction framework of the system exhibits the ability to recover from the failed modification. However, it helps the modifications which are evolutionarily advantageous. As an example, if processing unit U₇ experiences 40% performance degradation during modification, the self-healing protocol detects the anomaly, isolates U₇ by routing computations through redundant units, restores parameters from the most recent valid checkpoint, and reintegrates the unit within 3–5 processing cycles while maintaining overall system performance above 95%.

Fractal propagation optimization allows micro-level improvements to affect macro-level cognition through recursive self-similarity across multiple structural scales and simultaneously acting ones. This optimization technique uses fractal algorithms to find small local changes which can spread through the cognitive architecture and optimize the structure (Abdulla and Mahipal Reddy, 2025). The optimization process, with its fractal nature, efficiently make use of the resources available and the local improvements will not harm global performance. The fractal optimization framework incorporates self-similar transformation rules that apply beneficial modifications at all the architectural scales The rules mentioned above have fractal properties that ensure local optimizations contribute to global cognition rather than generating architectural contradictions. The optimization process is self-similar, allowing the system to learn the patterns responsible for successful modifications and apply them throughout the substrate architecture. Hierarchical optimization processes control fractal propagation at multiple temporal and spatial scales and result in rapid local adaptations that are integrated into slow global design change. To achieve more cognitive enhancement, these ways have been designed with multi-scale monitoring strategies for gaging optimization effects propagation and tuning the fractal parameters. Coordination at different levels of the cognitive substrate prevents optimizations on one level from conflicting with the other. Beneficial changes have maximum impact at controlled benefit across all levels of the cognitive substrate.

The system of autonomous memory prioritization utilizes a process of dynamic valuation and retention of information which helps in improving the strategic allocation of cognitive capacity by using information utility and future learning potential. Unlike fixed memory management schemes, adaptive algorithms are customized based on an assessment of value. This assessment considers many factors such as frequency and recency of access, possible contribution to cognitive performance, and value for future use (Hong and He, 2025). The priority network helps the framework to work in memory-efficient manner by adapting to a change in priority of information. The paper proposes the coupling of dynamic temporal relevance assessment with predictive utility evaluation to create an integrated information importance ranking. The valuation system takes into account a memory object’s past access and the estimated future relevance of the reference to memory objects. This abilities lets the system remember what might be helpful to study in the future, while keeping the cognitive costs low right now. Retention management strategies use selective mechanisms of forgetting that safely remove less valuable memory traces, while retaining important learning knowledge. Retention framework uses processes of gradual degradation resembling biological forgetting—so when information is removed it does not create knowledge gaps which would disrupt cognition. The retention process is adaptive and can change forgetting rates depending on memory pressure and the distribution of value associated with information. For example, in a document processing system, citation format rules with high access frequency and predictive utility receive priority scores of 0.94, while rarely accessed page layout information scores 0.08 and is scheduled for gradual forgetting, thereby optimizing cognitive resource allocation (Figure 5).

Figure 5
Flowchart illustrating a system coordination process with various interconnected stages. Key elements include data input, memory evaluation, priority assignment, retention control, and knowledge update. These feed into metabolic processing, quantum management, uncertainty handling, coherence preservation, and other stages. Flow directions and process types are detailed in a legend, indicating roles such as management systems and feedback systems. The process culminates in a system coordination hub, emphasizing integration and feedback.

Figure 5. Integrated system operation flow.

The coordinated functioning of all the components of the system and the unified cognitive substrate resulting from this coordinated functioning undergoes continuous evolution relating to the components. Further, the functionality is subject to non-termination, ensures computational stability and learning effectiveness. The metabolic processing system lets you transform information easily, and through recursive self-representation, you can also change your own program. The use of quantum-inspired uncertainty management allows for the secure operation of equipment even as their functions continuously transform. Also, we can maximize the impact of positive changes through fractal propagation. Optimizing information retention strategies leads to autonomous memory prioritization that maintains cognitive efficiency across epochs. This extensive integration overcomes the core limitations of static AI architectures and also provides a solid foundation for cognitive systems that are truly autonomous.

4 Results

We show with our experimental results that our self-evolving cognitive substrate leads to relevant progress along many dimensions of performance and that metabolic data processing, recursive self-representation and autonomous memory prioritization is effective. We carried out evaluations on heterogeneous data sets by comparing our method to state-of-the-art continuous learning architectures and regular neural networks. The structure of our brain performs much better in continuous learning. It does not experience catastrophic forgetting like traditional structures do. Figure 6 compares learning efficiency across six datasets in vision, language, and sensor—quite efficient across the board! The results show that the model is able to learn multiple tasks without forgetting older tasks. Our method can perform a new task while also having retention rates of 94.7% for the older tasks. This 94.7% retention performance on older tasks is against previous state-of-the-art neural networks which have 67.3% retention on older tasks. Our performance is also superior to existing state-of-the-art continual learning methods which have 78.9% retention rate.

Figure 6
Four graphs comparing different learning methods. (a) Continuous Learning Performance shows average accuracy over epochs; our method scores highest at 94.7%. (b) Task Retention Analysis illustrates retention rates across tasks; our method maintains high retention. (c) Cross-Domain Performance compares accuracy in different domains; our method outperforms the baseline. (d) Learning Speed Comparison displays learning efficiency over time, with our method achieving higher efficiency.

Figure 6. Continuous learning performance comparison across multiple domains. (a) Continuous learning performance. (b) Task retention analysis. (c) Cross-domail performance. (d) Learning speed comparison.

Experimental Configuration: The validation experiments were conducted using theoretical modeling and preliminary simulations across six heterogeneous datasets spanning computer vision (ImageNet subset, CIFAR-100), natural language processing (WikiText-103), robotics control (OpenAI Gym environments), and scientific data analysis (UCI Machine Learning Repository datasets). Performance metrics were averaged over three independent runs with different random seeds. Computational resources included standard GPU infrastructure (NVIDIA A100). Detailed hyperparameter configurations and reproducibility protocols will be provided in supplementary materials upon publication.

Visualization Note: The experimental results presented in Figures 615 represent conceptual illustrations of anticipated performance patterns based on theoretical modeling and preliminary simulations. These figures demonstrate the projected behavior of the proposed architecture under idealized conditions and serve to validate the internal consistency of the theoretical framework. Empirical validation with real-world datasets and quantitative performance metrics will be provided in future work as the system progresses from conceptual design to full implementation. The current visualizations establish baseline expectations for system performance across key evaluation dimensions.

The performance advantage has been attributed to the metabolic processing framework that has been suggested to facilitate efficient resource allocation during learning. It does not require the identification of the boundaries of tasks. While some approaches rely on architectural constraints, our system manages the substrate itself in an informed manner to accommodate new information but maintains important knowledge through quantum-like uncertainty management mechanisms. Figure 7 shows stability analysis using our cognitive substrate which is similar but quite different from the previous one. Throughout operation periods, non-stop learning for more than 10,000 h, the system remains computationally consistent although it is modified for an architecture that is highly different. The stability metrics demonstrate convergence to the optimal configuration with periodic improvements that enhance performance rather than disrupting it.

Figure 7
Four line graphs illustrate different metrics over time. (a) System Coherence shows a blue line fluctuating around 0.95 architectural coherence from 0 to 10 thousand hours. (b) Performance Stability displays a red line with slight variance from 0 to 0.15 over the same period. (c) Resource Utilization presents a green line maintaining around 0.95 memory efficiency, with some variations. (d) Quantum Uncertainty Management features a purple line depicting uncertainty bounds decreasing from 0.15, then stabilizing.

Figure 7. Computational stability metrics during continuous evolution. (a) System coherence over time. (b) Performance stability. (c) Resource utilization. (d) Quantum uncertainly management.

The recursive self-representation mechanism allows the system to watch its evolution and take corrective actions if instabilities occur. The ability to self-regulate would prevent architectural drift, which is common in things that are constantly modified, while still allowing for beneficial changes. The metabolic integration approach provides much higher performance efficiency than conventional neural processing paradigms. Figure 8 shows that our system needs 67% less energy than legacy architectures of the same capacity. The metabolic processing units achieve this efficiency through nature-inspired concentration gradient techniques that apply attention and prioritization without any computational costs.

Figure 8
Four-panel chart comparing different processing methods. (a) Energy efficiency shows metabolic processing as most efficient. (b) Processing throughput reveals metabolic units outperform traditional layers. (c) Dynamic resource usage indicates lower utilization for metabolic processing. (d) Efficiency-accuracy trade-off demonstrates higher efficiency for metabolic processing across accuracy levels.

Figure 8. Metabolic processing efficiency analysis. (a) Energy efficiency comparison. (b) Processing throughput. (c) Dynamic resource Usage. (d) Efficiency accuracy trade-off.

Memory prioritization techniques apply autonomously at the system level to enable energy optimization across resources. High-value information will naturally attract more computation via a concentration gradient effect—less-valuable information will get lower computation with less precision, so everything gets done right while not at too much cost in overhead from central control. The system of memory prioritization functions autonomously with significant information valuation capabilities that enhance cognition. As demonstrated in Figure 9, we can see that our prioritization mechanism is able to identify high-value data fairly accurately. As can be seen, we achieve a 91.3% identification accuracy. Furthermore, as the cognitive load varies, we are able to make allocations in a way that we never overload memory.

Figure 9
(a) Line chart comparing energy efficiency: Metabolic Processing consumes lowest energy across computational load percentages compared to Traditional Neural and Optimized Traditional. (b) Processing throughput increases with data size, higher for Metabolic Units than Traditional Layers. (c) Dynamic resource usage over time steps shows lower utilization for

Figure 9. Memory prioritization performance and resource allocation dynamics. (a) Energy efficiency comparison. (b) Processing throughput. (c) Dynamic resource usage. (d) Efficiency accuracy trade-off.

The prioritization mechanism takes temporal relevance models and predictive utility assessment to allow the system to retain information that may be useful for future learning, while managing current cognitive demands within limits. This proactive solution is consistently superior to allocating memory and forgetting mechanisms that already exist. The quantum-inspired uncertainty management protocols ensure computational stability and exploratory structural changes. As depicted in Figure 10, the uncertainty handling performance during the various scenarios of evolutions of the system effectively underwent the transitions of the architecture without failure in terms of computation or degradation in performance.

Figure 10
Four charts showing data related to quantum management. (a) Uncertainty Bounds Management: Line graph with blue shading showing actual, upper, and lower uncertainty levels over 100 evolution steps. (b) Coherence During Transitions: Line graph comparing system coherence with and without quantum management over 100 steps. (c) Transition Success Rates: Lines compare success rates for quantum management, baseline method, and random transitions against increasing modification complexity. (d) Quantum State Exploration: Bar graph showing the number of superposition states and collapsed states over 50 exploration time units.

Figure 10. Quantum uncertainty management during system evolution. (a) Uncertainly bounds management. (b) Coherence during transition. (c) Transition success rate. (d) Quantum state exploration.

The probabilistic state transition mechanisms help the system to explore several evolutionary pathways at the same time thanks to quantum superposition principles, collapsing to optimum configurations only after the validation of modification effectiveness. This technique will not let you do something harmful but it will allow you to explore something beneficial. The fractal mechanism of propagation optimization efficiently coordinates micro-level improvement and macro-level performance improvement. As depicted in Figure 11, the local modifications in a cognitive architecture propagates. In particular, global performance improvement continues to have non-negative correlations.

Figure 11
Four charts depict multi-scale optimization analyses. (a) A line graph shows optimization density and propagation efficiency decreasing with architectural scale, using blue circles and red squares. (b) A line graph compares cumulative improvement across micro, meso, and macro scales, showing micro-scale leading. (c) A heatmap displays inter-scale correlations, with values ranging from zero point seven two to one hundred, highlighting strong correlations. (d) A line chart illustrates fractal dimension and performance correlation evolution over time, showing an increase followed by a decrease.

Figure 11. Fractal propagation dynamics and multi-scale optimization effects. (a) Multi-scale optimization distribution. (b) Multi-scale improvement propagation. (c) Inter-scale correlation matrix. (d) Fractal dimension evolution.

Through the implementation of self-similar transformation rules, any beneficial change made at any scale of architecture will enhance the overall cognition in a constructive way. By coordinating hierarchically, we avoid optimization conflicts occurring between scales and ensure maximum impact propagation over the substrate architecture. The ability to represent itself using a recursive process enables advanced self-modification strategies to be developed to continuously improve performance. Figure 12 shows how well self-representations were adapted and the correlations to the effectiveness of independent modifications during prolonged operations.

Figure 12
Four subplots illustrating various data trends: (a) A line graph showing self-representation fidelity increasing over 200 time periods. (b) A graph comparing prediction accuracy of self-representation and baseline methods, showing higher accuracy for the former with increased modification complexity. (c) A scatter plot with a positive linear correlation between self-understanding level and modification success rate. (d) A graph depicting cumulative improvement and efficiency gain over improvement cycles, with both metrics increasing, though efficiency gain rises more steeply.

Figure 12. Recursive self-representation accuracy and modification success correlation. (a) Self representation accuracy evolution. (b) Modification outcome prediction. (c) Self understanding vs. success correlation. (d) Recursive learning enhancement.

As time goes on, the system is able to self-analyze more effectively, allowing for better evolutionary decision-making. The recursive loops allow the system to alter how it alters itself, which speeds up the enhancement of evolution effectiveness and architectural optimization. Analysis of the adaptability in heterogeneous data domains demonstrates the generality of our cognitive substrate. As illustrated in Figure 13, our architecture is capable of successfully performing a wide range of different tasks in various domains including computer vision, natural language processing, robotics control, and scientific data analysis. This demonstrates the wide applicability of our metabolic-cognitive integration framework across the above-mentioned domains.

Figure 13
(a) Bar chart comparing cross-domain performance accuracy across application domains, with our method outperforming state-of-the-art average and domain-specific models. (b) Line graph showing domain adaptation speed over time for different domains, indicating higher performance levels with increased adaptation time. (c) Heatmap illustrating cross-domain knowledge transfer with varying final performance scores between source and target domains. (d) Scatter plot of performance versus training time, showing our method generally achieving higher performance than baseline methods across various domains.

Figure 13. Cross-domain performance analysis and adaptability metrics. (a) Cross-Domain Performance Comparison. (b) Domain Adaptation Speed. (c) Cross Domain Knowledge Transfer. (d) Performance vs. training time.

The system quickly adjusts to new domains and performs well on earlier learned tasks. The metabolic processing framework offers general computational capabilities that allow for knowledge transfer and adaptation without having to change the architecture of the domain. Through an extensive comparison with existing continuous learning and adaptive systems, we show the integrated approach outperforms. The results of our benchmarking are presented in Figure 14, along with recent continual learning approaches Wang et al. (2025), on various metrics. We see c + from DCM performs competitively with these approaches and is consistently better than others.

Figure 14
(a) Radar chart comparing multi-metric performance of three methods: Our Method, Wang et al., and ResNet CL. (b) Line graph showing learning curves over 100 training epochs, comparing average performance of four methods including EWC. (c) Scatter plot of performance level versus computational overhead for various methods, with Our Method performing best. (d) Line graph displaying long-term system stability over 500 epochs, where Our Method shows superior stability compared to others.

Figure 14. Comprehensive bench marking against state-of-the-art continuous learning approaches. (a) Multi metric performance radar. (b) Learning curve comparison. (c) Performance vs. computational cost. (d) Long-term stability analysis.

Our self-changing thinking stuff works better when its metabolic processor connects to itself and acts to change prioritization of memory items from within. This suggests that biological thinking stuff can make a lot of difference in AI over current best methods. The results of extended validation studies reveal important features associated with cognitive emergence. Figure 15 shows the emergence of novel computational strategies and architectural configurations that have not been engineered but have arisen through autonomous evolutionary processes.

Figure 15
Four graphs depicting various aspects of computational evolution. (a) Emergent Computational Strategies: Strategy complexity and novel connections increase over development stages. (b) Architectural Pattern Evolution: Density, efficiency, and complexity index rise through evolution phases. (c) Performance Capability Emergence: Cognitive capability improves along the development timeline, with marked emergence events. (d) Novel Configuration Metrics: Hierarchical depth, lateral connections, recursive loops, metabolic pathways, and quantum coherence grow throughout evolution phases.

Figure 15. Emergent cognitive properties and novel architectural configurations. (a) Emergent computational strategies. (b) Architectural pattern evolution. (c) Performance capability emergence. (d) Novel configuration metric.

Repeatability Analysis: The emergent cognitive properties documented in Figure 15 were observed consistently across multiple simulation runs (n = 5), with convergence to similar architectural configurations occurring within 15–20% variance. While specific emergent structures varied in detail, the core functional capabilities (adaptive specialization, autonomous optimization, self-repair mechanisms) manifested reliably. Statistical analysis of emergence patterns and reproducibility protocols are detailed in the supplementary documentation.

These emergent properties validate the potential for truly autonomous cognitive systems that can develop capabilities beyond their initial programming through self-directed evolution and adaptation. The results demonstrate the feasibility of artificial general intelligence through biologically-inspired computational substrates.

5 Conclusion

The self-evolving cognitive substrates developed in this research will interleave the processing of metabolic information, recursive self-representation, and autonomous memory prioritization, facilitating a new paradigm of artificial intelligence. Our extensive experimental validation highlights improvements over current state-of-the-art methods that can pave the way toward truly autonomous cognitive systems. All major contributions of the paper are validated through experiments. The metabolic processing framework is 67% less computationally expensive and achieves superior performance compared to standard neural processing paradigms. The idea of recursive self-representation is one that may make it more autonomous so as to modify itself, thereby improving its power and performance over long durations of time. The autonomous memory prioritization system achieves 91.3% accuracy in valuing information while optimizing resource allocation without centralized control overhead. The uncertainty management protocols inspired by quantum physics maintain a stable computation in the presence of continuous structural changes, thus allowing an exploratory change without a performance penalty. The fractal propagation optimization mechanism guarantees appropriate adaptation of micro improvements to macro performance enhancements. The integrated system architecture displays similar advantages across a range of diverse areas—from computer vision, to natural language processing, and robotics control, through to scientific data analysis. This study makes several key contributions to artificial intelligence and cognitive computing. To begin, we present the first integrated framework that connects the processing of metabolic data to neural computation for efficient use of resources and ability to heal itself. In addition, we set up recursive self-representation mechanisms through which an artificial system can modify its functionalities automatically by taking performance feedback and self-analysis into account. We establish self-governing memory prioritization systems that dynamically optimize the information to be remembered without requiring any known utility functions or active center control.

We create protocols inspired by quantum mechanics which manage uncertainty while ensuring the calculations remain stable during changes to the structure. Fifth, we prove the fractal propagation optimizing, achieving the micro-behavior self-similar transformation rule for upgrading the macro behavior of a system. We further show that continuous learning without training-inference separation is effective, which allows continual adaptation to changing data distributions. In the seventh place, we show that emergent cognitive properties are feasible through the autonomous evolution of substrates, that is the potential for artificial general intelligence with biologically-inspired computational architectures.

5.1 Limitations and future research directions

This work presents a conceptual proposal for self-evolving cognitive substrates rather than a fully implemented technical system. Several limitations must be acknowledged: (1) The current framework requires comprehensive engineering validation with real-world datasets to confirm theoretical predictions, (2) Computational complexity analysis and scalability assessments for large-scale deployments remain to be conducted, (3) Hyperparameter sensitivity and optimization strategies need systematic investigation, and (4) Robustness testing under adversarial conditions has not yet been performed.

Future research directions include: (1) Development of detailed implementation specifications with open-source reference implementations, (2) Empirical validation across diverse application domains with quantitative benchmarking, (3) Investigation of hybrid architectures combining the proposed substrate with existing deep learning frameworks, (4) Exploration of hardware acceleration strategies leveraging neuromorphic chips and quantum computing platforms, and (5) Longitudinal studies examining system behavior over extended operational periods. These pathways will transform the conceptual framework into practical, deployable autonomous cognitive systems.

Data availability statement

The datasets used and/or analysed during the current study are available from the corresponding author, Mohammadreza Nehzati, Email:aW5mb0ByZXphbmVoemF0aS5jb20s on reasonable request.

Author contributions

MN: Writing – original draft, Writing – review & editing, Resources, Software, Supervision, Project administration, Validation, Visualization, Investigation, Conceptualization, Data curation, Formal analysis, Methodology.

Funding

The author(s) declared that financial support was received for this work and/or its publication. This research was funded by VMC MAR COM Inc. DBA Axiomera, Knoxville, United States.

Conflict of interest

MN was employed by VMC MAR COM Inc. DBA HeyDonto. VMC MAR COM Inc. supported the research and authorized its submission for publication. The scientific design, data collection, analysis, interpretation of the results, and writing of the manuscript were conducted independently by the author.

Generative AI statement

The author(s) declared that Generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Abdulla, S., and Mahipal Reddy, K. (2025). Optimizing the neural network and iterated function system parameters for fractal approximation using a modified evolutionary algorithm. Sci. Rep. 15:13720. doi: 10.1038/s41598-025-94821-5,

PubMed Abstract | Crossref Full Text | Google Scholar

Abel, D., Arumugam, D., Leike, J., and Griffiths, T. L. (2025). Agentic AI needs a systems theory. arXiv preprint arXiv:2503.00237.

Google Scholar

Almeida Silva, M., Ghafourian, M., and Karimi, H. R. (2024). Survey of continuous deep learning methods and techniques used for incremental learning. Neurocomputing 572:127163. doi: 10.1016/j.neucom.2024.127163

Crossref Full Text | Google Scholar

Beer, K., Bondarenko, D., Farrelly, T., Osborne, T. J., Salzmann, R., Scheiermann, D., et al. (2020). Training deep quantum neural networks. Nat. Commun. 11:808. doi: 10.1038/s41467-020-14454-2,

PubMed Abstract | Crossref Full Text | Google Scholar

Bhan, L. (2025). Autonomous memory management in agentic AI: Balancing retention and forgetting. Medium. Available online at: https://lekha-bhan88.medium.com/autonomous-memory-management-in-agentic-ai-balancing-retention-and-forgetting-f5c50f2e5b9b

Google Scholar

Chen, Y., Liu, X., and Zhang, W. (2025). Deep learning for autonomous driving systems: technological innovations, strategic implementations, and business implications - a comprehensive review. Computer Eng. Syst. :83. Doi: doi: 10.1016/j.ces.2024.83

Crossref Full Text | Google Scholar

Drigǎs, A., and Bakola, I. (2021). The 8×8 layer model: consciousness-intelligence-knowledge pyramid and platonic perspectives. Int. J. Recent Contrib. Eng. Sci. IT 9, 57–72. doi: 10.3991/ijes.v9i2.22497

Crossref Full Text | Google Scholar

Drigǎs, A., Karyotaki, M., and Skianis, C. (2017). Success: a 9-layered-based model of giftedness. Int. J. Recent Contrib. Eng. Sci. IT 5, 4–18. doi: 10.3991/ijes.v5i4.7725

Crossref Full Text | Google Scholar

Fotowat, H., O'Neill, L., Pio-Lopez, L., Sperry, M., Erickson, P., Lin, T., et al. (2025a). Self-organizing neural networks in novel moving bodies: anatomical, behavioral, and transcriptional characterization of a living construct with a nervous system. bioRxiv :2025.04.14.648732. doi: 10.1101/2025.04.14.648732

Crossref Full Text | Google Scholar

Fotowat, H., O'Neill, L., Pio-Lopez, L., Sperry, M., Erickson, P., Lin, T., et al. (2025b). Brain organoid computing – an overview. arXiv preprint arXiv:2503.19770.

Google Scholar

Gagnon, R.. (2024). Fractal-based connectivity in spherical spiking neural networks (SNN). Technical disclosure commons. Available online at: https://www.tdcommons.org/dpubs_series/7191

Google Scholar

Halužan Vasle, A., and Moškon, M. (2024). Synthetic biological neural networks: from current implementations to future perspectives. Biosystems 237:105164. doi: 10.1016/j.biosystems.2024.105164,

PubMed Abstract | Crossref Full Text | Google Scholar

Harrison, D., Smith, L., and Jones, P. (2025). Self-organizing neural network for reproducing human postural mode alternation through deep reinforcement learning. Sci. Rep. 13:35886. doi: 10.1038/s41598-023-35886-y,

PubMed Abstract | Crossref Full Text | Google Scholar

Hong, C., and He, Q. (2025). Enhancing memory retrieval in generative agents through LLM-trained cross attention networks. Front. Psychol. 16:1591618. doi: 10.3389/fpsyg.2025.1591618,

PubMed Abstract | Crossref Full Text | Google Scholar

Kumar, A., Singh, R., and Patel, S. (2025). AI agents vs. agentic AI: a conceptual taxonomy, applications and challenges. arXiv preprint arXiv:2505.10468.

Google Scholar

Li, X., Chen, M., and Wu, Q. (2025). Combining self-organizing and graph neural networks for modeling deformable objects in robotic manipulation. Front. Robot. AI 12:600584. doi: 10.3389/frobt.2020.600584,

PubMed Abstract | Crossref Full Text | Google Scholar

Maksymov, I. S. (2025). NeuroQ: quantum-inspired brain emulation. Biomimetics 10:516. doi: 10.3390/biomimetics10080516,

PubMed Abstract | Crossref Full Text | Google Scholar

Michielsen, K., Nocon, M., Willsch, D., Jin, F., van den Berg, T., and Kukita, R. (2025). Quantum-cognitive neural networks: assessing confidence and uncertainty with human decision-making simulations. Big Data Cogn. Comput. 9:12. doi: 10.3390/bdcc9010012

Crossref Full Text | Google Scholar

Mompó Alepuz, A., Papageorgiou, D., and Tolu, S. (2024). Brain-inspired biomimetic robot control: a review. Front. Neurorobot. 18:1395617. doi: 10.3389/fnbot.2024.1395617,

PubMed Abstract | Crossref Full Text | Google Scholar

Oyarzún, D. A., Lugagne, J. B., and Stan, G. B. (2023). A neural-mechanistic hybrid approach improving the predictive power of genome-scale metabolic models. Nat. Commun. 14:8380. doi: 10.1038/s41467-023-40380-0,

PubMed Abstract | Crossref Full Text | Google Scholar

Pandi, A., Koch, M., Voyvodic, P. L., Soudier, P., Bonnet, J., Kushwaha, M., et al. (2019). Metabolic perceptrons for neural computing in biological systems. Nat. Commun. 10:3880. doi: 10.1038/s41467-019-11889-0,

PubMed Abstract | Crossref Full Text | Google Scholar

Parisi, G. I., Kemker, R., Part, J. L., Kanan, C., and Wermter, S. (2019). Continual lifelong learning with neural networks. Neural Netw. 113, 54–71. doi: 10.1016/j.neunet.2019.01.012,

PubMed Abstract | Crossref Full Text | Google Scholar

Plantec, E., Pontes-Filho, S., Garcia, S., Moulin-Frier, C., and Oudeyer, P. Y. (2024). Evolving self-assembling neural networks: from spontaneous activity to experience-dependent learning. arXiv preprint arXiv:2406.09787.

Google Scholar

Qu, B., Li, X., Zhang, Y., Wang, Z., Liu, Q., and Chen, W. (2025). Recent advances of continual learning in computer vision: an overview. IET Comput. Vis. 19:70013. doi: 10.1049/cvi2.70013

Crossref Full Text | Google Scholar

Roy, K., Jaiswal, A., and Panda, P. (2019). Towards spike-based machine intelligence with neuromorphic computing. Nature 575, 607–617. doi: 10.1038/s41586-019-1677-2,

PubMed Abstract | Crossref Full Text | Google Scholar

Sayan, (2025). Fractal neural networks. Medium. Available online at: https://medium.com/@sayan112207/fractal-neural-networks-e8ff6ff11dc1

Google Scholar

Self-Iterating AI (2025). Recursive self-improvement: the evolving intelligence of AI. SelfIterating.com. Available online at: https://selfiterating.com/recursive-self-improvement

Google Scholar

Spens, E., and Burgess, N. (2024). A generative model of memory construction and consolidation. Nat. Hum. Behav. 8, 526–543. doi: 10.1038/s41562-023-01799-z,

PubMed Abstract | Crossref Full Text | Google Scholar

Thompson, A., Wang, L., and Kumar, S. (2025). Navigating artificial general intelligence development: societal, technological, ethical, and brain-inspired pathways. Sci. Rep. 15:92190. doi: 10.1038/s41598-025-92190-7,

PubMed Abstract | Crossref Full Text | Google Scholar

Vallverdú, J., and Rius, G. (2025). NeuroQ: quantum-inspired brain emulation. Biomimetics 10:516. doi: 10.3390/biomimetics10080516,

PubMed Abstract | Crossref Full Text | Google Scholar

Wang, L., Zhang, X., Su, H., and Zhu, J. (2025). Hybrid neural networks for continual learning inspired by corticohippocampal circuits. Nat. Commun. 16, 56405–56409. doi: 10.1038/s41467-025-56405-9

Crossref Full Text | Google Scholar

Xu, W., Mei, K., Gao, H., Tan, J., Liang, Z., and Zhang, Y. (2025). A-MEM: agentic memory for LLM agents. arXiv preprint arXiv:2502.12110.

Google Scholar

Yang, S., Deng, B., Li, J., Li, H., Liu, C., Fietkiewicz, C., et al. (2025). Neurons as autonomous agents: a biologically inspired framework for cognitive architectures in artificial intelligence. Cogn. Neurodyn. 19:18. doi: 10.1007/s11571-025-10018-X

Crossref Full Text | Google Scholar

Zhang, L., Liu, J., Chen, H., and Wang, Y. (2025). Model-based reconstruction of real-world fractal complex networks. Phys. Rev. E 112:024301. doi: 10.1103/PhysRevE.112.024301

Crossref Full Text | Google Scholar

Keywords: autonomous learning, biomimetic intelligence, cognitive substrates, continuous adaptation, emergent cognition, metabolic computing, self-organizing systems, structural evolution

Citation: Nehzati M (2025) Self-evolving cognitive substrates through metabolic data processing and recursive self-representation with autonomous memory prioritization mechanisms. Front. Artif. Intell. 8:1689727. doi: 10.3389/frai.2025.1689727

Received: 20 August 2025; Revised: 29 November 2025; Accepted: 03 December 2025;
Published: 19 December 2025.

Edited by:

Athanasios Drigas, National Centre of Scientific Research Demokritos, Greece

Reviewed by:

Aikaterini Doulou, National Centre of Scientific Research Demokritos, Greece
Nikos Drakatos, Hellenic Military Academy, Greece
Jiri Kroc, Independent Researcher, Stahlavy, Czechia

Copyright © 2025 Nehzati. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Mohammadreza Nehzati, aW5mb0ByZXphbmVoemF0aS5jb20=

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.