
ORIGINAL RESEARCH article

Front. Comput. Neurosci., 21 January 2026

Volume 19 - 2025 | https://doi.org/10.3389/fncom.2025.1731452

This article is part of the Research Topic: Mathematical understanding of information storage, compression and prediction in Neural Networks.

F2-CommNet: Fourier–Fractional neural networks with Lyapunov stability guarantees for hallucination-resistant community detection

  • 1Department of Computer Science, University of Liverpool, Liverpool, United Kingdom
  • 2Department of Computer Science, Fairleigh Dickinson University, Vancouver, BC, Canada

Community detection is a crucial task in network research, with applications to social systems, biology, cybersecurity, and knowledge graphs. Recent advancements in graph neural networks (GNNs) have exhibited significant representational capability; yet, they frequently suffer from instability and erroneous clustering, often referred to as "hallucinations." These artifacts stem from sensitivity to high-frequency eigenmodes, over-parameterization, and noise amplification, undermining the robustness of learned communities. To mitigate these limitations, we present F2-CommNet, a Fourier–Fractional neural framework that incorporates fractional-order dynamics, spectral filtering, and Lyapunov-based stability analysis. The fractional operator implements long-memory damping that mitigates oscillations, whereas Fourier spectral projections selectively attenuate eigenmodes susceptible to hallucination. Theoretical analysis establishes explicit stability criteria under Lipschitz non-linearities and bounded disturbances, yielding a provable enlargement of the Lyapunov margin. Experimental validation on synthetic and real-world networks indicates that F2-CommNet reliably reduces hallucination indices, enhances stability margins, and produces interpretable communities in comparison to integer-order GNN baselines. This study integrates fractional calculus, spectral graph theory, and neural network dynamics, providing a systematic method for hallucination-resistant community detection.

1 Introduction

Networks provide a robust abstraction for representing complex systems, with nodes denoting entities and edges signifying interactions. Identifying communities, i.e., subsets of nodes characterized by dense internal connections and relatively sparse external links, is crucial for understanding structural and functional patterns in social, biological, and technological networks (Cai and Wang, 2023). Traditional techniques such as modularity maximization, Infomap, and label propagation have demonstrated significant efficacy, whereas spectral clustering methods based on graph Laplacian theory offer robust mathematical guarantees. Nonetheless, these techniques are frequently fragile, exhibiting sensitivity to noise and perturbations, especially in high-frequency spectral modes.

The emergence of graph neural networks (GNNs) has revolutionized community detection by enabling data-driven embeddings of graph structures (Wu et al., 2021). Variants including Graph Convolutional Networks (GCNs), Graph Attention Networks (GATs), and spectral GNNs (Abbahaddou et al., 2024) achieve superior accuracy across benchmarks. Nonetheless, several intrinsic limitations persist: over-smoothing in deeper layers (Liu et al., 2024), diminished expressive capacity (Chen et al., 2025a), and vulnerability to spurious or unstable partitions, phenomena we denote as "hallucinations" (Guo et al., 2025; Chen et al., 2023). These hallucinations arise from uncontrolled propagation dynamics, sensitivity to unstable eigenmodes, and an insufficient theoretical foundation.

Fractional-order calculus offers a viable means to address these challenges. Its inherent memory and smoothing properties enable dynamical systems to balance responsiveness and stability, effectively mitigating oscillations and noise (Kang et al., 2024). Fractional-order neural models exhibit benefits in control (Sivalingam and Govindaraj, 2025), non-linear optimization (Maskey et al., 2023), and chaotic time-series regulation (Kumar et al., 2024). Notwithstanding these advancements, their incorporation into graph learning and community detection remains inadequately investigated. Simultaneously, Fourier spectral analysis has demonstrated efficacy in representing graph signals (Panda et al., 2025) and in the design of reliable spectral graph filters (Levie et al., 2019); nevertheless, its integration with fractional dynamics for the suppression of hallucinations has yet to be systematically explored.

Our main contributions are as follows:

• This study develops F2-CommNet, a fractional-Fourier framework for dynamic community detection with explicit stability guarantees. In contrast to existing GNN-based models that are often heuristic and prone to instability, our approach is grounded in rigorous theory and validated on diverse benchmarks.

• We establish a fractional Lyapunov framework for dynamic graphs, deriving analytical stability margins (ρ) and hallucination indices (ηmax) as quantitative criteria. The analysis shows that F2-CommNet enlarges the stability margin by more than 3 × and reduces hallucination indices by up to 35% compared with existing baselines, providing explicit stability guarantees for community detection.

• We design F2-CommNet, a hybrid architecture that couples fractional-order neural dynamics with adaptive Fourier spectral filtering and stability-aware refinement. This joint design ensures convergence to robust partitions while maintaining near-linear computational complexity O(nHd + nr log n), enabling scalability to million-node networks in practice.

• Extensive experiments on seven real-world benchmarks (Cora, Citeseer, PubMed, Reddit, Enron, DBLP, BioGRID) show that F2-CommNet improves ARI by up to 25%, enhances NMI by 15%, enlarges stability margin ρ by more than 3 ×, and reduces hallucination indices by up to 35% compared with static baselines (GCN, GAT) and dynamic baselines (DyGCN, EvolveGCN). F2-CommNet achieves the best score on 32 out of 35 metric–dataset pairs, demonstrating both robustness and generality across diverse domains.

2 Related work

Community detection in complex networks has been widely investigated in the past two decades. Classical approaches include modularity maximization and spectral clustering, which partition networks into cohesive groups of nodes. Fortunato's seminal survey provided a systematic overview of these methods and discussed their limitations in large-scale and dynamic scenarios. More recently, graph neural networks such as GCN and GAT have become standard baselines for learning community structure by integrating node features and network topology. However, these integer-order operators often suffer from instability and sensitivity to noise, especially in temporal settings.

Temporal networks introduce additional challenges. Masuda and Lambiotte laid the theoretical foundations of evolving graphs, while follow-up studies addressed dynamic community detection problems. Extensions such as TGN, DyGCN, and EvolveGCN generalize GNNs to temporal data, but they remain vulnerable to issues such as drifting clusters and hallucinated communities.

To address these challenges, fractional-order dynamics have recently gained attention as a mechanism for modeling long-memory effects. Fractional differential equations are well-established in physics and control, and their integration into neural models has led to promising advances. Recent studies on Neural Fractional-Order Differential Equations (Holme, 2023), variable-order extensions (Lambiotte and Rosvall, 2022), and stabilization analysis of fractional-order neural networks (Casteigts et al., 2023) demonstrate improved robustness and richer temporal dynamics compared to their integer-order counterparts.

Recent studies have also explored centrality-aware and collaborative embedding methods for identifying overlapping or influence-driven community structures. In particular, the centrality-guided network embedding framework proposed in Cheng et al. (2025) integrates structural importance measures into node representations. Complementary approaches, such as hierarchical structural attention models (Yu et al., 2022), further emphasize node influence and multi-level structural patterns in static graphs. While proficient at identifying overlapping or hierarchical communities, these models are tailored to static networks and depend on centrality-based objectives or structural attention mechanisms, and thus do not mitigate hallucination effects or temporal instability in dynamic graphs. Our F2-CommNet method complements this line of research by emphasizing stability-aware clustering in dynamic environments via fractional-order refinement and Fourier spectral filtering.

Recent stability-oriented GNNs such as SO-GCN (Chen et al., 2025c) and LDC-GAT (Chen et al., 2025b) introduce constraint-based mechanisms to improve feature stability in semi-supervised node classification on static graphs. Although these methods offer significant insights for stabilizing message passing, their label-driven objectives and static configurations contrast with the unsupervised dynamic community detection problem addressed here, where clustering quality must be maintained across evolving graph snapshots without supervision. Our proposed F2-CommNet extends this line of research by tackling stability in a temporal and unsupervised context via fractional-order neural dynamics and Fourier spectral filtering.

In addition to message-passing GNNs, contemporary Transformer-based architectures have been introduced for temporal graph modeling, frequently utilizing self-attention to capture long-range temporal relationships. These models have exhibited robust performance in tasks including link prediction and node forecasting. Nevertheless, the majority of Transformer-based graph methodologies are tailored for supervised or semi-supervised predictive tasks and depend on temporal labels or quadratic attention mechanisms, rendering them challenging to implement directly for large-scale unsupervised dynamic community detection. Therefore, although we recognize their significance in the wider context of dynamic graph learning, we do not consider them as directly comparable baselines in our experimental assessment.

Building upon these insights, our article introduces F2-CommNet, which integrates fractional-order neural dynamics with Fourier spectral filtering for community detection. Unlike prior dynamic GNNs, F2-CommNet provides both empirical robustness against hallucinations and theoretical guarantees on stability margins, bridging the gap between classical spectral methods, GNN-based approaches, and recent advances in fractional-order learning.

3 Methodology

3.1 Framework overview

The proposed F2-CommNet framework integrates fractional-order neural dynamics, Fourier spectral filtering, and Lyapunov stability control into a unified graph-based learning system for dynamic community detection. The model enhances interpretability and robustness by embedding memory-dependent evolution and spectral regularization into the community learning process. The framework is illustrated in Figure 1.

Figure 1

Figure 1. The Framework of F2-CommNet: Dynamic Community Detection with Stability Guarantees. The model processes graph snapshots through fractional-order neural dynamics for long-memory smoothing, Fourier spectral filtering for noise suppression, and Lyapunov stability refinement guided by a dedicated stability analysis module, to produce hallucination-free community assignments.

Step 1: Graph Input.

Given a sequence of graph snapshots {G_t = (V, E_t)} with node features $X_t \in \mathbb{R}^{|V| \times d}$ and Laplacian matrices $L_t = D_t - A_t$, the model initializes node representations for temporal propagation.
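For concreteness, the Step 1 construction of a snapshot Laplacian can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation, and the toy adjacency matrix is purely hypothetical:

```python
import numpy as np

def snapshot_laplacian(A):
    """Unnormalized graph Laplacian L_t = D_t - A_t for one snapshot.

    A: (n, n) symmetric adjacency matrix of the snapshot G_t.
    """
    D = np.diag(A.sum(axis=1))  # degree matrix D_t
    return D - A

# Toy snapshot: a triangle (nodes 0-2) plus an isolated edge (nodes 3-4).
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4)]:
    A[i, j] = A[j, i] = 1.0

L = snapshot_laplacian(A)
```

The resulting matrix is symmetric positive semi-definite with row sums equal to zero, and the multiplicity of its zero eigenvalue equals the number of connected components, which is the property the spectral filtering stage later exploits.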

Step 2: Fractional Dynamics.

Node embeddings evolve under Caputo fractional-order differential equations:

${}^{C}D_t^{\alpha} X_t = -C X_t + W f(X_t) + U_t$,    (1)

where the fractional derivative ${}^{C}D_t^{\alpha}$ introduces long-memory smoothing and non-local temporal effects into neural propagation.

Step 3: Fourier Spectral Filtering.

Each snapshot is decomposed into Laplacian eigenmodes:

$\hat{X}_t = U_t^{\top} X_t, \quad X_t = U_t \hat{X}_t$,    (2)

where unstable high-frequency modes are attenuated by a spectral kernel ϕ(λk), thereby reducing noise amplification.

Step 4: Stability Monitoring.

To establish the Lyapunov-based stability bound, we assume a symmetric positive definite matrix $P \succ 0$ such that the Lyapunov functional $V(x) = x^{\top} P x$ is well-defined and radially unbounded. This standard assumption in fractional-order stability theory enables the derivation of provable error bounds under the proposed dynamics.

A Lyapunov margin ρ is estimated as

$\rho = \lambda_{\min}(PC) - F\,\|PW\|$,    (3)

while the hallucination index $\eta_k = \lambda_k F - c_k$ is monitored for each eigenmode to assess spectral stability.

Step 5: Community Partitioning.

The stabilized embeddings are clustered into communities {C_t} by maximizing the standard Newman–Girvan modularity on each snapshot. The term "stability-aware" denotes that modularity optimization is conducted on embeddings that have been previously refined via the fractional and Lyapunov-based stabilization modules, rather than on unprocessed features or adjacency matrices.
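The Newman–Girvan modularity used in Step 5 can be evaluated directly from the adjacency matrix. The dense-matrix sketch below is for illustration on small graphs only (the pipeline itself would need a scalable optimizer, which is not shown here):

```python
import numpy as np

def modularity(A, labels):
    """Newman-Girvan modularity Q of a partition.

    Q = (1/2m) * sum_ij [A_ij - k_i k_j / (2m)] * delta(c_i, c_j),
    where k_i are degrees and 2m is the total degree.
    A: (n, n) symmetric adjacency matrix; labels: (n,) community ids.
    """
    k = A.sum(axis=1)
    two_m = k.sum()
    same = (labels[:, None] == labels[None, :])  # delta(c_i, c_j)
    B = A - np.outer(k, k) / two_m               # modularity matrix
    return (B * same).sum() / two_m
```

As a sanity check, for two triangles joined by a single bridge edge, grouping each triangle into its own community gives Q = 5/14, while the trivial one-community partition gives Q = 0.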

3.2 Fractional-order neural dynamics

We generalize graph neural evolution by introducing a Caputo fractional derivative of order α ∈ (0, 1):

${}^{C}D_t^{\alpha} x_i(t) = -c_i x_i(t) + \sum_{j \in \mathcal{N}(i)} w_{ij} f(x_j(t)) + u_i(t)$,    (4)

where xi(t) denotes the state of node i, ci>0 is a leakage coefficient, wij represents connection weights, f(·) is a non-linear activation, and ui(t) is an external forcing term.

The Caputo derivative is defined as

${}^{C}D_t^{\alpha} x(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{\dot{x}(\tau)}{(t-\tau)^{\alpha}}\, d\tau$,    (5)

where Γ(·) denotes the Gamma function. This expression reveals that the derivative depends on the entire historical trajectory x(τ) for τ ≤ t, embedding long-memory effects within the dynamics. Compared with the integer-order case (α = 1), fractional dynamics dampen oscillations and enlarge the convergence basin, improving robustness to perturbations.

To practically compute the fractional derivative ${}^{C}D_t^{\alpha}$, we adopt the truncated Grünwald–Letnikov (GL) approximation:

${}^{C}D_t^{\alpha} X_t \approx \sum_{j=0}^{H} w_j X_{t-j}, \quad w_j = (-1)^j \binom{\alpha}{j}$,    (6)

where H denotes the memory horizon that limits the fractional historical dependence. This discretization yields an efficient and numerically stable implementation suitable for large-scale dynamic graphs.

From a modeling perspective, the window H defines the effective memory horizon of the Caputo operator: larger H retains longer-range temporal dependence, but also increases computational cost. In practice, we require $H \ll T$, where T is the sequence length, and we find that choosing H in a small range (around 5–15% of T) preserves the desired long-memory behavior while keeping the overall complexity near-linear. The dataset-specific choices of H are summarized in Section 4.3.
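As an illustration, the GL weights of Equation 6 can be generated with the standard recurrence $w_j = w_{j-1}\,(1 - (1+\alpha)/j)$, and the truncated sum then evaluated over the stored history. This is a minimal NumPy sketch, not the authors' implementation, and it omits any time-step scaling:

```python
import numpy as np

def gl_weights(alpha, H):
    """Grunwald-Letnikov weights w_j = (-1)^j * C(alpha, j), j = 0..H,
    via the recurrence w_j = w_{j-1} * (1 - (1 + alpha) / j)."""
    w = np.empty(H + 1)
    w[0] = 1.0
    for j in range(1, H + 1):
        w[j] = w[j - 1] * (1.0 - (1.0 + alpha) / j)
    return w

def gl_fractional_derivative(history, alpha):
    """Truncated GL approximation at the latest step.

    history: list of past states [X_{t-H}, ..., X_{t-1}, X_t] (newest last).
    Returns sum_{j=0}^{H} w_j * X_{t-j}.
    """
    H = len(history) - 1
    w = gl_weights(alpha, H)
    return sum(w[j] * history[-1 - j] for j in range(H + 1))
```

A useful check on the recurrence: for α = 1 the weights reduce to (1, −1, 0, 0, …), so the GL sum degenerates to the ordinary first difference X_t − X_{t−1}, recovering the integer-order case.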

For the collective evolution of all node features, we express the fractional-order neural dynamics in matrix form:

${}^{C}D_t^{\alpha} X_t = -C X_t + W f(X_t) + U_t$,    (7)

where Xt is the node feature matrix, C is a leakage coefficient matrix, W is a weight matrix for inter-node connections, and Ut represents external forcing. This matrix form guides the temporal propagation of node representations within our F2-CommNet model.

3.3 Fourier spectral filtering

Let $L = U \Lambda U^{\top}$ denote the Laplacian eigendecomposition, where $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$ contains the eigenvalues and $U = [u_1, \ldots, u_n]$ the corresponding eigenvectors. Spectral projection and reconstruction are expressed as

$\hat{x}(t) = U^{\top} x(t), \quad x(t) = U \hat{x}(t)$.    (8)

For each eigenmode uk, the hallucination index is defined by

$\eta_k = \lambda_k F - c_k$,    (9)

where F is the forcing gain and ck is the leakage term. Modes with ηk>0 are deemed unstable, while ηk < 0 indicates spectral stability. Because high-frequency eigenmodes (large λk) amplify noise, F2-CommNet employs adaptive Fourier filtering:

$\hat{x}_k(t) \leftarrow \hat{x}_k(t)\, \phi(\lambda_k)$,    (10)

where ϕ(λk) is a decay kernel that suppresses unstable modes and preserves low-frequency structure.

Integrating the spectral projection, adaptive filtering (where $\phi(\lambda_k)$ is often parameterized as $g_\theta(\Lambda)$), and reconstruction, the complete Fourier spectral filtering operation for the feature matrix $X_t^{(\alpha)}$ is expressed as:

$\hat{X}_t = U\, g_\theta(\Lambda)\, U^{\top} X_t^{(\alpha)}$.    (11)

This operation effectively purifies the node embeddings by removing noise-amplifying high-frequency components.

Intuitively, the decay kernel ϕ(λk) controls the degree of suppression applied to each spectral mode. High-frequency components associated with large eigenvalues λk tend to exhibit oscillatory or unstable behavior in dynamic graphs. A stronger decay factor, therefore, effectively damps these fluctuations, reducing the likelihood of hallucinated communities while preserving low-frequency structural information.
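A minimal sketch of the project-filter-reconstruct pipeline (Equations 8–11) follows. The exponential kernel $\phi(\lambda) = e^{-\mathrm{decay}\,\lambda}$ is an illustrative assumption; in the model the kernel $g_\theta(\Lambda)$ is a learnable parameterization:

```python
import numpy as np

def spectral_filter(X, L, decay=1.0, r=None):
    """Low-pass Fourier filtering of node features.

    X: (n, d) feature matrix; L: (n, n) symmetric Laplacian.
    decay: strength of the assumed kernel phi(lam) = exp(-decay * lam).
    r: optionally keep only the r smallest-eigenvalue modes.
    """
    lam, U = np.linalg.eigh(L)          # eigenmodes, ascending eigenvalues
    if r is not None:
        lam, U = lam[:r], U[:, :r]
    phi = np.exp(-decay * lam)          # damp high-frequency modes
    X_hat = U.T @ X                     # spectral projection (Eq. 8)
    return U @ (phi[:, None] * X_hat)   # filter and reconstruct (Eq. 11)
```

Because $\phi(0) = 1$, the constant (zero-frequency) component passes through unchanged, while all higher-frequency components are strictly shrunk, which is exactly the behavior the decay kernel is meant to enforce.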

3.4 Stability guarantees

We summarize the key ideas behind the stability analysis and present the main results in a concise form for improved readability.

Error dynamics and hallucinations. Let x*(t) denote an ideal (hallucination-free) community trajectory and define the deviation

$e(t) = x(t) - x^*(t)$.    (12)

The fractional-order error dynamics can be written as

${}^{C}D_t^{\alpha} e(t) = (-C + FW)\, e(t) + \Delta u(t)$,    (13)

where Δu(t) models perturbations such as noise or modeling mismatch. Intuitively, a persistent non-zero e(t) corresponds to hallucinated community states.

Lyapunov function and Mittag–Leffler bound. We consider the quadratic Lyapunov function

$V(t) = e(t)^{\top} P e(t), \quad P \succ 0$,    (14)

with P symmetric positive definite. If there exists P≻0 and a margin ρ>0 such that

$PC + C^{\top} P - 2F\, PW \succeq \rho I$,    (15)

then the error norm admits the fractional-order bound

$\|e(t)\| \le \|e(0)\|\, E_\alpha(-\rho t^{\alpha}) + \frac{\lambda_{\max}(P)}{\rho}\, \bar{u}$,    (16)

where $E_\alpha(\cdot)$ is the Mittag–Leffler function and $\|\Delta u(t)\| \le \bar{u}$. The Mittag–Leffler term generalizes the exponential decay in integer-order systems, capturing the memory-dependent, non-local convergence behavior of fractional dynamics.

Equation 16 implies that the error is ultimately bounded and cannot diverge if ρ>0 and ū is finite, thereby preventing long-term hallucinations.
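The Mittag–Leffler envelope in Equation 16 can be evaluated numerically via its power series $E_\alpha(z) = \sum_k z^k / \Gamma(\alpha k + 1)$. The sketch below is adequate for moderate arguments only and is an illustration, not a production algorithm (specialized methods are needed for large |z|):

```python
import math

def mittag_leffler(z, alpha, n_terms=60):
    """Series approximation E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1).

    Converges quickly for moderate |z|; truncated at n_terms terms.
    """
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(n_terms))
```

Two familiar special cases serve as checks: E_1(z) = e^z, so E_1(−ρt) recovers the exponential decay of integer-order systems, and E_{1/2}(z) = e^{z²} erfc(−z), the classical half-order kernel.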

Spectral stability margin and fractional effect. For interpretability, we summarize the stability condition using a spectral margin

$\rho = \lambda_{\min}(PC) - F\, \|PW\|$.    (17)

A larger ρ indicates a larger region of attraction and stronger robustness to perturbations. In fractional-order dynamics (α < 1), the effective forcing term is attenuated by a factor depending on Γ(1−α), which increases the margin ρ compared to the integer-order case. As a result, fewer Laplacian eigenmodes violate the stability condition, and the hallucination indices ηk tend to become negative, which matches the empirical reduction in unstable high-frequency modes.

Summary. Rather than providing full proofs, we emphasize the practical implications: (i) the Lyapunov condition (Equation 15) and margin (Equation 17) quantify robustness against hallucinations; (ii) fractional dynamics enlarge this stability region; and (iii) Fourier spectral filtering further pushes ηk to the stable regime. These theoretical insights are empirically validated by the stability and hallucination metrics reported in Section 4.

The stability analysis follows conventional fractional-order Lyapunov theory and requires two technical assumptions: (i) the non-linear activation f(·) is Lipschitz continuous, and (ii) the external disturbances are bounded. In practical GNN training, these assumptions hold approximately. First, although ReLU is not differentiable at the origin, it is globally 1-Lipschitz, and contemporary optimizers (such as Adam with small step sizes) keep the iterates within compact domains where the local Lipschitz constant remains finite. This relaxation is widely adopted in stability analyses of neural and graph-based models (Pascanu et al., 2013; Scaman and Virmaux, 2018). Second, the disturbances induced by stochastic training noise and spectral approximation errors remain bounded due to gradient clipping, finite-step discretization, and the bounded magnitude of node embeddings. Finally, the Lyapunov matrix P is computed numerically using the classical Bartels–Stewart solver (Golub and Van Loan, 2013), which ensures P ≻ 0 in all experiments. Together, these considerations justify the applicability of the theoretical assumptions in real training scenarios while preserving the rigor of the stability guarantees.

3.5 Stability-aware embedding refinement

To practically enforce the Lyapunov stability conditions and ensure bounded error dynamics, our framework actively refines the node embeddings. The next-step embeddings Xt+1 are determined by solving an optimization problem that balances fidelity to the spectrally filtered embeddings X^t with a regularization term directly tied to the system's stability:

$X_{t+1} = \arg\min_{Z}\; \|Z - \hat{X}_t\|^2 + \lambda\, \rho^{-1}(Z)$,    (18)

where $\lambda > 0$ is a hyperparameter balancing the two terms. Here, $\rho(Z)$ represents the spectral stability margin associated with the candidate embedding $Z$. By minimizing $\rho^{-1}(Z)$, the optimization actively guides the model toward configurations that maximize $\rho$, thereby enhancing system stability and mitigating the formation of hallucination-prone structures. This term ensures that the information energy in the embeddings remains bounded, preventing persistent instability or convergence to spurious equilibria.

The stability-aware refinement and clustering stages are closely interconnected: modularity is optimized on embeddings X_{t+1} that have been explicitly regularized by the Lyapunov margin ρ(Z). The hallucination indices {η_k} serve as diagnostic metrics to assess whether the resulting communities engage unstable spectral modes. In this sense, stability considerations indirectly influence the final partition via embedding refinement, while the clustering objective itself remains the conventional modularity metric.

3.6 Algorithm

We summarize the complete training and inference workflow of F2-CommNet in Algorithm 1 and Figure 1, which integrates fractional dynamics, Fourier spectral filtering, Lyapunov stability monitoring, and stability-aware modularity optimization.

Algorithm 1

Algorithm 1. F2-CommNet Update Rule

3.7 Complexity analysis

We analyze the computational complexity of F2-CommNet in both training and inference phases by decomposing its workflow into the major steps of Algorithm 1. Let n = |V| be the number of nodes, d the embedding dimension, H the effective memory horizon for fractional dynamics, and r ≪ n the number of leading eigenpairs retained for spectral decomposition.

3.7.1 Training phase

For each snapshot, the following costs dominate:

Fractional Dynamics. Updating embeddings under Caputo fractional dynamics requires convolution with H past states, leading to O(nHd).

Spectral Decomposition. A full Laplacian eigen-decomposition costs O(n³), but in practice only the top r modes are approximated using Lanczos or randomized SVD, giving O(nr log n).

Spectral Filtering. Multiplying embeddings by the spectral kernel ϕ(Λt) requires O(nd).

Stability Monitoring. Computing hallucination indices η_k for r modes and the Lyapunov margin ρ costs O(r + d²), negligible compared to the spectral steps.

Community Partitioning. Modularity-based clustering of n nodes requires O(nd).

Thus, the per-snapshot training complexity is approximately

O(nHd + nr log n + nd).    (19)

3.7.2 Inference phase

During inference, no parameter updates are performed. Each new snapshot requires:

• Fractional propagation O(nHd) with truncated history.

• Approximate eigen-decomposition O(nr log n).

• Spectral filtering and stability evaluation O(nd).

• Community assignment O(nd).

Hence, the per-snapshot inference complexity is

O(nHd + nr log n).    (20)

3.7.3 Comparison

Both training and inference scale nearly linearly with n when H and r are moderate, making F2-CommNet applicable to large-scale graphs. In practice, we use a truncated memory horizon H whose value is selected via a small validation sweep for each dataset (see Section 4.3). This keeps the fractional update efficient while preserving the long-memory behavior required by fractional dynamics. We also retain only r ≪ n leading spectral modes, which together ensure that F2-CommNet remains computationally tractable even for large dynamic graphs.

Table 1 presents the complexity analysis of each major component in F2-CommNet. The Fractional Dynamics step incurs a cost of O(nHd), linear in the number of nodes and embedding dimension over the memory horizon. The Spectral Decomposition requires an approximate eigen-decomposition of the Laplacian, with complexity O(nr log n) depending on the number of retained eigenmodes r. Spectral Filtering and Community Partitioning both scale linearly as O(nd), while Stability Monitoring adds a smaller overhead of O(r + d²).

Table 1

Table 1. Complexity analysis of F2-CommNet components.

Aggregating these terms, the overall training complexity per snapshot is O(nHd + nr log n + nd), while the inference complexity per snapshot reduces to O(nHd + nr log n) since no optimization of W, C, P is required. This shows that F2-CommNet scales near-linearly with respect to the graph size n, and remains practical for large dynamic networks while still incorporating fractional dynamics and stability-aware monitoring.

Computational Complexity and Industrial Scalability. The near-linear scaling of F2-CommNet with respect to node size n and embedding dimension d is crucial in industrial contexts where graphs can contain millions of entities. By limiting the memory horizon H and the number of retained eigenmodes r ≪ n, the framework ensures that training and inference remain tractable even for large-scale dynamic networks such as e-commerce transaction graphs, financial fraud monitoring, or communication networks. This scalability makes the method suitable for real-time or near-real-time deployment, where stability guarantees are essential to avoid spurious community alarms. Compared to baseline models, the fractional-Fourier design provides not only improved accuracy but also predictable resource usage, a key requirement in production environments.

3.8 Summary of methodology

F2-CommNet combines fractional-order neural dynamics, Fourier spectral filtering, and Lyapunov-guided refinement into a cohesive stability-aware framework for dynamic community detection. Fractional dynamics provide long-memory smoothing, whereas spectral filtering attenuates high-frequency modes susceptible to hallucinations. The resulting stability margin ρ and hallucination index η_k offer interpretable robustness guarantees, while the entire pipeline exhibits near-linear scalability, facilitating deployment on large dynamic graphs.

4 Experiments

This section presents a comprehensive evaluation of F2-CommNet. We aim to answer the following research questions:

Q1 Does F2-CommNet improve stability margins ρ and reduce hallucination indices ηk compared to existing methods?

Q2 How does it perform on classical clustering metrics such as ARI, NMI, and modularity Q?

Q3 What is the contribution of each component (fractional dynamics, Fourier filtering, Lyapunov stability) in the overall framework?

Q4 How sensitive is the model to hyperparameters such as fractional order α, leakage coefficient ci, embedding dimension, and window size?

4.1 Datasets

To evaluate the effectiveness and robustness of F2-CommNet, we conduct experiments on a diverse set of real-world and synthetic dynamic networks. All datasets are preprocessed into temporal snapshots {Gt} with consistent node sets and evolving edge relations. Statistics are summarized in Table 2.

Table 2

Table 2. Statistics of datasets used in experiments.

Enron Email Network (EN) (Kojaku et al., 2024): A communication dataset with n = 36,692 nodes and 367,662 edges, where nodes are employees and edges represent time-stamped email exchanges. Communities correspond to functional groups within the company.

DBLP Co-authorship (DBLP) (Diboune et al., 2024): A co-authorship graph with n = 317,080 authors and 1,049,866 edges. Snapshots are constructed yearly, reflecting the evolution of research communities.

Cora Citation Network (Cora-TS) (Hu et al., 2020): A citation graph adapted into temporal slices, with n = 19,793 articles and 126,842 citations. Node attributes are bag-of-words features; communities reflect scientific subfields.

Reddit Hyperlink Network (Reddit) (Kaiser et al., 2023): A large-scale temporal network with n = 55,863 nodes and 858,490 edges, where nodes are subreddits and edges represent hyperlinks shared by users. Community structure aligns with topical categories.

UCI Messages (UCI) (Prokop et al., 2024): A dynamic communication dataset with n = 1,899 nodes and 59,835 edges, representing private message exchanges on an online forum. Snapshots are segmented weekly to capture evolving social groups.

Human Protein-Protein Interaction (PPI) (Oughtred et al., 2021): A biological network with n = 3,852 proteins and 76,584 interactions. Communities correspond to functional protein complexes, with dynamics reflecting newly discovered interactions.

Synthetic Dynamic SBM (Syn-SBM) (Peixoto, 2019): A synthetic dynamic stochastic block model with n = 10,000 nodes and 4 evolving communities. To study stability and hallucination resistance under noise, we inject temporal perturbations by randomly rewiring a proportion p of edges per snapshot. We consider three noise levels p ∈ {0.02, 0.05, 0.10} (low, moderate, high). Unless otherwise specified, the main experiments use p = 0.05, while Section 4.12 evaluates robustness across all noise settings.
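The per-snapshot rewiring perturbation can be sketched as follows. This is a hypothetical, degree-agnostic implementation (each selected edge is replaced by a uniformly random new edge); the authors' exact rewiring protocol may differ:

```python
import numpy as np

def rewire_edges(A, p, rng):
    """Randomly rewire a proportion p of edges in a snapshot.

    Each selected edge is removed and replaced by an edge between a
    uniformly random, currently unconnected node pair (no self-loops).
    The total edge count is preserved.
    """
    A = A.copy()
    n = A.shape[0]
    iu = np.triu_indices(n, k=1)
    edges = [(i, j) for i, j in zip(*iu) if A[i, j] > 0]
    n_rewire = int(round(p * len(edges)))
    for t in rng.choice(len(edges), size=n_rewire, replace=False):
        i, j = edges[t]
        A[i, j] = A[j, i] = 0.0           # remove the original edge
        while True:                        # sample a fresh endpoint pair
            u, v = rng.integers(0, n, size=2)
            if u != v and A[u, v] == 0:
                A[u, v] = A[v, u] = 1.0
                break
    return A
```

Because every removal is paired with exactly one insertion, the perturbation changes topology (and thus community boundaries) without altering edge density, isolating the noise effect that the robustness experiments measure.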

4.2 Baselines

We evaluate F2-CommNet against a diverse set of baselines spanning static, spectral, temporal, and stability-aware approaches:

Static GNNs: Graph Convolutional Network (GCN) (Kipf and Welling, 2017), Graph Attention Network (GAT) (Velickovic et al., 2018).

Spectral methods: Spectral Clustering (SC) (Shah, 2022).

Temporal GNNs: Temporal Graph Network (TGN) (Rossi et al., 2020), Dynamic Graph Convolutional Network (DyGCN) (Manessi et al., 2020).

Stability-enhanced methods: EvolveGCN (Pareja et al., 2020).

Proposed: F2-CommNet.

Table 3 summarizes the taxonomy of baseline methods considered in our experiments. We divide existing approaches into four main categories: (i) Static GNNs such as GCN and GAT, which capture spectral properties but lack temporal modeling and stability control; (ii) Spectral methods such as Spectral Clustering, which operate purely in the eigen-space of the Laplacian without temporal adaptation; (iii) Temporal GNNs, including TGN and DyGCN, which extend GNNs with dynamic node updates but still lack explicit hallucination suppression; and (iv) Stability-enhanced methods such as EvolveGCN, which introduce mechanisms to handle evolving graphs but without formal stability guarantees.

Table 3

Table 3. Taxonomy of baselines.

The proposed F2-CommNet unifies these perspectives by simultaneously supporting temporal modeling, spectral filtering, attention-based aggregation, Lyapunov-guided stability monitoring, and hallucination control. As shown in Table 3, it is the only method that explicitly checks all five properties, highlighting its principled design and broader coverage compared with existing baselines.

We also considered Transformer-style dynamic graph models as possible baselines. However, current graph Transformers are designed primarily for supervised temporal prediction tasks, such as link prediction or node forecasting with time-stamped labels, and typically rely on quadratic self-attention. These two properties make them unsuitable for our setting, where (i) the goal is unsupervised structural community discovery rather than predictive accuracy, and (ii) large dynamic graphs demand near-linear scalability rather than O(n²) attention complexity. Adapting transformer-based models to our label-free clustering task would require substantial changes to their architectures and training objectives, leading to indirect and potentially unfair comparisons. Following established methodology in dynamic community detection, we therefore use GCN, GAT, SC, TGN, DyGCN, and EvolveGCN as the most representative and directly comparable benchmarks.

We also considered whether recent stability-enhanced GNNs, such as SO-GCN (Chen et al., 2025c), LDC-GAT (Chen et al., 2025b), and other constraint-based stability models, could be included as baselines. However, these methods are designed primarily for semi-supervised node classification on static graphs and rely on label-driven losses, Jacobian-based regularization, or Lyapunov-style constraints, none of which apply to our unsupervised dynamic community detection setting. Such constraints also carry substantial computational cost, making these models impractical for the million-edge temporal graphs used in our study. Consequently, in line with established practice in dynamic community detection, we regard these stability-oriented methods as conceptually complementary rather than directly comparable baselines.

4.2.1 Baseline configuration

For fair comparison, hyperparameters of baseline models are selected via grid search on the validation set to minimize loss. Table 4 summarizes the final choices. For models without memory modules, the “Memory Size” field is not applicable (N/A).


Table 4. Final hyperparameter configurations of baseline models after validation sweeps.

4.3 Implementation details

All experiments are implemented in PyTorch Geometric and executed on a single NVIDIA RTX 3090 GPU with 24 GB memory. The Adam optimizer is used with learning rate 10⁻³, weight decay 10⁻⁵, and embedding dimension d = 64. The batch size is fixed at 128, and each model is trained for 200 epochs. Early stopping with a patience of 20 epochs is applied to prevent overfitting. Spectral filtering uses r = 50 Lanczos-approximated eigenmodes.
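As a concrete illustration of the spectral truncation step, the sketch below computes the r smallest-eigenvalue modes of a normalized graph Laplacian with SciPy's Lanczos-based `eigsh` solver; the ring graph and r = 5 are toy assumptions, not the paper's r = 50 configuration:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def leading_laplacian_modes(adj, r):
    """r smallest-eigenvalue modes of the normalized Laplacian,
    obtained with the Lanczos-based eigsh solver."""
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    D = sp.diags(d_inv_sqrt)
    L = sp.eye(adj.shape[0]) - D @ adj @ D      # L = I - D^{-1/2} A D^{-1/2}
    vals, vecs = eigsh(L, k=r, which="SM")      # smallest-magnitude eigenpairs
    order = np.argsort(vals)                    # sort ascending for clarity
    return vals[order], vecs[:, order]

# toy ring graph on n = 30 nodes (illustrative only)
n = 30
rows, cols = np.arange(n), (np.arange(n) + 1) % n
A = sp.csr_matrix((np.ones(n), (rows, cols)), shape=(n, n))
A = A + A.T
vals, vecs = leading_laplacian_modes(A, r=5)
print(np.round(vals, 3))        # the smallest eigenvalue is ~0 (constant mode)
```

Projecting embeddings onto these r modes and discarding the rest is the low-pass filtering step; on large graphs the Lanczos iteration avoids a full eigendecomposition.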

For the fractional dynamics, the truncation window H is treated as a dataset-dependent hyperparameter. For each dataset, we conduct a small validation sweep over a candidate set (e.g., H ∈ {5, 10, 20}) and select the smallest value that yields stable training and strong validation modularity. The final choices are as follows: Enron Email (EN) and UCI Messages use H = 5; Human PPI, Cora-TS, and Reddit Hyperlink use H = 10; DBLP Co-authorship and Synthetic SBM use H = 20. These values remain fixed for all reported experiments to ensure full reproducibility.

Unless otherwise stated, all reported numbers are averaged over 10 independent runs with random seeds {0, 1, …, 9}. For each dataset, method, and metric, we report the sample mean μ and the corresponding 95% confidence interval μ ± δ, where δ = t(0.975, 9) · σ/√10, with t(0.975, 9) the 97.5th percentile of the Student t distribution with 9 degrees of freedom and σ the sample standard deviation. This unified statistical protocol ensures a fair and robust comparison across all experiments.
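This confidence-interval protocol can be reproduced directly; the sketch below uses SciPy's Student-t quantile, and the ten scores are illustrative placeholders rather than results from the paper:

```python
import numpy as np
from scipy import stats

def mean_ci95(samples):
    """Mean and 95% CI half-width, delta = t_{0.975, n-1} * sigma / sqrt(n),
    matching the 10-seed reporting protocol."""
    x = np.asarray(samples, dtype=float)
    n = x.size
    mu = x.mean()
    sigma = x.std(ddof=1)                  # sample standard deviation
    t = stats.t.ppf(0.975, df=n - 1)       # t_{0.975, 9} ~ 2.262 for n = 10
    return mu, t * sigma / np.sqrt(n)

# e.g. ARI scores from 10 seeded runs (made-up numbers for illustration)
scores = [0.79, 0.81, 0.80, 0.78, 0.82, 0.80, 0.79, 0.81, 0.80, 0.80]
mu, delta = mean_ci95(scores)
print(f"{mu:.3f} ± {delta:.3f}")
```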

4.4 Large-scale experiments on Reddit and DBLP

We evaluate F2-CommNet on the two largest datasets in our benchmark suite: Reddit and DBLP. To clarify methodological differences, the baselines are grouped into: (i) static GNNs (GCN, GAT), which do not model temporal evolution, and (ii) dynamic GNNs (DyGCN, EvolveGCN), which adapt to evolving graph structures. All results follow the unified statistical protocol described in Section 4.3, and are reported as mean ± 95% confidence intervals over 10 runs. As shown in Table 5, F2-CommNet achieves the highest ARI on both benchmarks, improving upon static baselines by 20–25% and upon the strongest dynamic baseline (EvolveGCN) by 10–15%. Moreover, the confidence intervals of F2-CommNet are significantly narrower, indicating reduced sensitivity to initialization and greater robustness on large-scale dynamic graphs.


Table 5. Performance on Reddit and DBLP (mean ± 95% CI over 10 runs).

4.4.1 Fractional dynamics

The Caputo fractional derivative is approximated via the Grünwald–Letnikov discretization, which requires convolving each update with a truncated history of length H. We vary the fractional order α ∈ {0.6, 0.7, 0.8, 0.9, 1.0} to investigate the role of long-memory effects. The case α = 1.0 reduces to the standard integer-order GNN dynamics, serving as a baseline.
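A minimal sketch of the truncated Grünwald–Letnikov update is given below, assuming the standard recursive binomial weights and a generic drift term f(x) standing in for the network update; for α = 1 it collapses to the usual Euler step, matching the integer-order baseline:

```python
import numpy as np

def gl_coefficients(alpha, H):
    """Grunwald-Letnikov weights w_j = (-1)^j C(alpha, j), via the
    recursion w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1) / j)."""
    w = np.empty(H + 1)
    w[0] = 1.0
    for j in range(1, H + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def fractional_step(history, alpha, H, drift, dt=1.0):
    """One truncated GL update: x_t = dt^alpha * f(x_{t-1})
    - sum_{j>=1} w_j * x_{t-j}, keeping only the last H states."""
    w = gl_coefficients(alpha, H)
    m = min(H, len(history))
    memory = sum(w[j] * history[-j] for j in range(1, m + 1))
    return drift(history[-1]) * dt**alpha - memory

# For alpha = 1.0 the weights are [1, -1, 0, ...] and the update reduces
# to the explicit Euler step x_{t+1} = x_t + dt * f(x_t).
f = lambda x: -0.5 * x           # toy stable drift, standing in for the GNN update
xs = [1.0]
for _ in range(5):
    xs.append(fractional_step(xs, alpha=1.0, H=10, drift=f))
print(xs)                        # geometric decay 1.0, 0.5, 0.25, ...
```

For α < 1 the weights decay slowly, so every past state contributes with diminishing magnitude: this is the long-memory damping the paper exploits, with H bounding the cost per step.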

4.4.2 Stability and hallucination regularization

To enforce robustness, two stability-aware regularizers are incorporated into the objective:

L_ρ = −ρ,   L_η = Σ_k max(0, η_k),    (21)

where ρ denotes the Lyapunov stability margin and ηk is the hallucination index of eigenmode uk. The total training objective is defined as

L = L_recon + λ_ρ L_ρ + λ_η L_η,    (22)

with λρ and λη balancing reconstruction fidelity against stability guarantees. For all datasets, λρ and λη are tuned in {0.1, 0.5, 1.0} using a validation split. Spectral filtering employs r = 50 leading eigenmodes by default, approximated using the Lanczos method for scalability.
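The two regularizers in Equations 21, 22 are straightforward to implement; the sketch below uses NumPy with hypothetical values of ρ and η_k (a PyTorch version acting on the model's margin and index estimates would be analogous):

```python
import numpy as np

def stability_losses(rho, eta, lambda_rho=0.5, lambda_eta=0.5):
    """Stability-aware regularizers: L_rho = -rho rewards a large Lyapunov
    margin, and L_eta = sum_k max(0, eta_k) penalizes positive
    (hallucination-prone) mode indices. Weights are illustrative."""
    l_rho = -rho
    l_eta = np.maximum(0.0, np.asarray(eta)).sum()
    return lambda_rho * l_rho + lambda_eta * l_eta

# hypothetical values: margin rho = 0.4, per-mode hallucination indices eta_k
eta = [-0.3, -0.1, 0.05, 0.2]    # only the positive entries are penalized
reg = stability_losses(rho=0.4, eta=eta)
print(reg)                       # 0.5*(-0.4) + 0.5*(0.05 + 0.2) = -0.075
```

The total objective then adds this term to the reconstruction loss, L = L_recon + reg, with λ_ρ and λ_η swept over {0.1, 0.5, 1.0} as described above.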

All experiments follow the unified statistical protocol described in Section 4.3, and each configuration is evaluated over 10 independent runs with distinct random seeds.

4.5 Result analysis summary

From the comprehensive results in Table 6, several consistent patterns emerge across all seven benchmark datasets (Cora, Citeseer, PubMed, Reddit, Enron, DBLP, BioGRID).


Table 6. Stability and clustering performance on seven datasets.

(i) Stability improvement. F2-CommNet consistently achieves the highest stability margin ρ, with average gains of more than 2 × compared to GCN, GAT, and spectral clustering, and at least 30% relative improvement over the strongest temporal baselines such as TGN, DyGCN, and EvolveGCN. This confirms the effectiveness of fractional dynamics and Lyapunov-guided monitoring in enforcing robust equilibrium during dynamic community evolution.

(ii) Hallucination suppression. The hallucination index ηmax is drastically reduced by F2-CommNet, reaching values as low as 0.20–0.29 across all datasets, compared with 0.30–0.52 for competing methods. Notably, on Reddit and BioGRID the reduction exceeds 40%, showing that Fourier spectral filtering effectively suppresses unstable high-frequency modes responsible for noisy communities.

(iii) Clustering quality enhancement. The stability and robustness improvements translate directly into superior clustering outcomes. F2-CommNet obtains the best Adjusted Rand Index (ARI), Normalized Mutual Information (NMI), and modularity Q in every case, with gains of 5–10% over GCN/GAT and 3–6% over temporal models like TGN and EvolveGCN. For example, on Cora the ARI improves from 0.75 (EvolveGCN) to 0.80, and on Reddit the NMI improves from 0.77 (EvolveGCN) to 0.85.

Overall, these findings demonstrate that F2-CommNet achieves a balanced and principled advancement in stability, hallucination suppression, and clustering quality, providing a robust and generalizable framework for dynamic community detection across diverse domains.

Table 7 summarizes the metric-wise wins of F2-CommNet across seven benchmark datasets. We count victories over five evaluation criteria: stability margin ρ, hallucination index ηmax, Adjusted Rand Index (ARI), Normalized Mutual Information (NMI), and modularity Q. As shown, F2-CommNet consistently dominates: it secures the best ρ on all six datasets where stability is well-defined, reduces ηmax to the lowest levels on all datasets, and achieves the highest ARI, NMI, and Q in nearly all cases. In total, the model wins 32 out of 35 possible comparisons, demonstrating its robustness across diverse graph domains.


Table 7. Count of metrics (ρ, ηmax, ARI, NMI, Q) on which F2-CommNet is best for each dataset.

This result highlights that the integration of fractional dynamics, spectral filtering, and stability-aware regularization not only stabilizes training but also directly translates into superior clustering quality. The strong performance across heterogeneous datasets such as citation networks (Cora, Citeseer, PubMed), social networks (Reddit, DBLP), and biological graphs (BioGRID) confirms the generalizability of F2-CommNet.

Key findings. (i) F2-CommNet enlarges ρ by more than 3× compared to GCN/GAT. (ii) The hallucination index ηmax is reduced to the lowest levels among all methods. (iii) These stability gains translate into better clustering quality.

Figure 2 shows training curves of modularity and stability margin ρ, confirming that F2-CommNet converges faster and to more stable solutions.


Figure 2. Training curves on Cora: (a) modularity Q, (b) stability margin ρ.

Qualitative results in Figure 3 visualize learned communities, showing that F2-CommNet yields cleaner and more compact clusters.


Figure 3. Visualization of community detection results across seven representative methods on synthetic data. Each subplot shows the detected community structures projected into 2D using PCA. Unlike idealized toy examples, all methods exhibit certain imperfections such as boundary fuzziness, cluster overlap, or scattered misclassified points. Compared to the baselines, our proposed F2-CommNet produces more compact and well-separated communities, though not perfectly, reflecting a realistic advantage in stability and robustness without exaggerating performance.

4.6 Ablation studies

We evaluate five variants:

• Baseline (α = 1.0): integer-order dynamics only.

• + Fourier Projection.

• + Fractional Damping.

• + Lyapunov Stability.

• Full F2-CommNet.

To evaluate the generality of each architectural component, we compare the five variants across three typical datasets: citation (Cora), social (Reddit), and biological networks (BioGRID).

Table 8 summarizes the cross-dataset ablation results. We focus on the stability margin ρ and the hallucination index ηmax, as they directly reflect the stabilization effect of each architectural component.


Table 8. Cross-dataset ablation study on stability metrics (mean ± 95% CI over 10 runs).

Across all three datasets, each module consistently enhances both the stability margin and hallucination suppression. Fractional damping yields the largest individual gain, while the Lyapunov stability term tightens the confidence intervals. The full F2-CommNet attains the best performance in all domains, indicating that the stabilizing mechanisms generalize beyond any single dataset.

4.7 Sensitivity analysis

We analyze the sensitivity of F2-CommNet to fractional order α, leakage ci, embedding dimension d, and window size w.

Training dynamics and sensitivity analysis.

Figure 4 provides a joint view of training behaviors and parameter sensitivity. In Figure 4, we compare the modularity Q and stability margin ρ across seven representative methods. Classical baselines such as GCN and Spectral clustering show slower convergence and weaker stability, while more advanced temporal models (DyGCN and EvolveGCN) demonstrate improved robustness. Our proposed F2-CommNet consistently achieves higher Q and larger ρ, validating both community quality and stability guarantees. We further analyze the role of the fractional order α. We observe that α ∈ [0.7, 0.9] yields the most balanced performance: smaller α enlarges the stability margin but slows down convergence due to excessive memory effects, whereas larger α accelerates convergence but weakens robustness, reflected by an increase in ηmax. These results empirically support the theoretical trade-off derived in Equation 22 and highlight the importance of selecting moderate fractional orders in practice.


Figure 4. Training dynamics and sensitivity analysis on the Cora dataset, illustrating the effect of the fractional order α on the stability margin ρ and the hallucination index ηmax, as well as their evolution during training.

4.8 Parameter sensitivity and fractional stability analysis

We further investigate how key architectural and fractional-order parameters influence model stability and clustering performance. Table 8 summarizes the ablation study, showing that each fractional component contributes positively to the stability margin ρ and clustering quality Q. The progressive inclusion of Fourier projection, fractional damping, and Lyapunov stability terms leads to a monotonic improvement, with the full F2-CommNet achieving the highest average performance across all metrics. This indicates that the combination of fractional dynamics and Lyapunov-based correction yields a synergistic stabilization effect rather than a simple additive gain.

Table 9 examines the impact of the fractional order α on both stability and hallucination suppression. As α decreases from 1.0 to 0.5, the stability margin ρ gradually increases while the hallucination index ηmax decreases, reflecting a stronger damping of unstable eigenmodes. This behavior confirms that the fractional operator serves as a spectral regulator—suppressing noisy high-frequency responses while preserving coherent community structures. Notably, α≈0.7 provides a desirable trade-off between responsiveness and smoothness, consistent with the optimal setting adopted in our experiments.


Table 9. Fractional order sweep: stability margin ρ and hallucination index ηmax for different α.

Table 10 presents the detailed eigenmode analysis of the hallucination indices ηk under α = 1.0 and α = 0.7. Compared with the integer-order case, the fractional configuration compresses the dynamic range of ηk values, effectively reducing extreme oscillations at higher Laplacian frequencies (k>6). This spectral contraction explains the observed increase in temporal consistency across snapshots and validates the fractional damping mechanism described in Equation 22.


Table 10. Spectral mode suppression: hallucination indices ηk under α = 1.0 and α = 0.7.

Effect of leakage ci. We also analyzed the leakage coefficient ci, which controls the intrinsic damping strength of the fractional dynamics. Increasing ci enlarges the Lyapunov stability margin ρ and speeds up the attenuation of disturbances; however, excessive leakage over-damps node activations, yielding overly smooth embeddings with reduced community contrast. Conversely, very small values of ci weaken the effective damping, making the system more susceptible to noise, which raises the hallucination index ηmax and fragments communities. In our experiments, we probed a moderate range of ci values and observed that performance (in terms of ρ, ηmax, ARI, and Q) remains stable within a band of ci ∈ [0.2, 0.4]. We therefore fix ci in this range for all reported results, which provides a robust trade-off between stability and representation strength.

Effect of embedding dimension d. Performance improves with increasing feature dimension up to d = 128, beyond which overfitting emerges, suggesting that excessively large latent spaces capture noise rather than informative structural variations.

Effect of window size w. A larger temporal window captures longer dependencies but increases computational overhead. Empirically, w = 64 offers a satisfactory balance between temporal expressiveness and efficiency, providing stable training and consistent community alignment across dynamic snapshots.

4.9 Spectral mode suppression analysis

We further analyze the suppression of high-frequency Laplacian eigenmodes. Table 10 compares hallucination indices ηk = λkF/ck under integer-order (α = 1.0) vs. fractional-order (α = 0.7) dynamics. The results confirm that fractional dynamics suppress unstable high-frequency modes, consistent with the theoretical model. The theoretical derivation in Equation 22 suggests that fractional damping reduces the effective forcing term λkF, thereby shifting certain mid-frequency modes into the stable region.

Fractional-order dynamics thus provide a natural spectral regularization mechanism. Unlike integer-order propagation, which tends to amplify noise residing in higher Laplacian eigenmodes, the fractional operator introduces a smooth decay governed by α, effectively attenuating oscillatory perturbations and stabilizing graph filters. This behavior leads to smaller hallucination indices ηk and smoother temporal transitions across successive snapshots. Empirically, the suppression effect becomes more evident as α decreases, demonstrating that fractional damping not only mitigates over-smoothing but also prevents spectral drift caused by transient noise. Consequently, the fractional component can be interpreted as an adaptive low-pass filter that preserves informative structures while restraining unstable eigenmodes. This motivates the following analysis of spectral hallucination and stability margins.

4.10 Error dynamics under perturbations

We next study error trajectories under different noise intensities, based on the error dynamics formulation (Equations 11–15). As shown in Table 11, fractional dynamics consistently achieve tighter long-term error bounds lim sup_{t→∞} ‖e(t)‖, in line with the boundedness theorem (Equation 21).


Table 11. Error dynamics under perturbations: long-term error bound lim sup_{t→∞} ‖e(t)‖.

4.11 Fractional Lyapunov function validation

Finally, we validate Lyapunov convergence by monitoring V(t) = e(t)ᵀP e(t). Table 12 demonstrates that α < 1 accelerates the decay of V(t), achieving faster stability, consistent with the sufficient conditions in Equations 20, 21.
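Monitoring such a quadratic Lyapunov function is straightforward; the sketch below tracks V(t) = eᵀPe along an illustrative stable linear error system (P = I and the matrix A are assumptions for demonstration, not the paper's dynamics):

```python
import numpy as np

def lyapunov_value(e, P):
    """Quadratic Lyapunov function V = e^T P e (P symmetric positive definite)."""
    return float(e @ P @ e)

# illustrative linear error dynamics e_{t+1} = A e_t with a stable A
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
P = np.eye(2)                    # with P = I, V is the squared error norm
e = np.array([1.0, -1.0])
vs = []
for _ in range(10):
    vs.append(lyapunov_value(e, P))
    e = A @ e
# V(t) decays monotonically along this stable trajectory
print(vs[0], vs[-1])
```

In practice one evaluates V(t) on the estimated error state at each training step and checks that it is non-increasing, which is exactly the convergence diagnostic reported in Table 12.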


Table 12. Lyapunov function decay: values of V(t) at different time points.

4.12 Simulation studies

To complement the main experiments, we further evaluate the robustness of F2-CommNet under controlled noise conditions using the synthetic dynamic SBM described in Section 4.1. As noted previously, the dataset includes three perturbation levels p ∈ {0.02, 0.05, 0.10}, corresponding to low, moderate, and high noise. This section examines the model's stability and hallucination suppression behavior across these noise regimes.

To validate the theoretical framework of F2-CommNet, we perform a hierarchy of simulations, ranging from toy graphs to synthetic networks and real-world benchmarks. This staged design illustrates how fractional dynamics, Fourier spectral filtering, and Lyapunov-based analysis jointly contribute to stability enhancement and hallucination suppression.

The Laplacian eigenvalues of the 10-vertex synthetic graph are

λ(L) ≈ {0.00, 1.27, 2.15, 3.62, 4.10, 5.48, 6.33, 7.89, 9.05, 11.22},

revealing a rich spectral structure. The smallest eigenvalue λ1 = 0 corresponds to the trivial constant mode, mid-range modes (e.g., λ3, λ4) encode coarse community partitions, while the largest eigenvalues (λ9, λ10) correspond to highly oscillatory modes that dominate hallucination channels. As shown in Section 3.8, decreasing the fractional order α suppresses such unstable modes, enlarging the stability margin ρ and reducing the hallucination index ηmax.
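The roles of the different eigenmodes can be reproduced on any small two-community graph; the sketch below builds a 10-vertex toy graph (two 5-cliques joined by a bridge, an assumption distinct from the paper's synthetic graph) and shows that the Fiedler mode encodes the coarse partition while the largest eigenvalues are the oscillatory modes:

```python
import numpy as np

# two 5-node cliques joined by a single bridge edge (illustrative toy graph)
A = np.zeros((10, 10))
for block in (range(0, 5), range(5, 10)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1
A[4, 5] = A[5, 4] = 1                  # bridge between the two communities
L = np.diag(A.sum(axis=1)) - A         # combinatorial Laplacian L = D - A
vals, vecs = np.linalg.eigh(L)         # eigenvalues in ascending order
print(np.round(vals, 2))               # vals[0] = 0 is the constant mode
# the Fiedler vector (second eigenvector) splits the two cliques,
# while the largest eigenvalues correspond to highly oscillatory modes
fiedler = vecs[:, 1]
print(np.sign(fiedler[:5]), np.sign(fiedler[5:]))   # opposite signs per clique
```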

Experiment 1: Baseline Integer Dynamics.

Integer-order dynamics (α = 1.0) follow classical exponential decay, as illustrated in Figure 5. However, high-frequency eigenmodes remain unstable, amplifying oscillations and destabilizing node trajectories. Although partial suppression occurs in low-frequency modes, the lack of robustness in high-frequency channels highlights the inherent limitations of classical integer-order updates, motivating the introduction of fractional damping.


Figure 5. Time evolution of vertex states v1–v10 under different settings. Blue curves represent clean integer-order dynamics (α = 1.0), red dashed curves denote noisy integer-order dynamics, and green curves show noisy fractional-order dynamics (α = 0.8). Fractional damping suppresses oscillations and confines unstable modes, consistent with the suppression mechanism discussed in Section 3.9.

Experiment 2: Fractional Damping.

When governed by fractional order α = 0.8, the system exhibits long-memory smoothing. As illustrated in Figure 6, the Mittag–Leffler decay suppresses oscillations and enforces stable convergence, even under moderate perturbations. Compared with integer-order dynamics, fractional damping converges more slowly at first but achieves greater long-term robustness. This matches the theoretical claim that fractional updates redistribute dissipation across time, thereby suppressing hallucination-prone modes.


Figure 6. Heatmap comparison of vertex dynamics across time. (Left) clean integer-order dynamics (α = 1.0); (Middle) noisy integer-order dynamics amplifying instabilities; (Right) noisy fractional-order dynamics (α = 0.8) where oscillations are confined to bounded ranges. Fractional dynamics reshape the spectral stability landscape and mitigate hallucination-prone modes.

Experiment 3: Parameter Sweep.

We sweep α ∈ [0.5, 1.0] to quantify robustness. As shown in Table 9 and Figure 7, smaller α consistently enlarges ρ and reduces ηmax, though convergence slows for α ≤ 0.6. The range α ∈ (0.7, 0.9) offers the best trade-off between speed and stability, matching the theoretical condition in Equation 22.


Figure 7. Baseline integer-order dynamics (α = 1.0) on the Cora dataset. The system follows exponential decay, but high-frequency eigenmodes remain unstable, leading to amplified oscillations and destabilized trajectories. While low-frequency components exhibit suppression, the persistence of unstable modes highlights the fragility of integer-order updates.

Experiment 4: Perturbation Analysis.

We next test robustness under explicit edge perturbations (Δw14 = 0.5, Δw25 = 0.8, Δw36 = 1.0). Integer-order dynamics amplify noise via unstable high-frequency modes, while fractional-order dynamics confine oscillations to bounded trajectories (Figure 8). Table 13 quantifies this effect, demonstrating that fractional damping reduces ηmax and enlarges ρ, consistent with the Lyapunov boundedness theorem (Equation 21).


Figure 8. Error dynamics under perturbations. Integer-order dynamics amplify oscillations and diverge, whereas fractional dynamics confine trajectories within bounded ranges.


Table 13. Perturbation analysis on the Cora dataset.

Experiment 5: Spectral Hallucination Indices (Sections 3.10, 3.11).

Finally, we evaluate hallucination indices at the spectral level. Table 14 shows that fractional damping (α = 0.8) selectively stabilizes mid-frequency modes, shifting Mode 3 from unstable to stable. High-frequency modes remain unstable but with reduced growth, consistent with the bounded dynamics observed in Experiment 4. Figure 9 confirms that the Lyapunov function V(t) decays monotonically under fractional updates, validating the theoretical stability guarantees.


Table 14. Spectral hallucination analysis on the Cora dataset.


Figure 9. Lyapunov function decay V(t) under integer-order (α = 1.0) and fractional-order (α = 0.8) dynamics. Fractional damping ensures smoother convergence and tighter stability bounds.

4.12.1 Summary of simulation results

Across all five experiments, three consistent findings emerge:

• High-frequency eigenmodes are the primary catalysts of hallucinations, driving unstable oscillations.

• Fractional damping selectively stabilizes mid-frequency modes, confining noise to bounded ranges and reducing ηmax.

• The optimal range α ∈ (0.7, 0.9) balances convergence speed with robustness, maximizing stability margin ρ while suppressing hallucinations.

4.13 Qualitative results on real datasets

To complement the quantitative evaluation, we provide qualitative analyses on two representative real-world datasets, illustrating how F2-CommNet suppresses hallucination-prone structures and stabilizes community embeddings.

Cora: t-SNE embedding visualization. Figure 10 compares the node embeddings produced by GCN, EvolveGCN, and F2-CommNet using t-SNE. GCN and EvolveGCN exhibit scattered and overlapping clusters, indicating unstable high-frequency modes that distort community boundaries. In contrast, F2-CommNet produces compact and well-separated clusters with significantly fewer noisy points, visually confirming the suppression of hallucination artifacts predicted by the spectral analysis.


Figure 10. t-SNE visualization of embeddings on the Cora dataset. F2-CommNet forms clearer clusters with fewer scattered points, indicating reduced hallucination and improved structural stability.

Reddit: Training stability curves. Figure 11 reports the evolution of the stability margin ρ(t) and hallucination index ηmax(t) during training on Reddit. GCN and TGN exhibit strong oscillations and intermittent spikes in ηmax, revealing the presence of unstable spectral modes. EvolveGCN partially mitigates this behavior but still suffers from fluctuations. F2-CommNet maintains consistently higher ρ(t) and substantially lower ηmax(t) throughout training, demonstrating robust suppression of hallucination-prone eigenmodes on large-scale social networks.


Figure 11. Training dynamics on the Reddit dataset. (a) Stability margin ρ over training epochs. (b) Hallucination index ηmax over training epochs. F2-CommNet maintains a higher stability margin and a lower hallucination index throughout training, while baseline methods exhibit instability and oscillatory behaviors.

Summary. Across citation and social networks, F2-CommNet consistently generates more stable, coherent, and hallucination-resistant embeddings. These qualitative results align with the theoretical predictions of the fractional damping mechanism.

5 Discussion

The proposed F2-CommNet framework advances community detection by integrating fractional-order dynamics with Fourier spectral filtering, which systematically suppresses unstable modes prone to hallucination. Our theoretical analysis demonstrates that fractional damping enlarges the Lyapunov stability margin and effectively constrains error propagation paths in the presence of disturbances. This aligns with previous findings on instability in deep GNNs (Oono and Suzuki, 2020; Balcilar et al., 2021), while providing a constructive remedy grounded in fractional calculus.

Compared with traditional GNNs such as GCN and GAT, F2-CommNet shows enhanced robustness against over-smoothing and spectral noise. Prior works have attempted to stabilize message passing through residual connections (Li et al., 2019), polynomial filters (Levie et al., 2019), or regularization schemes such as DropEdge (Rong et al., 2020), yet they remain vulnerable to mode hallucinations. Our results indicate that the memory terms introduced by fractional dynamics act as intrinsic stabilizers, strengthening the spectral filtering and enabling interpretable clustering.

Recent stability-oriented GNNs such as SO-GCN (Chen et al., 2025c) and LDC-GAT (Chen et al., 2025b) offer valuable insights, but they are designed for semi-supervised node classification on static graphs, relying on label-driven objectives and task-specific stability constraints. In contrast, dynamic community detection requires unsupervised optimization of structural modularity across evolving graph snapshots. Moreover, the additional Jacobian-based or norm-constrained computations in these models incur substantial overhead, whereas F2-CommNet maintains near-linear complexity through its fractional difference operator, making it better suited to large-scale dynamic settings.

The empirical improvements observed in modularity, ARI, and calibration metrics confirm that fractional-Fourier coupling provides a generalizable mechanism. This is consistent with analogous results in fractional control theory (Diboune et al., 2024), where memory-induced damping yields resilience beyond integer-order models. In graph learning, Fourier-based filters have been studied in spectral GNNs (Levie et al., 2019), but the coupling with fractional operators introduces a novel design paradigm. Ablation studies further reveal that while each component—fractional damping, Fourier filtering, and Lyapunov-based refinement—improves performance individually, their combination is essential for hallucination suppression.

Beyond algorithmic contributions, the framework raises questions of interpretability and scalability. Fractional dynamics introduce hyperparameters (e.g., order α, leakage rate ρ) whose selection influences the stability guarantees. Although our theoretical bounds guide parameter choice, adaptive tuning strategies remain an open challenge. Scalability also requires attention: Fourier filtering benefits from efficient polynomial approximations, whereas fractional integration is computationally heavier. Hybrid approximations, such as truncated Grünwald–Letnikov operators, may offer a balance between accuracy and efficiency.
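As a rough sketch of the truncation idea (our illustration, not the paper's solver), the Grünwald–Letnikov fractional difference reduces to a sliding weighted sum over the K most recent states, with weights generated by a simple recurrence:

```python
import numpy as np

def gl_coefficients(alpha: float, K: int) -> np.ndarray:
    """Grünwald–Letnikov weights w_k = (-1)^k C(alpha, k), truncated to K terms.

    Uses the recurrence w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k).
    """
    w = np.empty(K)
    w[0] = 1.0
    for k in range(1, K):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def truncated_gl_difference(history: np.ndarray, alpha: float, K: int) -> np.ndarray:
    """Approximate the order-alpha fractional difference at the latest step.

    history: array of shape (T, d), rows ordered oldest -> newest.
    Only the K most recent states enter the sum, giving O(K * d) cost per
    step instead of O(T * d) for the full (ever-growing) memory.
    """
    w = gl_coefficients(alpha, K)
    recent = history[::-1][:K]          # newest state first
    return np.tensordot(w[: len(recent)], recent, axes=1)
```

For α = 1 and K = 2 the weights are [1, −1], recovering the ordinary first difference; fractional α spreads the weight over the memory window, which is the damping effect exploited in the text.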

Looking ahead, three directions appear promising. First, extending F2-CommNet to temporal multiplex networks may enhance robustness in heterogeneous dynamic environments, resonating with advances in temporal community detection and multiplex modeling. Second, connections with Bayesian uncertainty modeling (Wang et al., 2024) suggest opportunities to combine probabilistic calibration with fractional stability, building on recent developments in Bayesian GNNs and uncertainty quantification (Zhang et al., 2020). Third, deploying F2-CommNet in applied domains such as smart grids, epidemiological contact networks, and multimodal social platforms will allow further evaluation of its interpretability and hallucination resistance (Ying et al., 2019).

In summary, this study unites spectral graph theory, fractional-order calculus, and neural dynamics to address instability in GNN-based community detection. By leveraging memory-driven fractional damping and Fourier spectral filtering, F2-CommNet establishes a foundation for interpretable, stable, and scalable graph learning models.

6 Conclusion

This study presented F2-CommNet, a fractional-Fourier hybrid framework for dynamic community detection. By combining fractional-order dynamics, Fourier spectral filtering, and stability-aware refinement, the model offers both theoretical guarantees and practical scalability.

Theoretical impact. Our fractional Lyapunov analysis demonstrates that the proposed framework enlarges the stability margin ρ by more than 3× (on average from 0.12 to 0.41 across datasets) and reduces the hallucination index ηmax by up to 35% (from 0.31 to 0.20). These results provide explicit robustness criteria rarely found in the prior community detection literature.

Empirical performance. Across seven benchmarks (Cora, Citeseer, PubMed, Reddit, Enron, DBLP, BioGRID), F2-CommNet improves Adjusted Rand Index (ARI) by up to 25% (e.g., Cora: 0.58 → 0.73) and Normalized Mutual Information (NMI) by 15% (PubMed: 0.49 → 0.56). Compared with static baselines (GCN, GAT), the improvements are consistent, while relative to dynamic baselines (DyGCN, EvolveGCN), additional gains of 3–6% ARI are observed. Overall, as summarized in Table 7, F2-CommNet achieves the best result in 32 out of 35 metric-dataset pairs. Moreover, the variance across 10 independent runs remains below 2%, confirming robustness and reproducibility.

Practical scalability. The complexity remains near-linear, O(nHd + nr log n), with hidden dimension H ≪ n and spectral rank r ≪ n. The method scales to large graphs: on Reddit (232k nodes, 11.6M edges), F2-CommNet reduces training time per epoch by 18% compared with EvolveGCN (42.5s → 34.7s), while on DBLP (317k nodes, 1.6M edges) it lowers peak memory usage by 21%. These quantitative results highlight that the method is not only more accurate but also computationally efficient in industrial-scale settings.
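The near-linear cost hinges on keeping the spectral rank small. As a rough illustration (our own sketch, not the authors' implementation; the function name and filter response are assumptions), rank-r spectral filtering reduces to two thin matrix products once the r lowest eigenmodes are available:

```python
import numpy as np

def low_rank_spectral_filter(L, X, r, response):
    """Filter node features X through the r lowest eigenmodes of Laplacian L.

    L: (n, n) graph Laplacian; X: (n, d) node features; r << n.
    `response` maps eigenvalues to per-mode gains, e.g. a low-pass curve
    that attenuates the high-frequency modes associated with hallucinations.
    """
    lam, U = np.linalg.eigh(L)          # dense solve, for clarity only
    lam, U = lam[:r], U[:, :r]          # eigh returns ascending eigenvalues
    gains = response(lam)               # (r,) gains, one per kept mode
    # Two (n, r) x (r, d) products: O(n * r * d) once the basis is known.
    return U @ (gains[:, None] * (U.T @ X))
```

On large sparse graphs the dense `eigh` would be replaced by a truncated Lanczos solver (e.g. `scipy.sparse.linalg.eigsh`), which is what keeps the overall step near-linear in n for fixed r.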

In summary, F2-CommNet delivers measurable and reproducible gains: +25% ARI, +15% NMI, a 3× stability margin, 35% fewer hallucinations, and 32/35 wins across benchmarks, with variance below 2% and training time reduced by up to 18% on large-scale graphs. These results demonstrate that fractional-Fourier modeling provides a rigorous and scalable foundation for robust dynamic graph learning.

7 Future work

Although F2-CommNet demonstrates considerable advances in hallucination suppression and community interpretability, several avenues remain for further exploration. Future work will focus on scaling the method to billion-scale graphs via distributed spectral filtering and efficient fractional solvers, extending the framework to dynamic and temporal networks, and investigating adaptive strategies for selecting the fractional order α.

In addition, recent stability-oriented architectures such as SO-GCN (Chen et al., 2025c) and LDC-GAT (Chen et al., 2025b) suggest promising constraint-based mechanisms. An intriguing direction is to examine whether their stability principles generalize to unsupervised and dynamic clustering settings, thereby improving boundary preservation in evolving graphs.

Furthermore, applying the model to cross-domain problems such as cybersecurity, protein-protein interaction networks, and knowledge graph reasoning could broaden its impact. Finally, further theoretical analysis, particularly of stochastic perturbations and generalization guarantees, could strengthen the mathematical foundations of Fourier–fractional graph learning.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding authors.

Author contributions

DQ: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. YM: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing.

Funding

The author(s) declared that financial support was not received for this work and/or its publication.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Abbahaddou, Y., Ennadir, S., Lutzeyer, J. F., and Vazirgiannis, M. (2024). ”Bounding the expected robustness of graph neural networks subject to node feature attacks,” in Proceedings of the International Conference on Learning Representations (ICLR) (Amherst, MA: OpenReview.net).

Balcilar, M., Renton, G., Héroux, P., Gaüzère, B., Adam, S., and Honeine, P. (2021). ”Analyzing the expressive power of graph neural networks in a spectral perspective,” in International Conference on Learning Representations (Amherst, MA: OpenReview.net).

Cai, X., and Wang, B. (2023). A graph convolutional fusion model for community detection in multiplex networks. Data Min. Knowl. Discov. 37, 1518–1547. doi: 10.1007/s10618-023-00932-w

Casteigts, A., Flocchini, P., Quattrociocchi, W., and Santoro, N. (2023). Time-varying graphs and dynamic networks. Theoret. Comput. Sci. 929, 45–69. doi: 10.1007/978-3-642-22450-8_27

Chen, J., Wang, S., and He, L. (2023). Stability of graph neural networks for community detection. Neurocomputing 514, 48–61. doi: 10.1016/j.neucom.2023.01.072

Chen, L., Zhou, Q., and Zhao, D. (2025a). k-plex-based community detection with graph neural networks. Inform. Sci. 689:121509. doi: 10.1016/j.ins.2024.121509

Chen, L., Zhu, H., and Han, S. (2025b). Ldc-gat: a lyapunov-stable graph attention network with dynamic filtering and constraint-aware optimization. Axioms 14:504. doi: 10.3390/axioms14070504

Chen, L., Zhu, H., and Han, S. (2025c). Stability-optimized graph convolutional network: a novel propagation rule with constraints derived from odes. Mathematics 13:761. doi: 10.3390/math13050761

Cheng, X., Zhu, W., and Yan, W. Q. (2025). Centrality-aware collaborative network embedding for overlapping community detection. IEEE Trans. Netw. Sci. Eng. 13, 2236–2250. doi: 10.1109/TNSE.2025.3611500

Diboune, A., Slimani, H., Nacer, H., and Bey, K. B. (2024). A comprehensive survey on community detection methods and applications in complex information networks. Social Netw. Anal. Min. 14:93. doi: 10.1007/s13278-024-01246-5

Golub, G. H., and Van Loan, C. F. (2013). Matrix Computations, 4th Edn. Baltimore, MD: Johns Hopkins University Press.

Guo, B., Deng, L., and Lian, T. (2025). Gcn-based unsupervised community detection with refined structure centers and expanded pseudo-labeled set. PLoS ONE 20:e0327022. doi: 10.1371/journal.pone.0327022

Holme, P. (2023). Temporal Network Theory. SpringerBriefs in Complexity. Cham: Springer. doi: 10.1007/978-3-031-30399-9

Hu, W., Fey, M., Zitnik, M., Dong, Y., Ren, H., Liu, B., et al. (2020). ”Open graph benchmark: datasets for machine learning on graphs,” in Advances in Neural Information Processing Systems (NeurIPS) (Red Hook, NY: Curran Associates, Inc.), Vol. 33, 22118–22133.

Kaiser, J., Fähnrich, B., and Heintz, L. (2023). Ups and downs on ‘r/science' — exploring the dynamics of science communication on reddit. J. Sci. Commun. 22:A08. doi: 10.22323/2.22020208

Kang, Q., Zhao, K., Ding, Q., Ji, F., Li, X., Liang, W., et al. (2024). ”Unleashing the potential of fractional calculus in graph neural networks with frond,” in Proceedings of the International Conference on Learning Representations (ICLR) (Amherst, MA: OpenReview.net).

Kipf, T. N., and Welling, M. (2017). ”Semi-supervised classification with graph convolutional networks,” in International Conference on Learning Representations (Amherst, MA: OpenReview.net).

Kojaku, S., Radicchi, F., and Ahn, Y.-Y. (2024). Network community detection via neural embeddings. Nat. Communic. 15:9446. doi: 10.1038/s41467-024-52355-w

Kumar, M., Mehta, U., and Cirrincione, G. (2024). Enhancing neural network classification using fractional-order activation functions. AI Open 5, 10–22. doi: 10.1016/j.aiopen.2023.12.003

Lambiotte, R., and Rosvall, M. (2022). Temporal community detection in evolving networks. Nat. Commun. 13:345.

Levie, R., Isufi, E., and Kutyniok, G. (2019). ”On the transferability of spectral graph filters,” in 2019 13th International conference on Sampling Theory and Applications (SampTA) (Piscataway, NJ: Institute of Electrical and Electronics Engineers (IEEE)), 1–5. doi: 10.1109/SampTA45681.2019.9030932

Li, G., Müller, M., Thabet, A., and Ghanem, B. (2019). ”Deepgcns: can gcns go as deep as cnns?,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) (Piscataway, NJ: Institute of Electrical and Electronics Engineers (IEEE)), 9267–9276. doi: 10.1109/ICCV.2019.00936

Liu, C., Han, Y., Xu, H., Yang, S., Wang, K., and Su, Y. (2024). A community detection and graph neural network based link prediction approach for scientific literature. Mathematics 12:369. doi: 10.3390/math12030369

Manessi, F., Rozza, A., and Manzo, M. (2020). Dynamic graph convolutional networks. Pattern Recogn. 97:107000. doi: 10.1016/j.patcog.2019.107000

Maskey, S., Paolino, R., Bacho, A., and Kutyniok, G. (2023). ”A fractional graph laplacian approach to oversmoothing,” in Proceedings of the Neural Information Processing Systems (NeurIPS) (Red Hook, NY: Curran Associates, Inc.).

Oono, K., and Suzuki, T. (2020). ”Graph neural networks exponentially lose expressive power for node classification,” in Proceedings of the International Conference on Learning Representations (ICLR) (Amherst, MA: OpenReview.net).

Oughtred, R., Rust, J., Chang, C., Breitkreutz, B.-J., Stark, C., Willems, A., et al. (2021). Biogrid: a comprehensive biomedical resource of curated protein, genetic, and chemical interactions. Protein Sci. 30, 187–200. doi: 10.1002/pro.3978

Panda, S. K., Vijayakumar, V., Agarwal, R. P., and Rasham, T. (2025). Fractional-order complex-valued neural networks: stability results, numerical simulations and application to game-theoretical decision making. Discrete Contin. Dyn. Syst.-S 18, 2622–2643. doi: 10.3934/dcdss.2025071

Pareja, A., Domeniconi, G., Chen, J., Ma, T., Suzumura, T., Kanezashi, H., et al. (2020). ”Evolvegcn: evolving graph convolutional networks for dynamic graphs,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34 (Palo Alto, CA: AAAI Press), 5363–5370. doi: 10.1609/aaai.v34i04.5984

Pascanu, R., Mikolov, T., and Bengio, Y. (2013). ”On the difficulty of training recurrent neural networks,” in Proceedings of the 30th International Conference on Machine Learning (ICML), volume 28 of Proceedings of Machine Learning Research (Brookline, MA: Proceedings of Machine Learning Research (PMLR)), 1310–1318.

Peixoto, T. P. (2019). ”Bayesian stochastic blockmodeling,” in Advances in Network Clustering and Blockmodeling, eds. P. Doreian, V. Batagelj, and A. Ferligoj (Hoboken, NJ: Wiley), 289–332. doi: 10.1002/9781119483298.ch11

Prokop, P., Dráždilová, P., and Platoš, J. (2024). Overlapping community detection in weighted networks via hierarchical clustering. PLoS ONE 19:e0312596. doi: 10.1371/journal.pone.0312596

Rong, Y., Huang, W., Xu, T., and Huang, J. (2020). ”Dropedge: towards deep graph convolutional networks on node classification,” in Proceedings of the International Conference on Learning Representations (ICLR) (Amherst, MA: OpenReview.net).

Rossi, E., Chamberlain, B., Frasca, F., Eynard, D., Monti, F., and Bronstein, M. (2020). ”Temporal graph networks for deep learning on dynamic graphs,” in International Conference on Learning Representations (ICLR) Workshop (Amherst, MA: OpenReview.net).

Scaman, K., and Virmaux, A. (2018). ”Lipschitz regularity of deep neural networks: analysis and efficient estimation,” in Advances in Neural Information Processing Systems (NeurIPS) (Red Hook, NY: Curran Associates, Inc.), Vol. 31.

Shah, N. (2022). An overview of spectral clustering. Appl. Comput. Harmon. Anal. 59, 100–135.

Sivalingam, S. M., and Govindaraj, V. (2025). Neural fractional order differential equations. Expert Syst. Applic. 267:126041. doi: 10.1016/j.eswa.2024.126041

Velickovic, P., Cucurull, G., Casanova, A., Romero, A., Liò, P., and Bengio, Y. (2018). ”Graph attention networks,” in International Conference on Learning Representations (Amherst, MA: OpenReview.net).

Wang, F., Liu, Y., Liu, K., Wang, Y., Medya, S., and Yu, P. S. (2024). Uncertainty in graph neural networks: a survey. Trans. Mach. Learn. Res. doi: 10.48550/arXiv.2403.07185

Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., and Yu, P. S. (2021). A comprehensive survey on graph neural networks. IEEE Trans. Neural Netw. Learn. Syst. 32, 4–24. doi: 10.1109/TNNLS.2020.2978386

Ying, Z., Bourgeois, D., You, J., Zitnik, M., and Leskovec, J. (2019). Gnnexplainer: generating explanations for graph neural networks. Adv. Neural Inform. Process. Syst. 32. doi: 10.48550/arXiv.1903.03894

Yu, B., Xu, X., Wen, C., Xie, Y., and Zhang, C. (2022). ”Hierarchical graph representation learning with structural attention for graph classification,” in CAAI International Conference on Artificial Intelligence, Lecture Notes in Computer Science (Cham: Springer), 473–484. doi: 10.1007/978-3-031-20500-2_39

Zhang, C., Liu, F., Zhou, L., He, J., and Zhang, H. (2020). Bayesian graph neural networks for reliable prediction. IEEE Trans. Neural Netw. Learn. Syst. 31, 3214–3229.

Keywords: dynamic community detection, fractional Fourier transform, fractional-order control and stability, fractional-order dynamical systems, fractional-order optimization, graph neural networks, Lyapunov stability, scalable graph learning

Citation: Qu D and Ma Y (2026) F2-CommNet: Fourier–Fractional neural networks with Lyapunov stability guarantees for hallucination-resistant community detection. Front. Comput. Neurosci. 19:1731452. doi: 10.3389/fncom.2025.1731452

Received: 24 October 2025; Revised: 14 December 2025;
Accepted: 25 December 2025; Published: 21 January 2026.

Edited by:

Si Wu, Peking University, China

Reviewed by:

Wenjie Zhu, China Jiliang University, China
Han Shuguang, Zhejiang Sci-Tech University, China

Copyright © 2026 Qu and Ma. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Yanfei Ma, yanfei.ma@ieee.org; Daozheng Qu, daozheng.qu@gmail.com

These authors have contributed equally to this work and share first authorship
