- 1Donald Bren School of Information and Computer Sciences (ICS), University of California, Irvine, Irvine, CA, United States
- 2Purdue University Northwest, Hammond, IN, United States
- 3United States Military Academy, West Point, NY, United States
Hyperdimensional Computing (HDC) is a neurally inspired computing paradigm that leverages lightweight, high-dimensional operations to emulate key brain functions. Recent advances in HDC have primarily targeted two domains: learning, where the goal is to extract and generalize patterns for tasks such as classification, and cognitive computation, which requires accurate information retrieval for human-like reasoning. Although state-of-the-art HDC methods achieve strong performance in both areas, they lack a principled understanding of the fundamentally different requirements imposed by learning vs. cognition. In particular, existing works provide limited guidance on designing encoding methods that generate optimal hyperdimensional representations for these distinct tasks. In this study, we propose the first universal hyperdimensional encoding method that dynamically adapts to the needs of both learning and cognitive computation. Our approach is based on neural-symbolic techniques that assign random complex hypervectors to atomic bases (e.g., alphabet definitions) and then apply algebraic operations in the high-dimensional hyperspace to control the correlation structure among encoded data points. Through theoretical analysis, we show that learning tasks benefit from correlated representations to maximize memorization and generalization capacity, whereas cognitive tasks require orthogonal, highly separable representations to enable accurate decoding and reasoning. We further derive a separation metric that quantifies this trade-off and validate it empirically across image classification and decoding tasks. Our results demonstrate that tuning the encoder to increase correlation improves classification accuracy from 65% to 95%, while maximizing separation enhances decoding accuracy from 85% to 100%. These findings provide the first systematic framework for designing hyperdimensional encoders that unify learning and cognition under a single, theoretically grounded representation model.
1 Introduction
The human brain remains the most sophisticated information processing system known to date, despite decades of breakthroughs in computer science and machine learning. Advances in biological vision, cognitive psychology, and theoretical neuroscience have inspired numerous models that have significantly shaped modern artificial intelligence (AI) (Lindsay, 2020; Indiveri and Horiuchi, 2011; Mitrokhin et al., 2020). Yet, existing AI methods often face fundamental challenges when deployed on real-world platforms, including energy efficiency, robustness under noise, and the ability to generalize with limited data. Thus, bridging the gap between cognitive reasoning and machine learning remains a critical open problem.
Brain-inspired computing approaches attempt to integrate insights from neuroscience into algorithmic frameworks for learning and reasoning. Among these, Hyperdimensional Computing (HDC) has emerged as a powerful paradigm that abstracts cognition into computations over high-dimensional distributed representations, or hypervectors (Kanerva, 2009, 1988). Unlike conventional representations that store information in localized units, HDC distributes information holographically across all dimensions of a hypervector, enabling inherent robustness to noise, hardware errors, and partial information loss. Moreover, a well-defined set of algebraic operations in the high-dimensional hyperspace supports symbolic reasoning and learning in a unified mathematical framework.
Several models fall under the umbrella of HDC (Schlegel et al., 2022; Kleyko et al., 2021, 2016), including Tensor Product Representations, Holographic Reduced Representations (Tay et al., 2019; Plate, 1995), Multiply-Add-Permute (Gayler, 1998), Binary Spatter Codes (Kanerva, 1996), and Sparse Binary Distributed Representations (Rachkovskiy et al., 2005). These models differ in how they encode information into hypervectors, yet share common advantages such as distributed storage, associative memory, and robustness to random perturbations. Importantly, unlike spiking neural networks that attempt to mimic neuronal dynamics, HDC focuses on the representational and computational aspects of cognition, enabling efficient learning and reasoning in both software and hardware implementations.
Recent efforts have applied HDC to diverse machine learning tasks, including classification (Najafabadi et al., 2016), clustering (Imani et al., 2020), regression (Hernández-Cano et al., 2021), fault detection (Poduval et al., 2021a, 2022a), and face detection (Imani et al., 2022; Poduval et al., 2021b). HDC has also demonstrated potential for cognitive tasks involving symbolic reasoning and graph-based inference (Poduval et al., 2022b), and more broadly as a robust, interpretable representation framework for complex biological and multimodal data (Stock et al., 2024). End-to-end HDC learning frameworks have been developed for event-driven and neuromorphic data (Zou et al., 2022b), and hybrid approaches combining HDC with neural feature extractors have been proposed for federated and edge scenarios (Khaleghi et al., 2024). Comprehensive surveys unify and benchmark the expanding landscape of HDC methods across classification, encoding strategies, and efficiency trade-offs (Kleyko et al., 2021; Vergés et al., 2025). In addition, significant hardware and software advances target energy-efficient deployment, including reconfigurable HDC accelerators (Nayan et al., 2025), heterogeneous programming systems for multi-target compilation (Arbore et al., 2025), and intelligent sensing co-designs for real-time sparse-data processing (Yun et al., 2024). In all such applications, an encoder maps raw data into the hyperspace, after which learning or reasoning proceeds through simple algebraic manipulations over hypervectors. This simplicity opens the door to real-time, low-power, and error-tolerant implementations on emerging hardware platforms.
However, the design of the encoder remains a fundamental open question. Current practice relies heavily on empirical choices, with little understanding of how encoder properties influence learning accuracy, reasoning fidelity, or robustness across tasks. This raises several critical questions: What properties make an HDC encoder suitable for a given application? Do learning and cognitive reasoning tasks impose conflicting requirements on the representation space? Can a single encoder be adapted to meet both?
Our key insight is that different tasks impose fundamentally different geometric constraints on hyperspace representations. Learning tasks, such as image classification or object detection, require correlated encodings that cluster similar inputs to facilitate pattern extraction. In contrast, cognitive tasks, such as question answering or symbolic reasoning, demand exclusive encodings that maximize separation between data points to ensure accurate information retrieval and interpretable decoding. Figure 1 illustrates this dichotomy between correlative and exclusive encoder designs.
Figure 1. (a) Two directions in HDC encoding designs: the correlative approach is suitable for learning since it clusters similar datapoints together, while the exclusive approach is suitable for cognition since each datapoint is encoded individually. (b) The learning capacity is determined by the separation of the signal and noise distributions, which is large for correlative encoding. (c) For decodability, exclusive encoding helps identify each datapoint in the feature space.
To address these challenges, we summarize our contributions as follows.
• Task-Aware Kernel-Based HDC Encoding. We instantiate a kernel-based hyperdimensional encoder that is mathematically equivalent to Fractional Power Encoding (FPE) and to Random Fourier Feature constructions in Vector Function Architectures (Plate, 1995; Frady et al., 2022; Rahimi and Recht, 2007). Instead of claiming a new encoding primitive, we focus on how the kernel width w controls correlation in hyperspace, thereby imposing opposing requirements for learning vs. cognitive retrieval and factorization.
• Separation Metrics for Learning and Cognition. Building on the classical VSA view that nearly orthogonal hypervectors are optimal for associative retrieval (Kanerva, 2009; Plate, 1995; Gayler, 1998), we derive simple, variance-normalized separation metrics that characterize when hypervectors remain decodable (cognition) and when class centroids remain separable (learning) as a function of w and dimensionality. These metrics provide an analytical explanation for why learning tasks prefer correlated encodings, whereas retrieval and resonator factorization prefer near-orthogonal encodings.
• Illustrative Empirical Study and Design Guidelines. Using a controlled 5 × 5 synthetic pattern and downsampled MNIST (LeCun et al., 1998), we empirically map decoding accuracy, classification accuracy, and resonator convergence as functions of kernel width. While these toy datasets do not aim for state-of-the-art performance, they provide clear evidence that aligns with our analytical predictions and yield practical guidelines for tuning kernel-based HDC encoders. Extending this analysis to richer, continuous-valued, high-dimensional datasets is an important direction for future work.
Our results show that decoding tasks require a separation value of approximately 2–3 for robust performance under noise, whereas learning tasks achieve peak accuracy at lower separation values around 0.8–1.2. Factorization tasks, in which hypervectors must be decomposed into constituent components, demand even higher exclusivity to prevent error propagation during decoding. These findings provide the first unified theoretical and empirical framework for encoder design in HDC.
2 Materials and methods
2.1 Hyperdimensional computing: an overview
Hyperdimensional Computing (HDC) represents information using large-dimensional vectors, or hypervectors, which are nearly orthogonal in the high-dimensional space, also called hyperspace (Kanerva, 1998). Computation in HDC proceeds through a small set of algebraic operations on hypervectors. Two key operations form the foundation of HDC: Bundling (+), which performs element-wise addition to combine sets of hypervectors, and Binding (⊙), which performs element-wise multiplication to encode associations or relationships. Both operations preserve the holographic property of hypervectors, ensuring that information is evenly distributed across all dimensions. Since hypervectors typically have i.i.d. and pseudo-random components, the representations are robust to noise and partial corruption, enabling recovery of information even when some dimensions are missing.
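To make these operations concrete, the following minimal NumPy sketch (illustrative only, not the authors' implementation) shows that random bipolar hypervectors are nearly orthogonal, that bundling preserves similarity to its constituents, and that binding produces a dissimilar but invertible association:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality

# Two random bipolar hypervectors.
a = rng.choice([-1, 1], size=D)
b = rng.choice([-1, 1], size=D)

def sim(x, y):
    """Cosine similarity between two hypervectors."""
    return np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

print(sim(a, b))                       # ~0: random hypervectors are nearly orthogonal

# Bundling (+): element-wise addition; the result stays similar to its parts.
bundle = a + b
print(sim(bundle, a), sim(bundle, b))  # both ~0.7

# Binding: element-wise multiplication; the result is dissimilar to both parts,
# but binding again with b recovers a exactly for bipolar codes (b * b = 1).
bound = a * b
print(sim(bound, a), sim(bound, b))    # both ~0
print(sim(bound * b, a))               # 1.0
```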
Over the past decade, HDC has been successfully applied to diverse problems, including classification (Kanerva, 2009), activity recognition (Kim et al., 2018), biomedical signal processing (Moin et al., 2021), multimodal sensor fusion (Räsänen and Saarinen, 2015), security (Thapa et al., 2021; Zhang et al., 2021), and distributed sensing (Kleyko et al., 2018). A major advantage of HDC is its ability to learn from few-shot or even one-shot examples, often outperforming support vector machines (SVMs), gradient boosting methods, and convolutional neural networks (CNNs) in settings with limited data (Rahimi et al., 2018; Mitrokhin et al., 2019). Moreover, HDC implementations are highly energy-efficient on embedded processors (Montagna et al., 2018), enabling real-time deployment on low-power hardware platforms.
The design of the encoder critically influences both the similarity metric between data points in hyperspace and the degree of correlation or exclusivity in their representations. Figure 1 illustrates two primary encoding regimes. The first is Correlative Encoding, which captures shared structure among data points by preserving similarities in hyperspace. This regime is well-suited to learning tasks such as classification, where the objective is to extract patterns from data while maintaining smooth decision boundaries for generalization. The second regime is Exclusive Encoding, which maximizes separation between hypervectors to enable accurate decoding of stored information. This regime is essential for cognitive tasks requiring symbolic reasoning, information retrieval, or logical inference.
Therefore, learning tasks benefit from correlated representations that increase capacity and prevent overfitting, whereas cognitive tasks demand orthogonal, highly separated hypervectors for accurate decoding and reasoning. These two contrasting requirements highlight the need for an encoding framework that dynamically tunes the balance between correlation and exclusivity.
2.2 Universal neural encoding
Our positional encoder uses complex-valued hypervectors constructed as random phasors. We first sample base vectors and define

$$\mathbf{B}_X = e^{\,\mathrm{i}\,\boldsymbol{\theta}_X / w_x}, \qquad \mathbf{B}_Y = e^{\,\mathrm{i}\,\boldsymbol{\theta}_Y / w_y}, \qquad \boldsymbol{\theta}_X, \boldsymbol{\theta}_Y \sim \mathcal{N}(0, \mathbf{I}_D),$$

where wx and wy are kernel length scales controlling the correlation structure along the horizontal and vertical axes, respectively, and the exponential is applied component-wise. A pixel at the integer coordinate (X, Y) is then encoded by binding powers of these bases,

$$\mathcal{P}_{X,Y} = \mathbf{B}_X^{\,X} \odot \mathbf{B}_Y^{\,Y},$$
where ⊙ denotes component-wise complex multiplication. In Figure 2, we show the similarity distribution of the kernel generated by the exclusive and correlative encodings, which can be tuned via the scale parameters wx and wy.
Figure 2. Our universal encoder controls the trade-off between correlation and exclusivity via tunable parameters in the Gaussian kernel. (a) Universal HDC Encoding. (b) Exclusiveness of Encoding.
This construction is mathematically equivalent to Fractional Power Encoding (FPE) (Plate, 1995) and to the Vector Function Architecture proposed by Frady et al. (2022) in which continuous coordinates are represented by exponentiating random base vectors. By Bochner's theorem, exponentiating Gaussian-distributed phases yields a shift-invariant Gaussian kernel whose width is controlled by the scaling parameter (Rahimi and Recht, 2007; Li et al., 2021). In the terminology of HDC, our model instantiates a complex-valued VSA closely related to Fourier Holographic Reduced Representations (FHRR) (Plate, 1995; Kleyko et al., 2021). We therefore do not claim novelty at the level of the encoding primitive; instead, we build on this established mechanism to analyze how the kernel width w induces task-dependent trade-offs between learning and cognitive retrieval.
2.2.1 Complex-domain implementation
All hypervectors in this study are represented as complex-valued phasor vectors, following the FHRR-style VSA model (Plate, 1995). Binding is implemented as component-wise complex multiplication, and superposition (bundling) as component-wise addition, optionally followed by normalization. Similarity between two hypervectors is measured by the real part of the normalized complex inner product. This is the same complex-domain framework that underlies the Vector Function Architecture in Frady et al. (2022) and the positional encodings used in resonator-based scene understanding (Renner et al., 2024).
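The sketch below (our own minimal NumPy illustration; the variable names and the convention of dividing Gaussian phases by the length scale are assumptions consistent with the FPE construction above) instantiates the phasor-based positional encoder, the complex binding and similarity operations, and the image superposition used throughout the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 2048            # hypervector dimensionality
H, W = 13, 13       # image size (downsampled MNIST used later in the paper)
wx, wy = 2.0, 1.0   # kernel length scales (the example values of Figure 3)

# Random phasor bases: exponentiating Gaussian phases yields a shift-invariant
# Gaussian kernel over positions, with width set by the length scales.
Bx = np.exp(1j * rng.standard_normal(D) / wx)
By = np.exp(1j * rng.standard_normal(D) / wy)

def pos_hv(X, Y):
    """Positional hypervector for coordinate (X, Y): bind powers of the bases."""
    return (Bx ** X) * (By ** Y)      # binding = component-wise complex multiplication

def similarity(u, v):
    """Real part of the normalized complex inner product."""
    return np.real(np.vdot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))

def encode_image(f):
    """Superpose positional codes weighted by pixel values (binary or real-valued)."""
    Hv = np.zeros(D, dtype=complex)
    for Y in range(f.shape[0]):
        for X in range(f.shape[1]):
            Hv = Hv + f[Y, X] * pos_hv(X, Y)
    return Hv

# Nearby positions are correlated, distant ones nearly orthogonal, and the decay
# rate along each axis is controlled by (wx, wy).
print(similarity(pos_hv(3, 3), pos_hv(4, 3)))    # high (slow decay along x, wx = 2)
print(similarity(pos_hv(3, 3), pos_hv(3, 4)))    # lower (faster decay along y, wy = 1)
print(similarity(pos_hv(3, 3), pos_hv(10, 3)))   # near zero
```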
2.2.2 Decoding pipeline
For clarity, we summarize the decoding procedure below.
1. Encode the image f as a single composite hypervector.
2. For each pixel (X, Y), compute a similarity score sX, Y, defined as the real part Re(·) of the normalized complex inner product δ(·, ·) between the positional basis hypervector and the composite image hypervector.
3. Apply a threshold τ (e.g., the mean score) to obtain a binarized estimate of fX, Y.
4. Optionally, refine the reconstruction by iteratively subtracting the contribution of the current estimate and repeating steps (2)–(3), as described in Section 2.4.
Figures showing continuous similarity distributions (e.g., Figure 4) plot the pre-binarization scores sX, Y for pixels with ground-truth values of 0 and 1. These histograms are diagnostic tools that show how the kernel width w affects the separation between the two score distributions. All reported decoding accuracies are computed after the binarization step by comparing against the binarized MNIST ground truth fX, Y∈{0, 1} on a per-pixel basis.
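A minimal sketch of steps (1)–(3), reusing pos_hv, similarity, and encode_image from the encoder sketch above (the function names and the per-pixel accuracy helper are ours):

```python
import numpy as np

def decode_image(Hv, shape):
    """Steps (1)-(3): score every pixel against the composite hypervector,
    then binarize with the mean-score threshold."""
    H, W = shape
    scores = np.zeros((H, W))
    for Y in range(H):
        for X in range(W):
            scores[Y, X] = similarity(pos_hv(X, Y), Hv)   # step (2)
    tau = scores.mean()                                   # step (3): threshold
    return (scores > tau).astype(int), scores

def decoding_accuracy(f, f_hat):
    """Per-pixel reconstruction accuracy against the binarized ground truth."""
    return float(np.mean(f == f_hat))

# Example:
# f_hat, scores = decode_image(encode_image(f), f.shape)
# print(decoding_accuracy(f, f_hat))
```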
2.3 Empirical illustration
The effect of kernel parameters on hyperspace representations is illustrated in Figure 3. Panel (a) shows the two-dimensional Gaussian kernel for wx = 2 and wy = 1, with correlations decaying more slowly along the x-axis due to the larger length scale. Panel (b) depicts the similarity distributions computed from a synthetic image classification dataset. Images from the same class exhibit high intra-class similarity, while inter-class similarity remains low, demonstrating that the proposed encoder controls separation and correlation in hyperspace via kernel parameters.
Figure 3. (a) The two-dimensional Gaussian kernel for wx = 2, wy = 1, with slower decay along the x-axis. (b) Similarity distributions for images within the same class and across different classes.
2.4 Information retrieval for cognition
In cognitive tasks, HDC must accurately retrieve information encoded in hypervectors, which requires careful control of correlations in the representation space. As an illustrative example, consider decoding the original pixel values fX, Y from the encoded hypervector for an image dataset. Inspired by prior work on HDC-based knowledge extraction and compression (Poduval et al., 2022b), we employ an iterative decoding procedure that refines estimates of fX, Y by progressively canceling noise contributions in the reconstruction.
Decoding begins by computing an initial estimate of each pixel value from the similarity between its positional encoding basis and the composite image hypervector. Specifically, the initial estimate binarizes this similarity score, where the binarization assigns 0 or 1 depending on whether the similarity is below or above its mean value, respectively. This step simplifies the analysis by considering binary features without sacrificing generality, as the approach extends naturally to multi-level quantization.

The initial reconstruction of the encoded vector is then formed by superposing the estimated pixel values with their positional bases, and is used to approximate the noise contributions of the remaining pixels. The decoding is refined iteratively by subtracting these estimated contributions before re-scoring each pixel, and the procedure is repeated until convergence.
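One possible implementation of this iterative refinement (our reading of the procedure; the exact update used by the authors is not reproduced in the text) subtracts the estimated contribution of every other pixel from the composite hypervector before re-scoring the target pixel, and stops once the estimate no longer changes:

```python
import numpy as np

def iterative_decode(Hv, shape, n_iters=5):
    """Iterative decoding with noise cancellation (uses pos_hv, similarity,
    and decode_image from the previous sketches)."""
    H, W = shape
    P = {(X, Y): pos_hv(X, Y) for Y in range(H) for X in range(W)}
    f_hat, _ = decode_image(Hv, shape)                 # initial estimate
    for _ in range(n_iters):
        # Reconstruction implied by the current estimate.
        Hv_hat = sum(f_hat[Y, X] * P[(X, Y)] for Y in range(H) for X in range(W))
        scores = np.zeros((H, W))
        for Y in range(H):
            for X in range(W):
                # Cancel the estimated contributions of all *other* pixels.
                residual = Hv - (Hv_hat - f_hat[Y, X] * P[(X, Y)])
                scores[Y, X] = similarity(P[(X, Y)], residual)
        new_f_hat = (scores > scores.mean()).astype(int)
        if np.array_equal(new_f_hat, f_hat):           # converged
            break
        f_hat = new_f_hat
    return f_hat
```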
A key parameter affecting decoding accuracy is the kernel length scale w, which determines correlations among nearby pixels. A small w produces nearly orthogonal hypervectors, minimizing interference but losing spatial correlations; a large w preserves correlations but introduces overlap between signal and noise. The initial estimate for pixel fX, Y can be written as the true pixel value plus an interference term that arises from correlations with all other pixels. Under the assumption of uncorrelated neighboring pixels, the Central Limit Theorem approximates this interference as Gaussian noise whose mean and variance are expressed in terms of the Gaussian kernel k(·) evaluated at the pairwise pixel offsets.
To illustrate, we consider a 5 × 5 image with the center pixel set to 0 or 1, while all other pixels are randomly generated. Figure 4 shows the distributions of the recovered center pixel for different kernel widths w = wx = wy. Small w values produce well-separated distributions for the two cases, whereas larger w values produce significant overlap. A practical criterion for reliable decoding is that the difference between the distribution means must exceed the sum of their standard deviations. We formalize this as a separation metric

$$ s(w) = \frac{|\mu_1 - \mu_2|}{\sigma_1 + \sigma_2}, $$

where μi and σi (i = 1, 2) denote the mean and standard deviation of the similarity scores for the two cases. Figure 5a plots s(w) across various hypervector dimensions D, showing that separation decreases as w increases. This provides a principled way to select w for decoding tasks: a small w ensures robust information retrieval by maximizing s(w).
Figure 4. Distributions of the similarity scores used in decoding for different kernel widths w. For each w, we plot the similarity between the positional basis hypervector and the composite image hypervector for pixels whose ground-truth value is 0 (blue) or 1 (orange) in the synthetic 5 × 5 dataset. These histograms are computed before binarization and are used to visualize how w affects the separation between the two score distributions.
Figure 5. (a) Separation s(w) for different values of dimension D in the case of decoding and (b) s(p) Separation as a function of number of learning data points for the case of memorization.
In the decoding setting, separability is evaluated using one-dimensional similarity scores: for each pixel, we compare the scalar similarity between its positional basis hypervector and the composite image hypervector under the two ground-truth cases fX, Y = 0 vs. fX, Y = 1. For such one-dimensional distributions, we adopt a symmetric, variance-normalized separation index closely related to the classical signal-detection measure d′ (Macmillan and Creelman, 2004). This metric directly reflects the degree of overlap between the two score distributions and is easy to interpret as an effect size. In contrast, multivariate measures such as the Mahalanobis distance are unnecessary here: in one dimension they reduce to simple variance-normalized differences and would require estimating covariance matrices without providing additional insight. Thus, our choice balances interpretability and analytical convenience while remaining consistent with standard practice in detection-theoretic analyses.
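A small sketch of the separation index and of the 5 × 5 center-pixel experiment described above (assuming the form s = |μ1 − μ2| / (σ1 + σ2); the Monte-Carlo setup is our own reconstruction of the experiment):

```python
import numpy as np

def separation(scores_0, scores_1):
    """Variance-normalized separation between two 1-D score distributions."""
    mu0, mu1 = np.mean(scores_0), np.mean(scores_1)
    s0, s1 = np.std(scores_0), np.std(scores_1)
    return abs(mu1 - mu0) / (s0 + s1)

def center_pixel_separation(w, D=2048, trials=200, seed=0):
    """Estimate s(w) for the 5x5 experiment: the center pixel is set to 0 or 1,
    all other pixels are random, and we score the center position."""
    rng = np.random.default_rng(seed)
    Bx = np.exp(1j * rng.standard_normal(D) / w)
    By = np.exp(1j * rng.standard_normal(D) / w)
    pos = lambda X, Y: (Bx ** X) * (By ** Y)
    sim = lambda u, v: np.real(np.vdot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    scores = {0: [], 1: []}
    for _ in range(trials):
        f = rng.integers(0, 2, size=(5, 5))            # random background pixels
        for c in (0, 1):
            f[2, 2] = c                                # set the center pixel
            Hv = sum(f[Y, X] * pos(X, Y) for Y in range(5) for X in range(5))
            scores[c].append(sim(pos(2, 2), Hv))
    return separation(scores[0], scores[1])

# Small w (exclusive regime) should give a large separation; large w shrinks it.
# print(center_pixel_separation(0.5), center_pixel_separation(3.0))
```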
2.5 HDC memorization and pattern extraction for learning
While decoding tasks require orthogonal representations, learning tasks, such as classification, benefit from correlated encodings that capture shared structure across samples. Consider p training samples encoded as hypervectors and stored, via bundling, in a single class hypervector.

A query hypervector belongs to a class if its similarity with that class hypervector significantly exceeds its similarity to the hypervectors of other classes.
By the Central Limit Theorem, the similarity between a query and its correct-class hypervector is approximately Gaussian, while its similarity with an incorrect class hypervector follows another Gaussian with a smaller mean, i.e., μ1 > μ12, since intra-class similarities exceed inter-class similarities. The separation between these two distributions, again measured as the variance-normalized difference of their means, improves as √p, implying that adding more training samples enhances learning capacity when correlations are maintained.
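For completeness, the √p scaling follows from the Gaussian approximation above. The sketch below uses our own notation (δ for the unnormalized similarity score, μ1, σ1 for intra-class and μ12, σ12 for inter-class statistics) and treats the bundled class hypervector as a sum of p independent sample hypervectors:

$$
\delta_{\text{correct}} \sim \mathcal{N}\!\left(p\,\mu_1,\; p\,\sigma_1^2\right),
\qquad
\delta_{\text{wrong}} \sim \mathcal{N}\!\left(p\,\mu_{12},\; p\,\sigma_{12}^2\right),
$$

$$
s(p) \;=\; \frac{p\,(\mu_1 - \mu_{12})}{\sqrt{p}\,\sigma_1 + \sqrt{p}\,\sigma_{12}}
\;=\; \sqrt{p}\,\frac{\mu_1 - \mu_{12}}{\sigma_1 + \sigma_{12}}
\;\propto\; \sqrt{p}.
$$

For exclusive (random) encodings, μ1 ≈ μ12, so the numerator vanishes and the separation collapses regardless of p, matching the behavior shown in Figures 5b, 6.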
Figures 5b, 6 empirically validate this result. Random encodings cause separation to collapse as p increases, whereas correlated encodings maintain high separation, enabling robust classification. Specifically, Figure 6a shows the signal and noise distributions for random encoding, where overlap grows with p, while Figure 6b shows correlated encoding preserving distributional separation even as p increases.
Figure 6. The signal and noise distribution for (a) exclusive encoding suitable for cognition and (b) correlated encoding for different values of p, suitable for learning. The results were calculated for D = 1500.
Together, these results reveal a fundamental trade-off: learning tasks require correlated encodings to exploit shared structure among samples, whereas decoding tasks demand orthogonal encodings for accurate information retrieval. The kernel parameters wx, wy thus provide a tunable mechanism for balancing these competing requirements.
To make the analysis concrete, we begin with a simple synthetic 5 × 5 dataset. Each image has a designated central pixel at (3, 3) whose binary value determines the class label, while all other pixels are drawn independently from a Bernoulli(0.5) distribution. This construction ensures that class discrimination depends only on the central pixel, allowing the effect of the encoder and its correlation structure to be studied in isolation from more complex spatial cues.
For both the synthetic dataset and MNIST, we use a centroid-based classifier in hyperspace. Given the encoded training hypervectors for a class c, we form a class centroid by bundling (superposing) the hypervectors of the training samples belonging to that class. A test hypervector is then assigned to the class whose centroid has the maximum cosine similarity.
Unless otherwise stated, we sweep the kernel width w over a predefined range and report the resulting classification accuracies as functions of (D, w). The heuristic in Section 3.2, which relates w to empirical correlation lengths in the data, is used only as an intuitive guideline for selecting reasonable scales; our primary objective is to characterize how changes in w affect learnability rather than to optimize w for a particular benchmark.
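A sketch of the synthetic dataset and the centroid classifier (the function names and the mean-based centroid are our choices; encoding would reuse encode_image from the encoder sketch, instantiated on a 5 × 5 grid):

```python
import numpy as np

def make_synthetic_dataset(n, rng):
    """5x5 images whose label equals the central pixel; all other pixels are
    Bernoulli(0.5), as described in Section 2.5."""
    X = rng.integers(0, 2, size=(n, 5, 5))
    y = rng.integers(0, 2, size=n)
    X[:, 2, 2] = y                      # pixel (3, 3) in 1-indexed coordinates
    return X, y

def centroid_classify(train_hv, train_y, test_hv):
    """Assign each query hypervector to the class centroid with the highest
    cosine similarity (real part of the normalized complex inner product)."""
    classes = np.unique(train_y)
    centroids = np.stack([train_hv[train_y == c].mean(axis=0) for c in classes])
    q = test_hv / np.linalg.norm(test_hv, axis=1, keepdims=True)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    sims = np.real(q @ np.conj(c).T)
    return classes[np.argmax(sims, axis=1)]

# Usage sketch (encode_image from the encoder example, built on a 5x5 grid):
# train_hv = np.stack([encode_image(img) for img in X_train])
# test_hv  = np.stack([encode_image(img) for img in X_test])
# pred = centroid_classify(train_hv, y_train, test_hv)
```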
2.6 Memorizing associations
Hyperdimensional Computing (HDC) encodes associations in a distributed and noise-robust manner using the binding operation. If two nearly orthogonal bipolar hypervectors are bound together, the result is a hypervector that is nearly orthogonal to both of them, owing to the inherent randomness in high dimensions. This property enables HDC to represent complex structures such as key–value pairs (Rachkovskij, 2015; Plate, 1995), sequences (Zou et al., 2022a; Poduval et al., 2021c), and multi-attribute data (Frady et al., 2020) in a single holographic representation.
For example, consider the task of memorizing an object along with the location, time, and size at which it was observed. Each object is represented by a hypervector Oi sampled from a discrete codebook, where each entry corresponds to a distinct entity, such as a ball, cat, or dog. The location, time, and size are continuous-valued attributes, which we encode using the kernel-based method described in earlier sections. Specifically, we assign random base hypervectors (BX, BT, BS) to the spatial, temporal, and size dimensions, with each base vector generated as

$$\mathbf{B}_i = e^{\,\mathrm{i}\,\boldsymbol{\theta}_i / w_i}, \qquad \boldsymbol{\theta}_i \sim \mathcal{N}(0, \mathbf{I}_D),$$

with wi controlling the correlation length scale along dimension i∈{X, T, S}. A continuous value (x, t, s) for position, time, and size is encoded by raising the corresponding base to that value as a fractional power. The association of object Oi with its features (x, t, s) is then stored in the single hypervector

$$\mathcal{H} = O_i \odot \mathbf{B}_X^{\,x} \odot \mathbf{B}_T^{\,t} \odot \mathbf{B}_S^{\,s}.$$
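As an illustration, the following sketch builds such an association (the codebook sizes, length scales, and the choice of a single bound product are our assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(2)
D = 4096
n_codes = 10   # codebook size per factor

def phasor_base(w):
    """Random phasor base with correlation length scale w."""
    return np.exp(1j * rng.standard_normal(D) / w)

# Discrete object codebook: independent random unit phasors (near-orthogonal).
objects = np.exp(1j * 2 * np.pi * rng.random((n_codes, D)))

# Continuous attributes: fractional/integer powers of a shared base per dimension.
B_x, B_t, B_s = phasor_base(1.0), phasor_base(1.0), phasor_base(1.0)
positions = np.stack([B_x ** v for v in range(n_codes)])
times     = np.stack([B_t ** v for v in range(n_codes)])
sizes     = np.stack([B_s ** v for v in range(n_codes)])

# Store one observation as a single bound product of the four factors.
obj_i, x, t, s = 3, 7, 2, 5
H_assoc = objects[obj_i] * positions[x] * times[t] * sizes[s]
```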
Given such a composite hypervector, recovering its factors is a nontrivial problem: the bound product resembles none of its individual factors, and the number of candidate factorizations grows combinatorially with the codebook sizes. The state-of-the-art approach for this factorization is the resonator network (Frady et al., 2020), a recurrent iterative algorithm that refines factor estimates over successive iterations by canceling noise contributions.
Assume that the position, time, and size values are drawn from discrete sets {x1, …, xn}, {t1, …, tn}, and {s1, …, sn}. At iteration n−1, let the current guess for each factor A∈{O, X, T, S} be Â(n−1). At the next iteration, the guess for factor A is updated by unbinding the current estimates of all other factors from the composite hypervector and projecting the result onto the subspace spanned by the corresponding codebook:

$$\hat{A}^{(n)} = \Big[\, \mathcal{H} \odot \bigodot_{B \neq A} \big(\hat{B}^{(n-1)}\big)^{*} \Big]_{A},$$

where [M]A denotes the projection operator onto the subspace associated with factor A and (·)* denotes the component-wise complex conjugate, which inverts binding for unit phasors.
Intuitively, each iteration eliminates contributions from all factors except A, yielding progressively cleaner estimates as noise terms cancel out. For sufficiently large hypervector dimension D and small correlation length scales wi, the resonator network converges to the correct factorization. However, when correlations are high (large wi), the representations of different factors overlap significantly, and the network may converge to spurious solutions. A full theoretical analysis of this nonlinear recurrent system remains challenging; thus, we rely on experimental evaluation to study convergence under varying correlation parameters.
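A compact resonator sketch in the same spirit (our implementation choices: unbinding by complex conjugation, a similarity-weighted cleanup as the codebook projection, and phase renormalization; see Frady et al., 2020 for the original formulation):

```python
import numpy as np

def resonate(H_assoc, codebooks, n_iters=50):
    """Factorize H_assoc into one entry per codebook; returns recovered indices.

    codebooks : list of arrays, each of shape (n_codes, D)
    """
    D = H_assoc.shape[0]
    # Initialize each factor estimate as the (renormalized) codebook superposition.
    est = [cb.sum(axis=0) for cb in codebooks]
    est = [e / (np.abs(e) + 1e-12) for e in est]
    for _ in range(n_iters):
        for a, cb in enumerate(codebooks):
            # Unbind the current estimates of all other factors
            # (complex conjugation inverts binding for unit phasors).
            others = np.ones(D, dtype=complex)
            for b in range(len(codebooks)):
                if b != a:
                    others *= np.conj(est[b])
            noisy = H_assoc * others
            # Cleanup: project onto the span of codebook a via similarity weights.
            weights = np.real(np.conj(cb) @ noisy)
            est[a] = weights @ cb
            est[a] = est[a] / (np.abs(est[a]) + 1e-12)   # renormalize to unit phasors
    return [int(np.argmax(np.real(np.conj(cb) @ e))) for cb, e in zip(codebooks, est)]

# With nearly orthogonal attribute codes (small w), this typically recovers the
# stored factors, e.g. resonate(H_assoc, [objects, positions, times, sizes])
# returning [3, 7, 2, 5] for the association built in the previous sketch.
```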
3 Experimental results
This section presents a comprehensive empirical analysis of the proposed universal hyperdimensional encoding across cognitive information retrieval (decoding), statistical learning (classification), and factorization. The focus is on how the kernel length scales (wx, wy) and the hypervector dimension D modulate the correlation structure in hyperspace and, in turn, performance. All experiments are implemented in PyTorch on an Intel Core i7–12700K platform.
3.1 Experimental setup
We conducted experiments to examine how encoder settings influence HDC performance for both learning and cognitive information retrieval. We selected the MNIST handwritten digits as the primary benchmark. The original dataset comprises 28 × 28 grayscale images with intensities in [0, 1]. For a controlled analysis without loss of generality, we cropped and downsampled to 13 × 13 and then binarized the pixel values to {0, 1}. This preserves salient structure while simplifying the decoding analysis and the computations of the separation metric introduced earlier. Unless otherwise noted, we vary D and (wx, wy) systematically to expose the correlation–orthogonality trade-off predicted by our theory.
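For reference, one way to reproduce this preprocessing is sketched below (the exact crop window and binarization threshold used by the authors are not specified, so the values here are assumptions; torchvision is used only to load MNIST):

```python
import numpy as np

def preprocess(img28, threshold=0.5):
    """Crop a 28x28 MNIST digit to its central 26x26 region, average-pool by 2
    to obtain 13x13, and binarize (crop and threshold values are illustrative)."""
    x = np.asarray(img28, dtype=np.float32) / 255.0
    x = x[1:27, 1:27]                                 # central 26x26 crop
    x = x.reshape(13, 2, 13, 2).mean(axis=(1, 3))     # 2x2 average pooling
    return (x > threshold).astype(np.uint8)

# Example:
# from torchvision import datasets
# mnist = datasets.MNIST(root="data", train=True, download=True)
# img, label = mnist[0]
# f = preprocess(img)    # 13x13 binary image, ready for hyperdimensional encoding
```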
Our empirical evaluation is intentionally based on a small synthetic 5 × 5 example and on the MNIST handwritten digits. The goal of this work is not to compete with state-of-the-art deep networks on challenging benchmarks but to provide a controlled testbed for analyzing how the kernel length scales (wx, wy) govern the trade-off between correlation and orthogonality in hyperspace for learning vs. cognition.
MNIST is widely regarded as a saturated benchmark for modern machine learning: convolutional neural networks routinely exceed 99% test accuracy, and even shallow classical models often reach around 97% accuracy. Thus, several authors have argued that MNIST is “too easy” for evaluating new architectures and have proposed more challenging variants such as EMNIST, Fashion-MNIST, and Oracle-MNIST (Wang and Deng, 2024). Conversely, the HDC literature often uses (binarized or downsampled) MNIST and Fashion-MNIST precisely because their simplicity and standardization allow performance changes to be attributed directly to encoding choices rather than to dataset intricacies (Smets et al., 2024). Our experiments follow this latter tradition: we deliberately choose a simple, well-understood dataset to expose how varying wx and wy affects decodability, classification accuracy, and the separation metrics derived in Sections 2.5 and 2.6.
Although we binarized MNIST for analytic clarity in the decoding analysis, the encoder in Equation 1 is linear in the pixel intensity fX, Y, and all theoretical results depend only on the correlations between basis hypervectors, not on the discretization of fX, Y. Hence, the same formulation applies directly to real-valued inputs fX, Y∈ℝ and to higher-resolution images by replacing the binary weights in the superposition with continuous-valued intensities. Moreover, MNIST still exhibits non-trivial local spatial correlations (e.g., strokes, loops, and thickness variations), so sweeping (wx, wy) already reveals how aligning or misaligning the positional code with these correlations affects decoding and learning.
We acknowledge that extending the empirical study to more complex or strongly correlated datasets (e.g., Fashion-MNIST, Oracle-MNIST, or natural images) would further substantiate the generality of the proposed framework. To keep the present manuscript focused on the fundamental representational trade-offs, we leave these extensions for future study and explicitly highlight this as a limitation in the Conclusion.
3.2 Encoding: learning vs. cognition
The kernel length scales (wx, wy) determine how quickly positional basis vectors decorrelate with distance in the image: small values induce near-orthogonality across nearby locations, whereas large values preserve similarity over broader neighborhoods. From the analysis in Sections 2.6 and 2.5, we expect decoding and factorization (cognition) to favor small (wx, wy) while learning to favor intermediate values that preserve intra-class correlations without collapsing inter-class structure.
To connect these parameters to the empirical statistics of a dataset, we consider horizontal and vertical co-activation probabilities. For each pixel position (X, Y), we estimate from the training set the probability that it co-activates with pixels at a given horizontal or vertical offset, and from these probabilities we compute average correlation lengths, defined as the expected horizontal and vertical distances over which pixels tend to co-activate. These lengths provide an intuitive heuristic for choosing the kernel scales wx and wy on the same order, so that the positional code respects the dominant spatial correlations present in the data.
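Since the estimator is described only verbally above, the following sketch shows one plausible way to compute such correlation lengths from binarized training images (the offset range and the normalization are our choices):

```python
import numpy as np

def correlation_lengths(images, max_d=6):
    """Average horizontal/vertical distances over which pixels co-activate.

    images : array of shape (N, H, W) with binary pixel values
    Returns (ell_x, ell_y), usable as a rough guideline for (wx, wy).
    """
    images = np.asarray(images, dtype=np.float64)

    def axis_length(shift):
        # p[d] ~ P(pixel active AND pixel at offset d along the axis active)
        p = np.array([(images * shift(d)).mean() for d in range(1, max_d + 1)])
        p = p / (p.sum() + 1e-12)        # normalize into a distribution over offsets
        return float(np.sum(np.arange(1, max_d + 1) * p))

    ell_x = axis_length(lambda d: np.roll(images, -d, axis=2))
    ell_y = axis_length(lambda d: np.roll(images, -d, axis=1))
    return ell_x, ell_y

# Heuristic from this section: start the sweep around wx ~ ell_x and wy ~ ell_y.
```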
We emphasize that, in this study, we use this rule only as a guideline for selecting a reasonable range of (wx, wy) values. All decoding and learning experiments sweep over a grid of (wx, wy) and report performance as a function of these parameters, rather than relying exclusively on the correlation-length estimate. A fully automated, data-driven procedure for selecting the optimal (wx, wy) across tasks and datasets is an important direction for future work, but it lies beyond the scope of the present study. Here, our primary objective is to show how varying (wx, wy) mediates the trade-off between learning and cognition, and to connect this trade-off qualitatively to the underlying spatial correlation structure of the data. Tables 1, 2 show the numerical results corresponding to Figures 7, 8, respectively.
Table 1. Table corresponding to Figure 7.
Table 2. Table corresponding to Figure 8.
Figure 7. The decoding accuracy as (a) function of D and w (b) function of wx and wy; the decoding separation as (c) function of D and w (d) function of wx and wy.
Figure 8. The classification accuracy as (a) a function of D and w (b) a function of wx and wy; the classification separation as (c) a function of D and w (d) a function of wx and wy. The small-w regime corresponds to nearly random/orthogonal positional encodings and serves as the baseline case. As w increases from this baseline, moderate correlations between nearby positions improve intra-class clustering and thus accuracy; for large w, over-correlation causes different classes to become less separable and accuracy decreases.
3.3 Cognition: HDC decodability
In all decoding experiments, accuracy is measured as per-pixel reconstruction accuracy. Given a binarized ground-truth image f with pixels fX, Y∈{0, 1} and its decoded estimate obtained from the hypervector, we define

$$\text{Accuracy} = \frac{1}{|\Omega|} \sum_{(X,Y)\in\Omega} \mathbb{I}\!\left[\hat{f}_{X,Y} = f_{X,Y}\right],$$

where Ω is the set of pixel coordinates and 𝕀[·] denotes the indicator function. To compute the decoded estimate, we first obtain a continuous similarity score sX, Y between the positional basis hypervector and the composite image hypervector, then apply a fixed threshold τ (e.g., the mean score) to produce a binary estimate. Figures that show continuous similarity distributions (such as Figure 4) plot the pre-binarization scores sX, Y for diagnostic purposes, whereas all reported decoding accuracies are computed from the binarized reconstructions and compared with the binarized MNIST ground truth.
We first quantified decodability by reconstructing the binarized image from its hypervector. Figure 7a shows decoding accuracy as a joint function of D and an isotropic kernel width w with wx = wy = w. Accuracy is high for small w, reflecting that near-orthogonal positional codes keep cross-terms in (3) small; it ranges from 85% at the lowest to 100% at the highest settings we tested. As w increases, neighboring positions become correlated and the cross-similarity between distinct positional bases grows for (X′, Y′)≠(X, Y), thereby elevating the effective noise. A clear degradation boundary emerges around w≈1.5, and the drop becomes sharper as w grows beyond roughly 0.75–1.5, consistent with the distributional overlap analysis in Figure 4 and the separation definition in (4). Larger D mitigates projection noise and stabilizes accuracy, consistent with concentration effects in high dimensions.
Figure 7b separates horizontal and vertical effects at fixed D = 500. We observed boundaries near wy≈1.0 and wx≈1.5, beyond which decodability deteriorates. This anisotropy is consistent with MNIST digit morphology: vertical strokes are more regular, so increasing wy is initially less harmful than increasing wx. The corresponding separation maps in Figures 7c, d track these transitions: as w (or one of wx, wy) increases, the signal and noise distributions approach each other, and the separation in (4) declines, correlating with reduced decoding accuracy in the mid-accuracy regime. Near saturation (>85% accuracy), separation becomes less predictive because errors are dominated by a small number of ambiguous pixels.
3.4 HDC learnability
We next evaluated classification from hypervectors. Figure 8a reports accuracy vs. (D, w) with wx = wy = w. Unlike decoding, learning exhibits a non-monotonic dependence on w. When w is too small, each sample is mapped to an almost independent code, intra-class similarity is suppressed, and the classifier cannot pool evidence; when w is too large, inter-class codes overlap, and the decision boundary collapses. Peak accuracy occurs at an intermediate value around w≈0.5 for sufficiently large dimensions (e.g., D≳1.5k), where correlations are strong enough to capture shared structure without eroding class separability.
Figure 8b probes anisotropy at fixed D. Increasing wx generally improves accuracy more than increasing wy, reflecting that MNIST class distinctions rely more on horizontal variation (e.g., left–right strokes and loops). The separation plots in Figures 8c, d corroborate these accuracy trends: class-level separation is maximal in the same intermediate-w band that yields the best generalization, and it decreases on either side as encodings become either too exclusive (under-smoothing) or too inclusive (over-smoothing). These observations are consistent with the capacity analysis in Section 2.5, where the separation between the class-similarity distributions scales as √p and is optimized when the encoder preserves intra-class correlations while keeping cross-class similarity low.
3.5 HDC factorization problem
Finally, we studied factorization via the resonator network in Section 2.6. Objects are sampled from a random codebook , while position, time, and size are continuously encoded using kernel-generated bases with adjustable length scales wi. We initialized with wX = wT = wS = 10 so that hypervectors for nearby values remain highly correlated. Under this setting, the resonator converges to spurious assignments because the factors are not sufficiently separated in hyperspace. We then sequentially reduced one factor's length scale to wi = 1 at a time, which decorrelates its codebook and restores identifiability. As shown in Figure 9, setting wX = 1 enables correct recovery of the position (and the object), while time and size remain ambiguous; subsequently setting wT = 1 and wS = 1 resolves the remaining factors. This behavior aligns with the decodability analysis: exclusive (near-orthogonal) encodings minimize cross-terms in the iterative updates and are therefore crucial for reliable multi-factor inference.
Figure 9. Solution to the resonator network with four factors, with each factor continuously encoded using the random feature encoding.
4 Conclusion
This study examined how kernel-based hyperdimensional encoders mediate a fundamental trade-off between learning and cognition via a single correlation-controlling parameter. Building on complex-valued VSA models, we showed analytically and empirically that the kernel width (wx, wy) imposes opposing requirements across different classes of tasks. Cognitive operations such as decoding and resonator-based factorization benefit from nearly orthogonal positional codes (small w), which minimize cross-talk and maximize separability in hyperspace, whereas statistical learning tasks such as classification prefer intermediate w values that preserve intra-class correlations without collapsing inter-class structure. We captured these effects using simple separation metrics that quantify the overlap of similarity distributions during decoding and the class-centroid separability during learning.
The observed correlation-orthogonality tradeoff suggests a practical design guideline for future HDC encoders. Rather than treating the encoder as a fixed, task-agnostic module, kernel-based HDC systems can expose w (and related parameters) as explicit knobs that are tuned according to the dominant role of the representation. When precise retrieval, error correction, or factorization is paramount, the encoder should favor low-correlation, near-orthogonal hypervectors. When learning and generalization from limited data are central, the encoder should be configured in an intermediate regime where representations remain correlated enough to capture shared structure while still maintaining class-level separation. In this sense, the same underlying encoding mechanism can be adapted across applications by changing only the correlation scale of the positional basis, rather than redesigning the entire HDC pipeline.
A fully systematic, data-driven method for selecting the kernel width w across tasks and datasets remains an open problem. Section 3.2 proposed a heuristic link between empirical spatial co-activation statistics and (wx, wy) via estimated correlation lengths, which provides an intuitive initialization rule but is not yet a complete tuning algorithm. In all experiments, we therefore swept w over a range of values to map out performance curves, rather than fixing w solely from this heuristic. Developing a principled selection strategy (for example, by combining our separation metrics with data-driven correlation estimates or using validation-based model selection) is an important direction for future work.
Our empirical study focused on a toy 5 × 5 dataset and binarized, downsampled MNIST to keep the analysis transparent and to isolate the effects of the kernel width. However, the encoding formulation is linear in the input values fX, Y and is not limited to binary images. The same approach can, in principle, be applied to real-valued images, time series, and other sensory modalities by replacing binary pixel values with continuous intensities or feature amplitudes in the superposition. Extending the present framework to such continuous-valued and higher-dimensional datasets and to multimodal settings where different channels may require distinct correlation scales is left for future work. We expect that the correlation–orthogonality trade-off characterized here will continue to provide a useful lens for designing task-aware HDC encoders in these more complex domains.
Data availability statement
Publicly available datasets were analyzed in this study. This data can be found here: https://cocodataset.org (Lin et al., 2015).
Author contributions
PP: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. HE: Conceptualization, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Visualization, Writing – original draft, Writing – review & editing. XL: Investigation, Resources, Software, Validation, Visualization, Writing – original draft, Writing – review & editing. SY: Formal analysis, Investigation, Methodology, Resources, Software, Visualization, Writing – original draft, Writing – review & editing. YN: Formal analysis, Writing – original draft, Writing – review & editing. ZZ: Conceptualization, Formal analysis, Investigation, Software, Writing – original draft, Writing – review & editing. NB: Conceptualization, Formal analysis, Investigation, Methodology, Project administration, Resources, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. MI: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing.
Funding
The author(s) declared that financial support was received for this work and/or its publication. This work was supported in part by the DARPA Young Faculty Award, the National Science Foundation (NSF) under Grants #2127780, #2319198, #2321840, #2312517, #2235472, and #2431561, the Semiconductor Research Corporation (SRC), the Office of Naval Research through the Young Investigator Program Award, and Grants #N00014-21-1-2225 and #N00014-22-1-2067, Army Research Office Grant #W911NF2410360. Additionally, support was provided by the Air Force Office of Scientific Research under Award #FA9550-22-1-0253, along with generous gifts from Xilinx and Cisco.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Arbore, R., Routh, X., Noor, A. R., Kothari, A., Adve, V., Yang, H., et al. (2025). “Hpvm-hdc: a heterogeneous programming system for accelerating hyperdimensional computing,” in Proceedings of the 52nd Annual International Symposium on Computer Architecture, 1342–1355. doi: 10.1145/3695053.3731095
Frady, E. P., Kent, S. J., Olshausen, B. A., and Sommer, F. T. (2020). Resonator networks, 1: an efficient solution for factoring high-dimensional, distributed representations of data structures. Neural Comput. 32, 2311–2331. doi: 10.1162/neco_a_01331
Frady, E. P., Kleyko, D., Kymn, C. J., Olshausen, B. A., and Sommer, F. T. (2022). ”Computing on functions using randomized vector representations,” in Neuro-Inspired Computational Elements Conference (NICE). doi: 10.1145/3517343.3522597
Gayler, R. W. (1998). “Multiplicative binding, representation operators and analogy,” in International Conference on Cognitive Science, Workshop Poster.
Hernández-Cano, A., Zhuo, C., Yin, X., and Imani, M. (2021). ”Reghd: robust and efficient regression in hyper-dimensional learning system,” in DAC (San Francisco, CA: IEEE), 7–12. doi: 10.1109/DAC18074.2021.9586284
Imani, M., Kim, Y., Pampana, S., Gupta, S., Zhou, M., and Rosing, T. (2020). “Dual: acceleration of clustering algorithms using digital-based processing in-memory,” in MICRO (Athens: IEEE), 356–371. doi: 10.1109/MICRO50266.2020.00039
Imani, M., Zakeri, A., Chen, H., Kim, T. H., Poduval, P., Lee, H., et al. (2022). “Neural computation for robust and holographic face detection,” in Proceedings of the 59th ACM/IEEE Design Automation Conference, 31–36. doi: 10.1145/3489517.3530653
Indiveri, G., and Horiuchi, T. (2011). Frontiers in neuromorphic engineering. Front. Neurosci. 5:118. doi: 10.3389/fnins.2011.00118
Kanerva, P. (1996). “Binary spatter-coding of ordered k-tuples,” in ICANN'96, Proceedings of the International Conference on Artificial Neural Networks, volume 1112 of Lecture Notes in Computer Science (Berlin: Springer), 869–873. doi: 10.1007/3-540-61510-5_146
Kanerva, P. (1998). ”Encoding structure in boolean space,” in ICANN 98 (London: Springer), 387–392. doi: 10.1007/978-1-4471-1599-1_57
Kanerva, P. (2009). Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors. Cogn. Comput. 1, 139–159. doi: 10.1007/s12559-009-9009-8
Khaleghi, B., Yu, X., Kang, J., Wang, X., and Rosing, T. (2024). Private and efficient learning with hyperdimensional computing. IEEE Trans. Circuits Syst. Artif. Intell. doi: 10.1109/TCASAI.2024.3481193
Kim, Y., Imani, M., and Rosing, T. S. (2018). “Efficient human activity recognition using hyperdimensional computing,” in Proceedings of the 8th International Conference on the Internet of Things (ACM), 38. doi: 10.1145/3277593.3277617
Kleyko, D., Osipov, E., Papakonstantinou, N., and Vyatkin, V. (2018). Hyperdimensional computing in industrial systems: the use-case of distributed fault isolation in a power plant. IEEE Access 6, 30766–30777. doi: 10.1109/ACCESS.2018.2840128
Kleyko, D., Osipov, E., and Rachkovskij, D. A. (2016). Modification of holographic graph neuron using sparse distributed representations. Proc. Comput. Sci. 88, 39–45. doi: 10.1016/j.procs.2016.07.404
Kleyko, D., Rachkovskij, D. A., Osipov, E., and Rahimi, A. (2021). A survey on hyperdimensional computing aka vector symbolic architectures, part i: models and data transformations. arXiv preprint arXiv:2111.06077. doi: 10.1145/3538531
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324. doi: 10.1109/5.726791
Li, Z., Ton, J.-F., Oglic, D., and Sejdinovic, D. (2021). Towards a unified analysis of random fourier features. J. Mach. Learn. Res. 22, 1–51. doi: 10.5555/3546258.3546366
Lin, T. Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., et al. (2015). “Microsoft coco: common objects in context,” in Computer Vision – ECCV 2014. ECCV 2014. Lecture Notes in Computer Science, vol 8693, eds. D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars (Cham: Springer). doi: 10.1007/978-3-319-10602-1_48
Lindsay, G. W. (2020). Convolutional neural networks as a model of the visual system: past, present, and future. J. Cogn. Neurosci. 1–15. doi: 10.1162/jocn_a_01544
Macmillan, N. A., and Creelman, C. D. (2004). Detection Theory: A User's Guide, 2nd Edn. Mahwah, NJ: Lawrence Erlbaum Associates. doi: 10.4324/9781410611147
Mitrokhin, A., Sutor, P., Fermuller, C., and Aloimonos, Y. (2019). Learning sensorimotor control with neuromorphic sensors: toward hyperdimensional active perception. Sci. Robot. doi: 10.1126/scirobotics.aaw6736
Mitrokhin, A., Sutor, P., Summers-Stay, D., Fermuller, C., and Aloimonos, Y. (2020). Symbolic representation and learning with hyperdimensional computing. Front. Robot. AI 7:63. doi: 10.3389/frobt.2020.00063
Moin, A., Zhou, A., Menon, A., Alexandrov, G., Tamakloe, S., Ting, J., et al. (2021). A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition. Nat. Electr. 4, 54–63. doi: 10.1038/s41928-020-00510-8
Montagna, F., Rahimi, A., Benatti, S., Rossi, D., and Benini, L. (2018). ”Pulp-hd: accelerating brain-inspired high-dimensional computing on a parallel ultra-low power platform,” in 2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC) (San Francisco, CA: IEEE), 1–6. doi: 10.1109/DAC.2018.8465801
Najafabadi, F. R., Rahimi, A., Kanerva, P., and Rabaey, J. M. (2016). “Hyperdimensional computing for text classification,” in DATE, 1.
Nayan, M. M. R., Liu, C.-K., Wan, Z., Raychowdhury, A., and Naeemi, A. J. (2025). Hydra: sot-cam based vector symbolic macro for hyperdimensional computing. arXiv preprint arXiv:2504.14020. doi: 10.1109/ICCAD66269.2025.11240977
Plate, T. (1991). “Holographic reduced representations: convolution algebra for compositional distributed representations,” in IJCAI, 30–35.
Plate, T. A. (1995). Holographic reduced representations. IEEE Trans. Neural Netw. 6, 623–641. doi: 10.1109/72.377968
Poduval, P., Issa, M., Imani, F., Zhuo, C., Yin, X., Najafi, H., et al. (2021a). “Robust in-memory computing with hyperdimensional stochastic representation,” in 2021 IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH) (IEEE), 1-6. doi: 10.1109/NANOARCH53687.2021.9642237
Poduval, P., Ni, Y., Kim, Y., Ni, K., Kumar, R., Cammarota, R., et al. (2022a). ”Adaptive neural recovery for highly robust brain-like representation,” in Proceedings of the 59th ACM/IEEE Design Automation Conference, 367–372. doi: 10.1145/3489517.3530659
Poduval, P., Zakeri, A., Imani, F., Alimohamadi, H., and Imani, M. (2022b). Graphd: graph-based hyperdimensional memorization for brain-like cognitive learning. Front. Neurosci. 16:757125. doi: 10.3389/fnins.2022.757125
Poduval, P., Zou, Z., Najafi, H., Homayoun, H., and Imani, M. (2021b). “Stochd: stochastic hyperdimensional system for efficient and robust learning from raw data,” in 2021 58th ACM/IEEE Design Automation Conference (DAC) (San Francisco, CA: IEEE), 1195–1200. doi: 10.1109/DAC18074.2021.9586166
Poduval, P., Zou, Z., Yin, X., Sadredini, E., and Imani, M. (2021c). “Cognitive correlative encoding for genome sequence matching in hyperdimensional system,” in 2021 58th ACM/IEEE Design Automation Conference (DAC) (IEEE), 781–786. doi: 10.1109/DAC18074.2021.9586253
Rachkovskij, D. (2015). Formation of similarity-reflecting binary vectors with random binary projections. Cybernet. Syst. Anal. 51, 313–323. doi: 10.1007/s10559-015-9723-z
Rachkovskiy, D. A., Slipchenko, S. V., Kussul, E. M., and Baidyk, T. N. (2005). Sparse binary distributed encoding of scalars. J. Autom. Inform. Sci. 37, 12–23. doi: 10.1615/JAutomatInfScien.v37.i6.20
Rahimi, A., Kanerva, P., Benini, L., and Rabaey, J. M. (2018). Efficient biosignal processing using hyperdimensional computing: network templates for combined learning and classification of exg signals. Proc. IEEE 107, 123–143. doi: 10.1109/JPROC.2018.2871163
Rahimi, A., and Recht, B. (2007). “Random features for large-scale kernel machines,” in Advances in Neural Information Processing Systems (NeurIPS).
Räsänen, O. J., and Saarinen, J. P. (2015). Sequence prediction with sparse distributed hyperdimensional coding applied to the analysis of mobile phone use patterns. IEEE Trans. Neural Netw. Learn. Syst. 27, 1878–1889. doi: 10.1109/TNNLS.2015.2462721
Renner, A., Indiveri, G., Supic, L., Danielescu, A., Olshausen, B. A., Sommer, F. T., et al. (2024). Neuromorphic visual scene understanding with resonator networks. Nat. Mach. Intell. 6, 641–652. doi: 10.1038/s42256-024-00848-0
Schlegel, K., Neubert, P., and Protzel, P. (2022). A comparison of vector symbolic architectures. Artif. Intell. Rev. 55, 4523–4555. doi: 10.1007/s10462-021-10110-3
Smets, L., Van Leekwijck, W., Tsang, I. J., and Latré, S. (2024). An encoding framework for binarized images using hyperdimensional computing. Front. Big Data 7:1371518. doi: 10.3389/fdata.2024.1371518
Stock, M., Van Criekinge, W., Boeckaerts, D., Taelman, S., Van Haeverbeke, M., Dewulf, P., et al. (2024). Hyperdimensional computing: a fast, robust, and interpretable paradigm for biological data. PLoS Comput. Biol. 20:e1012426. doi: 10.1371/journal.pcbi.1012426
Tay, Y., Hui, S. C., Vinh, T. D. Q., Zhang, S., Yao, L., and Luu, A. T. (2019). Holographic factorization machines for recommendation. AAAI 33, 5143–5150. doi: 10.1609/aaai.v33i01.33015143
Thapa, R., Lamichhane, B., Ma, D., and Jiao, X. (2021). “Spamhd: memory-efficient text spam detection using brain-inspired hyperdimensional computing,” in 2021 IEEE Computer Society Annual Symposium on VLSI (ISVLSI) (Tampa, FL: IEEE), 84–89. doi: 10.1109/ISVLSI51109.2021.00026
Vergés, P., Heddes, M., Nunes, I., Givargis, T., Nicolau, A., and Kleyko, D. (2025). Classification using hyperdimensional computing: a review and comparative analysis. Artif. Intell. Rev. doi: 10.1007/s10462-025-11181-2
Wang, M., and Deng, W. (2024). A dataset of oracle characters for benchmarking machine learning algorithms. Sci. Data 11:87. doi: 10.1038/s41597-024-02933-w
Yun, S., Zhang, Y., Shyngyssova, N., and Liu, C. (2024). Hypersense: hyperdimensional intelligent sensing for energy-efficient sparse data processing. Adv. Intell. Syst. 6:2400228. doi: 10.1002/aisy.202400228
Zhang, S., Wang, R., Zhang, J. J., Rahimi, A., and Jiao, X. (2021). “Assessing robustness of hyperdimensional computing against errors in associative memory,” in 2021 IEEE 32nd International Conference on Application-specific Systems, Architectures and Processors (ASAP) (IEEE), 211–217. doi: 10.1109/ASAP52443.2021.00039
Zou, Z., Alimohamadi, H., Kim, Y., Najafi, M. H., Srinivasa, N., and Imani, M. (2022b). Eventhd: robust and efficient hyperdimensional learning from neuromorphic sensors. Front. Neurosci. 16:858329. doi: 10.3389/fnins.2022.858329
Keywords: brain-inspired learning, cognitive computation, high-dimensional representation, hyperdimensional computing (HDC), neural-symbolic encoding
Citation: Poduval PP, Errahmouni Barkam H, Liu X, Yun S, Ni Y, Zou Z, Bastian ND and Imani M (2026) Optimal hyperdimensional representation for learning and cognitive computation. Front. Artif. Intell. 9:1690492. doi: 10.3389/frai.2026.1690492
Received: 21 August 2025; Revised: 16 January 2026; Accepted: 20 January 2026;
Published: 10 February 2026.
Edited by:
Jeff Orchard, University of Waterloo, Canada

Reviewed by:

Peter Sutor, University of Maryland, College Park, United States

Kenny Schlegel, Chemnitz University of Technology, Germany
Copyright © 2026 Poduval, Errahmouni Barkam, Liu, Yun, Ni, Zou, Bastian and Imani. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Mohsen Imani, m.imani@uci.edu