
ORIGINAL RESEARCH article

Front. Appl. Math. Stat., 29 January 2026

Sec. Mathematical Finance

Volume 11 - 2025 | https://doi.org/10.3389/fams.2025.1706220

Enhanced state-space estimation of long-memory commodity volatility using the Unscented Kalman Filter and variational Bayes method for non-linear modeling

  • Department of Mathematics and Computational Sciences, University of Zimbabwe, Harare, Zimbabwe

This study addresses the limitations of the Kalman Filter (KF) by extending the application of the Unscented Kalman Filter (UKF) and the variational Bayes method (VBM) to the estimation of long-memory (LM) volatility models. Our methodology formulated the Fractionally Integrated Generalized Autoregressive Conditional Heteroskedasticity (FIGARCH) and Hyperbolic Generalized Autoregressive Conditional Heteroskedasticity (HYGARCH) processes within a state-space framework and employed the UKF alongside the VBM to achieve robust estimation. The findings demonstrated that the UKF excelled on key performance metrics and forecasts, producing superior volatility predictions on both the training and validation data. The UKF-FIGARCH (1, 0.4029, 1) was the better model for gold, followed by the VBM-FIGARCH (1, 0.3525, 1) model. For tobacco, the VBM-FIGARCH (1, 0.3025, 1) model was superior to the UKF-FIGARCH (1, 0.1320, 1) model. Both methods yielded estimates consistent with the parameters used for simulation, falling within the established 95% confidence interval defined by the critical values.

1 Introduction

Recent attention to financial time series analysis underscores the complexity and volatility inherent in financial assets. Challenges occur in modeling volatility, which often displays clustering, asymmetry, and long memory. Although traditional GARCH models, introduced by Bollerslev [1], provide a foundational framework, they often fall short in capturing the long memory (LM) observed in financial data [45]. In addition, the underutilization of state-space methodology represents a critical missing link in this field, as these methods offer flexible and robust frameworks for modeling dynamic systems. This study aims to address these limitations by introducing an enhanced state-space estimation approach that leverages the Unscented Kalman Filter (UKF) and variational Bayes method (VBM) for modeling Fractionally Integrated Generalized Autoregressive Conditional Heteroskedasticity (FIGARCH) and Hyperbolic Generalized Autoregressive Conditional Heteroskedasticity (HYGARCH) processes. While the UKF and VBM are established methodologies in various fields, such as robotics and signal processing, their application in the estimation of volatility models represents a relatively novel and promising advancement.

New developments in high-frequency data have highlighted the significance of LM volatility models. Long-range dependent time series, which often exhibit persistent volatility clustering and cross-horizon dependence that traditional GARCH models fail to capture, are common in financial data. Ultra-short-interval data affected by microstructure noise, automated trading mechanisms, and dispersed liquidity can amplify persistent dependence structures, motivating the use of LM-oriented models such as FIGARCH and HYGARCH. Hence, there is a need for non-linear, adaptive estimation methods that can handle swiftly evolving volatility. State-space approaches, such as the UKF and VBM, are therefore well-suited for LM volatility analysis.

Tobacco is Zimbabwe's largest agricultural export and a key source of foreign currency. Zimbabwe set a record with an output of 352.7 million kilograms of tobacco, valued at $1.2 billion, by the end of the marketing season, according to the Tobacco Industry and Marketing Board (TIMB) in August 2025. Gold is essential to Zimbabwe's economy, which heavily relies on agriculture and mining. As one of Africa's largest gold producers, the country uses gold mining to drive economic activity and generate foreign exchange. The Zimbabwe Gold (ZIG) currency, pegged to gold reserves, aims to stabilize the currency and restore confidence amid past issues of hyperinflation and devaluation. Therefore, gold not only serves as a vital revenue source but also as a cornerstone of monetary stability, underscoring its critical role in fostering economic growth and resilience in many developing countries.

Gold and tobacco were selected as the subjects of this study due to their significance in global markets and their distinct volatility behaviors. Gold, regarded as a safe-haven asset [2], responds uniquely to economic uncertainties, while tobacco, as a consumable commodity, reflects market dynamics shaped by consumer behavior. In addition, the choice of these commodities was based on the desired characteristics suitable for the proposed methods of data analysis. For example, while it would have been prudent to include other asset classes, such as equities or additional commodities, some do not exhibit the requisite stylized features of LM, heavy tails, clustering, and asymmetry.

The Kalman Filter (KF), introduced by Kalman [3], remains an integral tool within the state-space methodology. However, it is limited to models that can be formulated linearly in state space. Since volatility models are non-linear, there is a need to employ suitable techniques to effectively handle the non-linearities presented by such models and their derivatives. This paves the way for the UKF of Julier and Uhlmann [4], and the VBM.

The UKF effectively manages non-linear relationships in data, providing more accurate estimates than traditional linear state-space methods. The VBM adds a further dimension to the analysis by quantifying the uncertainty in parameter estimates via Bayesian inference. This offers a better understanding of model performance compared to other approaches. Hence, these non-linear state-space methods (UKF and VBM) provide better alternatives for the analysis and prediction of volatility models, since they are both scalable and adaptive.

The study, in essence, did not develop a new methodology but presented an extension of existing procedures to enable new applications. This involved the formulation of LM volatility models in a state-space framework before parameter estimation and prediction. Empirical commodity returns and volatility, together with Monte Carlo simulation results, provide complementary perspectives on both theoretical and real-world data.

2 Literature review

We present the flaws and limitations of traditional GARCH models when it comes to dealing with volatility stylized facts, particularly those involving fractional integration. On the other hand, state-space estimation techniques, such as the Kalman Filter, have recently gained prominence in volatility modeling. However, the KF has its own deficiencies, as it can only handle linear state-space representations. In such cases, non-linear counterparts, such as the UKF and VBM, may be better options.

Previous research by scholars such as Dias et al. [5], and Basira et al. [6], has argued for the existence of LM duality, in the sense that LM can occur in both the return series and the conditional variance. In terms of model suitability and performance, FIGARCH and its derivatives have remained the models of choice, particularly with respect to parameter estimation and forecasting power. These models have consistently performed better than their GARCH counterparts [7]. Research has also indicated that these models are particularly effective in capturing the volatility dynamics of assets that experience significant price fluctuations [8]. For instance, FIGARCH has been successfully employed to analyze stock market volatility, where it has outperformed traditional models in terms of both fit and predictive power [9].

As indicated earlier, State-Space Models (SSMs) have been applied across many fields, ranging from econometrics and finance to deep learning. A very general model that subsumes a whole class of special cases—much in the same way that linear regression does—is the state-space model, also known as the dynamic linear model, which was introduced by Kalman [3] and further developed by Shumway and Stoffer [10]. In financial markets, SSMs help model asset prices and volatility, capturing the dynamics of returns and risk [11].

Julier and Uhlmann [4] demonstrated the significant performance gains of the UKF in the context of state estimation for non-linear control. Their work extended the application of the UKF to a broader class of non-linear estimation problems, such as non-linear system identification, neural network training, and dual estimation problems [12]. State-space models allow for the assimilation of time-varying parameters, which is particularly useful in financial time series where volatility can change over time. Principally, the UKF is a recursive algorithm designed for the estimation of hidden states in non-linear systems [13]. The key difference between the KF and the UKF is that the latter uses deterministic sampling to capture the mean vector and the variance matrix. This makes it effective in non-linear modeling, particularly in robotics, aerospace, and finance [14].

The models are first represented in a state-space framework to enable the implementation of various algorithms, such as the KF, EKF, UKF, and VBM [15], paying particular attention to specific model assumptions such as stationarity and linearity status. The literature has documented many advantages of using SSMs over other estimation methods due to their capacity to estimate both parameters and latent states. This capability is particularly beneficial for FIGARCH models, as it enables instantaneous updates and produces smoother volatility estimates. Subsequently, these methods enhance the model's sensitivity to new information, allowing for more accurate and timely valuations of volatility changes [46].

In the same way that the UKF and VBM address the flaws of the KF and EKF, FIGARCH-type models address the limitations of GARCH-type models. Although these models effectively capture short-term volatility, they fail to account for LM properties, which are usually associated with financial data in the mean, the variance, or both. LM in time series was identified by Granger and Joyeux [16], and Hosking [17], through fractional integration. Despite renewed interest in volatility modeling, the use of state-space formulation has not been widely explored, leaving a niche for non-linear state-space estimation techniques, such as the UKF and VBM.

SSMs have proven highly effective in forecasting, for example, future values of commodity returns. They have also shown versatility in working with different types of information, including past observations and external variables. Cao et al. [47] reported that financial experts use SSMs for modeling the dynamics of asset return volatility and estimating value at risk (VaR), among other risk metrics. SSM application in financial markets is well-documented. For instance, these models have been utilized to estimate dynamic factor models that elucidate the relationships among multiple financial assets, thereby improving the understanding of the dynamic factors influencing asset pricing, risk factors, and liquidity ratios [18].

Recent studies, such as Hashami and Maldonado [19], have explored the predictive power of non-traditional data sources, such as financial news, in conjunction with dual-hybrid long-memory FIGARCH models. The ability of FIGARCH models to capture the dynamics of volatility in financial data positions them as strong candidates for modeling exchange rates and stock market returns [20]. Furthermore, volatility persistence observed in tobacco returns and variance aligns with the theoretical foundations of FIGARCH models [21]. Arouri et al. [48] forecasted volatility and calculated the VaR for commodity markets in the presence of asymmetry and long memory. The results highlighted the suitability and effectiveness of these models, producing accurate forecasts and precise estimates of the VaR for precious metals.

Furthermore, the state-space model of realized volatility under dependent market microstructure noise [22] assumes noise that is autocorrelated and correlated with returns. That study extends the results of Nagakura and Watanabe [23] and compares models using both simulated and actual data, with results favoring LM models. Guglielmo et al. [49] investigated persistence and LM approaches in financial time series at three different frequencies (daily, weekly, and monthly) using data from stock markets, FOREX, and commodity markets over the period 2000 to 2016. The results showed higher persistence at lower frequencies for both returns and their volatility.

De Pinho et al. [24] modeled volatility using SSMs with heavy-tailed distributions through a non-Gaussian state-space model (NGSSM), which is attractive because the likelihood can be computed analytically. Their study focused on stochastic volatility models in the NGSSM, where the observation equation is modeled with heavy-tailed distributions, such as log-gamma, log-normal, and Weibull.

In addition to the fundamental contribution of this study, it is imperative to note that the authors previously applied analogous recursive estimation frameworks across a range of empirical settings. Early studies on adaptive filtering provide a foundation for this line of research, as confirmed by Özbek and Aliev [25]. Further refinements of estimation procedures were presented by Özbek and Özlalé [26], who employed the Extended Kalman Filter (EKF) to measure the output gap. The authors continued to advance recursive modeling by incorporating time-varying loss functions into the empirical estimation of central bank policy preferences in Özbek and Hacioglu [27]. More recently, Özbek et al. [28] extended these techniques to the analysis of consumption and current account dynamics within an intertemporal stochastic framework. The present manuscript builds upon this knowledge by applying recursive estimation principles to the study of LM volatility, thereby linking methodological innovation to practical financial applications.

3 Methodology

The methodology section provides an outline of the theoretical concepts of state-space estimation, with particular emphasis on procedures capable of handling non-linear long-memory modeling. We explore different distributional properties used to capture non-normalities in volatility models.

3.1 The long memory process

A long-memory process is a stochastic process characterized by persistent correlations between distant observations, in which the impact of shocks decays slowly compared to short-memory processes [50]. This slow decay gives rise to a substantial autocorrelation structure that remains considerable over extended periods. Mathematically, a stationary return process $r_t$ is considered a long-memory process

if and only if there exists $H$ with $0 < H < 1$ such that its autocorrelation function $\rho(\upsilon)$ satisfies $\lim_{\upsilon \to \infty} \rho(\upsilon)\big/\big(K\upsilon^{2H-1}\big) = 1$.

3.2 The FIGARCH process

The FIGARCH model [21] is designed to capture long memory in volatility and is obtained by replacing the first-difference operator (1 − B) in the GARCH model with the fractional differencing operator [20]. The FIGARCH (p, ξ, q) can be formulated as follows:

\[ \beta(B)(1-B)^{\xi}\varepsilon_t^2 = \omega + [1-\alpha(B)]v_t, \quad (1) \]

where $\beta(B) = 1 - \beta_1 B - \beta_2 B^2 - \cdots - \beta_p B^p$ and $\alpha(B) = 1 - \alpha_1 B - \alpha_2 B^2 - \cdots - \alpha_q B^q$ are the autoregressive (AR) and moving-average (MA) polynomials, respectively. An alternative representation of the FIGARCH (p, ξ, q) model was presented by Ural and Küçüközmen [29] as follows:

\[ [1-\alpha(B)]\sigma_t^2 = \omega + \big[1-\alpha(B)-\beta(B)(1-B)^{\xi}\big]\varepsilon_t^2. \quad (2) \]

The conditional variance $\sigma_t^2$ is then obtained from the following:

\[ \sigma_t^2 = \omega[1-\alpha(B)]^{-1} + \big[1-\beta(B)(1-B)^{\xi}[1-\alpha(B)]^{-1}\big]\varepsilon_t^2 \quad (3) \]

implying that $\sigma_t^2 = \omega[1-\alpha(B)]^{-1} + \gamma(B)\varepsilon_t^2$, where $\gamma(B) = 1 - \gamma_1 B - \gamma_2 B^2 - \gamma_3 B^3 - \cdots$

Baillie et al. [21], noted that in FIGARCH (p, ξ, q) processes, the influence of errors on the conditional variance diminishes at a hyperbolic rate when 0 < ξ < 1. This allows the parameter to capture long-term volatility dynamics, while short-term dynamics are modeled using standard GARCH parameters. The covariance stationarity of FIGARCH is established, but strict stationarity remains uncertain, as discussed by Conrad [30]. To ensure positive conditional variances, specific parameter conditions must be imposed on the FIGARCH model, as also highlighted by Conrad [30].
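As a brief illustration of this hyperbolic decay, the following Python sketch (not the authors' code; the function name frac_diff_weights and the choice ξ = 0.40 are illustrative assumptions) computes the binomial expansion coefficients of (1 − B)^ξ, whose magnitudes fall off roughly like a power law rather than exponentially.

```python
import numpy as np

def frac_diff_weights(xi: float, n_lags: int) -> np.ndarray:
    """Coefficients pi_k of the expansion (1 - B)**xi = sum_k pi_k B**k."""
    w = np.empty(n_lags + 1)
    w[0] = 1.0
    for k in range(1, n_lags + 1):
        w[k] = w[k - 1] * (k - 1 - xi) / k   # standard binomial-coefficient recursion
    return w

xi = 0.40                                    # illustrative fractional differencing parameter
w = frac_diff_weights(xi, 1000)
for k in (10, 100, 1000):
    print(k, abs(w[k]))                      # magnitudes decay hyperbolically, roughly k**-(1 + xi)
```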

3.3 The HYGARCH process

Basira et al. [6], defined the HYGARCH model of Davidson [31], as an extension of the FIGARCH model, introducing weights in the differencing parameter as follows:

\[ \sigma_t^2 = \omega[1-\alpha(B)]^{-1} + \Big\{1 - [1-\alpha(B)]^{-1}\rho\big[1 + \beta\{(1-B)^{\xi}\}\big]\Big\}\,\varepsilon_t^2, \quad (4) \]

where $\omega > 0$, $0 < \xi < 1$, and $\alpha > 1$.

The HYGARCH model allows the variance to exist at more extreme levels than the modest Integrated GARCH (IGARCH) and FIGARCH models. Conrad [51] suggested a modification of Equation 4, which takes the form:

\[ \beta(B)\big[(1-\alpha) + \alpha(1-B)^{\xi}\big]\varepsilon_t^2 = \omega + \delta(B)\big(\varepsilon_t^2 - \sigma_t^2\big). \quad (5) \]

3.4 State-space estimation

The general form of a state-space model consists of two main equations: the state equation and the observation equation. The state equation [32] describes the evolution of the hidden state (variance) over time:

\[ x_t = F_t x_{t-1} + G_t u_t \quad (6) \]

where xt is the state vector (volatility) at time t, Ft is the state transition matrix, Gt is the control input matrix, and ut is the process noise (drives the dynamic system), typically assumed to be normally distributed.

The observation equation [33] relates the observed data (return series) to the hidden state:

\[ r_t = H_t x_t + v_t \quad (7) \]

where $r_t$ is the observation vector (return) at time $t$, $H_t$ is the observation matrix, and $v_t$ is the observation noise, also typically assumed to be normally distributed [32].

The dynamic systems F and H are assumed to be known. In state estimation, the EKF is the standard method of choice to achieve a recursive, approximate maximum likelihood estimation of the state. However, the limitations of the EKF, as discussed above, motivate the use of the UKF as the preferred method in this case [34].
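To make the generic formulation concrete, a minimal Python sketch of a non-linear state-space container is shown below; the class and field names are assumptions for illustration only, with f and h standing in for the (possibly non-linear) counterparts of Equations 6 and 7 that the UKF in Section 3.6 consumes.

```python
import numpy as np
from dataclasses import dataclass
from typing import Callable

@dataclass
class StateSpaceModel:
    """Non-linear state-space model: x_t = f(x_{t-1}), observed through r_t = h(x_t) + v_t."""
    f: Callable[[np.ndarray], np.ndarray]   # state transition (non-linear analogue of Eq. 6)
    h: Callable[[np.ndarray], np.ndarray]   # observation map (non-linear analogue of Eq. 7)
    Q: np.ndarray                           # process-noise covariance
    R: np.ndarray                           # observation-noise covariance
```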

3.5 State-space formulation for non-linear volatility models

In this section, we discuss the state-space representation of FIGARCH and HYGARCH processes. The FIGARCH (p, ξ, q) model, proposed by Baillie et al. [21], is designed to capture LM in volatility. Its state-space representation can be formulated as follows:

State equation:

\[ h_t = \omega + \sum_{i=1}^{p}\beta_i h_{t-i} + \sum_{j=1}^{q}\alpha_j \epsilon_{t-j}^2 + \xi\,(h_{t-1} - h_t) \quad (8) \]

where ht is the conditional variance, ω is the long-run average volatility, βi are the coefficients for past conditional variances, αj are the coefficients for past squared innovations, and ξ is the fractional differencing parameter.

Observation equation:

\[ r_t = \sigma_t \epsilon_t \quad (9) \]

where $r_t$ is the observed return, $\sigma_t = \sqrt{h_t}$ is the standard deviation, and $\epsilon_t$ is a standard normal random variable. The HYGARCH (p, q) model extends the GARCH framework to incorporate long memory and asymmetry, with the observation equation given by Equation 9 [35]. However, the state equation takes the following form:

\[ h_t = \omega + \sum_{i=1}^{p}\beta_i h_{t-i} + \sum_{j=1}^{q}\alpha_j \epsilon_{t-j}^2 + \delta\,(h_{t-1} - h_t) \quad (10) \]

where δ is the parameter capturing the influence of past volatility. It is important to note that the HYGARCH model, unlike the FIGARCH model, does not explicitly include a fractional differencing parameter. Instead, it focuses on capturing long memory and asymmetry through its structure without the fractional integration aspect [36]. This reflects a different modeling approach, focusing on volatility changes without overtly depending on the fractional integration framework that underpins FIGARCH.
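Because h_t appears on both sides of Equation 8, one way to implement the state transition is to solve the equation for h_t algebraically. The hedged Python sketch below does exactly that; the function name and the most-recent-first ordering of the history vectors are assumptions, not the authors' implementation. The same rearrangement applies to the HYGARCH state equation (Eq. 10) with δ in place of ξ.

```python
import numpy as np

def figarch_state_update(h_hist, eps2_hist, omega, beta, alpha, xi):
    """One step of the state equation (Eq. 8), histories ordered most recent first.
    Rearranging h_t = omega + sum_i beta_i h_{t-i} + sum_j alpha_j eps^2_{t-j} + xi (h_{t-1} - h_t)
    gives (1 + xi) h_t = omega + sum_i beta_i h_{t-i} + sum_j alpha_j eps^2_{t-j} + xi h_{t-1}."""
    core = omega + np.dot(beta, h_hist[:len(beta)]) + np.dot(alpha, eps2_hist[:len(alpha)])
    return (core + xi * h_hist[0]) / (1.0 + xi)
```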

3.6 The UKF

The UKF is a method for estimating volatility models. It improves upon the EKF by addressing its limitations [34]. The EKF linearizes functions using first-order Taylor expansions (Jacobians), which can produce large errors for highly non-linear functions. The UKF, instead, uses a deterministic sampling technique to better capture the mean and covariance propagation through non-linear transformations [15]. The algorithm provides a framework for estimating volatility in models by effectively managing the non-linearities inherent in financial time series.

To initiate the UKF algorithm, the initial state x0 is set to represent the initial estimate of volatility [34], followed by defining the initial error variance-covariance matrix P0. Next, appropriate values for the process noise covariance Q and the observation noise covariance R are selected to account for uncertainties in the model and measurements.

Subsequently, the sigma points are generated from the initial state using the system in Equation 11.

\[ \chi_0 = x_0, \qquad \chi_i = x_0 + \Big(\sqrt{(n+\lambda)P_0}\Big)_i, \qquad \chi_{i+n} = x_0 - \Big(\sqrt{(n+\lambda)P_0}\Big)_i, \qquad i = 1, \ldots, n \quad (11) \]

where n is the dimension of the state and λ is the scaling parameter.
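A hedged Python sketch of the sigma-point construction in Equation 11 follows; the Cholesky factor used as the matrix square root is a common but not unique choice, and the function name is an assumption.

```python
import numpy as np

def sigma_points(x0: np.ndarray, P0: np.ndarray, lam: float) -> np.ndarray:
    """Generate the 2n + 1 sigma points of Eq. 11 around the state x0 with covariance P0."""
    n = x0.size
    S = np.linalg.cholesky((n + lam) * P0)   # one possible square root of (n + lambda) P0
    pts = [x0]
    for i in range(n):
        pts.append(x0 + S[:, i])             # chi_i       = x0 + i-th column of the square root
    for i in range(n):
        pts.append(x0 - S[:, i])             # chi_{n + i} = x0 - i-th column of the square root
    return np.array(pts)                     # shape (2n + 1, n)
```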

Next is the prediction step, where each sigma point is propagated through the state equation, $\chi_{t|t-1}^{(i)} = f\big(\chi_{t-1}^{(i)}, u_t\big)$, and the predicted state mean and covariance are computed as follows:

\[ \bar{x}_{t|t-1} = \sum_{i=0}^{2n} W_m^{(i)} \chi_{t|t-1}^{(i)} \quad (12) \]
\[ P_{t|t-1} = Q + \sum_{i=0}^{2n} W_c^{(i)} \big(\chi_{t|t-1}^{(i)} - \bar{x}_{t|t-1}\big)\big(\chi_{t|t-1}^{(i)} - \bar{x}_{t|t-1}\big)^{\top} \quad (13) \]

In the updating step, the sigma points are propagated through the observation equation, $\varsigma_t^{(i)} = h\big(\chi_{t|t-1}^{(i)}\big)$, and the predicted observation mean and covariances are computed:

\[ \bar{r}_{t|t-1} = \sum_{i=0}^{2n} W_m^{(i)} \varsigma_t^{(i)} \quad (14) \]
\[ P_r = R + \sum_{i=0}^{2n} W_c^{(i)} \big(\varsigma_t^{(i)} - \bar{r}_{t|t-1}\big)\big(\varsigma_t^{(i)} - \bar{r}_{t|t-1}\big)^{\top} \quad (15) \]
\[ P_{xr} = \sum_{i=0}^{2n} W_c^{(i)} \big(\chi_{t|t-1}^{(i)} - \bar{x}_{t|t-1}\big)\big(\varsigma_t^{(i)} - \bar{r}_{t|t-1}\big)^{\top} \quad (16) \]

The Kalman gain is calculated as follows:

\[ K_t = P_{xr} P_r^{-1}, \quad (17) \]

which feeds into the state estimates update:

\[ x_t = \bar{x}_{t|t-1} + K_t\big(r_t - \bar{r}_{t|t-1}\big) \quad (18) \]

and the error covariance update:

\[ P_t = P_{t|t-1} - K_t P_r K_t^{\top} \quad (19) \]
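The prediction and updating equations (Equations 12-19) can be strung together into a single recursive step. The Python sketch below reuses the sigma_points helper shown after Equation 11 and adopts the standard λ-based unscented weights, which the paper does not spell out, so the weighting scheme is an assumption; f and h denote the state and observation functions.

```python
import numpy as np

def ukf_step(x, P, r_obs, f, h, Q, R, lam=1.0):
    """One UKF predict/update cycle (Eqs. 12-19) given the current mean x and covariance P.
    f and h should accept and return 1-D numpy arrays."""
    n = x.size
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))   # assumed standard unscented weights
    Wc = Wm.copy()
    Wm[0] = Wc[0] = lam / (n + lam)
    chi = sigma_points(x, P, lam)

    # prediction step (Eqs. 12-13)
    chi_pred = np.array([np.atleast_1d(f(c)) for c in chi])
    x_pred = Wm @ chi_pred
    P_pred = Q + sum(Wc[i] * np.outer(chi_pred[i] - x_pred, chi_pred[i] - x_pred)
                     for i in range(2 * n + 1))

    # propagate through the observation equation and update (Eqs. 14-18)
    zeta = np.array([np.atleast_1d(h(c)) for c in chi_pred])
    r_pred = Wm @ zeta
    P_r = R + sum(Wc[i] * np.outer(zeta[i] - r_pred, zeta[i] - r_pred)
                  for i in range(2 * n + 1))
    P_xr = sum(Wc[i] * np.outer(chi_pred[i] - x_pred, zeta[i] - r_pred)
               for i in range(2 * n + 1))
    K = P_xr @ np.linalg.inv(P_r)                    # Kalman gain (Eq. 17)
    x_new = x_pred + K @ (r_obs - r_pred)            # state update (Eq. 18)
    P_new = P_pred - K @ P_r @ K.T                   # covariance update (Eq. 19)
    return x_new, P_new
```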

The structured schematic workflow of the UKF is presented in Figure 1.


Figure 1. Schematic representation of the UKF procedure for long-memory volatility estimation, showing the steps of sigma-point generation, non-linear state propagation, measurement updating, and volatility forecasting.

In summary, while the RcppCNPy package in R does not directly implement the UKF, it provides a bridge between R and Python's numerical capabilities, allowing users to run Python-based UKF implementations on data prepared in R. The RcppCNPy package reads and writes NumPy arrays, allowing R to send input data to Python and retrieve the results back into R.
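On the Python side of this bridge, the exchange reduces to reading and writing .npy files. The sketch below is illustrative only: the file names and the placeholder standing in for the actual UKF output are assumptions.

```python
import numpy as np

# read the return series that R saved, e.g. with RcppCNPy::npySave("returns.npy", r)
returns = np.load("returns.npy")

# ... run the Python-based UKF over `returns` to obtain filtered volatilities ...
volatility = np.abs(returns)            # placeholder for the UKF output

# write the results back for R to read, e.g. with RcppCNPy::npyLoad("volatility.npy")
np.save("volatility.npy", volatility)
```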

3.7 The VBM

Estimating a FIGARCH model using the VBM involves several steps. The specification of the FIGARCH model begins by defining the model as described in Equations 8 and 9. Next, the likelihood function is obtained from the conditional distribution of the data, given the parameters. For a FIGARCH model, this often involves computing the conditional density of $r_t$, given past observations and the parameters:

\[ p(r\mid\theta) = \prod_{t=1}^{T} p(r_t\mid\theta) \quad (20) \]

The VBM approximation stage involves choosing a variational family distribution q(θ) to approximate the posterior p(θ|r) [37], which can be a factored distribution of the form:

\[ q(\theta) = q(\omega)\,q(\alpha_1)\,q(\beta_1)\,q(\xi) \quad (21) \]

Next, the evidence lower bound (ELBO) is formulated, which is then maximized with respect to the variational parameters [38].

\[ \mathcal{L}(q) = E_q[\log p(r,\theta)] - E_q[\log q(\theta)] \quad (22) \]

The next step is to derive the update equations using the calculus of variations. The variational parameters are often computed using the reparameterization trick or moment matching. Finally, the ELBO is maximized with respect to the variational parameters using an optimization algorithm (e.g., gradient ascent) [39].
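To illustrate how Equation 22 is estimated and maximized in practice, the following Python sketch runs black-box variational inference with the reparameterization trick on a deliberately simple toy model (a Gaussian mean with a Gaussian prior), not the FIGARCH likelihood; the function names, the learning rate, and the use of common-random-number finite differences in place of analytic gradients are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo(mu, log_sd, eps, log_joint):
    """Monte Carlo estimate of Eq. 22 for a single Gaussian factor q(theta) = N(mu, sd^2),
    using fixed standard-normal draws `eps` (reparameterization trick)."""
    sd = np.exp(log_sd)
    theta = mu + sd * eps
    log_q = -0.5 * np.log(2 * np.pi) - log_sd - 0.5 * eps ** 2
    return np.mean(log_joint(theta) - log_q)

def fit(log_joint, mu=0.0, log_sd=0.0, lr=0.01, iters=500, n_samples=128, h=1e-4):
    """Maximise the ELBO by gradient ascent; gradients are finite differences taken
    with common random numbers so that the Monte Carlo noise cancels."""
    for _ in range(iters):
        eps = rng.standard_normal(n_samples)
        g_mu = (elbo(mu + h, log_sd, eps, log_joint) - elbo(mu - h, log_sd, eps, log_joint)) / (2 * h)
        g_sd = (elbo(mu, log_sd + h, eps, log_joint) - elbo(mu, log_sd - h, eps, log_joint)) / (2 * h)
        mu, log_sd = mu + lr * g_mu, log_sd + lr * g_sd
    return mu, np.exp(log_sd)

# toy model: prior theta ~ N(0, 1), data r_t ~ N(theta, 1)
r = rng.standard_normal(50) + 1.5
log_joint = lambda th: -0.5 * th ** 2 - 0.5 * ((r[None, :] - th[:, None]) ** 2).sum(axis=1)
mu_hat, sd_hat = fit(log_joint)
print(mu_hat, sd_hat)   # should approach the conjugate posterior N(sum(r)/51, 1/51)
```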

The VBM schematic workflow for Volatility Estimation is illustrated in Figure 2.


Figure 2. Workflow of the variational Bayes method (VBM) for volatility estimation, showing prior specification, ELBO construction, iterative variational updates, convergence assessment, and posterior-driven forecasting.

In summary, the “rstan” package in R provides a comprehensive framework for conducting VBM analysis, allowing users to define complex Bayesian models, choose variational methods, and obtain efficient estimates of posterior distributions, while offering robust diagnostic tools for model evaluation.

3.8 FIGARCH (1, ξ, 1) update equations

Deriving the update equations for a FIGARCH (1, ξ, 1) model, as defined in Equations 8 and 9, using the VBM involves several steps. Below, we utilize moment matching and the reparameterization trick to derive updates for the variational parameters. Assuming a variational distribution for the parameters, $q(\theta) = q(\omega, \alpha_1, \beta_1, \xi)$, leads to Equations 21 and 22.

The log-likelihood function for the FIGARCH (1, ξ, 1) model is given by:

\[ \log P(r\mid\theta) = -\frac{1}{2}\sum_{t=1}^{T}\left(\log(\sigma_t^2) + \frac{r_t^2}{\sigma_t^2} + \log(2\pi)\right) = -\frac{T}{2}\log(2\pi) - \frac{1}{2}\sum_{t=1}^{T}\left(\log(\sigma_t^2) + \frac{r_t^2}{\sigma_t^2}\right) \quad (23) \]

To compute the expectations needed for the ELBO, we evaluate the following:

\[ E_q[\log P(r\mid\theta)] = -\frac{T}{2}\log(2\pi) - \frac{1}{2} E_q\!\left[\sum_{t=1}^{T}\left(\log(\sigma_t^2) + \frac{r_t^2}{\sigma_t^2}\right)\right] \]

3.8.1 Moment matching

For each parameter, we derive the expected values. The new variables follow prior distributions, and the assumed prior distributions for the parameters of the model are as follows:

For $\omega$: if $q(\omega) \sim N(\omega, \sigma_\omega^2)$, then $\mu_\omega^{\text{new}} = f\{E_q[p(r\mid\omega)]\}$ and $\sigma_\omega^{\text{new}} \sim \text{Gamma}(\omega, \epsilon_\omega^2)$.
For $\alpha_1$: if $q(\alpha_1) \sim N(\alpha_1, \sigma_{\alpha_1}^2)$, then $\mu_{\alpha_1}^{\text{new}} = f\{E_q[p(r\mid\alpha_1)]\}$ and $\sigma_{\alpha_1}^{\text{new}} \sim \text{Beta}(\alpha_1, \sigma_{\alpha_1}^2)$.
For $\beta_1$: if $q(\beta_1) \sim N(\beta_1, \sigma_{\beta_1}^2)$, then $\mu_{\beta_1}^{\text{new}} = f\{E_q[p(r\mid\beta_1)]\}$ and $\sigma_{\beta_1}^{\text{new}} \sim \text{Beta}(\beta_1, \sigma_{\beta_1}^2)$.
For $\xi$: $\mu_\xi^{\text{new}} = f\{E_q[p(r\mid\xi)]\}$ and $\sigma_\xi^{\text{new}} \sim \text{Uniform}(0, 1)$,

since ξ is constrained, 0 < ξ < 1.

In the context of a FIGARCH (1, ξ, 1) model, the role of prior information or distributions is to inform the updates of each parameter during variational inference. The choice of priors should align with the expected behavior and constraints of the model parameters, providing a basis for estimating the model efficiently under diverse error distributions [40].

3.9 Monte Carlo simulations

In this study, we used Monte Carlo simulations to generate synthetic time series data based on a predefined set of parameters for the FIGARCH and HYGARCH models. We then fitted the UKF models to this synthetic data and evaluated the estimates against the original parameters used for simulation by establishing critical values [41].

3.9.1 Parameter simulation

3.9.1.1 Parameter distributions

For each parameter, we select appropriate distributions based on theoretical insights:

ω is sampled from a normal distribution with a mean μω and standard deviation σω.

α1 and β1 are sampled from a uniform distribution ranging between 0 and 1 to ensure they remain within bounds for stationarity.

ξ is sampled from a beta or a uniform distribution, constrained between 0 and 1.

3.9.1.2 Generating random variables

Random number generation techniques are used to simulate values of ω, α1, β1, and ξ for 10,000 iterations (simulations).

3.9.1.3 Simulating time series data

• For each set of simulated parameters, the volatility model structure is specified.

• Synthetic time series data of returns are generated based on the volatility model structure using the simulated parameters. This involves defining the functional form of the FIGARCH or HYGARCH model and using the specified parameters to create the conditional variance and simulate the returns.

• These steps are repeated for all iterations to create a comprehensive dataset of simulated returns for each parameter combination.
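A hedged Python sketch of steps 3.9.1.1-3.9.1.3 follows. It reuses the figarch_state_update function sketched in Section 3.5 as the data-generating recursion, draws parameters from the distributions listed above, and keeps only draws satisfying the usual α1 + β1 < 1 bound; the seed, sample sizes, and the added stationarity filter are assumptions rather than the authors' exact setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_returns(T, omega, alpha1, beta1, xi):
    """Simulate one return path from the state-space form (Eqs. 8-9) with p = q = 1."""
    h = np.empty(T)
    r = np.zeros(T)
    h[0] = omega
    for t in range(1, T):
        h[t] = figarch_state_update(h[t - 1:t], r[t - 1:t] ** 2, omega,
                                    np.array([beta1]), np.array([alpha1]), xi)
        r[t] = np.sqrt(max(h[t], 1e-12)) * rng.standard_normal()
    return r

# parameter draws as described in Section 3.9.1.1
n_sims, T = 10_000, 1_000
omega  = rng.normal(0.05, 0.01, n_sims)
alpha1 = rng.uniform(0.0, 1.0, n_sims)
beta1  = rng.uniform(0.0, 1.0, n_sims)
xi     = rng.uniform(0.0, 1.0, n_sims)

ok = alpha1 + beta1 < 1.0                      # keep draws within the usual stationarity bound
paths = [simulate_returns(T, omega[i], alpha1[i], beta1[i], xi[i])
         for i in np.flatnonzero(ok)[:100]]    # first 100 admissible draws shown for brevity
```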

3.9.1.4 Fitting the UKF and VBM models

For each synthetic time series generated from the simulations, a volatility model is fitted to estimate the parameters ω, α1, β1, and ξ, and the estimated parameters are recorded from the fitting process.

3.9.1.5 Establishing critical values

For each parameter (ω, α, β, and ξ), the estimated values obtained from fitting the UKF-FIGARCH model across all simulations are compiled.

The 2.5th and 97.5th percentiles of the estimated values for each parameter are calculated. These percentiles define the critical values, establishing a 95% confidence interval for each parameter.

\[ \text{Lower Critical Value} = P_{2.5\%} \quad \text{and} \quad \text{Upper Critical Value} = P_{97.5\%} \]

For each estimated parameter from the fitting process, it is checked whether the values are within the established 95% confidence interval defined by the critical values. If an estimated parameter is between the lower and upper critical values, it is considered consistent with the parameters used for simulation. If an estimated parameter is outside this range, it is considered a discrepancy [42]. A consistent set of estimates within the critical values is used to support the validity of the new model.
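A short Python sketch of the critical-value and coverage check follows; the function names and the Beta-distributed stand-in for the simulated estimates are illustrative assumptions.

```python
import numpy as np

def critical_values(estimates, level=0.95):
    """2.5th and 97.5th percentiles of the simulated estimates for one parameter."""
    lo = np.percentile(estimates, 100 * (1 - level) / 2)
    hi = np.percentile(estimates, 100 * (1 + level) / 2)
    return lo, hi

def is_consistent(estimate, estimates):
    """Check whether a fitted value falls inside the 95% interval (Section 3.9.1.5)."""
    lo, hi = critical_values(estimates)
    return lo <= estimate <= hi

# illustrative usage with a made-up sample of simulated xi estimates
sim_xi = np.random.default_rng(2).beta(2, 3, 10_000)
print(critical_values(sim_xi), is_consistent(0.40, sim_xi))
```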

3.9.2 Vuong's test for non-nested models

Since the UKF and VBM produce non-nested fitted models, Vuong's test [52] is an appropriate tool for formally assessing model superiority; the statistic is given by Equation 24.

\[ V = \frac{\sqrt{N}\,\bar{M}}{S_M} \quad (24) \]

where $N$ is the sample size, $\bar{M}$ is the mean difference in log-likelihoods per observation, and $S_M$ is its standard deviation.
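Under the assumption that both fitted models supply per-observation log-likelihood contributions, Equation 24 can be computed as in the hedged Python sketch below; the SciPy normal CDF is used only to attach a two-sided p-value, and the function name is illustrative.

```python
import numpy as np
from scipy.stats import norm

def vuong_statistic(ll_a, ll_b):
    """Vuong statistic (Eq. 24) from per-observation log-likelihoods of two non-nested
    models; large positive values favour model A, large negative values model B."""
    m = np.asarray(ll_a) - np.asarray(ll_b)       # per-observation log-likelihood differences
    N = m.size
    V = np.sqrt(N) * m.mean() / m.std(ddof=1)     # V = sqrt(N) * M_bar / S_M
    p_value = 2 * (1 - norm.cdf(abs(V)))          # two-sided p-value under N(0, 1)
    return V, p_value
```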

4 Results

The dataset included 4,275 daily closing prices for gold and tobacco for the period 2013–2023 [6]. It was segmented into two distinct samples: the training data covered the period from 2013 to 26 October 2018, while the validation data spanned from October 2018 to October 2023. Prices were sourced from the LBMA database.

4.1 Descriptive summaries

For the descriptive statistics of Gold and Tobacco returns, including tests for correlation (Q-test), normality (JB test), heteroscedasticity (Q2 test), unit root (ADF test), and stationarity (KPSS and PP tests), we refer the reader to our earlier paper [6].

4.2 The visuals

The time series plot of daily gold prices, the corresponding return series, the Q-Q plot, and the ACF of the squared returns are presented in Figure 3.


Figure 3. Daily gold prices and returns, along with the Autocorrelation function (ACF) of squared returns, QQ plot, and boxplot. The diagnostics indicated heavy tails, volatility clustering, and a slowly decaying ACF, consistent with long-memory behavior.

The corresponding time series plot of daily tobacco prices, the return series, the Q-Q plot, and the ACF of the squared returns are presented in Figure 4.


Figure 4. Daily tobacco prices and returns, along with the ACF of squared returns and QQ plot. The series exhibited persistent volatility and a gradually decaying ACF, supporting the suitability of long-memory volatility models.

4.2.1 Test for long memory

The presence of LM in the volatility series of gold and tobacco commodities was tested using the method of Geweke and Porter-Hudak [43]. This method employs a spectral regression technique to distinguish between LM and short-memory properties in the data. It estimates the LM parameter ξ from the slope of a regression of the log-periodogram on a function of the low Fourier frequencies, reflecting the slow decay of the autocorrelations.
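For readers who wish to reproduce the test, a minimal Python sketch of the GPH log-periodogram regression is given below; the default bandwidth rule (m = T^0.5) and the function name are assumptions, and the estimator is applied to squared returns as in this study.

```python
import numpy as np

def gph_estimate(x, bandwidth_power=0.5):
    """Geweke-Porter-Hudak log-periodogram regression estimate of the memory parameter."""
    x = np.asarray(x, dtype=float)
    T = x.size
    m = int(np.floor(T ** bandwidth_power))            # number of low frequencies used
    freqs = 2 * np.pi * np.arange(1, m + 1) / T        # Fourier frequencies lambda_1..lambda_m
    periodogram = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * T)
    X = np.log(4 * np.sin(freqs / 2) ** 2)             # GPH regressor
    slope = np.polyfit(X, np.log(periodogram), 1)[0]
    return -slope                                      # estimated long-memory parameter

# usage: gph_estimate(returns ** 2, bandwidth_power=b) for several bandwidths, e.g. 0.5, 0.6, 0.7
```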

Table 1 shows the result of the GPH long-memory test for squared returns of gold and tobacco across multiple bandwidths. The estimated LM parameters were statistically significant, signifying the presence of LM in both the gold and tobacco squared returns series. The estimated FIGARCH model parameters and their corresponding critical values, obtained using the UKF and VBM approaches, are summarized in Table 2.


Table 1. Test for long memory in squared returns.


Table 2. Comparison of estimated parameters with critical values for the FIGARCH model.

4.3 Model estimation simulations

It is important to note, at this stage, that the analysis was broad-based, assuming different error distributions, including the normal, Student's t (STD), skewed Student's t (SSTD), and generalized error distribution (GED). However, only the normal distribution yielded plausible results. Among the alternatives, the STD performed best, but its results were not consistent overall. Therefore, the results presented in this study are for the normal distribution.

The results shown in Table 2 indicate that all estimated parameters of the FIGARCH model (ω, α1, β1, and ξ) fell within their respective 95% confidence intervals. This consistency suggests that the estimates were reliable and aligned with the expected behavior of the model. Hence, the FIGARCH model's parameters were statistically valid, supporting the model's appropriateness for capturing the volatility dynamics in the data.

Table 3 presents the comparison of the estimated parameters of the HYGARCH model using the two methods, UKF and VBM. While α1 and β1 were validated, the inconsistency of ω suggests caution when interpreting the HYGARCH model's fit for this parameter. The ARCH and GARCH parameters fell within the expected ranges (consistent), while the constant term fell outside them (inconsistent).


Table 3. Comparison of estimated parameters with critical values for the HYGARCH model.

4.4 Model estimation empirical data

In this section, we present the results of parameter estimation for the FIGARCH and HYGARCH models using both the UKF and VBM.

Table 4 compares the UKF and VBM in estimating key volatility parameters. Both models yielded significant estimates for ω, α1, β1, and the fractional differencing parameter (ξ), indicating a strong influence of past shocks on current volatility. The UKF showed a higher α1 (0.1431) and a lower β1 (0.7238) compared to the VBM, suggesting a stronger reaction to recent shocks and slightly lower persistence in volatility. Model fit indicators revealed that the UKF had slightly better Akaike information criterion (AIC) (−1231.7801) and Bayesian information criterion (BIC) (−1207.5693) values, along with a higher log-likelihood (621.1244), suggesting that it may be the superior model for this dataset. Overall, both models successfully captured the volatility dynamics, with the UKF providing a marginally better fit.


Table 4. FIGARCH model estimates for gold.

Table 5 presents the results from the FIGARCH models applied to tobacco volatility. The UKF's constant term (ω = 0.0072) was not statistically significant, while the VBM's constant term (ω = 0.0182) showed minimal significance. The ARCH term α1 was significant in the VBM (0.0937), indicating a strong response to past shocks, whereas the UKF's term (0.0422) was not significant. The GARCH term (β1) showed persistence in volatility, with the VBM's estimate (0.6875) being more significant than the UKF's (0.4681). The fractional differencing parameter ξ indicated a stronger LM effect in the VBM (0.3025) compared to the UKF (0.1320). Model fit indicators showed that the VBM outperformed the UKF, with lower AIC (−1150.3629 vs. −1190.1236) and BIC (−1140.2634 vs. −1180.8115) values, as well as a higher log-likelihood (600.4984 vs. 580.1345). Overall, these findings suggest that the VBM provides a more effective model for capturing volatility dynamics in tobacco.


Table 5. FIGARCH model estimates for tobacco.

Figure 5 illustrates the FIGARCH estimated vs. predicted volatility for the various models fitted to gold and tobacco. Notably, the UKF–Gold model exhibited the narrowest gap between the estimated and predicted volatility, indicating that the Unscented Kalman Filter effectively captures the dynamics of gold volatility and demonstrates high predictive accuracy. Following this, the VBM–Gold model showed a slightly wider gap, suggesting that, while it remains effective, it may not capture certain nuances of gold volatility as effectively as the UKF. In contrast, the UKF–Tobacco model showed a larger gap between the predicted and estimated volatility and was therefore less effective in accurately estimating tobacco volatility. Lastly, the VBM–Tobacco model struggled to predict tobacco volatility.


Figure 5. FIGARCH estimated vs. predicted volatility for the fitted models. (Top left) UKF–Gold volatility; (Top right): UKF–Tobacco; (Bottom left) VBM–Gold; (Bottom right) VBM–Tobacco. The UKF aligned most closely with gold's volatility path, while the VBM provided relatively better predictive alignment for tobacco.

4.4.1 HYGARCH (1, 1) for gold

Table 6 presents the results, again comparing the two methods. It is clear that the UKF model performed far better for gold, as indicated by lower AIC and BIC values. All parameter estimates were significant at the five percent level, including the constant, ARCH, and GARCH terms. In contrast, the VBM parameter estimates fell short across most performance metrics.


Table 6. HYGARCH model estimates for gold.

Table 7 reveals that the UKF model offered a better fit, as indicated by lower AIC and BIC values and a higher log-likelihood. Although the UKF's ω and α1 were not significant, its large GARCH term indicated strong volatility perseverance. In contrast, the VBM showed a significant constant term but limited sensitivity, with its ARCH term lacking significance and a much lower GARCH term, reflecting weaker volatility persistence. In addition, the UKF exhibited a significant long-memory effect, while the VBM's small estimate for this parameter further limited its success.


Table 7. HYGARCH model estimates for tobacco.

Figure 6 presents the HYGARCH estimated vs. predicted volatility for the models fitted to gold and tobacco. The UKF–Gold model demonstrated better performance compared to the UKF–Tobacco model, although both exhibited strong predictive capabilities. The narrower gap between the estimated and predicted volatility in the UKF–Gold model indicates a more appropriate depiction of gold's volatility intricacies. In contrast, while the UKF–Tobacco model also showed good performance, it was slightly less effective in capturing the nuances of tobacco volatility. Overall, both models exhibited robust capabilities, but the UKF–Gold model stood out as the superior performer in this analysis.


Figure 6. HYGARCH estimated vs. predicted volatility for the fitted models. (Left) UKF–Gold volatility; (Right) UKF–Tobacco volatility.

4.5 Performance metrics of the models

This section assesses model performance and the strength of parameter estimates through sensitivity analysis. To evaluate the efficacy of the fitted volatility models, we utilized numerous evaluation metrics, including the mean squared error (MSE), mean absolute error (MAE), Diebold-Mariano test (DM test), directional accuracy (DA), and model confidence set (MCS) criterion.
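The point-forecast metrics can be computed as in the hedged Python sketch below; the Diebold-Mariano variant shown uses squared-error loss without the usual autocorrelation (HAC) correction, so it is a simplified stand-in rather than the full test, and all names are illustrative.

```python
import numpy as np

def forecast_metrics(actual, forecast):
    """MSE, MAE, and directional accuracy (DA) for a volatility forecast series."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    mse = np.mean((actual - forecast) ** 2)
    mae = np.mean(np.abs(actual - forecast))
    # DA: share of periods in which the forecast moves in the same direction as the target
    da = np.mean(np.sign(np.diff(actual)) == np.sign(np.diff(forecast)))
    return {"MSE": mse, "MAE": mae, "DA": da}

def diebold_mariano_naive(errors_a, errors_b):
    """Simplified DM statistic under squared-error loss (no autocorrelation correction)."""
    d = np.asarray(errors_a) ** 2 - np.asarray(errors_b) ** 2
    return np.sqrt(d.size) * d.mean() / d.std(ddof=1)
```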

4.5.1 Out-of-sample estimates

The results for the test data for gold suggested that the best model was the UKF-FIGARCH (1, 0.3946, 1), followed by the VBM-FIGARCH (1, 0.3001, 1). The UKF-HYGARCH (1, 1) model also provided comparable estimates and closely matched volatility predictions.

Tables 8, 9 present the performance of the different models across several performance metrics. Most models passed the performance metrics, with the exception of the UKF-FIGARCH–Tobacco model and the two HYGARCH models. The comparative performance of the FIGARCH models estimated using the UKF and VBM for gold and tobacco commodities, based on log-likelihood, AIC, BIC, and Vuong test statistics, is reported in Table 10.


Table 8. FIGARCH forecast performance metrics.


Table 9. HYGARCH forecast performance metrics.


Table 10. Model comparison—FIGARCH (gold and tobacco).

5 Discussion

The findings highlight the feasibility of the two methods used to analyze the data. Both the UKF and VBM are plausible alternatives for estimating and forecasting LM volatility models. The non-linear nature of these estimation techniques aligns well with the non-linear state-space representation of the volatility models involved, which would not have been possible using the KF. The performance of these techniques is supported by the simulation results, where the majority of estimates fell within the 95 percent confidence interval. Consequently, these procedures accurately capture the dynamics of commodity data, as evidenced by largely significant parameter estimates and the models' performance across various metrics, including the AIC and BIC, among others used in this study.

The models performed particularly well in determining the LM parameters for gold and tobacco. These results are consistent with findings in the literature, such as the fractional differencing parameters reported by Ding et al. [53]. A summary of the statistical significance and the preferred model for each commodity is presented in Table 11.


Table 11. Summary of statistical significance.

Furthermore, the study clearly shows that the UKF and VBM are viable alternatives for volatility model estimation in the presence of the LM phenomenon, as demonstrated by the plausible results obtained. By employing a non-linear state-space methodology, the study relaxes the assumption of linearity and adapts the method to better capture the detailed behavior of volatility that conventional approaches fail to address.

Through the successful implementation of state-space methodology in estimating LM models such as FIGARCH and HYGARCH, the study significantly improved the estimation and prediction of LM volatility models, thereby advancing the understanding of volatility dynamics [44]. These methods provide a promising avenue for improving volatility forecasting.

The differences for gold were small and statistically insignificant. Both models performed comparably; the UKF's slight advantage was not large enough to reject the null hypothesis of equal fit. For tobacco, the VBM significantly outperformed the UKF. All three tests (LR, ΔAIC/BIC, and Vuong) strongly supported the claim of VBM superiority.

For gold, while the UKF produced slightly lower AIC/BIC values and a higher log-likelihood, these differences were too small to be statistically significant. For tobacco, the differences in all metrics (especially LL and AIC/BIC) were large and statistically significant, validating the claim that the VBM provides a superior model fit. Hence, model superiority was statistically supported for tobacco but not for gold.

6 Conclusion and recommendations

The results of this study introduce the UKF and VBM as viable alternatives for volatility model estimation and prediction, thereby expanding the pool of available methodologies. These models are able to handle stylized features of volatility such as LM, excess kurtosis, and skewness. Coupling Monte Carlo simulations with empirical data provides a solid foundation for future studies in this dynamic field. The next focus of our research will be the stochastic nature of volatility in the presence of LM or extreme events, which are particularly common in finance and economics. For further studies, we recommend relaxing the Gaussian assumption to better handle high-frequency financial data by including heavy-tailed noise distributions, which can improve tail-risk estimation and forecasting. Furthermore, a comparative study of the performance of these methods relative to the standard MLE approach can be conducted in the future to assess how well they perform against standard FIGARCH or HYGARCH models.

Data availability statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://data.nasdaq.com/data/LBMA-london-bullion-market-association.

Author contributions

KB: Conceptualization, Formal analysis, Methodology, Writing – original draft, Writing – review & editing. LD: Supervision, Validation, Investigation, Methodology, Writing – review & editing. FM: Supervision, Writing – review & editing, Software, Methodology, Investigation.

Funding

The author(s) declared that financial support was not received for this work and/or its publication.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Bollerslev T. Generalized autoregressive conditional heteroskedasticity. J Econom. (1986) 31:307–27. doi: 10.1016/0304-4076(86)90063-1

2. Chkili W. Is gold a hedge or safe haven for Islamic stock market movements? A Markov switching approach. J Multinatl Fin Manag. (2017) 42–3:152–63. doi: 10.1016/j.mulfin.2017.10.001

3. Kalman RE. A New Approach to Linear Filtering and Prediction Problems (1960). Available online at: https://asmedigitalcollection.asme.org/fluidsengineering/article-abstract/82/1/35/397706 (Accessed August 16, 2025).

4. Julier SJ, Uhlmann JK. New extension of the Kalman filter to nonlinear systems. In:Kadar I, , editors. Proceedings of SPIE – Signal Processing, Sensor Fusion, and Target Recognition VI. Bellingham, WA: SPIE – The International Society for Optical Engineering (1997).

5. Dias R, Heliodoro P, Alexandre P, Santos H, Farinha A. Long memory in stock returns: evidence from the Eastern European markets. SHS Web Conf. (2021) 91:01029. doi: 10.1051/shsconf/20219101029

6. Basira K, Dhliwayo L, Chinhamu K, Chifurira R, Matarise F. Estimation and prediction of commodity returns using long memory volatility models. Risks. (2024) 12:73. doi: 10.3390/risks12050073

7. Yang T. Volatility characteristics of stock markets during the US-China trade war. Int Rev Econ Fin. (2025) 102:104335. doi: 10.1016/j.iref.2025.104335

8. Chen X, Zhu H, Zhang X, Zhao L. A novel time-varying FIGARCH model for improving volatility predictions. Phys A: Stat Mech Applic. (2022) 589:126635. doi: 10.1016/j.physa.2021.126635

9. Ampountolas A. Enhancing forecasting accuracy in commodity and financial markets: insights from GARCH and SVR models. Int J Fin Stud. (2024) 12:59. doi: 10.3390/ijfs12030059

10. Shumway RH, Stoffer DS. State space models. In:Shumway RH, Stoffer DH, , editors. Time Series Analysis and Its Applications. Springer International Publishing (2017). p. 289–384. doi: 10.1007/978-3-319-52452-8_6

11. Durbin J. A simple and efficient simulation smoother for state space time series analysis. Biometrika. (2002) 89:603–16. doi: 10.1093/biomet/89.3.603

12. Gao Z, Liu Y, Yang C, Chen X. Unscented Kalman filter for continuous-time nonlinear fractional-order systems with process and measurement noises. Asian J Control. (2020) 22:1961–72. doi: 10.1002/asjc.2077

13. Yu A, Liu Y, Zhu J, Dong Z. An improved dual unscented Kalman filter for state and parameter estimation. Asian J Control. (2016) 18:1427–40. doi: 10.1002/asjc.1229

14. Haykin S, editor. Kalman Filtering and Neural Networks. New York, NY: Wiley (2001).

15. Wu M, Smyth AW. Application of the unscented Kalman filter for real-time nonlinear structural system identification. Struct Control Health Monit. (2007) 14:971–90. doi: 10.1002/stc.186

16. Granger CWJ, Joyeux R. An introduction to long-memory time series models and fractional differencing. J Time Ser Anal. (1980) 1:15–29. doi: 10.1111/j.1467-9892.1980.tb00297.x

17. Hosking JRM. Fractional differencing. Biometrika. (1981) 68:165–76. doi: 10.1093/biomet/68.1.165

18. Rzayeva S, Rzayev R. Rural Azerbaijan through quantitative and qualitative research lenses: debating the transition to small farm capitalism. Rural Sociol. (2019) 84:770–98. doi: 10.1111/ruso.12273

19. Hashami R, Maldonado F. Can news predict the direction of oil price volatility? A language model approach with SHAP explanations (version 1). arXiv. (2025). doi: 10.48550/arXiv.2508.20707. [Epub ahead of print].

20. Tayefi M, Ramanathan TV. An overview of FIGARCH and related time series models. Austrian J Stat. (2016) 41. doi: 10.17713/ajs.v41i3.172

21. Baillie RT, Chung C-F, Tieslau MA. Analysing inflation by the fractionally integrated ARFIMA-GARCH model. J Appl Econometr. (1996) 11:23–40.

22. Yano T. State space model of realized volatility under the existence of dependent market microstructure noise (version 1). arXiv. (2024). doi: 10.48550/arXiv.2408.17187. [Epub ahead of print].

23. Nagakura D, Watanabe T. A state space approach to estimating the integrated variance under the existence of market microstructure noise. J Fin Econometr. (2015) 13:45–82. doi: 10.1093/jjfinec/nbt015

24. De Pinho FM, Franco GC, Silva RS. Modeling volatility using state space models with heavy tailed distributions. Math Comput Simul. (2016) 119:108–27. doi: 10.1016/j.matcom.2015.08.005

25. Özbek L, Aliev F. Adaptive fading Kalman filter with an application. Automatica. (1998) 34:1261–5.

26. Özbek L, Özlalé Ü. Employing the extended Kalman filter in measuring the output gap. J Econ Dyn Control. (2005) 29:1611–22. doi: 10.1016/j.jedc.2004.09.005

27. Özbek L, Hacioglu D. The empirical estimation of central bank policy preference variables with time-variable loss functions. Commun Stat – Simul Comput. (2025). doi: 10.1080/03610918.2025.2527897. [Epub ahead of print].

28. Özbek L, Hacioglu D, Koç I. An empirical analysis of consumption and current account in an intertemporal stochastic model. Communic Stat – Simul Comput. (2024). doi: 10.1080/03610918.2024.2369817. [Epub ahead of print].

29. Ural M, Küçüközmen CC. Analyzing the Dual Long Memory in Stock Market Returns. Ege Acad Rev. (2011) 11:19–28.

30. Conrad C. Inequality constraints in the fractionally integrated GARCH Model. J Fin Econometr. (2006) 4:413–49. doi: 10.1093/jjfinec/nbj015

31. Davidson J. Moment and memory properties of linear conditional heteroscedasticity models, and a new model. J Business Econ Stat. (2004) 22:16–29. doi: 10.1198/073500103288619359

32. Kappen HJ, Ruiz HC. Adaptive importance sampling for control and inference. J Stat Phys. (2016) 162:1244–66. doi: 10.1007/s10955-016-1446-7

33. Chan JCC, Kroese DP. Statistical Modeling and Computation. New York, NY: Springer (2014).

34. Wan EA, Van Der Merwe R. The unscented Kalman filter for nonlinear estimation. In: Proceedings of the IEEE Adaptive Systems for Signal Processing, Communications, and Control Symposium. Piscataway, NJ: IEEE (2000).

35. Basira K, Dhliwayo L, Matarise F. Integrating volatility models within state space frameworks for commodity return analysis. J Fin Risk Manag. (2025) 14:199–223. doi: 10.4236/jfrm.2025.143012

36. Nguyen T, Chaiechi T, Eagle L, Low D. SME stock markets in tropical economies: evolving efficiency and dual long memory. Econ Papers: J Appl Econ Policy. (2020) 39:28–47. doi: 10.1111/1759-3441.12254

37. Šmídl V, Quinn A. The Variational Bayes Method in Signal Processing. Berlin: Springer-Verlag (2006).

38. Popov RN. Analytical approximation of the ELBO gradient in the context of the clutter problem (version 3). arXiv. (2024). doi: 10.48550/arXiv.2404.10550. [Epub ahead of print].

39. Karabayir I, Akbilgic O, Tas N. A novel learning algorithm to optimize deep neural networks: evolved gradient direction optimizer (EVGO). IEEE Trans Neural Netw Learn Syst. (2021) 32:685–94. doi: 10.1109/TNNLS.2020.2979121

40. Gustafson P. On model expansion, model contraction, identifiability and prior information: two illustrative scenarios involving mismeasured variables. Stat Sci. (2005) 20. doi: 10.1214/088342305000000098

41. Cruse TA. Reliability-Based Mechanical Design. Boca Raton, FL: CRC Press (1997).

42. Petty MD. Calculating and Using Confidence Intervals for Model Validation. Huntsville, AL: University of Alabama in Huntsville (2013).

43. Geweke J, Porter-Hudak S. The estimation and application of long memory time series models. J Time Ser Anal. (1983) 4:221–38. doi: 10.1111/j.1467-9892.1983.tb00371.x

44. Rohilla V, Mathur S, Sharma B. Comparative analysis of machine learning and deep learning techniques for social media-based stress detection. In: Proceedings of the First International Conference on Pioneering Developments in Computer Science and Digital Technologies (IC2SDT). Piscataway, NJ: IEEE (2024).

45. Nikolaev V, Rachev S, Fabozzi F. Long-memory modeling in financial markets. J Financ Econ. (2013) 11:501–30.

46. Yuanbo L. Advanced volatility estimation methods for commodity markets. J Commod Mark. (2022) 29:45–62.

47. Cao H, Zhang Y, Li X. State-space approaches to nonlinear volatility modeling. Quant Finance. (2024) 24:210–29.

48. Arouri M, Lahiani A, Nguyen D. Long memory and volatility spillovers in energy markets. Energy Econ. (2019) 81:422–35.

49. Guglielmo M, D'Agostino A, Rossi F. High-frequency data and long-range dependence in financial markets. J Empir Finance. (2019) 50:123–40.

50. Beran J. Long-Memory Processes: Modeling and Applications. New York, NY: Springer (2017).

51. Conrad C. Time Series Analysis for Finance and Economics. London: Palgrave Macmillan (2010).

52. Vuong QH. Likelihood ratio tests for model selection and non-nested hypotheses. Econometrica. (1989) 57:307–33.

53. Ding Z, Granger CWJ, Engle RF. A long memory property of stock market returns and a new model. J Empir Finance. (1993) 1:83–106.

Keywords: commodity returns, long memory volatility, non-linearity, Unscented Kalman Filter, variational Bayes, volatility clustering

Citation: Basira K, Dhliwayo L and Matarise F (2026) Enhanced state-space estimation of long-memory commodity volatility using the Unscented Kalman Filter and variational Bayes method for non-linear modeling. Front. Appl. Math. Stat. 11:1706220. doi: 10.3389/fams.2025.1706220

Received: 15 September 2025; Revised: 23 December 2025;
Accepted: 24 December 2025; Published: 29 January 2026.

Edited by:

Maria Cristina Mariani, The University of Texas at El Paso, United States

Reviewed by:

Muhammad Naeem, Kohat University of Science and Technology, Pakistan
Levent Özbek, Ankara University, Türkiye

Copyright © 2026 Basira, Dhliwayo and Matarise. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Kisswell Basira, basirafarayi@gmail.com

ORCID: Lawrence Dhliwayo orcid.org/0000-0001-7628-7122
