<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="research-article" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Environ. Sci.</journal-id>
<journal-title>Frontiers in Environmental Science</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Environ. Sci.</abbrev-journal-title>
<issn pub-type="epub">2296-665X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">740093</article-id>
<article-id pub-id-type="doi">10.3389/fenvs.2021.740093</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Environmental Science</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>An Ensemble Prediction System Based on Artificial Neural Networks and Deep Learning Methods for Deterministic and Probabilistic Carbon Price Forecasting</article-title>
<alt-title alt-title-type="left-running-head">Yang et&#x20;al.</alt-title>
<alt-title alt-title-type="right-running-head">Carbon Price Forecasting</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Yang</surname>
<given-names>Yi</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Guo</surname>
<given-names>Honggang</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1404062/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Jin</surname>
<given-names>Yu</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Song</surname>
<given-names>Aiyi</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<label>
<sup>1</sup>
</label>School of Information Science and Engineering, Lanzhou University, <addr-line>Lanzhou</addr-line>, <country>China</country>
</aff>
<aff id="aff2">
<label>
<sup>2</sup>
</label>School of Statistics, Dongbei University of Finance and Economics, <addr-line>Dalian</addr-line>, <country>China</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1336020/overview">Wendong Yang</ext-link>, Shandong University of Finance and Economics, China</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1408434/overview">Yao Dong</ext-link>, Jiangxi University of Finance and Economics, China</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1425497/overview">Jamshid Piri</ext-link>, Zabol University,&#x20;Iran</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Honggang Guo, <email>ghg970612@163.com</email>
</corresp>
<fn fn-type="other">
<p>This article was submitted to Environmental Economics and Management, a section of the journal Frontiers in Environmental Science</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>17</day>
<month>09</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>9</volume>
<elocation-id>740093</elocation-id>
<history>
<date date-type="received">
<day>12</day>
<month>07</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>23</day>
<month>08</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2021 Yang, Guo, Jin and Song.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Yang, Guo, Jin and Song</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these&#x20;terms.</p>
</license>
</permissions>
<abstract>
<p>Carbon price prediction is important for decreasing greenhouse gas emissions and coping with climate change. At present, a variety of models are widely used to predict irregular, nonlinear, and nonstationary carbon price series. However, these models ignore the importance of feature extraction and the inherent defects of using a single model; thus, accurate and stable prediction of carbon prices by relevant industry practitioners and the government remains a major challenge. This research proposes an ensemble prediction system (EPS) that includes improved data feature extraction technology, three prediction submodels (GBiLSTM, CNN, and ELM), and a multiobjective optimization algorithm weighting strategy. At the same time, based on the best-fitting distribution of the prediction error of the EPS, the carbon price prediction interval is constructed as a way to explore its uncertainty. More specifically, the EPS integrates the advantages of the submodels and provides more accurate point prediction results; the distribution function of the point prediction errors is used to establish the prediction interval of carbon prices and to mine and analyze the volatility characteristics of carbon prices. Numerical simulations are conducted on historical data from three carbon price markets. The experimental results show that the ensemble prediction system can provide more effective and stable carbon price forecasting information and valuable suggestions that enterprise managers and governments can use to improve the carbon price market.</p>
</abstract>
<kwd-group>
<kwd>carbon price forecasting</kwd>
<kwd>ensemble prediction system</kwd>
<kwd>deep learning methods</kwd>
<kwd>error distribution function</kwd>
<kwd>multiobjective optimization algorithm</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>This section describes the research background, provides a literature review, and states the purpose and innovation of this&#x20;study.</p>
<sec id="s1-1">
<title>Research Background</title>
<p>Rapid economic development inevitably affects the environment and the climate, and climate change is clearly a common problem facing all countries. On February 16, 2005, the Kyoto Protocol went into effect, and specific emission reduction plans and schedules were formulated according to each country&#x2019;s circumstances. In January 2005, the EU emissions trading scheme (EU ETS), which was designed to achieve the emission reduction targets stipulated in the Kyoto Protocol, was introduced (<xref ref-type="bibr" rid="B1">Arouri et&#x20;al., 2012</xref>). The EU ETS allocates carbon trading quotas to different emission entities according to its regulations, and entities that exceed their quota must purchase emission rights through the carbon trading market from entities that remain below theirs. This market-based trading mechanism provides valuable experience for solving the problem of global climate change.</p>
<p>As the world&#x2019;s largest carbon dioxide emitter (in 2018, its total carbon dioxide emissions reached 10 billion tons, accounting for approximately 30% of global emissions), China has established eight carbon emission trading markets since 2013. However, this system is still under construction, and its market mechanism needs further improvement. By studying the price fluctuation patterns of the EU ETS and China&#x2019;s carbon trading markets, analyzing the influencing factors, and forecasting carbon market prices accordingly, we can better understand how the carbon market fluctuates and obtain a reference for formulating carbon market policies and mechanisms that improve the ability to regulate this market.</p>
<p>Carbon prices have important implications for governments, companies, and long-term investors. For governments, carbon pricing is one of the mechanisms used to reduce carbon emissions, and it can also be a source of revenue. Companies can use internal carbon pricing to assess the impact of mandatory carbon pricing on their businesses and to identify potential climate risks and revenue opportunities. Long-term investors are using carbon pricing to reevaluate their investment strategies. Therefore, from any of these perspectives, it is necessary to establish an accurate and stable carbon price forecasting system.</p>
</sec>
<sec id="s1-2">
<title>Literature Review</title>
<p>Most research on carbon price prediction relies on historical data to build predictive models. Carbon prices display high volatility and a nonlinear structure, and many studies of carbon price prediction based on historical data have been conducted in recent years. The prediction methods can be divided into three categories: 1) statistical measurement methods; 2) artificial intelligence methods; and 3) decomposition-integration hybrid forecasting methods.</p>
<sec id="s1-2-1">
<title>Statistical Measurement Method</title>
<p>Statistical measurement methods are classical time series forecasting approaches; they include linear regression models, autoregressive integrated moving average (ARIMA) models, generalized autoregressive conditional heteroscedasticity (GARCH) models, and the grey model GM(1,1) (<xref ref-type="bibr" rid="B7">Chevallier, 2009</xref>; <xref ref-type="bibr" rid="B4">Byun and Cho, 2013</xref>; <xref ref-type="bibr" rid="B40">Zhu and Wei, 2013</xref>), and they are widely used in carbon trading price prediction and volatility analysis. For example, Benz and Tr&#xfc;ck (2008) proposed Markov state transition and AR-GARCH models for stochastic modeling and analyzed the short-term price of the carbon dioxide emission quota of the EU ETS; their empirical results demonstrated that the Markov state transition model outperforms the GARCH model in prediction. <xref ref-type="bibr" rid="B40">Zhu and Wei (2013)</xref> combined least squares SVM with the ARIMA model, and the results showed that the developed model was more robust than a single-prediction model. <xref ref-type="bibr" rid="B41">Zhu B et&#x20;al. (2018)</xref> used grey correlation analysis to analyze the carbon price market. Traditional statistical models offer high prediction accuracy and wide applicability for linear and stationary time series. However, because carbon prices show strong volatility, nonlinearity, and nonstationarity, traditional statistical measurement methods cannot capture the internal structural characteristics of the data (<xref ref-type="bibr" rid="B21">Lu et&#x20;al., 2019</xref>). Accurate forecasting of carbon prices therefore requires a method with a strong nonlinear feature extraction ability that can account for potential nonlinear characteristics. In addition, traditional statistical measurement methods are better suited to long-term time series prediction, and their short-term carbon price prediction performance is poor (<xref ref-type="bibr" rid="B6">Cheng and Wang, 2020</xref>).</p>
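As a concrete toy illustration of the statistical measurement methods discussed above, the sketch below fits an AR(1) model by closed-form ordinary least squares and produces a one-step-ahead forecast. The data and function names here are hypothetical; the studies cited use richer specifications (ARIMA, GARCH, Markov state transition).

```python
# Minimal sketch: fit x_t = c + phi * x_{t-1} by ordinary least squares
# (closed form) and forecast one step ahead. Illustrative only; real
# carbon price studies use richer models such as ARIMA or GARCH.
def fit_ar1(series):
    x = series[:-1]          # regressors: x_{t-1}
    y = series[1:]           # targets:    x_t
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    phi = cov / var          # slope (persistence) coefficient
    c = my - phi * mx        # intercept
    return c, phi

def forecast_ar1(series, c, phi):
    """One-step-ahead forecast from the last observed value."""
    return c + phi * series[-1]

# synthetic "price" series generated by x_t = 1 + 0.8 * x_{t-1}
series = [10.0]
for _ in range(60):
    series.append(1.0 + 0.8 * series[-1])

c, phi = fit_ar1(series)
next_val = forecast_ar1(series, c, phi)
```

Because the synthetic series follows the AR(1) recursion exactly, the OLS fit recovers the true coefficients (c = 1.0, phi = 0.8) up to floating-point error.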
<p>Owing to the shortcomings of statistical models, <bold>
<italic>artificial intelligence methods</italic>
</bold> (<bold>AI</bold>) have gradually become widely used in time series prediction; these methods are suitable for nonlinear prediction without any assumption about the data distribution (<xref ref-type="bibr" rid="B30">Wang et&#x20;al., 2020</xref>). Increasing evidence shows that AI methods perform better on nonlinear time series than other models (<xref ref-type="bibr" rid="B38">Zhang et&#x20;al., 2017</xref>). AI methods, including back-propagation (BP) neural networks, multilayer perceptron (MLP) neural networks, least squares support vector regression (LSSVR), and hybrid prediction methods combined with optimization algorithms, have also been widely used in carbon price forecasting. <xref ref-type="bibr" rid="B2">Atsalakis (2016)</xref> combined a hybrid fuzzy controller called PATSOS with an adaptive neuro-fuzzy inference system (ANFIS); the research shows that this method can produce accurate and timely prediction results. <xref ref-type="bibr" rid="B11">Fan et&#x20;al. (2015)</xref> studied the chaotic characteristics of the EU ETS, used an MLP neural network model to predict carbon prices, and found that the forecasting accuracy of the model was significantly improved. <xref ref-type="bibr" rid="B28">Tian and Hao (2020)</xref> used phase space reconstruction and an ELM optimized by the multiobjective grasshopper optimization algorithm (MOGOA-ELM) to predict the trend of the EU ETS and China&#x2019;s carbon prices. The empirical results show that this method can be used effectively to predict carbon prices.</p>
<p>In recent years, with the development of deep learning (DL) in image detection, audio detection, and other fields, DL has attracted the attention of many scholars (<xref ref-type="bibr" rid="B20">Liu et&#x20;al., 2021</xref>). The memory-cell structure of deep learning models allows them to retain past historical information, which gives them significant advantages when processing time series with long intervals and delays (<xref ref-type="bibr" rid="B39">Zhang B et&#x20;al., 2018</xref>). <xref ref-type="bibr" rid="B23">Niu et&#x20;al. (2020)</xref> combined LSTM and GRU units to establish a deep learning recurrent forecasting model for multiple financial datasets. <xref ref-type="bibr" rid="B19">Liu et&#x20;al. (2020)</xref> proposed a new short-term wind speed prediction model based on an error correction strategy and the LSTM algorithm; the experimental results demonstrated that its performance is better than that of comparable models. However, the application of deep learning frameworks to carbon price prediction is still very limited.</p>
<p>In addition to the selection and optimization of prediction methods, data preprocessing technology also plays an indispensable role in the prediction accuracy of the prediction model (<xref ref-type="bibr" rid="B31">Wang et&#x20;al., 2021</xref>). <bold>
<italic>Decomposition and integration methods</italic>
</bold>, including empirical mode decomposition (EMD), singular spectrum analysis (SSA), and variational mode decomposition (VMD), are widely used in time series data preprocessing. These methods decompose and reconstruct the original time series to extract its effective features. Decomposing the original series into a set of simpler patterns that exhibit strong regularity can significantly improve prediction accuracy. <xref ref-type="bibr" rid="B36">Wei et&#x20;al. (2018)</xref> used the wavelet transform and a kernel ELM to predict carbon prices. <xref ref-type="bibr" rid="B42">Zhu J et&#x20;al. (2018)</xref> explored an efficient prediction model based on VMD mode reconstruction and optimal combination, thereby greatly improving the prediction accuracy of carbon prices. However, these decomposition methods still have shortcomings. In wavelet decomposition and VMD, the wavelet basis function and the decomposition level must be specified in advance. EMD does not require the number of decomposition levels to be set, but it cannot overcome mode aliasing and insufficient noise separation (<xref ref-type="bibr" rid="B18">Jin et&#x20;al., 2020</xref>). Therefore, extracting the nonlinear features of carbon prices with an appropriate data preprocessing method is very important.</p>
<p>A single prediction model cannot achieve good performance on every dataset, so researchers have turned to combination forecasting models, which weight and combine different hybrid or single forecasting methods. Many experimental studies have found that combined prediction methods produce better forecasts than methods based on a single-prediction model. The advantage of combination models is that different time series may carry different information sets, information features, and modeling structures, and a combination of prediction methods can maintain good performance when such structural changes occur. Although combination forecasting is very common for general time series, its use for carbon price forecasting is still in its infancy.</p>
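The weighting idea behind combination forecasting can be sketched as follows. The paper optimizes the combination weights with a multiobjective algorithm (MODA); this simplified stand-in weights each submodel inversely to its validation mean squared error, and the submodel outputs shown are hypothetical toy data.

```python
# Sketch: combine several submodel forecasts with weights inversely
# proportional to each model's validation MSE. This is a simplified
# stand-in for the paper's MODA-optimized weighting; all data are toy.
def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def inverse_mse_weights(y_true, submodel_preds):
    """Weights proportional to 1/MSE, normalized to sum to 1."""
    inv = [1.0 / mse(y_true, p) for p in submodel_preds]
    total = sum(inv)
    return [w / total for w in inv]

def combine(submodel_preds, weights):
    """Pointwise weighted sum of the submodel forecast vectors."""
    return [sum(w * p[i] for w, p in zip(weights, submodel_preds))
            for i in range(len(submodel_preds[0]))]

# toy validation targets and three imperfect submodel forecasts
y_val = [10.0, 11.0, 12.0, 13.0]
preds = [
    [10.1, 11.2, 11.9, 13.1],   # e.g., a GBiLSTM-like output
    [ 9.8, 10.7, 12.3, 12.8],   # e.g., a CNN-like output
    [10.5, 11.5, 12.5, 13.5],   # e.g., an ELM-like output
]
w = inverse_mse_weights(y_val, preds)
ensemble = combine(preds, w)
```

Because the combined forecast is a convex combination of the submodels, its MSE can never exceed that of the worst submodel, which is one reason ensembles are more stable than any single model.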
<p>The above analysis indicates that most research on carbon prices is driven by single or hybrid forecasting models, emphasizing deterministic prediction strategies while largely ignoring the uncertainty analysis of carbon prices. Regardless of the type of prediction model used, each prediction carries inherent and irreducible uncertainty that greatly increases the possibility of miscalculation (<xref ref-type="bibr" rid="B10">Du et&#x20;al., 2020</xref>). Therefore, quantifying the uncertainty of carbon price prediction plays an indispensable role in exploring the complexity of the carbon price market and strengthening effective risk management in this market.</p>
</sec>
</sec>
<sec id="s1-3">
<title>Objectives and Contributions</title>
<p>To supplement the existing research on carbon price prediction, an ensemble prediction system (EPS) based on the ICEEMDAN data preprocessing method, deep learning (DL) algorithms, the extreme learning machine (ELM), and the multiobjective dragonfly optimization algorithm (MODA) is developed and used to analyze the certainty and uncertainty of carbon prices. Specifically, ICEEMDAN is employed to decompose and reconstruct the original carbon price data and extract the effective features of the data, and the results are passed to the submodels of the EPS as training data (the submodels are ICEEMDAN-GBiLSTM, ICEEMDAN-CNN, and ICEEMDAN-ELM). Using the MODA, the final carbon price point forecast is then obtained through a weighted combination of the submodel predictions. For interval prediction, the upper and lower bounds of the prediction interval are constructed from the EPS prediction values and the best-fit error distribution function, namely, the T location-scale (TLS) distribution. The main innovations presented in this study are as follows:<list list-type="simple">
<list-item>
<p>1) <bold>
<italic>An effective ensemble prediction system of carbon prices is developed.</italic>
</bold> Two hybrid prediction models based on a deep learning algorithm (ICEEMDAN-GBiLSTM and ICEEMDAN-CNN) and a feedforward neural network (ICEEMDAN-ELM) are combined to <bold>
<italic>overcome the inherent defects of a single hybrid prediction model.</italic>
</bold>
</p>
</list-item>
<list-item>
<p>2) A deep learning recurrent neural network, GBiLSTM, is first proposed as a prediction submodel of the EPS. <bold>
<italic>GBiLSTM combines two recursive deep learning algorithms; it can effectively deal with time series with long memory and increase the accuracy of carbon price forecasting.</italic>
</bold>
</p>
</list-item>
<list-item>
<p>3) The MODA is employed as an effective method of weighting the ensemble prediction system. It optimizes the weight coefficient of <bold>
<italic>the ensemble model from the perspective of prediction accuracy and prediction stability,</italic>
</bold> thereby overcoming the obvious defect that single objective optimization can only select one objective function.</p>
</list-item>
<list-item>
<p>4) <bold>
<italic>To overcome the nonlinearity and strong volatility of the original carbon price data, an effective time series preprocessing technique is developed.</italic>
</bold> ICEEMDAN sequence decomposition technology is employed to decompose and reconstruct the original carbon price data, extract the salient features of the data, and improve the prediction accuracy of the&#x20;EPS.</p>
</list-item>
<list-item>
<p>5) <bold>
<italic>By fitting the best error distribution, the uncertainty of the carbon price is mined.</italic>
</bold> In the past, the error distribution of a prediction was usually assumed to be a Gaussian distribution. In this study, <bold>
<italic>five types of parameter distribution functions are used to fit the prediction error</italic>
</bold>, the best error distribution function is found, and the ranges of carbon price interval prediction are constructed.</p>
</list-item>
</list>
</p>
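The interval-construction step described above can be sketched as follows: fit a distribution to the point-forecast errors and shift the point forecasts by its quantiles. For brevity this sketch fits a Gaussian with the standard library's NormalDist; the paper instead selects the best of five candidate distributions (a T location-scale in its experiments), and all numbers below are hypothetical.

```python
# Sketch (simplified): build a prediction interval from point-forecast
# errors. The paper fits five candidate distributions and picks the
# best (a T location-scale); a Gaussian via NormalDist is used here
# purely for illustration, and the data are toy values.
from statistics import NormalDist, mean, stdev

def prediction_interval(point_forecasts, errors, alpha=0.1):
    """Return (lower, upper) bounds at nominal coverage 1 - alpha."""
    dist = NormalDist(mu=mean(errors), sigma=stdev(errors))
    lo_q = dist.inv_cdf(alpha / 2)        # lower error quantile
    hi_q = dist.inv_cdf(1 - alpha / 2)    # upper error quantile
    lower = [f + lo_q for f in point_forecasts]
    upper = [f + hi_q for f in point_forecasts]
    return lower, upper

# toy historical errors of the ensemble point forecast
errs = [-0.4, 0.1, 0.3, -0.2, 0.0, 0.5, -0.3, 0.2]
fc = [25.0, 25.4, 24.8]                   # toy point forecasts
lower, upper = prediction_interval(fc, errs, alpha=0.1)
```

Swapping in a heavier-tailed fitted distribution only changes the two quantiles `lo_q` and `hi_q`; the interval construction itself is identical.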
<p>The remainder of the study is organized as follows. Section <italic>Model Theory and Related Work</italic> and Section <italic>Ensemble Prediction System and its Interval Forecasting Framework</italic> introduce the theoretical methods and the framework of the proposed EPS. Section <italic>Experiment and Analysis</italic> describes the experimental data and the prediction performance evaluation indices and then simulates the point and interval prediction of carbon prices. Section <italic>Discussion</italic> provides a further discussion of the EPS, and Section <italic>Conclusion</italic> summarizes the study.</p>
</sec>
</sec>
<sec id="s2">
<title>Model Theory and Related Work</title>
<p>This section introduces the corresponding theories and describes the functions of the data preprocessing module, the combination prediction module, and the uncertainty mining module of the EPS.</p>
<sec id="s2-1">
<title>Data Preprocessing</title>
<p>The data processing module includes the data feature extraction method, which is based on improved complex ensemble empirical mode decomposition with adaptive noise (ICEEMDAN), and the data feature selection method, which is based on the partial autocorrelation function (PACF).</p>
<sec id="s2-1-1">
<title>Data Feature Extraction</title>
<p>To mitigate the mode aliasing of the traditional noise reduction method EMD and the slight residual noise left by CEEMDAN, the improved CEEMDAN with adaptive noise (ICEEMDAN) technique was developed. CEEMDAN adds Gaussian white noise during the decomposition process, whereas ICEEMDAN adds a special type of white noise, <inline-formula id="inf1">
<mml:math id="m1">
<mml:mrow>
<mml:msub>
<mml:mi>E</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
</mml:mrow>
<mml:msup>
<mml:mi>&#x3c9;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>, that is, the k-th IMF component of the Gaussian white noise (<xref ref-type="bibr" rid="B29">Torres et&#x20;al., 2011</xref>; <xref ref-type="bibr" rid="B9">Colominas et&#x20;al., 2014</xref>). The local mean value of the added noise is calculated for each modal component, and the IMF is defined as the difference between the residual signal and the local mean.<list list-type="simple">
<list-item>
<p>1) The definition operator <inline-formula id="inf2">
<mml:math id="m2">
<mml:mrow>
<mml:msub>
<mml:mi>E</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mo>&#xb7;</mml:mo>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> represents the k-th IMF after EMD decomposition, and <inline-formula id="inf3">
<mml:math id="m3">
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mo>&#xb7;</mml:mo>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> represents the local mean value of the signal. There is <inline-formula id="inf4">
<mml:math id="m4">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">E</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="bold-italic">M</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>. The operator &#x2329;&#xb7;&#x232a; denotes averaging over the realizations of the added noise, and <italic>x</italic> represents the original data of the study; the local mean is then calculated by EMD:</p>
</list-item>
</list>
<disp-formula id="e1">
<mml:math id="m5">
<mml:mrow>
<mml:msup>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold-italic">E</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>&#x3c9;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(1)</label>
</disp-formula>where <inline-formula id="inf5">
<mml:math id="m6">
<mml:mrow>
<mml:msup>
<mml:mi>&#x3c9;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> is the i-th realization of added white noise and <inline-formula id="inf6">
<mml:math id="m7">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the standard deviation of the noise. The first residual component <inline-formula id="inf7">
<mml:math id="m8">
<mml:mrow>
<mml:msub>
<mml:mtext>r</mml:mtext>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo>&#x2329;</mml:mo>
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msup>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msup>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>&#x232a;</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is obtained by taking the local mean value. The first intrinsic mode function <italic>IMF</italic>
<sub>
<italic>1</italic>
</sub> value <inline-formula id="inf8">
<mml:math id="m9">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">d</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mtext>x</mml:mtext>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>r</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is calculated.<list list-type="simple">
<list-item>
<p>2) The value <inline-formula id="inf9">
<mml:math id="m10">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>d</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> of the second mode component <italic>IMF</italic>
<sub>
<italic>2</italic>
</sub> is calculated:</p>
</list-item>
</list>
<disp-formula id="e2">
<mml:math id="m11">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">d</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi>r</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>r</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi>r</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mrow>
<mml:mo>&#x2329;</mml:mo>
<mml:mrow>
<mml:mi mathvariant="bold-italic">M</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
</mml:mrow>
<mml:msub>
<mml:mi>r</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold-italic">E</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
</mml:mrow>
<mml:msup>
<mml:mi>&#x3c9;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x232a;</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(2)</label>
</disp-formula>
<list list-type="simple">
<list-item>
<p>3) The k-th residual is calculated:</p>
</list-item>
</list>
<disp-formula id="e3">
<mml:math id="m12">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">r</mml:mi>
<mml:mi mathvariant="bold-italic">k</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo>&#x2329;</mml:mo>
<mml:mrow>
<mml:mi mathvariant="bold-italic">M</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>r</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold-italic">E</mml:mi>
<mml:mi mathvariant="bold-italic">k</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>&#x3c9;</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>&#x232a;</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2,3</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x22ef;</mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
<label>(3)</label>
</disp-formula>
</p>
<p>4) The value of the k-th mode component <italic>IMF</italic>
<sub>
<italic>k:</italic>
</sub>
<inline-formula id="inf10">
<mml:math id="m13">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">d</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi mathvariant="bold">k</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi>r</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>r</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, is calculated, and <xref ref-type="disp-formula" rid="e3">Eq. 3</xref> is repeated until the residual satisfies the iteration termination condition, which is <bold>
<italic>Cauchy</italic>
</bold> convergence; iteration stops when the relative change between two adjacent <bold>
<italic>IMF</italic>
</bold> components <inline-formula id="inf11">
<mml:math id="m14">
<mml:mrow>
<mml:mi>&#x3c7;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mrow>
<mml:mo>&#x2016;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mtext>d</mml:mtext>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mtext>d</mml:mtext>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>&#x2016;</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>/</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mrow>
<mml:mo>&#x2016;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mtext>d</mml:mtext>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>&#x2016;</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is less than a specified&#x20;value.</p>
<p>In this study, ICEEMDAN is used to decompose the original carbon price data into several intrinsic mode functions (IMFs). The IMF with the highest frequency is removed, and the remaining IMFs are recombined. Through this decomposition and reconstruction, the strong volatility and randomness of the original data are mitigated, the data features are effectively extracted, and the prediction accuracy of the model is increased.</p>
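The decompose-remove-recombine step can be sketched as follows. This is a minimal illustration assuming the IMFs have already been produced by some ICEEMDAN implementation; synthetic components stand in for real IMFs here.

```python
import numpy as np

def recombine_without_hf(imfs):
    """Drop the highest-frequency IMF (by convention the first mode
    returned by EMD-style decompositions) and sum the remaining modes."""
    imfs = np.asarray(imfs)
    return imfs[1:].sum(axis=0)

# Synthetic stand-ins for IMFs: a high-frequency noise-like mode,
# a mid-frequency oscillation, and a slow trend (residual).
t = np.linspace(0.0, 1.0, 500)
imf_hf = 0.1 * np.sin(2 * np.pi * 80 * t)
imf_mid = np.sin(2 * np.pi * 5 * t)
trend = 2.0 + t

denoised = recombine_without_hf([imf_hf, imf_mid, trend])
```

In practice, the carbon price series would first be decomposed by an ICEEMDAN routine; the recombined series then feeds the prediction models.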
</sec>
<sec id="s2-1-2">
<title>Data Feature Selection</title>
<p>The partial autocorrelation function (PACF) is an effective method for distinguishing the structural features of sequences (<xref ref-type="bibr" rid="B14">Jiang et&#x20;al., 2020</xref>). It can be used to calculate the partial correlation between the time series and its lag term. If <inline-formula id="inf12">
<mml:math id="m15">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3a6;</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is employed to represent the <italic>j</italic>-th regression coefficient in the k-order autoregressive equation, the model can be expressed as follows:<disp-formula id="e4">
<mml:math id="m16">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mtext>t</mml:mtext>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3a6;</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3a6;</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mo>&#x22c5;</mml:mo>
<mml:mo>&#x22c5;</mml:mo>
<mml:mo>&#x22c5;</mml:mo>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3a6;</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
<label>(4)</label>
</disp-formula>where <italic>x</italic>
<sub>
<italic>t</italic>
</sub> is the time series and <inline-formula id="inf13">
<mml:math id="m17">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3a6;</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the last coefficient. If <inline-formula id="inf14">
<mml:math id="m18">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3a6;</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is defined as a function of lag time <italic>k</italic>, then <inline-formula id="inf15">
<mml:math id="m19">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3a6;</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, <italic>k &#x3d; 1,2...</italic>, is called the partial autocorrelation function.</p>
<p>In this study, PACF is used to find the lag terms that have the strongest correlation with the time series; these are then used as the input characteristics of the forecast&#x20;model.</p>
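The PACF values can be computed from the sample autocorrelations via the Durbin-Levinson recursion. The sketch below is illustrative: `select_lags` and the 1.96/&#x221a;n significance band are common conventions, not necessarily the authors' exact procedure.

```python
import numpy as np

def pacf(x, nlags):
    """Partial autocorrelations phi_kk via the Durbin-Levinson recursion."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    acf = np.array([x[: n - k] @ x[k:] / (x @ x) for k in range(nlags + 1)])
    vals, phi = [1.0], np.zeros(0)
    for k in range(1, nlags + 1):
        if k == 1:
            phi_kk = acf[1]
            phi = np.array([phi_kk])
        else:
            phi_kk = (acf[k] - phi @ acf[k - 1:0:-1]) / (1.0 - phi @ acf[1:k])
            phi = np.append(phi - phi_kk * phi[::-1], phi_kk)
        vals.append(phi_kk)
    return np.array(vals)

def select_lags(x, nlags=10):
    """Keep lags whose PACF exceeds an approximate 95% significance band."""
    vals, thresh = pacf(x, nlags), 1.96 / np.sqrt(len(x))
    return [k for k in range(1, nlags + 1) if abs(vals[k]) > thresh]

# AR(1) example: only lag 1 carries a strong partial correlation.
rng = np.random.default_rng(0)
x = np.zeros(2000)
for i in range(1, 2000):
    x[i] = 0.7 * x[i - 1] + rng.standard_normal()
```

The selected lags would then serve as the input features of the forecast model.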
</sec>
</sec>
<sec id="s2-2">
<title>Ensemble Prediction Module</title>
<p>The prediction value of the ensemble prediction system is obtained by combining the prediction results of the different single prediction components through a weighting strategy. In this section, the three submodels of the proposed EPS and the MODA weighting optimization strategy are introduced.</p>
<sec id="s2-2-1">
<title>Convolutional Neural Network</title>
<p>
A CNN is a partially connected DL network structure composed of two special neural layers: a convolution layer and a downsampling layer (<xref ref-type="bibr" rid="B35">Wang, 2020</xref>). The neurons in each layer of the CNN are locally connected, enabling hierarchical feature extraction and transformation of the input. Neurons sharing the same connection weight are connected to different regions of the preceding layer; in this way, a translation-invariant neural network is obtained (<xref ref-type="bibr" rid="B34">Wang, 2018</xref>).<list list-type="simple">
<list-item>
<p>1) <bold>
<italic>The training of the Convolution Layer.</italic>
</bold> The CNN is connected to the local region of the feature surface by a convolution kernel. The output characteristic surface size of each convolution layer must meet the following requirement:</p>
</list-item>
</list>
<disp-formula id="e5">
<mml:math id="m20">
<mml:mrow>
<mml:mi>o</mml:mi>
<mml:mi mathvariant="bold-italic">MapN</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mi mathvariant="bold-italic">Map</mml:mi>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>C</mml:mi>
<mml:mi mathvariant="bold-italic">Window</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi mathvariant="bold-italic">Interval</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(5)</label>
</disp-formula>In <xref ref-type="disp-formula" rid="e5">Eq. 5</xref>, <bold>
<italic>oMapN</italic>
</bold> is the number of output feature surfaces of each convolution layer, <bold>
<italic>iMapN</italic>
</bold> is the number of input feature surfaces, <bold>
<italic>CWindow</italic>
</bold> is the size of the convolution kernel, and <bold>
<italic>CInterval</italic>
</bold> is the sliding step size of the convolution kernel.</p>
<p>In general, to ensure that the division in the above formula yields an integer, the number of parameters of each convolution layer of the CNN must satisfy the following condition:<disp-formula id="e6">
<mml:math id="m21">
<mml:mrow>
<mml:mtext>C</mml:mtext>
<mml:mi mathvariant="bold-italic">Params</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mi mathvariant="bold-italic">Map</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>C</mml:mi>
<mml:mi mathvariant="bold-italic">Window</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>o</mml:mi>
<mml:mi mathvariant="bold-italic">Map</mml:mi>
</mml:mrow>
</mml:math>
<label>(6)</label>
</disp-formula>where <bold>
<italic>CParams</italic>
</bold> is the number of parameters, <bold>
<italic>iMap</italic>
</bold> is the input feature surface, and <bold>
<italic>oMap</italic>
</bold> is the output feature surface.</p>
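Eqs 5-6 reduce to simple arithmetic. The following sketch uses illustrative function names for the 1-D case:

```python
def conv_output_size(i_map_n, c_window, c_interval):
    """Eq. 5: output feature-surface size for a kernel of size c_window
    sliding with step c_interval over an input surface of size i_map_n."""
    assert (i_map_n - c_window) % c_interval == 0, "must divide evenly"
    return (i_map_n - c_window) // c_interval + 1

def conv_param_count(i_map, c_window, o_map):
    """Eq. 6: (weights per input surface + 1 bias) per output surface."""
    return (i_map * c_window + 1) * o_map
```

For example, a kernel of size 5 with stride 1 over an input of size 32 yields an output surface of size 28.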
<p>The output value <inline-formula id="inf16">
<mml:math id="m22">
<mml:mrow>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mi>k</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>o</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>can be obtained by the convolution layer; the formula is as follows:<disp-formula id="e7">
<mml:math id="m23">
<mml:mrow>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mrow>
<mml:mi mathvariant="bold-italic">nk</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold-italic">out</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">f</mml:mi>
<mml:mrow>
<mml:mi mathvariant="bold-italic">cov</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi mathvariant="bold-italic">h</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold-italic">in</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#xd7;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">w</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="bold-italic">h</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold-italic">in</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#xd7;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">w</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mtext>h</mml:mtext>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold-italic">in</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#xd7;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">w</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mo>&#x22ef;</mml:mo>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">b</mml:mi>
<mml:mi mathvariant="bold-italic">n</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(7)</label>
</disp-formula>where <inline-formula id="inf17">
<mml:math id="m24">
<mml:mrow>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>k</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> is the input value, <inline-formula id="inf18">
<mml:math id="m25">
<mml:mrow>
<mml:msub>
<mml:mi>b</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the offset value of the output characteristic surface <italic>n</italic>, and<inline-formula id="inf19">
<mml:math id="m26">
<mml:mrow>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mi>cov</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mo>&#x22c5;</mml:mo>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is the excitation function. The excitation function is usually the ReLU function, and the formula for its calculation is as follows:<disp-formula id="e8">
<mml:math id="m27">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">f</mml:mi>
<mml:mrow>
<mml:mi mathvariant="bold-italic">cov</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="bold-italic">MAX</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(8)</label>
</disp-formula>
<list list-type="simple">
<list-item>
<p>2) <bold>
<italic>The output of the Pooling Layer.</italic>
</bold> The pooling layer is also composed of several feature surfaces, and the number of feature surfaces does not change. The output value of the pooling layer&#x20;is</p>
</list-item>
</list>
<disp-formula id="e9">
<mml:math id="m28">
<mml:mrow>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">t</mml:mi>
<mml:mrow>
<mml:mi mathvariant="bold-italic">nl</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold-italic">out</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="bold-italic">f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold-italic">sub</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
</mml:mrow>
<mml:mi>q</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(9)</label>
</disp-formula>where <inline-formula id="inf20">
<mml:math id="m29">
<mml:mrow>
<mml:msubsup>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> is the output value of the <italic>q</italic>-th neuron on the n-th input characteristic surface of the pooling layer and <inline-formula id="inf21">
<mml:math id="m30">
<mml:mrow>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mo>&#x22c5;</mml:mo>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is a function that takes either the maximum value or the mean value. The size <bold>
<italic>DoMapN</italic>
</bold> of each output feature surface of the pooling layer is<disp-formula id="e10">
<mml:math id="m31">
<mml:mrow>
<mml:mi mathvariant="bold-italic">DoMapN</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>o</mml:mi>
<mml:mi mathvariant="bold-italic">Map</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>D</mml:mi>
<mml:mi mathvariant="bold-italic">Window</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(10)</label>
</disp-formula>
<list list-type="simple">
<list-item>
<p>3) Fully connected layer output. In the CNN structure, one or more fully connected layers follow the stacked convolution and pooling layers. The ReLU function is also used as the excitation function of the fully connected layer.</p>
</list-item>
</list>
</p>
<p>In this study, CNN, as a component of the combined forecasting system, forecasts the carbon&#x20;price.</p>
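The convolution (Eqs 7-8) and pooling (Eqs 9-10) steps can be sketched for the 1-D case as follows. This is an illustrative numpy implementation with placeholder weights, not the authors' exact network:

```python
import numpy as np

def relu(x):
    """Eq. 8: f_cov(x) = MAX(0, x)."""
    return np.maximum(0.0, x)

def conv1d(x, w, b, stride=1):
    """Eq. 7 for a single 1-D feature surface: windowed dot products
    plus a bias, passed through the ReLU excitation function."""
    k = len(w)
    n_out = (len(x) - k) // stride + 1
    return relu(np.array([x[i * stride:i * stride + k] @ w + b
                          for i in range(n_out)]))

def max_pool1d(x, window):
    """Eqs. 9-10 with f_sub = max: non-overlapping windows shrink the
    feature surface by a factor of `window`."""
    return np.array([x[i:i + window].max()
                     for i in range(0, len(x) - window + 1, window)])

x = np.array([1.0, 2.0, -1.0, 3.0, 0.5, -2.0, 1.5, 4.0])
feat = conv1d(x, w=np.array([0.5, -0.5, 1.0]), b=0.1)  # size (8-3)/1+1 = 6
pooled = max_pool1d(feat, window=2)                    # size 6/2 = 3
output = np.array([1.0, -1.0, 0.5]) @ pooled           # fully connected
```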
</sec>
<sec id="s2-2-2">
<title>Deep Learning Recurrent Network Structure (GBiLSTM)</title>
<p>In this study, we developed a deep learning recurrent network structure, which is a hybrid of BiLSTM and GRU. The structure diagram is shown in <xref ref-type="fig" rid="F1">Figure&#x20;1</xref>.</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>Flowchart of the proposed bidirectional long short-term memory-gated recurrent unit (GBiLSTM) model.</p>
</caption>
<graphic xlink:href="fenvs-09-740093-g001.tif"/>
</fig>
<sec id="s2-2-2-1">
<title>Bidirectional Long Short Term Memory Neural Network</title>
<p>BiLSTM is an improved network of LSTM. LSTM cannot capture information from back to front; however, BiLSTM can solve this problem. When bidirectional sequence information is captured, the time series can be predicted more accurately (<xref ref-type="bibr" rid="B12">Hochreiter and Schmidhuber, 1997</xref>).<list list-type="simple">
<list-item>
<p>1) The LSTM mechanism consists of three memory gates: an input gate (<italic>i</italic>
<sub>
<italic>t</italic>
</sub>), a forgetting gate (<italic>f</italic>
<sub>
<italic>t</italic>
</sub>), and an output gate (<italic>O</italic>
<sub>
<italic>t</italic>
</sub>). The specific expression is as follows:</p>
</list-item>
</list>
<disp-formula id="e11">
<mml:math id="m32">
<mml:mrow>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">i</mml:mi>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold-italic">H</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo>-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">B</mml:mi>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">f</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mi>f</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mi>f</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold-italic">H</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo>-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">B</mml:mi>
<mml:mi mathvariant="italic">f</mml:mi>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">o</mml:mi>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mi>o</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mi>o</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold-italic">H</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mo>-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">B</mml:mi>
<mml:mtext>o</mml:mtext>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">c</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">f</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x2297;</mml:mo>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">i</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x2297;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">c</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">H</mml:mi>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x2297;</mml:mo>
<mml:mi mathvariant="bold-italic">tanh</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(11)</label>
</disp-formula>where <italic>x</italic>
<sub>
<italic>t</italic>
</sub>, <inline-formula id="inf22">
<mml:math id="m33">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula>, and <inline-formula id="inf23">
<mml:math id="m34">
<mml:mrow>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> represent the input sample, the <bold>
<italic>sigmoid</italic>
</bold> activation function, and the storage unit of time <italic>t</italic>, respectively, and (<italic>B</italic>
<sub>
<italic>f</italic>
</sub>, <italic>B</italic>
<sub>
<italic>i</italic>
</sub>, <italic>B</italic>
<sub>
<italic>o</italic>
</sub>) and (<inline-formula id="inf24">
<mml:math id="m35">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mi>f</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf25">
<mml:math id="m36">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf26">
<mml:math id="m37">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mi>o</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>) represent the deviation and the weight matrix, respectively, of each gate. The symbol <inline-formula id="inf27">
<mml:math id="m38">
<mml:mo>&#x2297;</mml:mo>
</mml:math>
</inline-formula> represents the corresponding multiplication of elements. First, <inline-formula id="inf28">
<mml:math id="m39">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">H</mml:mi>
<mml:mrow>
<mml:mtext>t</mml:mtext>
<mml:mo>-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf29">
<mml:math id="m40">
<mml:mrow>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf30">
<mml:math id="m41">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mtext>t</mml:mtext>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> transmit the input information to the LSTM unit. The LSTM gates then interact with <inline-formula id="inf31">
<mml:math id="m42">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mtext>t</mml:mtext>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, after which a new cell state <inline-formula id="inf32">
<mml:math id="m43">
<mml:mrow>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is established. In this stage, <inline-formula id="inf33">
<mml:math id="m44">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">f</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> determines which information needs to be stored or deleted and then updates the cell status.<list list-type="simple">
<list-item>
<p>2) Because BiLSTM transmits time series data to LSTM from both the forward and backward directions, it has two output layers: a forward layer <inline-formula id="inf34">
<mml:math id="m45">
<mml:mrow>
<mml:msubsup>
<mml:mi>H</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>f</mml:mi>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:msubsup>
<mml:mi>o</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>f</mml:mi>
</mml:msubsup>
<mml:mo>&#x2297;</mml:mo>
<mml:mi mathvariant="bold-italic">tanh</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mi>c</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>f</mml:mi>
</mml:msubsup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and a backward layer <inline-formula id="inf35">
<mml:math id="m46">
<mml:mrow>
<mml:msubsup>
<mml:mi>H</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>b</mml:mi>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:msubsup>
<mml:mi>o</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>b</mml:mi>
</mml:msubsup>
<mml:mo>&#x2297;</mml:mo>
<mml:mi mathvariant="bold-italic">tanh</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mi>c</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>b</mml:mi>
</mml:msubsup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
</list-item>
<list-item>
<p>3) The final predicted output value <inline-formula id="inf36">
<mml:math id="m47">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is obtained by integrating the forward layer and the backward layer; in <xref ref-type="disp-formula" rid="e12">Eq. 12</xref>, <inline-formula id="inf37">
<mml:math id="m48">
<mml:mi>&#x3b1;</mml:mi>
</mml:math>
</inline-formula>and <inline-formula id="inf38">
<mml:math id="m49">
<mml:mi>&#x3b2;</mml:mi>
</mml:math>
</inline-formula>are numerical factors that satisfy the equation <inline-formula id="inf39">
<mml:math id="m50">
<mml:mrow>
<mml:mi>&#x3b1;</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3b2;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> (<xref ref-type="bibr" rid="B24">Shi et&#x20;al., 2015</xref>).</p>
</list-item>
</list>
<disp-formula id="e12">
<mml:math id="m51">
<mml:mrow>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">H</mml:mi>
<mml:mtext>t</mml:mtext>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>&#x3b1;</mml:mi>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">H</mml:mi>
<mml:mtext>t</mml:mtext>
<mml:mtext>f</mml:mtext>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3b2;</mml:mi>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">H</mml:mi>
<mml:mtext>t</mml:mtext>
<mml:mtext>b</mml:mtext>
</mml:msubsup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">H</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(12)</label>
</disp-formula>
</p>
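The merge step in Eq. 12 is a convex combination of the two directional hidden states followed by the sigmoid. A minimal sketch, with placeholder hidden-state values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bilstm_merge(h_forward, h_backward, alpha=0.5):
    """Eq. 12: H_t = alpha*H_t^f + beta*H_t^b with alpha + beta = 1,
    followed by y_hat_t = sigma(H_t)."""
    beta = 1.0 - alpha
    return sigmoid(alpha * h_forward + beta * h_backward)

h_f = np.array([0.2, -0.4, 1.1])   # placeholder forward hidden state
h_b = np.array([0.6, 0.0, -0.3])   # placeholder backward hidden state
y_hat = bilstm_merge(h_f, h_b, alpha=0.6)
```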
</sec>
<sec id="s2-2-2-2">
<title>Gated Recurrent Unit</title>
<p>GRU is an effective variant of LSTM; its structure is simpler than that of LSTM, and it captures the nonlinear relationships in sequence data well, thereby effectively alleviating the vanishing-gradient problem of traditional RNNs (<xref ref-type="bibr" rid="B8">Chung et&#x20;al., 2014</xref>).<list list-type="simple">
<list-item>
<p>1) The GRU model has two gating units: an update gate <inline-formula id="inf40">
<mml:math id="m52">
<mml:mrow>
<mml:msub>
<mml:mtext>z</mml:mtext>
<mml:mtext>t</mml:mtext>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and a reset gate <inline-formula id="inf41">
<mml:math id="m53">
<mml:mrow>
<mml:msub>
<mml:mtext>r</mml:mtext>
<mml:mtext>t</mml:mtext>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>. The update gate balances the historical and current information: the smaller its value is, the more the model output concentrates on the information of the previous hidden state <inline-formula id="inf42">
<mml:math id="m54">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">h</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula></p>
</list-item>
</list>
<disp-formula id="e13">
<mml:math id="m55">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">Z</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">W</mml:mi>
<mml:mi>z</mml:mi>
</mml:msub>
<mml:mo>.</mml:mo>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">h</mml:mi>
<mml:mrow>
<mml:mi mathvariant="bold-italic">t</mml:mi>
<mml:mo>-</mml:mo>
<mml:mn>1</mml:mn>
<mml:mtext>,</mml:mtext>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi mathvariant="bold-italic">t</mml:mi>
</mml:msub>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(13)</label>
</disp-formula>
<disp-formula id="e14">
<mml:math id="m56">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">r</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">W</mml:mi>
<mml:mi>r</mml:mi>
</mml:msub>
<mml:mo>.</mml:mo>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">h</mml:mi>
<mml:mrow>
<mml:mi mathvariant="bold-italic">t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi mathvariant="bold-italic">t</mml:mi>
</mml:msub>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(14)</label>
</disp-formula>In <xref ref-type="disp-formula" rid="e13">Eqs 13</xref>, <xref ref-type="disp-formula" rid="e14">14</xref>, <inline-formula id="inf43">
<mml:math id="m57">
<mml:mtext>W</mml:mtext>
</mml:math>
</inline-formula>is the model weight, and <inline-formula id="inf44">
<mml:math id="m58">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula> is the activation function.<list list-type="simple">
<list-item>
<p>2) By resetting the gate <inline-formula id="inf45">
<mml:math id="m59">
<mml:mrow>
<mml:msub>
<mml:mi>r</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, the candidate vector <inline-formula id="inf46">
<mml:math id="m60">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">h</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="italic">tanh</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="bold-italic">W</mml:mi>
<mml:mo>.</mml:mo>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">r</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x2217;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">h</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> can be calculated. Taking the value of the update gate as the weight, the candidate vector <inline-formula id="inf47">
<mml:math id="m61">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi mathvariant="italic">h</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and the hidden state from the previous time step are combined to form the output of the GRU network at time step <italic>t</italic>, as follows:</p>
</list-item>
</list>
<disp-formula id="e15">
<mml:math id="m62">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">h</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">Z</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2217;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">h</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">Z</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x2217;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">h</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
<label>(15)</label>
</disp-formula>
<list list-type="simple">
<list-item>
<p>4) A set of training samples is input into the GRU; the final output <italic>o</italic> is then obtained by appending a fully connected layer after the GRU&#x20;layer.</p>
</list-item>
</list>
<disp-formula id="e17">
<mml:math id="m63">
<mml:mrow>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">h</mml:mi>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>n</mml:mi>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="bold-italic">GRU</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi mathvariant="italic">x</mml:mi>
<mml:mrow>
<mml:mtext>t</mml:mtext>
<mml:mo>-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mtext>,x</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mtext>t</mml:mtext>
<mml:mo>-</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mtext>,</mml:mtext>
<mml:mn>...</mml:mn>
<mml:msub>
<mml:mrow>
<mml:mtext>,</mml:mtext>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext>t</mml:mtext>
<mml:mo>-</mml:mo>
<mml:mtext>w</mml:mtext>
</mml:mrow>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mtext>o</mml:mtext>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="bold-italic">V</mml:mi>
<mml:mo>.</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">h</mml:mi>
<mml:mrow>
<mml:mi mathvariant="bold-italic">end</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(17)</label>
</disp-formula>
</p>
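The gate computations in Eqs 13-15 can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the weight matrices are random placeholders rather than trained parameters, and the names (gru_step, Wz, Wr, W) are ours.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, Wz, Wr, W):
    """One GRU time step following Eqs 13-15 (bias terms omitted for brevity)."""
    hx = np.concatenate([h_prev, x_t])
    z_t = sigmoid(Wz @ hx)                                     # update gate
    r_t = sigmoid(Wr @ hx)                                     # reset gate
    h_cand = np.tanh(W @ np.concatenate([r_t * h_prev, x_t]))  # candidate state
    return (1 - z_t) * h_prev + z_t * h_cand                   # Eq 15

rng = np.random.default_rng(0)
n_h, n_x = 4, 3
Wz, Wr, W = (rng.standard_normal((n_h, n_h + n_x)) for _ in range(3))
h = gru_step(rng.standard_normal(n_x), np.zeros(n_h), Wz, Wr, W)
print(h.shape)  # (4,)
```

Because the previous hidden state here is all zeros, the output reduces to the update gate times the candidate state, so every component lies in (-1, 1).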
<p>In this study, a deep recurrent network structure (GBiLSTM) based on BiLSTM and GRU is constructed. To reduce the fitting error, the time series is first processed by the BiLSTM layer, and its output is then passed to the GRU network. Through this two-layer deep learning structure, the carbon price data can be fitted more closely and the prediction error&#x20;reduced.</p>
</sec>
</sec>
<sec id="s2-2-3">
<title>Extreme Learning Machine</title>
<p>ELM is a type of feedforward neural network. Because the input layer weights and the hidden neuron biases are selected randomly, the output weights of the ELM can be obtained in a single analytical step. ELM has the advantages of strong network generalization ability and strong nonlinear fitting ability (G B <xref ref-type="bibr" rid="B13">Huang et&#x20;al., 2006</xref>; <xref ref-type="bibr" rid="B15">Jiang et&#x20;al., 2021a</xref>; <xref ref-type="bibr" rid="B16">Jiang et&#x20;al., 2021b</xref>).<list list-type="simple">
<list-item>
<p>1) For <italic>N</italic> distinct training samples <inline-formula id="inf48">
<mml:math id="m64">
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi mathvariant="italic">x</mml:mi>
<mml:mtext>i</mml:mtext>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mtext>,</mml:mtext>
<mml:mi mathvariant="italic">t</mml:mi>
</mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>, where <inline-formula id="inf49">
<mml:math id="m65">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="italic">x</mml:mi>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:msub>
<mml:mo>&#x2208;</mml:mo>
<mml:msup>
<mml:mi mathvariant="italic">R</mml:mi>
<mml:mi mathvariant="italic">p</mml:mi>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="italic">t</mml:mi>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:msub>
<mml:mo>&#x2208;</mml:mo>
<mml:msup>
<mml:mi mathvariant="italic">R</mml:mi>
<mml:mi mathvariant="italic">m</mml:mi>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="italic">1</mml:mi>
<mml:mtext>,</mml:mtext>
<mml:mn>2</mml:mn>
<mml:mtext>,</mml:mtext>
<mml:mn>...</mml:mn>
<mml:mtext>,</mml:mtext>
<mml:mi mathvariant="italic">N</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, an ELM with <italic>L</italic> hidden nodes and activation function <italic>f</italic>(<italic>x</italic>) can be expressed&#x20;as</p>
</list-item>
</list>
<disp-formula id="e19">
<mml:math id="m66">
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>L</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b2;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mstyle>
<mml:mi mathvariant="bold-italic">f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">w</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>.</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">b</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mtext>y</mml:mtext>
<mml:mtext>j</mml:mtext>
</mml:msub>
<mml:mtext>&#x2002;</mml:mtext>
<mml:mtext>&#x2002;</mml:mtext>
<mml:mtext>&#x2002;</mml:mtext>
<mml:mtext>&#x2002;</mml:mtext>
<mml:mtext>&#x2002;</mml:mtext>
<mml:mi mathvariant="italic">j</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mtext>,</mml:mtext>
<mml:mn>...</mml:mn>
<mml:mtext>,</mml:mtext>
<mml:mi mathvariant="italic">N</mml:mi>
</mml:mrow>
</mml:math>
<label>(19)</label>
</disp-formula>where <inline-formula id="inf50">
<mml:math id="m67">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">w</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:msub>
<mml:mi mathvariant="italic">w</mml:mi>
<mml:mrow>
<mml:mtext>i</mml:mtext>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mtext>,</mml:mtext>
<mml:mi mathvariant="italic">w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext>i</mml:mtext>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mtext>,</mml:mtext>
<mml:mn>...</mml:mn>
<mml:msub>
<mml:mrow>
<mml:mtext>,</mml:mtext>
<mml:mi mathvariant="italic">w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">in</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mtext>T</mml:mtext>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> is the weight vector connecting the <italic>i</italic>-th hidden layer node and the input nodes, <inline-formula id="inf51">
<mml:math id="m68">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="italic">&#x3b2;</mml:mi>
<mml:mtext>i</mml:mtext>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the output weight vector connecting the <italic>i</italic>-th hidden layer node to the output nodes, and <inline-formula id="inf52">
<mml:math id="m69">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="italic">y</mml:mi>
<mml:mtext>j</mml:mtext>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the output value of the <italic>j</italic>-th node. The training of the network is equivalent to approximating <italic>N</italic> training samples with zero error; that is, <inline-formula id="inf53">
<mml:math id="m70">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>w</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>b</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b2;</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> exist that make<disp-formula id="e20">
<mml:math id="m71">
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>L</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b2;</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mstyle>
<mml:mi mathvariant="bold-italic">f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">w</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>.</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>b</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mtext>,</mml:mtext>
<mml:mo>&#x2026;</mml:mo>
<mml:mtext>,</mml:mtext>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
<label>(20)</label>
</disp-formula>
<disp-formula id="e21">
<mml:math id="m72">
<mml:mrow>
<mml:mi mathvariant="bold-italic">H</mml:mi>
<mml:mi>&#x3b2;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="bold-italic">T</mml:mi>
<mml:mo>;</mml:mo>
</mml:mrow>
</mml:math>
<label>(21)</label>
</disp-formula>
<disp-formula id="e22">
<mml:math id="m73">
<mml:mrow>
<mml:mi mathvariant="bold-italic">H</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi mathvariant="bold-italic">f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">w</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">b</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mn>...</mml:mn>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi mathvariant="bold-italic">f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">w</mml:mi>
<mml:mi mathvariant="bold-italic">L</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">b</mml:mi>
<mml:mi mathvariant="bold-italic">L</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mspace width="2.2em"/>
<mml:mo>&#x22ee;</mml:mo>
</mml:mtd>
<mml:mtd>
</mml:mtd>
<mml:mtd>
<mml:mspace width="2.2em"/>
<mml:mo>&#x22ee;</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi mathvariant="bold-italic">f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">w</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi mathvariant="bold-italic">N</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">b</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mn>...</mml:mn>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi mathvariant="bold-italic">f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">w</mml:mi>
<mml:mi mathvariant="bold-italic">L</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi mathvariant="bold-italic">N</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">b</mml:mi>
<mml:mi mathvariant="bold-italic">L</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold-italic">N</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mi mathvariant="bold-italic">L</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
<label>(22)</label>
</disp-formula>In <xref ref-type="disp-formula" rid="e20">Eq. 20</xref> through <xref ref-type="disp-formula" rid="e22">Eq. 22</xref>, <inline-formula id="inf54">
<mml:math id="m74">
<mml:mrow>
<mml:mi>&#x3b2;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3b2;</mml:mi>
<mml:mn>1</mml:mn>
<mml:mi>T</mml:mi>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:mn>...</mml:mn>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mi>&#x3b2;</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>T</mml:mi>
</mml:msubsup>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>;</mml:mo>
<mml:mi>T</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mi>t</mml:mi>
<mml:mn>1</mml:mn>
<mml:mi>T</mml:mi>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mi>t</mml:mi>
<mml:mi>N</mml:mi>
<mml:mi>T</mml:mi>
</mml:msubsup>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, and the <italic>i</italic>-th column of <italic>H</italic> is the output vector of the <italic>i</italic>-th hidden layer node over the inputs <inline-formula id="inf55">
<mml:math id="m75">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mn>...</mml:mn>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>N</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>.<list list-type="simple">
<list-item>
<p>2) The input connection weight <italic>W</italic> and the hidden layer node bias <italic>b</italic> can be randomly selected at the beginning of training, and the output connection weight <inline-formula id="inf56">
<mml:math id="m76">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b2;</mml:mi>
<mml:mo stretchy="true">&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula> can then be obtained by solving the linear system in <xref ref-type="disp-formula" rid="e23">Eq.&#x20;23</xref>.</p>
</list-item>
</list>
<disp-formula id="e23">
<mml:math id="m77">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>min</mml:mi>
</mml:mrow>
<mml:mi>&#x3b2;</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>&#x2016;</mml:mo>
<mml:mrow>
<mml:mi mathvariant="bold-italic">H</mml:mi>
<mml:mi>&#x3b2;</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="bold-italic">T</mml:mi>
</mml:mrow>
<mml:mo>&#x2016;</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(23)</label>
</disp-formula>
<list list-type="simple">
<list-item>
<p>3) The solution is <inline-formula id="inf57">
<mml:math id="m78">
<mml:mrow>
<mml:mrow>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">&#x3b2;</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="bold-italic">H</mml:mi>
</mml:mrow>
<mml:mtext>&#x2020;</mml:mtext>
</mml:msup>
<mml:mi mathvariant="bold-italic">T</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>; <inline-formula id="inf58">
<mml:math id="m79">
<mml:mrow>
<mml:msup>
<mml:mi mathvariant="italic">H</mml:mi>
<mml:mtext>&#x2020;</mml:mtext>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> is the Moore-Penrose generalized inverse of the hidden layer output matrix&#x20;<italic>H</italic>.</p>
</list-item>
</list>
</p>
<p>In this study, ELM is used as the traditional neural network prediction component of the combined forecasting system to predict carbon prices.</p>
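The one-step training described above (random input weights and biases, the hidden output matrix of Eq 22, then the pseudoinverse solution of Eq 23) can be sketched as follows. This is an illustrative sketch, not the study's code: the tanh activation, the function names, and the toy regression task are our assumptions.

```python
import numpy as np

def elm_fit(X, T, L, rng=None):
    """ELM training: random hidden layer, analytic output weights (Eqs 19-23)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    W = rng.standard_normal((L, X.shape[1]))  # random input weights w_i
    b = rng.standard_normal(L)                # random hidden biases b_i
    H = np.tanh(X @ W.T + b)                  # hidden output matrix (Eq 22)
    beta = np.linalg.pinv(H) @ T              # beta_hat = H^dagger T (Eq 23)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W.T + b) @ beta

# Toy regression: fit sin(3x) on [-1, 1] with 40 hidden nodes.
X = np.linspace(-1, 1, 50).reshape(-1, 1)
T = np.sin(3 * X)
W, b, beta = elm_fit(X, T, L=40, rng=np.random.default_rng(1))
err = np.max(np.abs(elm_predict(X, W, b, beta) - T))
```

No gradient descent is involved: the only "training" is the pseudoinverse solve, which is what gives ELM its one-step speed.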
</sec>
<sec id="s2-2-4">
<title>Combination Strategy</title>
<p>It is generally believed that no single prediction model can achieve the best prediction performance for all datasets. Combining the values predicted by different models usually reduces the overall risk of incorrect model selection, and the diversity of models can help improve the final prediction results. However, simple averaging and fixed weighting methods cannot guarantee the global optimality of the results (<xref ref-type="bibr" rid="B33">Wang, Y et&#x20;al., 2018</xref>), so an adaptive, variable-weight combination strategy is needed.</p>
<p>In this study, the MODA algorithm is used to weight the three prediction components. For the weighting strategy, the weight-selection task is formulated as a linear programming (LP) problem whose loss function is minimized by MODA. These theories are introduced in detail below:</p>
<sec id="s2-2-4-1">
<title>Ensemble Method</title>
<p>Each individual component of the ensemble prediction system is assigned its own weight, and the ensemble forecast is computed as follows:<disp-formula id="e24">
<mml:math id="m80">
<mml:mrow>
<mml:mi mathvariant="bold-italic">f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3c9;</mml:mi>
<mml:mtext>j</mml:mtext>
</mml:msub>
</mml:mrow>
</mml:mstyle>
<mml:mi mathvariant="bold-italic">f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1,2</mml:mn>
<mml:mo>,</mml:mo>
<mml:mn>...</mml:mn>
</mml:mrow>
</mml:math>
<label>(24)</label>
</disp-formula>where <inline-formula id="inf59">
<mml:math id="m81">
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is the final output, <inline-formula id="inf60">
<mml:math id="m82">
<mml:mrow>
<mml:mi mathvariant="bold-italic">f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is the prediction component of the EPS, <italic>m</italic> is the number of submodels, and <inline-formula id="inf61">
<mml:math id="m83">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the weight of the component models. The experimental results demonstrate that the ensemble model can obtain ideal results when these weights are in the range of [&#x2212;2,&#x20;2].</p>
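Eq 24 amounts to a weighted sum of the component forecasts. The sketch below is illustrative only: the series and component forecasts are synthetic, and for simplicity the weights are fitted by ordinary least squares and clipped to [&#x2212;2,&#x20;2], whereas the paper obtains them with MODA.

```python
import numpy as np

def ensemble_forecast(preds, weights):
    """Eq 24: weighted sum of m component forecasts (preds has shape (m, T))."""
    return np.asarray(weights) @ np.asarray(preds)

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 4, 100))                  # hypothetical series
preds = np.stack([truth + 0.1 * rng.standard_normal(100) for _ in range(3)])

# Illustrative weight choice (the paper uses MODA instead of least squares).
w, *_ = np.linalg.lstsq(preds.T, truth, rcond=None)
w = np.clip(w, -2, 2)                                   # keep weights in [-2, 2]
combined = ensemble_forecast(preds, w)
mse = lambda y: np.mean((y - truth) ** 2)
```

Since each single model corresponds to a unit weight vector, the fitted combination can do no worse in-sample than the best individual component.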
</sec>
<sec id="s2-2-4-2">
<title>Multiobjective Dragonfly Optimization algorithm</title>
<p>The dragonfly algorithm is a population-based heuristic intelligent algorithm that is easy to understand and implement (<xref ref-type="bibr" rid="B22">Mirjalili, 2016</xref>). It is inspired by the static and dynamic swarming behaviors of dragonflies: in the static swarm, the group hunts for prey, and in the dynamic swarm, the group migrates. These two behaviors closely resemble the two important stages of heuristic optimization algorithms: exploration and exploitation. In this research, the MODA is applied to increase the accuracy and stability of the prediction system (<xref ref-type="bibr" rid="B25">Song and Li, 2017</xref>).</p>
<p>The mathematical expression methods are as follows:<list list-type="simple">
<list-item>
<p>1) Separation refers to a dragonfly avoiding collisions with adjacent individuals.</p>
</list-item>
</list>
<disp-formula id="e25">
<mml:math id="m84">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">S</mml:mi>
<mml:mi mathvariant="bold-italic">i</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi mathvariant="bold-italic">j</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi mathvariant="bold-italic">N</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mi mathvariant="bold-italic">j</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:math>
<label>(25)</label>
</disp-formula>
<list list-type="simple">
<list-item>
<p>2) Alignment means that an individual matches its movement speed to that of adjacent individuals.</p>
</list-item>
</list>
<disp-formula id="e26">
<mml:math id="m85">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">A</mml:mi>
<mml:mi mathvariant="bold-italic">i</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:msubsup>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">V</mml:mi>
<mml:mi mathvariant="bold-italic">j</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
<mml:mi mathvariant="bold-italic">N</mml:mi>
</mml:mfrac>
</mml:mrow>
</mml:math>
<label>(26)</label>
</disp-formula>
<list list-type="simple">
<list-item>
<p>3) Cohesion refers to the tendency of dragonflies to gather near the center of adjacent individuals.</p>
</list-item>
</list>
<disp-formula id="e27">
<mml:math id="m86">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">C</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:msubsup>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
<mml:mi mathvariant="bold-italic">N</mml:mi>
</mml:mfrac>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
</mml:math>
<label>(27)</label>
</disp-formula>
<list list-type="simple">
<list-item>
<p>4) Food attraction is the degree of attraction of dragonflies to&#x20;food.</p>
</list-item>
</list>
<disp-formula id="e28">
<mml:math id="m87">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">F</mml:mi>
<mml:mi mathvariant="bold-italic">i</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo>&#x2b;</mml:mo>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
</mml:math>
<label>(28)</label>
</disp-formula>
<list list-type="simple">
<list-item>
<p>5) The repulsive force of natural enemies refers to the swarm being repelled away from a natural enemy when one is encountered.</p>
</list-item>
</list>
<disp-formula id="e29">
<mml:math id="m88">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">E</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo>&#x2212;</mml:mo>
</mml:msup>
<mml:mo>&#x2b;</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
</mml:math>
<label>(29)</label>
</disp-formula>In <xref ref-type="disp-formula" rid="e25">Eq. 25</xref> through <xref ref-type="disp-formula" rid="e29">Eq. 29</xref>, <italic>X</italic> is the position of the current dragonfly individual, <inline-formula id="inf62">
<mml:math id="m89">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mi mathvariant="bold-italic">j</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> represents the position of the <italic>j</italic>-th adjacent dragonfly, <inline-formula id="inf63">
<mml:math id="m90">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">V</mml:mi>
<mml:mi mathvariant="bold-italic">j</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> represents the speed of the <italic>j</italic>-th adjacent dragonfly, <italic>N</italic> represents the number of individuals adjacent to the <italic>i</italic>-th dragonfly individual, <inline-formula id="inf64">
<mml:math id="m91">
<mml:mrow>
<mml:msup>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo>&#x2b;</mml:mo>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> indicates the location of the food source, and <inline-formula id="inf65">
<mml:math id="m92">
<mml:mrow>
<mml:msup>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo>-</mml:mo>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> indicates the position of the natural&#x20;enemy.</p>
<p>Based on the above five behaviors, the step length and the position of the next generation of dragonflies are calculated as follows:<disp-formula id="e30">
<mml:math id="m93">
<mml:mrow>
<mml:mi>&#x394;</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mrow>
<mml:mi mathvariant="bold-italic">t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi mathvariant="italic">s</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">S</mml:mi>
<mml:mi mathvariant="bold-italic">i</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mi mathvariant="italic">a</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">A</mml:mi>
<mml:mi mathvariant="bold-italic">i</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mi mathvariant="italic">c</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">C</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mi mathvariant="italic">f</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">F</mml:mi>
<mml:mi mathvariant="bold-italic">i</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mi mathvariant="italic">e</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">E</mml:mi>
<mml:mi mathvariant="bold-italic">i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3c9;</mml:mi>
<mml:mi>&#x394;</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
<label>(30)</label>
</disp-formula>
<disp-formula id="e31">
<mml:math id="m94">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x394;</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
<label>(31)</label>
</disp-formula>
</p>
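The five behaviors and the updates in Eqs 25-31 can be sketched as below. The behavior coefficients s, a, c, f, e and the inertia weight &#x3c9; are illustrative values, not those used in the paper, and the signs follow Eqs 25-29 as written.

```python
import numpy as np

def dragonfly_step(X, dX, nb_X, nb_V, food, enemy,
                   s=0.1, a=0.1, c=0.7, f=1.0, e=1.0, w=0.9):
    """One dragonfly update following Eqs 25-31 (coefficients are illustrative)."""
    S = np.sum(X - nb_X, axis=0)          # separation (Eq 25)
    A = np.mean(nb_V, axis=0)             # alignment  (Eq 26)
    C = np.mean(nb_X, axis=0) - X         # cohesion   (Eq 27)
    F = food - X                          # food attraction (Eq 28)
    E = enemy + X                         # enemy repulsion (Eq 29)
    dX_new = (s * S + a * A + c * C + f * F + e * E) + w * dX  # Eq 30
    return X + dX_new, dX_new             # Eq 31

rng = np.random.default_rng(0)
X_new, dX = dragonfly_step(rng.standard_normal(2), np.zeros(2),
                           rng.standard_normal((5, 2)),   # neighbor positions
                           rng.standard_normal((5, 2)),   # neighbor velocities
                           food=np.zeros(2), enemy=np.ones(2))
```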
<p>Adjacency between dragonflies is judged by the Euclidean distance: each dragonfly is surrounded by a neighborhood of radius <italic>r</italic>, and all individuals within that radius are considered adjacent. To speed up convergence, the radius <italic>r</italic> gradually increases during the iterative process until it finally covers the entire search space (<xref ref-type="bibr" rid="B27">Sun et&#x20;al., 2018</xref>). At the beginning of the iterations, the radius <italic>r</italic> is very small, and some individuals may have no neighbors. To enhance the search power of the algorithm in this case, a random walk replaces the step update formula, as shown below.<disp-formula id="e32">
<mml:math id="m95">
<mml:mrow>
<mml:mi mathvariant="bold-italic">Levy</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.01</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">r</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">r</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>&#x7c;</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi>&#x3b2;</mml:mi>
</mml:mfrac>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
<label>(32)</label>
</disp-formula>In <xref ref-type="disp-formula" rid="e32">Eq. 32</xref>, <inline-formula id="inf66">
<mml:math id="m96">
<mml:mrow>
<mml:msub>
<mml:mtext>r</mml:mtext>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf67">
<mml:math id="m97">
<mml:mrow>
<mml:msub>
<mml:mtext>r</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> represent random numbers in [0,&#x20;1], <inline-formula id="inf68">
<mml:math id="m98">
<mml:mi>&#x3b2;</mml:mi>
</mml:math>
</inline-formula> is a constant (here, 3/2), and <inline-formula id="inf69">
<mml:math id="m99">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula> is calculated as follows:<disp-formula id="e33">
<mml:math id="m100">
<mml:mrow>
<mml:mtext>&#x3c3;</mml:mtext>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x393;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3b2;</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi mathvariant="bold-italic">sin</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>&#x3b2;</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:mfrac>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x393;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3b2;</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:mfrac>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>&#x3b2;</mml:mi>
<mml:msup>
<mml:mn>2</mml:mn>
<mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x3b2;</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:mfrac>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi mathvariant="bold">&#x3b2;</mml:mi>
</mml:mfrac>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
<label>(33)</label>
</disp-formula>
</p>
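Eqs 32-33 translate directly into code; following the text, &#x3b2; = 3/2 and r1, r2 are uniform random numbers in [0, 1]. The function name and the use of NumPy are our choices, not the paper's.

```python
import math
import numpy as np

def levy_step(d, beta=1.5, rng=None):
    """Levy flight step of Eq 32; sigma computed from Eq 33 with beta = 3/2."""
    rng = rng if rng is not None else np.random.default_rng(0)
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    r1, r2 = rng.random(d), rng.random(d)      # uniform draws in [0, 1)
    return 0.01 * r1 * sigma / np.abs(r2) ** (1 / beta)

step = levy_step(3)   # one step for a 3-dimensional position vector
```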
<p>The corresponding position update formula can be derived as shown in the following formula:<disp-formula id="e34">
<mml:math id="m101">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mi mathvariant="italic">Levy</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="bold-italic">d</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#xd7;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
<label>(34)</label>
</disp-formula>In <xref ref-type="disp-formula" rid="e34">Eq. 34</xref>, <italic>d</italic> represents the dimension of the position vector. In MODA, the nondominated Pareto optimal solutions obtained during the optimization process are stored in and retrieved from an external archive. More importantly, to improve the distribution of solutions in the archive and maintain the diversity of the Pareto solution set, the algorithm uses a roulette method with probability <inline-formula id="inf70">
<mml:math id="m102">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">P</mml:mi>
<mml:mi mathvariant="bold-italic">i</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="bold-italic">c</mml:mi>
<mml:mo>/</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">N</mml:mi>
<mml:mi mathvariant="bold-italic">i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> to keep the nondominated solution set well distributed. <italic>N</italic>
<sub>
<italic>i</italic>
</sub> represents the number of solutions near the <italic>i</italic>-th solution, and <italic>c</italic> is a constant.</p>
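The Lévy step of Eqs. 33, 34 and the roulette rule P<sub>i</sub> = c/N<sub>i</sub> can be sketched as follows. This is a minimal illustration rather than the authors' MODA implementation; Mantegna's algorithm is assumed for drawing Lévy-distributed steps, and all function names are ours.

```python
import math
import random

def levy_step(beta=1.5, dim=1):
    # Mantegna's algorithm: sigma is the scale factor of Eq. 33
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return [random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)
            for _ in range(dim)]

def update_position(x_t, beta=1.5):
    # Eq. 34: X_{t+1} = X_t + Levy(d) * X_t, applied element-wise
    step = levy_step(beta, len(x_t))
    return [xi + s * xi for xi, s in zip(x_t, step)]

def roulette_pick(neighbor_counts, c=1.0):
    # Archive maintenance: solution i is picked with probability
    # proportional to P_i = c / N_i, favouring sparsely crowded regions
    weights = [c / n for n in neighbor_counts]
    r = random.uniform(0, sum(weights))
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= r:
            return i
    return len(weights) - 1
```

Because the pick probability is inversely proportional to the neighbor count, solutions in sparse regions of the archive are selected more often, which is what keeps the Pareto front well spread.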
</sec>
<sec id="s2-2-4-3">
<title>Objective Function of MODA</title>
<p>Generally, a multiobjective optimization problem can be regarded as a constrained problem. A constrained problem with J inequality constraints and K equality constraints can be written as follows:<disp-formula id="e35">
<mml:math id="m103">
<mml:mrow>
<mml:mi mathvariant="bold-italic">Min</mml:mi>
<mml:mi>F</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>o</mml:mi>
<mml:mi>b</mml:mi>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mi>o</mml:mi>
<mml:mi>b</mml:mi>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mo>&#x22c5;</mml:mo>
<mml:mo>&#x22c5;</mml:mo>
<mml:mo>&#x22c5;</mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>o</mml:mi>
<mml:mi>b</mml:mi>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mi>T</mml:mi>
</mml:msup>
</mml:mrow>
</mml:math>
<label>(35)</label>
</disp-formula>
<disp-formula id="e36">
<mml:math id="m104">
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>.</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>.</mml:mo>
<mml:mtable>
<mml:mtr>
<mml:mtd>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
</mml:mtd>
</mml:mtr>
</mml:mtable>
<mml:msub>
<mml:mi mathvariant="bold-italic">g</mml:mi>
<mml:mtext>j</mml:mtext>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2265;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1,2</mml:mn>
<mml:mo>,</mml:mo>
<mml:mn>...</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>J</mml:mi>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">h</mml:mi>
<mml:mtext>k</mml:mtext>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1,2</mml:mn>
<mml:mo>,</mml:mo>
<mml:mn>...</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>K</mml:mi>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
<mml:mtable>
<mml:mtr>
<mml:mtd>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
</mml:mtd>
</mml:mtr>
</mml:mtable>
<mml:mi>x</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi>&#x3a9;</mml:mi>
</mml:mrow>
</mml:math>
<label>(36)</label>
</disp-formula>where <inline-formula id="inf71">
<mml:math id="m105">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>o</mml:mi>
<mml:mi>b</mml:mi>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mi>o</mml:mi>
<mml:mi>b</mml:mi>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mo>&#x22c5;</mml:mo>
<mml:mo>&#x22c5;</mml:mo>
<mml:mo>&#x22c5;</mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>o</mml:mi>
<mml:mi>b</mml:mi>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mi>T</mml:mi>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> is the vector of objective functions, and <italic>x</italic> is the decision vector.</p>
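The Pareto-dominance relation that underlies the external archive, in the minimization setting of Eq. 35, can be sketched as follows (a generic helper, not code from the paper; names are ours).

```python
def dominates(f_a, f_b):
    # True if objective vector f_a Pareto-dominates f_b: no worse in
    # every objective and strictly better in at least one (minimization)
    return (all(a <= b for a, b in zip(f_a, f_b))
            and any(a < b for a, b in zip(f_a, f_b)))

def nondominated(front):
    # Filter a list of objective vectors down to the nondominated set
    return [f for f in front
            if not any(dominates(g, f) for g in front if g is not f)]
```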
<p>In this study, the objective of the optimization algorithm is to determine the weight of each single-prediction component <inline-formula id="inf72">
<mml:math id="m106">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> to minimize the error between the final combined forecast value <inline-formula id="inf73">
<mml:math id="m107">
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and the real value of the carbon price Y. The optimization algorithm can be expressed as follows:<disp-formula id="e37">
<mml:math id="m108">
<mml:mrow>
<mml:mi mathvariant="bold-italic">Min</mml:mi>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">obf</mml:mi>
</mml:mrow>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="bold-italic">std</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mi>Y</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="bold-italic">std</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mstyle>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>Y</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">obf</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="bold-italic">MAPE</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mi>Y</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="bold-italic">MAPE</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mstyle>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>Y</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(37)</label>
</disp-formula>
<disp-formula id="equ1">
<mml:math id="m109">
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>.</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>.</mml:mo>
<mml:mtable>
<mml:mtr>
<mml:mtd>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
</mml:mtd>
</mml:mtr>
</mml:mtable>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>&#x2264;</mml:mo>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>J</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mstyle>
<mml:mo>&#x2264;</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
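The two objectives of Eq. 37 can be sketched as follows, assuming the standard definitions of the standard deviation and MAPE (function names are ours, not the paper's).

```python
import statistics

def combined_forecast(weights, component_preds):
    # Weighted combination of the m single-model forecasts at each time step
    return [sum(w * p[t] for w, p in zip(weights, component_preds))
            for t in range(len(component_preds[0]))]

def obf1_std(weights, component_preds, y_true):
    # First objective in Eq. 37: std of the combination errors
    f = combined_forecast(weights, component_preds)
    return statistics.pstdev(fi - yi for fi, yi in zip(f, y_true))

def obf2_mape(weights, component_preds, y_true):
    # Second objective in Eq. 37: mean absolute percentage error (%)
    f = combined_forecast(weights, component_preds)
    return 100 * sum(abs((yi - fi) / yi) for fi, yi in zip(f, y_true)) / len(y_true)
```

MODA searches the weight space subject to −2 ≤ Σω<sub>j</sub> ≤ 2 and keeps the nondominated trade-offs between these two error measures.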
<p>Therefore, we can solve for the component weight <inline-formula id="inf74">
<mml:math id="m110">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>:<disp-formula id="e38">
<mml:math id="m111">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3c9;</mml:mi>
<mml:mtext>j</mml:mtext>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="bold-italic">arg</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi mathvariant="bold-italic">Std</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mi>Y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi mathvariant="bold-italic">MAPE</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mi>Y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(38)</label>
</disp-formula>
<disp-formula id="equ2">
<mml:math id="m112">
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>.</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>.</mml:mo>
<mml:mtable>
<mml:mtr>
<mml:mtd>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
</mml:mtd>
</mml:mtr>
</mml:mtable>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>&#x2264;</mml:mo>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>J</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mstyle>
<mml:mo>&#x2264;</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>Through continuous iteration of the MODA optimization algorithm, the weight vector <inline-formula id="inf75">
<mml:math id="m113">
<mml:mrow>
<mml:mi>&#x3c9;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mn>...</mml:mn>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> that minimizes the error between the combination forecast value <inline-formula id="inf76">
<mml:math id="m114">
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and the real value of the carbon price Y is obtained. In this study, m &#x3d;&#x20;3.
</p>
</sec>
</sec>
</sec>
<sec id="s2-3">
<title>Uncertainty Mining Module</title>
<p>The uncertainty information in point prediction results can be used to analyze the characteristics of carbon prices more deeply. In this article, an innovative interval prediction scheme based on modeling the distribution of prediction errors in the training stage is proposed. Unlike previous research, which assumes that the prediction error follows a Gaussian distribution, this article uses maximum likelihood estimation (MLE) to statistically analyze the carbon price error data and to identify its distribution. Among the five candidate distribution functions, the one that best fits the distribution of the carbon price prediction errors is selected. Based on its probability distribution function (PDF), the upper and lower bounds of the carbon price prediction interval are constructed. The details of the five distribution functions and the interval prediction method are given&#x20;below.</p>
<sec id="s2-3-1">
<title>Distribution Function</title>
<p>The probability distribution function plays a very important role in resource evaluation and interval prediction. This study uses different distribution functions to fit the distribution of the prediction errors, aiming to analyze the time series in a new way and to mine its uncertainty characteristics. In this section, five candidate distributions for the model prediction errors (the stable, extreme value, normal, logistic, and t location-scale (TLS) distributions) are introduced. The relevant probability density functions are shown in <xref ref-type="table" rid="T1">Table&#x20;1</xref>.</p>
<table-wrap id="T1" position="float">
<label>TABLE 1</label>
<caption>
<p>Probability distribution function (PDF) of the five distribution functions used in the&#x20;study.</p>
</caption>
<table>
<tbody valign="top">
<tr>
<td align="left">
<bold>Distribution functions</bold>
</td>
<td align="center">PDF</td>
<td align="center">
<bold>Parameters</bold>
</td>
</tr>
<tr>
<td rowspan="2" align="left">
<bold>Extreme value</bold>
</td>
<td rowspan="2" align="center">
<inline-formula id="inf77">
<mml:math id="m115">
<mml:mrow>
<mml:mi mathvariant="bold-italic">f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mo>;</mml:mo>
<mml:mi mathvariant="bold-italic">&#x3bc;</mml:mi>
<mml:mo>;</mml:mo>
<mml:mi mathvariant="bold-italic">&#x3c3;</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mi mathvariant="bold-italic">&#x3c3;</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mi mathvariant="bold-italic">exp</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="bold-italic">&#x3bc;</mml:mi>
</mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c3;</mml:mi>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi mathvariant="bold-italic">exp</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="bold-italic">exp</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="bold-italic">&#x3bc;</mml:mi>
</mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c3;</mml:mi>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>&#x3e;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="left">
<inline-formula id="inf78">
<mml:math id="m116">
<mml:mrow>
<mml:mi>&#x3bc;</mml:mi>
<mml:mo>&#x3e;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> location parameter</td>
</tr>
<tr>
<td align="left">
<inline-formula id="inf79">
<mml:math id="m117">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3e;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> scale parameter</td>
</tr>
<tr>
<td rowspan="2" align="left">
<bold>Logistic</bold>
</td>
<td rowspan="2" align="center">
<inline-formula id="inf80">
<mml:math id="m118">
<mml:mrow>
<mml:mi mathvariant="bold-italic">f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mo>;</mml:mo>
<mml:mi mathvariant="bold-italic">&#x3bc;</mml:mi>
<mml:mo>;</mml:mo>
<mml:mi mathvariant="bold-italic">&#x3c3;</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="bold-italic">exp</mml:mi>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="bold-italic">&#x3bc;</mml:mi>
</mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c3;</mml:mi>
</mml:mfrac>
<mml:mo>}</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mtext>&#x3c3;</mml:mtext>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2b;</mml:mo>
<mml:mtext>exp</mml:mtext>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mtext>x</mml:mtext>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x3bc;</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mtext>&#x3c3;</mml:mtext>
</mml:mfrac>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="left">
<inline-formula id="inf81">
<mml:math id="m119">
<mml:mrow>
<mml:mi>&#x3bc;</mml:mi>
<mml:mo>&#x3e;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> location parameter</td>
</tr>
<tr>
<td align="left">
<inline-formula id="inf82">
<mml:math id="m120">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3e;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> scale parameter</td>
</tr>
<tr>
<td rowspan="2" align="left">
<bold>Normal</bold>
</td>
<td rowspan="2" align="center">
<inline-formula id="inf83">
<mml:math id="m121">
<mml:mrow>
<mml:mtext>f</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mtext>x;&#x3bc;;&#x3c3;</mml:mtext>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mtext>&#x3c3;</mml:mtext>
<mml:msqrt>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:mtext>exp</mml:mtext>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x3bc;</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msup>
<mml:mtext>&#x3c3;</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="left">
<inline-formula id="inf84">
<mml:math id="m122">
<mml:mrow>
<mml:mi>&#x3bc;</mml:mi>
<mml:mo>&#x3e;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> location parameter</td>
</tr>
<tr>
<td align="left">
<inline-formula id="inf85">
<mml:math id="m123">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3e;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> scale parameter</td>
</tr>
<tr>
<td rowspan="2" align="left">
<bold>Stable</bold>
</td>
<td rowspan="2" align="center">
<inline-formula id="inf86">
<mml:math id="m124">
<mml:mrow>
<mml:mtext>f</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
</mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>;</mml:mo>
<mml:mi>&#x3b3;</mml:mi>
<mml:mo>;</mml:mo>
<mml:mi>&#x3b1;</mml:mi>
<mml:mo>;</mml:mo>
<mml:mi>&#x3b2;</mml:mi>
<mml:mo>;</mml:mo>
<mml:mi>&#x3b4;</mml:mi>
<mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mtext>exp</mml:mtext>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:mi>&#x3b3;</mml:mi>
<mml:mrow>
<mml:mi>&#x3b1;</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>&#x7c;</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3b1;</mml:mi>
<mml:mo>[</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x3b2;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
</mml:mrow>
<mml:mi>tan</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
</mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:mfrac>
<mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
</mml:mrow>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:msup>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3b4;</mml:mi>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="left">
<inline-formula id="inf87">
<mml:math id="m125">
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>&#x3c;</mml:mo>
<mml:mi>&#x3b1;</mml:mi>
<mml:mo>&#x3c;</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>;<inline-formula id="inf88">
<mml:math id="m126">
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#x2264;</mml:mo>
<mml:mi>&#x3b2;</mml:mi>
<mml:mo>&#x2264;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> shape parameter</td>
</tr>
<tr>
<td align="left">
<inline-formula id="inf89">
<mml:math id="m127">
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>&#x3c;</mml:mo>
<mml:mi>&#x3b3;</mml:mi>
<mml:mo>&#x3c;</mml:mo>
<mml:mi>&#x221e;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>;<inline-formula id="inf90">
<mml:math id="m128">
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x221e;</mml:mi>
<mml:mo>&#x3c;</mml:mo>
<mml:mi>&#x3b4;</mml:mi>
<mml:mo>&#x3c;</mml:mo>
<mml:mi>&#x221e;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> scale parameter</td>
</tr>
<tr>
<td rowspan="3" align="left">
<bold>T Location-Scale</bold>
</td>
<td rowspan="3" align="center">
<inline-formula id="inf91">
<mml:math id="m129">
<mml:mrow>
<mml:mtext>f</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mtext>x;&#x3bc;,&#x3c3;,&#x3c5;</mml:mtext>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mtext>&#x393;</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mtext>&#x3c5;</mml:mtext>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mtext>&#x3c3;</mml:mtext>
<mml:msqrt>
<mml:mrow>
<mml:mtext>&#x3c5;</mml:mtext>
<mml:mi>&#x3c0;</mml:mi>
<mml:mtext>&#x393;</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mtext>&#x3c5;</mml:mtext>
<mml:mn>2</mml:mn>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2b;</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mtext>&#x3c5;</mml:mtext>
</mml:mfrac>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mtext>x</mml:mtext>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x3bc;</mml:mi>
</mml:mrow>
<mml:mtext>&#x3c3;</mml:mtext>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mtext>&#x3c5;</mml:mtext>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:mfrac>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="left">
<inline-formula id="inf92">
<mml:math id="m130">
<mml:mrow>
<mml:mi>&#x3c5;</mml:mi>
<mml:mo>&#x3e;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> shape parameter</td>
</tr>
<tr>
<td align="left">
<inline-formula id="inf93">
<mml:math id="m131">
<mml:mrow>
<mml:mi>&#x3bc;</mml:mi>
<mml:mo>&#x3e;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> location parameter</td>
</tr>
<tr>
<td align="left">
<inline-formula id="inf94">
<mml:math id="m132">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3e;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> scale parameter</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s2-3-2">
<title>Interval Prediction Theory</title>
<p>Under the significance level <inline-formula id="inf95">
<mml:math id="m133">
<mml:mi>&#x3b1;</mml:mi>
</mml:math>
</inline-formula>, for the limit of the model prediction error interval (<italic>I</italic>
<sub>
<italic>min</italic>
</sub> and <italic>I</italic>
<sub>
<italic>max</italic>
</sub>), the probability relation between the predicted error value <inline-formula id="inf96">
<mml:math id="m134">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and the true error value <inline-formula id="inf97">
<mml:math id="m135">
<mml:mrow>
<mml:msub>
<mml:mi>Y</mml:mi>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> can be expressed as follows (<xref ref-type="bibr" rid="B26">Song et&#x20;al., 2015</xref>):<disp-formula id="e39">
<mml:math id="m136">
<mml:mrow>
<mml:mi mathvariant="bold-italic">P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">I</mml:mi>
<mml:mrow>
<mml:mi>min</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2264;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">Y</mml:mi>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2264;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">I</mml:mi>
<mml:mrow>
<mml:mi>max</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
</mml:math>
<label>(39)</label>
</disp-formula>
</p>
<p>Since the error value of the prediction model is a random variable, <xref ref-type="disp-formula" rid="e39">Eq. 39</xref> can also be written as follows:<disp-formula id="e40">
<mml:math id="m137">
<mml:mrow>
<mml:mi mathvariant="bold-italic">P</mml:mi>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">I</mml:mi>
<mml:mrow>
<mml:mi>min</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2264;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">Y</mml:mi>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2264;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">I</mml:mi>
<mml:mrow>
<mml:mi>max</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:mi mathvariant="bold-italic">E</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>Y</mml:mi>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
</mml:math>
<label>(40)</label>
</disp-formula>
</p>
<p>In addition, we assume that future prediction errors follow the same distribution as the historical prediction errors. Therefore, the probability distribution function (PDF) fitted to the historical error data of the prediction system can be regarded as the distribution function of future prediction errors (<xref ref-type="bibr" rid="B5">Chen and Liu, 2021</xref>). Thus, the upper and lower bounds of the interval at a given confidence level can be calculated.<disp-formula id="e41">
<mml:math id="m138">
<mml:mrow>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">I</mml:mi>
<mml:mrow>
<mml:mi>min</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">I</mml:mi>
<mml:mrow>
<mml:mi>max</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">I</mml:mi>
<mml:mrow>
<mml:mi>min</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2264;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">Y</mml:mi>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2264;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">I</mml:mi>
<mml:mrow>
<mml:mi>max</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mstyle displaystyle="true">
<mml:mrow>
<mml:munderover>
<mml:mo>&#x222b;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mi>min</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mi>max</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:munderover>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x398;</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>d</mml:mi>
<mml:mi>x</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(41)</label>
</disp-formula>
</p>
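Once the best-fitting distribution is known, Eq. 41 amounts to finding error-quantile bounds that enclose probability 1 − 2α. A distribution-free sketch using empirical quantiles of the historical training errors (our simplification for illustration, not the paper's exact procedure):

```python
def interval_bounds(errors, alpha=0.05):
    # Empirical alpha and (1 - alpha) quantiles of the historical errors,
    # so that [I_min, I_max] covers probability 1 - 2*alpha (Eq. 39)
    s = sorted(errors)
    n = len(s)
    return s[int(alpha * (n - 1))], s[int((1 - alpha) * (n - 1))]

def prediction_interval(point_forecast, errors, alpha=0.05):
    # Shift the point forecast by the error-interval limits to obtain
    # the carbon-price interval at confidence level 1 - 2*alpha
    lo, hi = interval_bounds(errors, alpha)
    return point_forecast + lo, point_forecast + hi
```

With a fitted parametric PDF, the same bounds would instead come from that distribution's inverse CDF at α and 1 − α.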
<p>The above equation can also be written as follows:<disp-formula id="e42">
<mml:math id="m139">
<mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">I</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>min</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">I</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>max</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">I</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>min</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">y</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo>&#x222a;</mml:mo>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">y</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">I</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>max</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(42)</label>
</disp-formula>
<disp-formula id="e43">
<mml:math id="m140">
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mrow>
<mml:munderover>
<mml:mo>&#x222b;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mi>min</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:munderover>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x398;</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mstyle>
<mml:mi>d</mml:mi>
<mml:mi>x</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mtext>F</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
</mml:math>
<label>(43)</label>
</disp-formula>
<disp-formula id="e44">
<mml:math id="m141">
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mrow>
<mml:munderover>
<mml:mo>&#x222b;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>I</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>max</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:munderover>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x398;</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mstyle>
<mml:mi>d</mml:mi>
<mml:mi>x</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mtext>F</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
</mml:math>
<label>(44)</label>
</disp-formula>
</p>
<p>After the optimal statistical distribution of the prediction error is determined, the upper <inline-formula id="inf98">
<mml:math id="m142">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>U</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula>and lower <inline-formula id="inf99">
<mml:math id="m143">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>L</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula> bounds of the carbon price prediction interval can be constructed.<disp-formula id="e45">
<mml:math id="m144">
<mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">L</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">U</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>I</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>min</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>I</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>max</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(45)</label>
</disp-formula>
</p>
<p>In <xref ref-type="disp-formula" rid="e45">Eq. 45</xref>, <inline-formula id="inf100">
<mml:math id="m145">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the carbon price predicted by the carbon price prediction&#x20;model.</p>
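As a minimal sketch of how Eqs. 41&#x2013;45 could be evaluated in practice, the snippet below fits a t location-scale distribution to training-stage errors with SciPy (an assumed implementation choice; variable names are illustrative) and shifts its central quantile interval by the point forecast.

```python
import numpy as np
from scipy import stats

def prediction_interval(y_forecast, train_errors, alpha=0.05):
    """Sketch of Eq. 45: fit a t location-scale distribution to the
    training-stage forecast errors by MLE, take the alpha and 1 - alpha
    quantiles as I_min and I_max, and shift them by the point forecast."""
    df, loc, scale = stats.t.fit(train_errors)    # MLE fit
    i_min = stats.t.ppf(alpha, df, loc=loc, scale=scale)
    i_max = stats.t.ppf(1 - alpha, df, loc=loc, scale=scale)
    return y_forecast + i_min, y_forecast + i_max

rng = np.random.default_rng(0)
train_errors = 0.3 * rng.standard_t(df=5, size=500)   # synthetic errors
low, up = prediction_interval(6.0, train_errors, alpha=0.05)
```

The resulting interval contains the central 1 &#x2212; 2&#x3b1; probability mass of the fitted error distribution, consistent with Eq. 41.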
</sec>
</sec>
</sec>
<sec id="s3">
<title>Ensemble Prediction System and its Interval Forecasting Framework</title>
<p>This section introduces in detail the specific process used in this study. A brief overview of EPS and its uncertainty exploration is shown in <xref ref-type="fig" rid="F2">Figure&#x20;2</xref>.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption>
<p>EPS system and its interval prediction model framework.</p>
</caption>
<graphic xlink:href="fenvs-09-740093-g002.tif"/>
</fig>
<sec id="s3-1">
<title>Step 1: Data Preprocessing and Feature Selection Module</title>
<p>In this article, ICEEMDAN is employed to decompose and reconstruct the original carbon price data. ICEEMDAN decomposes the original carbon price into several IMFs and a residual term. The IMF with the highest frequency is then eliminated, and the remaining IMFs are recombined to extract the effective features of the data. For multivariate time series, effective feature selection is also very important. In this study, the partial autocorrelation function (PACF) is employed to determine the input feature length for carbon price prediction.</p>
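The PACF-based selection of the input length can be sketched as follows. This is a minimal illustration, not the authors' implementation: partial autocorrelations are estimated as the last coefficient of a least-squares AR(k) fit, the 1.96/&#x221a;N significance threshold is a standard assumption, and the synthetic AR(2) series stands in for a reconstructed carbon price series.

```python
import numpy as np

def partial_autocorr(series, k):
    """Estimate the partial autocorrelation at lag k as the last
    coefficient of a least-squares AR(k) fit."""
    n = len(series)
    X = np.column_stack([series[k - j:n - j] for j in range(1, k + 1)])
    y = series[k:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[k - 1]

def select_input_length(series, max_lag=10):
    """Largest lag whose PACF is significant at the 5% level."""
    threshold = 1.96 / np.sqrt(len(series))
    significant = [k for k in range(1, max_lag + 1)
                   if abs(partial_autocorr(series, k)) > threshold]
    return max(significant) if significant else 1

rng = np.random.default_rng(42)
x = np.zeros(600)                     # synthetic AR(2) price proxy
for t in range(2, 600):
    x[t] = 0.5 * x[t - 1] + 0.3 * x[t - 2] + rng.normal()
lag = select_input_length(x, max_lag=6)
```

The selected `lag` then fixes the number of input nodes of each prediction component.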
</sec>
<sec id="s3-2">
<title>Step 2: EPS of Model Components</title>
<p>Owing to the high randomness, volatility, and instability of carbon price data, its underlying dynamics are not easy to discern, and any single hybrid prediction model has inherent defects. Therefore, combining hybrid forecasting models is an effective means of obtaining satisfactory prediction performance and improving prediction accuracy. In this study, two deep learning hybrid models (ICEEMDAN-GBiLSTM and ICEEMDAN-CNN) and a feedforward neural network (ICEEMDAN-ELM) are used as the prediction components of the EPS. They offer high prediction accuracy and good learning ability for time series.</p>
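Of the three components, the ELM is the simplest to sketch: a single hidden layer with random, untrained weights, and output weights solved in closed form by least squares. The toy version below uses five hidden nodes as in Table 3; everything else (data, seeds, activation) is an illustrative assumption, not the authors' exact ICEEMDAN-ELM component.

```python
import numpy as np

class ELM:
    """Toy extreme learning machine: random hidden-layer weights,
    output weights solved by least squares."""
    def __init__(self, n_hidden=5, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # random feature map
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 4))         # 4 lagged inputs
y = X @ np.array([0.4, -0.3, 0.2, 0.1]) + 0.01 * rng.normal(size=200)
pred = ELM(n_hidden=5).fit(X[:160], y[:160]).predict(X[160:])
```

Because only the output weights are trained, fitting reduces to one linear solve, which is why ELM trains far faster than the gradient-based deep components.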
</sec>
<sec id="s3-3">
<title>Step 3: Component Ensemble Strategy</title>
<p>Given the advantages and disadvantages of different hybrid models, it is very important to select a weight combination strategy with strong adaptability and good fusion effect to compensate for the defects of the individual hybrid models and improve the performance and accuracy of carbon price prediction. Therefore, the MODA is selected to determine the fusion weight among the three prediction model components.</p>
</sec>
<sec id="s3-4">
<title>Step 4: Exploring Uncertainty</title>
<p>Quantifying the uncertainty associated with carbon price prediction is a considerable challenge. In this study, a new interval prediction scheme based on modeling the distribution of the forecasting error in the model training stage is proposed. Unlike previous research based on the assumption that the prediction error follows a Gaussian distribution, this article uses MLE to conduct a statistical study of the carbon price error data and to explore its distribution. Among the five DFs developed, the function that best fits the distribution of the carbon price prediction error is identified. After confirming that the optimal fit to the distribution of forecast error is provided by the t location-scale distribution, the upper and lower bounds of the carbon price prediction interval are constructed based on its&#x20;PDF.</p>
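The MLE-based comparison of candidate distributions can be sketched as below, using four SciPy distributions as stand-ins for the paper's five DFs (SciPy's `t` with fitted `loc` and `scale` plays the role of the t location-scale) and synthetic heavy-tailed errors in place of real training residuals.

```python
import numpy as np
from scipy import stats

def best_error_distribution(errors):
    """Fit each candidate distribution by MLE and rank the fits by
    log-likelihood on the error sample."""
    candidates = {"norm": stats.norm, "t": stats.t,
                  "logistic": stats.logistic, "laplace": stats.laplace}
    loglik = {}
    for name, dist in candidates.items():
        params = dist.fit(errors)                 # MLE parameter estimates
        loglik[name] = np.sum(dist.logpdf(errors, *params))
    return max(loglik, key=loglik.get), loglik

rng = np.random.default_rng(3)
errors = 0.2 * rng.standard_t(df=4, size=2000)    # heavy-tailed errors
best, loglik = best_error_distribution(errors)
```

On heavy-tailed residuals like these, the t location-scale family overtakes the Gaussian, which mirrors the paper's finding.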
</sec>
</sec>
<sec id="s4">
<title>Experiment and Analysis</title>
<p>This section will introduce the experimental setup and analysis in detail, including the simulation experiment dataset and three different groups of empirical experiments that are used to verify the prediction performance of&#x20;EPS.</p>
<sec id="s4-1">
<title>Data Selection and Analysis</title>
<p>In this article, three datasets based on the carbon price market (the EU Emission Trading System (EU ETS), the Shenzhen (SZ), and the Beijing (BJ) datasets) are used as experimental data. The datasets can be downloaded from the Wind database (<ext-link ext-link-type="uri" xlink:href="http://www.wind.com.cn/">http://www.wind.com.cn/</ext-link>). The first 80% of each dataset is used as the training set, and the last 20% is used as the test set. Specifically, for the EU ETS dataset, a total of 1,000 daily quota settlement prices from July 10, 2013 to May 3, 2017 are selected. For the Shenzhen and Beijing datasets, this study uses daily spot carbon price data collected from January 14, 2014 to February 7, 2017, comprising 800 data points each. Detailed statistical descriptions of the three datasets are given in <xref ref-type="table" rid="T2">Table&#x20;2</xref>. In addition, in constructing the model input vector, we adopt a rolling acquisition mechanism.</p>
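The rolling acquisition mechanism and the 80/20 split can be sketched as follows; the lag of 5 and the synthetic price series are illustrative assumptions (in the actual system, the input length comes from the PACF).

```python
import numpy as np

def rolling_windows(series, lag):
    """Each input vector holds the `lag` most recent prices; the
    target is the next price."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    return X, y

prices = np.linspace(4.0, 8.0, 1000)     # stand-in for 1,000 EU ETS prices
X, y = rolling_windows(prices, lag=5)
cut = int(0.8 * len(X))                  # first 80% train, last 20% test
X_train, y_train, X_test, y_test = X[:cut], y[:cut], X[cut:], y[cut:]
```

Each window advances by one day, so successive samples overlap in all but one price.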
<table-wrap id="T2" position="float">
<label>TABLE 2</label>
<caption>
<p>Statistical description of the carbon prices reported at three&#x20;sites.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th colspan="2" align="left">Statistical Indicators</th>
<th align="center">Number</th>
<th align="center">Max</th>
<th align="center">Min</th>
<th align="center">Mean</th>
<th align="center">Std</th>
</tr>
<tr>
<th colspan="2" align="left">Equation</th>
<th align="center">&#x2014;</th>
<th align="center">&#x2014;</th>
<th align="center">&#x2014;</th>
<th align="center">
<inline-formula id="inf101">
<mml:math id="m146">
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>n</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac bevelled="true">
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:mfrac>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf102">
<mml:math id="m147">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi>N</mml:mi>
</mml:mfrac>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>&#xaf;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td rowspan="3" align="left">
<bold>EU ETS</bold>
</td>
<td align="left">Total</td>
<td align="center">1,000</td>
<td align="char" char=".">8.67</td>
<td align="char" char=".">3.93</td>
<td align="char" char=".">6.01</td>
<td align="char" char=".">1.24</td>
</tr>
<tr>
<td align="left">Training</td>
<td align="center">800</td>
<td align="char" char=".">8.67</td>
<td align="char" char=".">4.02</td>
<td align="char" char=".">6.24</td>
<td align="char" char=".">1.26</td>
</tr>
<tr>
<td align="left">Testing</td>
<td align="center">200</td>
<td align="char" char=".">6.54</td>
<td align="char" char=".">3.93</td>
<td align="char" char=".">5.08</td>
<td align="char" char=".">0.57</td>
</tr>
<tr>
<td rowspan="3" align="left">
<bold>BJ</bold>
</td>
<td align="left">Total</td>
<td align="center">800</td>
<td align="char" char=".">77</td>
<td align="char" char=".">30.63</td>
<td align="char" char=".">49.56</td>
<td align="char" char=".">7.08</td>
</tr>
<tr>
<td align="left">Training</td>
<td align="center">640</td>
<td align="char" char=".">77</td>
<td align="char" char=".">30.63</td>
<td align="char" char=".">48.96</td>
<td align="char" char=".">7.65</td>
</tr>
<tr>
<td align="left">Testing</td>
<td align="center">160</td>
<td align="char" char=".">69</td>
<td align="char" char=".">39.45</td>
<td align="char" char=".">51.98</td>
<td align="char" char=".">3.02</td>
</tr>
<tr>
<td rowspan="3" align="left">
<bold>SZ</bold>
</td>
<td align="left">Total</td>
<td align="center">800</td>
<td align="char" char=".">88.45</td>
<td align="char" char=".">17.83</td>
<td align="char" char=".">43.13</td>
<td align="char" char=".">16.05</td>
</tr>
<tr>
<td align="left">Training</td>
<td align="center">640</td>
<td align="char" char=".">88.45</td>
<td align="char" char=".">18.98</td>
<td align="char" char=".">46.92</td>
<td align="char" char=".">15.58</td>
</tr>
<tr>
<td align="left">Testing</td>
<td align="center">160</td>
<td align="char" char=".">42.81</td>
<td align="char" char=".">17.83</td>
<td align="char" char=".">28.01</td>
<td align="char" char=".">5.39</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s4-2">
<title>Model Parameter Setting</title>
<p>The model parameters determine the performance of the prediction system to a large extent. The parameters of the EPS proposed in this study are obtained by referring to the literature and to the results of the experiments conducted in this study. The parameter settings for each component of the ensemble system are listed in <xref ref-type="table" rid="T3">Table&#x20;3</xref>; this information provides a useful reference for future research.</p>
<table-wrap id="T3" position="float">
<label>TABLE 3</label>
<caption>
<p>Model parameters.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Model</th>
<th align="center">Parameters</th>
<th align="center">Default value</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td rowspan="5" align="left">
<bold>MODA</bold>
</td>
<td align="left">Maximum number of iterations</td>
<td align="center">50</td>
</tr>
<tr>
<td align="left">Maximum number of archives</td>
<td align="center">500</td>
</tr>
<tr>
<td align="left">Dragonfly number</td>
<td align="center">30</td>
</tr>
<tr>
<td align="left">Upper and lower limits of the weight coefficient</td>
<td align="center">[&#x2212;2,2]</td>
</tr>
<tr>
<td align="left">Objective functions</td>
<td align="center">
<inline-formula id="inf103">
<mml:math id="m148">
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi>o</mml:mi>
<mml:mi>b</mml:mi>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>S</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>d</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mi>Y</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi>o</mml:mi>
<mml:mi>b</mml:mi>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>M</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mi>Y</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
<tr>
<td rowspan="3" align="left">
<bold>ICEEMDAN</bold>
</td>
<td align="left">Noise standard deviation</td>
<td align="center">0.05</td>
</tr>
<tr>
<td align="left">Number of realizations</td>
<td align="center">50</td>
</tr>
<tr>
<td align="left">Maximum number of sifting iterations</td>
<td align="center">500</td>
</tr>
<tr>
<td rowspan="5" align="left">
<bold>ELM</bold>
</td>
<td align="left">Input nodes number</td>
<td align="center">Based on PACF</td>
</tr>
<tr>
<td align="left">Output nodes number</td>
<td align="center">1</td>
</tr>
<tr>
<td align="left">Hidden nodes number</td>
<td align="center">5</td>
</tr>
<tr>
<td align="left">Learning rate</td>
<td align="center">0.001</td>
</tr>
<tr>
<td align="left">Iterations number</td>
<td align="center">200</td>
</tr>
<tr>
<td rowspan="6" align="left">
<bold>GBiLSTM</bold>
</td>
<td align="left">Number of inputs</td>
<td align="center">Based on PACF</td>
</tr>
<tr>
<td align="left">Number of hidden units</td>
<td align="center">200</td>
</tr>
<tr>
<td align="left">Number of outputs</td>
<td align="center">1</td>
</tr>
<tr>
<td align="left">Maximum iteration</td>
<td align="center">250</td>
</tr>
<tr>
<td align="left">Initial learning rate</td>
<td align="center">0.01</td>
</tr>
<tr>
<td align="left">Training algorithm</td>
<td align="center">Adam</td>
</tr>
<tr>
<td rowspan="6" align="left">
<bold>CNN</bold>
</td>
<td align="left">Number of inputs</td>
<td align="center">Based on PACF</td>
</tr>
<tr>
<td align="left">Number of outputs</td>
<td align="center">1</td>
</tr>
<tr>
<td align="left">The kernel sizes</td>
<td align="center">
<inline-formula id="inf104">
<mml:math id="m149">
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
<tr>
<td align="left">The activation function</td>
<td align="center">ReLU</td>
</tr>
<tr>
<td align="left">The max pooling layer size</td>
<td align="center">2</td>
</tr>
<tr>
<td align="left">Training algorithm</td>
<td align="center">Adam</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s4-3">
<title>Evaluation Index System</title>
<p>To quantify the performance of the developed system, this study constructs an evaluation system using a variety of error evaluation criteria. The system is evaluated and analyzed based on deterministic point-estimation indices and probabilistic interval-estimation indices (<xref ref-type="bibr" rid="B32">Wang R. et&#x20;al., 2018</xref>; <xref ref-type="bibr" rid="B17">Jiang et&#x20;al., 2021</xref>). In the deterministic prediction part, four evaluation indicators are used: MAE, RMSE, MAPE, and IA. MAE expresses the average prediction error under actual conditions; RMSE reflects the deviation between the predicted and true values; MAPE expresses prediction accuracy as the ratio of error to true value; and IA measures the concordance between the predicted and actual values. In interval prediction, three general indicators, FICP, FINAW, and AWD, are employed to evaluate the quality of the prediction interval. FICP reflects the probability that the observed value falls within the forecast interval; FINAW measures the width of the prediction interval; and AWD represents the degree of deviation between the observed values and the prediction interval. Unlike the other indicators, a larger FICP value indicates better model performance. <xref ref-type="table" rid="T4">Table&#x20;4</xref> lists the details of the above evaluation indicators.</p>
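The indicators in Table 4 can be computed as below. This is a sketch: FINAW is normalised here by a user-supplied price range, which is an assumption on our part, since the table's formula leaves the normalisation implicit; the tiny example arrays are made up.

```python
import numpy as np

def point_metrics(pred, actual):
    """MAE, RMSE, MAPE (%) and IA as defined in Table 4."""
    e = pred - actual
    mae = np.mean(np.abs(e))
    rmse = np.sqrt(np.mean(e ** 2))
    mape = np.mean(np.abs(e / actual)) * 100
    m = np.mean(actual)
    ia = 1 - np.sum(e ** 2) / np.sum(
        (np.abs(pred - m) + np.abs(actual - m)) ** 2)
    return mae, rmse, mape, ia

def interval_metrics(lower, upper, actual, price_range):
    """FICP (%), FINAW (%) and mean AWD as defined in Table 4."""
    covered = (actual >= lower) & (actual <= upper)
    ficp = np.mean(covered) * 100
    finaw = np.mean(upper - lower) / price_range * 100
    width = upper - lower
    awd = np.where(actual < lower, (lower - actual) / width,
          np.where(actual > upper, (actual - upper) / width, 0.0))
    return ficp, finaw, np.mean(awd)

actual = np.array([5.0, 5.2, 5.1, 4.9])
pred = np.array([5.1, 5.1, 5.0, 5.0])
mae, rmse, mape, ia = point_metrics(pred, actual)
ficp, finaw, awd = interval_metrics(pred - 0.3, pred + 0.3,
                                    actual, price_range=3.0)
```

Here every observation falls inside the interval, so FICP is 100% and AWD is zero; narrower intervals would lower FINAW at the risk of lowering FICP.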
<table-wrap id="T4" position="float">
<label>TABLE 4</label>
<caption>
<p>Basic evaluation metrics.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Metric</th>
<th align="center">Definition</th>
<th align="center">Equation</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">
<bold>MAE</bold>
</td>
<td align="left">The mean absolute error</td>
<td align="center">
<inline-formula id="inf105">
<mml:math id="m150">
<mml:mrow>
<mml:mi mathvariant="bold-italic">MAE</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi>N</mml:mi>
</mml:mfrac>
<mml:mstyle displaystyle="true">
<mml:msubsup>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>&#x7c;</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
<tr>
<td align="left">
<bold>RMSE</bold>
</td>
<td align="left">Root mean squared error</td>
<td align="center">
<inline-formula id="inf106">
<mml:math id="m151">
<mml:mrow>
<mml:mi mathvariant="bold-italic">RMSE</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi>N</mml:mi>
</mml:mfrac>
<mml:msup>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:msubsup>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
<tr>
<td align="left">
<bold>MAPE</bold>
</td>
<td align="left">The mean absolute percentage error</td>
<td align="center">
<inline-formula id="inf107">
<mml:math id="m152">
<mml:mrow>
<mml:mi mathvariant="bold-italic">MAPE</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi>N</mml:mi>
</mml:mfrac>
<mml:mstyle displaystyle="true">
<mml:msubsup>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>&#x7c;</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mstyle>
<mml:mo>&#xd7;</mml:mo>
<mml:mn>100</mml:mn>
<mml:mo>%</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
<tr>
<td align="left">
<bold>IA</bold>
</td>
<td align="left">Concordance index</td>
<td align="center">
<inline-formula id="inf108">
<mml:math id="m153">
<mml:mrow>
<mml:mi mathvariant="bold-italic">IA</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:msubsup>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:msubsup>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo>&#xaf;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>&#x7c;</mml:mo>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo>&#xaf;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>&#x7c;</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
<tr>
<td align="left">
<bold>FICP</bold>
</td>
<td align="left">Forecast interval coverage probability</td>
<td align="center">
<inline-formula id="inf109">
<mml:math id="m154">
<mml:mrow>
<mml:mi mathvariant="bold-italic">FICP</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mstyle displaystyle="true">
<mml:msubsup>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#xd7;</mml:mo>
<mml:mn>100</mml:mn>
<mml:mo>%</mml:mo>
<mml:mo>/</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
<tr>
<td rowspan="2" align="left">
<bold>FINAW</bold>
</td>
<td rowspan="2" align="left">Forecast interval normalized average width</td>
<td align="center">
<inline-formula id="inf110">
<mml:math id="m155">
<mml:mrow>
<mml:mi mathvariant="bold-italic">FINAW</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mstyle displaystyle="true">
<mml:msubsup>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>U</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mstyle>
<mml:mo>&#xd7;</mml:mo>
<mml:mn>100</mml:mn>
<mml:mo>%</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
<tr>
<td align="left">
<inline-formula id="inf111">
<mml:math id="m156">
<mml:mrow>
<mml:mi mathvariant="bold-italic">AWD</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mstyle displaystyle="true">
<mml:msubsup>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>W</mml:mi>
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>/</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
<tr>
<td align="left">
<bold>AWD</bold>
</td>
<td align="left">Accumulated width deviation of testing dataset</td>
<td align="center">
<inline-formula id="inf112">
<mml:math id="m157">
<mml:mrow>
<mml:mi mathvariant="bold-italic">AW</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">D</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>U</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:mo>,</mml:mo>
<mml:mtext>&#x2002;</mml:mtext>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3c;</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mtext>&#x2002;</mml:mtext>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2264;</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2264;</mml:mo>
<mml:msub>
<mml:mi>U</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>U</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>U</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:mo>,</mml:mo>
<mml:mtext>&#x2002;</mml:mtext>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3e;</mml:mo>
<mml:msub>
<mml:mi>U</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn>
<p>
<italic>Note:</italic> This table lists the full names and calculation methods of the evaluation indices included in the evaluation system. <italic>N</italic> is the size of the test sample, <inline-formula id="inf113">
<mml:math id="m158">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo>&#xaf;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula> is the average value of <inline-formula id="inf114">
<mml:math id="m159">
<mml:mi>y</mml:mi>
</mml:math>
</inline-formula>, <inline-formula id="inf115">
<mml:math id="m160">
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the <italic>i</italic>-th actual value, and <inline-formula id="inf116">
<mml:math id="m161">
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the <italic>i</italic>-th forecast value. <italic>U</italic>
<sub>
<italic>i</italic>
</sub> and <italic>L</italic>
<sub>
<italic>i</italic>
</sub> represent the upper and lower limits, respectively, of the prediction interval. <italic>C</italic>
<sub>
<italic>i</italic>
</sub> represents the number of true values contained in the constructed interval [<italic>L</italic>
<sub>
<italic>i</italic>
</sub>, <italic>U</italic>
<sub>
<italic>i</italic>
</sub>], which is the <italic>i</italic>-th prediction interval.</p>
</fn>
</table-wrap-foot>
</table-wrap>
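The piecewise AWD definition in the table can be sketched directly in code. The following is a minimal illustrative sketch (not the authors' implementation): each test point contributes its normalized distance outside the interval, and points inside the interval contribute zero.

```python
# Illustrative sketch of the accumulated width deviation (AWD) from the table:
# AWD = (1/N) * sum(AWD_i), where AWD_i measures how far the actual value
# falls outside the prediction interval [L_i, U_i], normalized by its width.

def awd(actual, lower, upper):
    """Mean width deviation over the N test points."""
    total = 0.0
    for y, li, ui in zip(actual, lower, upper):
        width = ui - li
        if y < li:                      # actual value below the interval
            total += (li - y) / width
        elif y > ui:                    # actual value above the interval
            total += (y - ui) / width
        # values inside [L_i, U_i] contribute 0
    return total / len(actual)
```

A smaller AWD indicates that the actual values rarely fall far outside the constructed intervals.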
</sec>
<sec id="s4-4">
<title>
<italic>Experiment 1:</italic> Comparison of Different Data Processing Methods</title>
<p>In this experiment, the original carbon price series and the series decomposed by ICEEMDAN, EEMD, and singular spectrum analysis (SSA) are used as training inputs for the different prediction models. The purpose is to explore the effect of different signal decomposition techniques on the prediction accuracy of the models. <xref ref-type="table" rid="T6">Table&#x20;6</xref> compares the results obtained with the corresponding models.</p>
<sec id="s4-4-1">
<title>Feature Selection Analysis</title>
<p>The prediction performance of both machine learning and deep learning methods is closely related to the input variables. The PACF method is used to select appropriate features as the optimal inputs to the prediction models. The optimal input features obtained from the PACF results of each subsequence are shown in <xref ref-type="table" rid="T5">Table&#x20;5</xref>. (In the follow-up experiments, the input units of each prediction model were set according to the PACF results.)</p>
<table-wrap id="T5" position="float">
<label>TABLE 5</label>
<caption>
<p>Optimal input characteristics based on the PACF.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Site</th>
<th align="center">Input combination</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">
<bold>EU ETS</bold>
</td>
<td align="center">
<inline-formula id="inf117">
<mml:math id="m162">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>4</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>6</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>7</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>8</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>9</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
<tr>
<td align="left">
<bold>SZ</bold>
</td>
<td align="center">
<inline-formula id="inf118">
<mml:math id="m163">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>4</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>6</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>7</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>8</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>9</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>10</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
<tr>
<td align="left">
<bold>BJ</bold>
</td>
<td align="center">
<inline-formula id="inf119">
<mml:math id="m164">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>4</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>6</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>7</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>8</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>9</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>10</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
</tbody>
</table>
</table-wrap>
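The PACF-based selection above can be sketched as follows. This is a hypothetical illustration, not the authors' code: the partial autocorrelation at lag k is taken as the coefficient on x_{t-k} when x_t is regressed on its first k lags, and lags whose PACF lies outside the approximate 95% confidence band are kept as model inputs, as one would read off a PACF plot.

```python
import numpy as np

def pacf_ols(x, max_lag):
    """Partial autocorrelations at lags 1..max_lag via lagged OLS regressions."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    vals = []
    for k in range(1, max_lag + 1):
        # regressors x_{t-1}, ..., x_{t-k}; response x_t
        X = np.column_stack([x[k - m : len(x) - m] for m in range(1, k + 1)])
        beta, *_ = np.linalg.lstsq(X, x[k:], rcond=None)
        vals.append(beta[-1])           # coefficient on the k-th lag
    return np.array(vals)

def select_lags(x, max_lag=10):
    """Keep lags whose PACF exceeds the approximate 95% confidence band."""
    pac = pacf_ols(x, max_lag)
    bound = 1.96 / np.sqrt(len(x))
    return [k + 1 for k, v in enumerate(pac) if abs(v) > bound]
```

For a strongly autocorrelated series such as a carbon price subsequence, this typically retains the first several lags, consistent with the 9- and 10-lag input combinations reported in Table 5.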
</sec>
<sec id="s4-4-2">
<title>Prediction Results Obtained Using the Different Data Preprocessing Methods</title>
<p>To verify the effectiveness of the ICEEMDAN data preprocessing method in data feature extraction, in this experiment the performance of ICEEMDAN is compared with that of the classical feature extraction methods EEMD and SSA. The detailed results are described&#x20;below.</p>
<p>From the results in <xref ref-type="table" rid="T6">Table&#x20;6</xref>, it can be seen that data preprocessing technology can effectively improve the predictive ability of the models. MAPE, MAE, RMSE, and IA were adopted to evaluate each model in terms of prediction accuracy and goodness of fit. For the data from the three carbon trading markets, the MAPE, MAE, and RMSE of the prediction model are significantly lower than those of the model trained directly on the original dataset, regardless of which data preprocessing technology is adopted. When the original dataset is used directly, the <inline-formula id="inf120">
<mml:math id="m165">
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mi>a</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>l</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> of the prediction model under the EU ETS, SZ, and BJ datasets is between 3 and 14%, while the <inline-formula id="inf121">
<mml:math id="m166">
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> of the prediction model is reduced to 1&#x2013;5% when ICEEMDAN noise reduction is applied. This is sufficient to demonstrate the necessity of data preprocessing.</p>
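The four point-forecast indices used throughout this comparison can be computed as in the following sketch. Note the hedge: MAPE is expressed in percent, and IA is implemented here as Willmott's index of agreement, a common definition; the paper's exact variant may differ slightly.

```python
import numpy as np

def point_metrics(actual, pred):
    """MAPE (%), RMSE, MAE, and Willmott's index of agreement (IA)."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(pred, dtype=float)
    mape = np.mean(np.abs((a - p) / a)) * 100.0
    rmse = np.sqrt(np.mean((a - p) ** 2))
    mae = np.mean(np.abs(a - p))
    abar = a.mean()
    # IA in [0, 1]; 1 means perfect agreement between predictions and actuals
    ia = 1.0 - np.sum((p - a) ** 2) / np.sum(
        (np.abs(p - abar) + np.abs(a - abar)) ** 2
    )
    return {"MAPE": mape, "RMSE": rmse, "MAE": mae, "IA": ia}
```

Lower MAPE, RMSE, and MAE and higher IA indicate better performance, which is how the bold entries in Table 6 are identified.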
<table-wrap id="T6" position="float">
<label>TABLE 6</label>
<caption>
<p>Comparison of the performances of prediction models based on different data feature extraction techniques.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Model</th>
<th colspan="4" align="center">EU ETS</th>
<th colspan="4" align="center">SZ</th>
<th colspan="4" align="center">BJ</th>
</tr>
<tr>
<th align="left">&#x2014;</th>
<th align="center">MAPE</th>
<th align="center">RMSE</th>
<th align="center">MAE</th>
<th align="center">IA</th>
<th align="center">MAPE</th>
<th align="center">RMSE</th>
<th align="center">MAE</th>
<th align="center">IA</th>
<th align="center">MAPE</th>
<th align="center">RMSE</th>
<th align="center">MAE</th>
<th align="center">IA</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">
<bold>ELM</bold>
</td>
<td align="char" char=".">3.1190</td>
<td align="char" char=".">0.2148</td>
<td align="char" char=".">0.1583</td>
<td align="char" char=".">0.9574</td>
<td align="char" char=".">13.3955</td>
<td align="char" char=".">4.4118</td>
<td align="char" char=".">3.5126</td>
<td align="char" char=".">0.7965</td>
<td align="char" char=".">3.9351</td>
<td align="char" char=".">4.0281</td>
<td align="char" char=".">2.5107</td>
<td align="char" char=".">0.7225</td>
</tr>
<tr>
<td align="left">
<bold>CNN</bold>
</td>
<td align="char" char=".">3.1581</td>
<td align="char" char=".">0.1659</td>
<td align="char" char=".">0.1659</td>
<td align="char" char=".">0.9544</td>
<td align="char" char=".">12.5333</td>
<td align="char" char=".">4.4030</td>
<td align="char" char=".">3.3666</td>
<td align="char" char=".">0.8007</td>
<td align="char" char=".">3.7483</td>
<td align="char" char=".">3.8029</td>
<td align="char" char=".">2.4362</td>
<td align="char" char=".">0.7236</td>
</tr>
<tr>
<td align="left">
<bold>GBiLSTM</bold>
</td>
<td align="char" char=".">3.3159</td>
<td align="char" char=".">0.2241</td>
<td align="char" char=".">0.1744</td>
<td align="char" char=".">0.9523</td>
<td align="char" char=".">13.1742</td>
<td align="char" char=".">4.3402</td>
<td align="char" char=".">3.4707</td>
<td align="char" char=".">0.7955</td>
<td align="char" char=".">3.7406</td>
<td align="char" char=".">3.7846</td>
<td align="char" char=".">2.4355</td>
<td align="char" char=".">0.7239</td>
</tr>
<tr>
<td align="left">
<bold>EEMD-ELM</bold>
</td>
<td align="char" char=".">1.8531</td>
<td align="char" char=".">0.1235</td>
<td align="char" char=".">0.0942</td>
<td align="char" char=".">0.9857</td>
<td align="char" char=".">7.8973</td>
<td align="char" char=".">2.8796</td>
<td align="char" char=".">2.2638</td>
<td align="char" char=".">0.9016</td>
<td align="char" char=".">2.9211</td>
<td align="char" char=".">2.2281</td>
<td align="char" char=".">1.5135</td>
<td align="char" char=".">0.8443</td>
</tr>
<tr>
<td align="left">
<bold>EEMD-CNN</bold>
</td>
<td align="char" char=".">1.8892</td>
<td align="char" char=".">0.1249</td>
<td align="char" char=".">0.0962</td>
<td align="char" char=".">0.9880</td>
<td align="char" char=".">7.8426</td>
<td align="char" char=".">2.9683</td>
<td align="char" char=".">2.2237</td>
<td align="char" char=".">0.9025</td>
<td align="char" char=".">2.9058</td>
<td align="char" char=".">2.2111</td>
<td align="char" char=".">1.5042</td>
<td align="char" char=".">0.8483</td>
</tr>
<tr>
<td align="left">
<bold>EEMD-GBiLSTM</bold>
</td>
<td align="char" char=".">2.2015</td>
<td align="char" char=".">0.1456</td>
<td align="char" char=".">0.1117</td>
<td align="char" char=".">0.9829</td>
<td align="char" char=".">7.6835</td>
<td align="char" char=".">2.5719</td>
<td align="char" char=".">2.0024</td>
<td align="char" char=".">0.9029</td>
<td align="char" char=".">2.8966</td>
<td align="char" char=".">2.2051</td>
<td align="char" char=".">1.4991</td>
<td align="char" char=".">0.8495</td>
</tr>
<tr>
<td align="left">
<bold>SSA-ELM</bold>
</td>
<td align="char" char=".">1.4597</td>
<td align="char" char=".">0.1021</td>
<td align="char" char=".">0.0734</td>
<td align="char" char=".">0.9922</td>
<td align="char" char=".">5.8985</td>
<td align="char" char=".">2.2233</td>
<td align="char" char=".">1.6648</td>
<td align="char" char=".">0.9506</td>
<td align="char" char=".">1.5922</td>
<td align="char" char=".">1.4362</td>
<td align="char" char=".">0.8198</td>
<td align="char" char=".">0.9318</td>
</tr>
<tr>
<td align="left">
<bold>SSA-CNN</bold>
</td>
<td align="char" char=".">1.5684</td>
<td align="char" char=".">0.1100</td>
<td align="char" char=".">0.0795</td>
<td align="char" char=".">0.9905</td>
<td align="char" char=".">5.9112</td>
<td align="char" char=".">2.2256</td>
<td align="char" char=".">1.6727</td>
<td align="char" char=".">0.9504</td>
<td align="char" char=".">1.5435</td>
<td align="char" char=".">1.4568</td>
<td align="char" char=".">0.8079</td>
<td align="char" char=".">0.9139</td>
</tr>
<tr>
<td align="left">
<bold>SSA-GBiLSTM</bold>
</td>
<td align="char" char=".">2.0231</td>
<td align="char" char=".">0.1293</td>
<td align="char" char=".">0.1009</td>
<td align="char" char=".">0.9861</td>
<td align="char" char=".">5.8622</td>
<td align="char" char=".">2.2085</td>
<td align="char" char=".">1.6519</td>
<td align="char" char=".">0.9513</td>
<td align="char" char=".">1.5479</td>
<td align="char" char=".">1.4577</td>
<td align="char" char=".">0.8085</td>
<td align="char" char=".">0.9137</td>
</tr>
<tr>
<td align="left">
<bold>ICEE-ELM</bold>
</td>
<td align="char" char=".">1.2806</td>
<td align="char" char=".">0.0915</td>
<td align="char" char=".">0.0647</td>
<td align="char" char=".">0.9934</td>
<td align="char" char=".">4.5032</td>
<td align="char" char=".">1.7141</td>
<td align="char" char=".">1.2424</td>
<td align="char" char=".">0.9644</td>
<td align="char" char=".">1.0571</td>
<td align="char" char=".">1.2013</td>
<td align="char" char=".">0.5561</td>
<td align="char" char=".">0.9397</td>
</tr>
<tr>
<td align="left">
<bold>ICEE-CNN</bold>
</td>
<td align="char" char=".">1.5162</td>
<td align="char" char=".">0.1075</td>
<td align="char" char=".">0.0771</td>
<td align="char" char=".">0.9910</td>
<td align="char" char=".">5.6835</td>
<td align="char" char=".">2.5719</td>
<td align="char" char=".">1.3624</td>
<td align="char" char=".">0.9529</td>
<td align="char" char=".">1.0624</td>
<td align="char" char=".">1.2485</td>
<td align="char" char=".">0.5596</td>
<td align="char" char=".">0.9359</td>
</tr>
<tr>
<td align="left">
<bold>ICEE-GBiLSTM</bold>
</td>
<td align="char" char=".">1.8746</td>
<td align="char" char=".">0.1253</td>
<td align="char" char=".">0.0941</td>
<td align="char" char=".">0.9878</td>
<td align="char" char=".">4.2916</td>
<td align="char" char=".">1.6153</td>
<td align="char" char=".">1.1508</td>
<td align="char" char=".">0.9686</td>
<td align="char" char=".">1.0525</td>
<td align="char" char=".">1.2008</td>
<td align="char" char=".">0.5561</td>
<td align="char" char=".">0.9399</td>
</tr>
<tr>
<td align="left">
<bold>EPS</bold>
</td>
<td align="char" char=".">
<bold>1.2657</bold>
</td>
<td align="char" char=".">
<bold>0.0904</bold>
</td>
<td align="char" char=".">
<bold>0.0640</bold>
</td>
<td align="char" char=".">
<bold>0.9936</bold>
</td>
<td align="char" char=".">
<bold>4.0156</bold>
</td>
<td align="char" char=".">
<bold>1.6096</bold>
</td>
<td align="char" char=".">
<bold>1.1372</bold>
</td>
<td align="char" char=".">
<bold>0.9687</bold>
</td>
<td align="char" char=".">
<bold>1.0064</bold>
</td>
<td align="char" char=".">
<bold>1.2049</bold>
</td>
<td align="char" char=".">
<bold>0.5312</bold>
</td>
<td align="char" char=".">
<bold>0.9402</bold>
</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn>
<p>
<italic>Note:</italic> The best indicator values are shown in bold type.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>In addition, for the three components of the EPS, different data preprocessing methods are used to generate the model inputs. The experimental results show that ICEEMDAN is more effective than the other methods. For the EU ETS dataset, the prediction system using ICEEMDAN noise reduction technology has the highest prediction accuracy, and the average RMSE value of the three component models, <inline-formula id="inf122">
<mml:math id="m167">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="bold-italic">RMSE</mml:mi>
</mml:mrow>
<mml:mo stretchy="true">&#xaf;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>U</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.1081</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, is the best; the prediction performance of the model based on EEMD is the worst: <inline-formula id="inf123">
<mml:math id="m168">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="bold-italic">RMSE</mml:mi>
</mml:mrow>
<mml:mo stretchy="true">&#xaf;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>U</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.1313</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> in the SZ dataset, <inline-formula id="inf124">
<mml:math id="m169">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="bold-italic">MAE</mml:mi>
</mml:mrow>
<mml:mo stretchy="true">&#xaf;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>Z</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1.2518</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf125">
<mml:math id="m170">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="bold-italic">RMSE</mml:mi>
</mml:mrow>
<mml:mo stretchy="true">&#xaf;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>Z</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1.9671</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf126">
<mml:math id="m171">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="bold-italic">MAPE</mml:mi>
</mml:mrow>
<mml:mo stretchy="true">&#xaf;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>Z</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>4.8261</mml:mn>
<mml:mo>%</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>. In the BJ dataset, the average values of these three indicators are 0.5573, 1.2169, and 1.0573%, respectively, clearly better than those obtained using the other data processing technologies.</p>
<p>IA is an effective index for measuring the correlation and consistency between the predicted values and the original data; the higher the index value, the better the model fit. The ICEEMDAN feature extraction technology proposed in this article achieves the highest IA of all the models tested on the three carbon price datasets. In the SZ dataset, the IA values are <inline-formula id="inf127">
<mml:math id="m172">
<mml:mrow>
<mml:mi mathvariant="bold-italic">I</mml:mi>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">A</mml:mi>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>E</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>Z</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.9644</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf128">
<mml:math id="m173">
<mml:mrow>
<mml:mi mathvariant="bold-italic">I</mml:mi>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">A</mml:mi>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>C</mml:mi>
<mml:mi>N</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>Z</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.9529</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf129">
<mml:math id="m174">
<mml:mrow>
<mml:mi mathvariant="bold-italic">I</mml:mi>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">A</mml:mi>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>B</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>Z</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.9696</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>. These values are 0.0138, 0.0025, and 0.0173 units higher than <inline-formula id="inf130">
<mml:math id="m175">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mtext>IA</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>A</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>E</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>Z</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf131">
<mml:math id="m176">
<mml:mrow>
<mml:mi mathvariant="bold-italic">I</mml:mi>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">A</mml:mi>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>A</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>C</mml:mi>
<mml:mi>N</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>Z</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf132">
<mml:math id="m177">
<mml:mrow>
<mml:mi mathvariant="bold-italic">I</mml:mi>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">A</mml:mi>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>A</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>G</mml:mi>
<mml:mi>B</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>Z</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>, respectively, and 0.0628, 0.0504 and 0.0657 units higher than <inline-formula id="inf133">
<mml:math id="m178">
<mml:mrow>
<mml:mi mathvariant="bold-italic">I</mml:mi>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">A</mml:mi>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>D</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>E</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>Z</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf134">
<mml:math id="m179">
<mml:mrow>
<mml:mi mathvariant="bold-italic">I</mml:mi>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">A</mml:mi>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>D</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>C</mml:mi>
<mml:mi>N</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>Z</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf135">
<mml:math id="m180">
<mml:mrow>
<mml:mi mathvariant="bold-italic">I</mml:mi>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">A</mml:mi>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>D</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>G</mml:mi>
<mml:mi>B</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>Z</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>, respectively. In summary, compared with the other data preprocessing technologies, ICEEMDAN is more effective for data feature extraction and offers clear advantages in improving the performance of the prediction&#x20;model.</p>
<p>
<bold>Key Finding:</bold> Compared with the original carbon price series and other classical data preprocessing techniques, ICEEMDAN extracts the data characteristics of carbon prices more effectively, significantly enhances the prediction accuracy of the model, and is a more reliable data preprocessing&#x20;tool.</p>
</sec>
</sec>
<sec id="s4-5">
<title>
<italic>Experiment 2</italic>: Point Forecasting: Comparison of the EPS With Reference Models</title>
<p>To verify the effectiveness of the EPS in carbon price prediction, traditional single forecasting models and classical hybrid prediction models are compared with the EPS. These models include the traditional statistical models ARIMA and ICEEMDAN-ARIMA; the traditional single neural network models BP, ELM, and GRNN; the deep learning models LSTM and CNN; and the classical hybrid prediction models GWO-BP and ICEEMDAN-GWO-BP. In addition, to explore the extensibility of the model, multistep point prediction is included in the experiment, using a rolling prediction scheme. The specific multistep prediction procedure is shown in <xref ref-type="fig" rid="F3">Figure&#x20;3</xref>. The experimental results are shown in <xref ref-type="table" rid="T7">Table&#x20;7</xref>. The detailed experimental analysis is described below.<list list-type="simple">
<list-item>
<p>1) In comparison with the traditional single prediction models, we find that the EPS performs best on all four indicators in both one-step and multistep prediction. This&#x20;shows that the EPS we developed is effective in predicting carbon prices. In addition, the MAPE values of GBiLSTM in the three datasets are <inline-formula id="inf136">
<mml:math id="m181">
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>P</mml:mi>
<mml:msubsup>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mi>B</mml:mi>
<mml:mtext>i</mml:mtext>
<mml:mi>L</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>U</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>3.3159</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf137">
<mml:math id="m182">
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>P</mml:mi>
<mml:msubsup>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mi>B</mml:mi>
<mml:mtext>i</mml:mtext>
<mml:mi>L</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>Z</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>13.1740</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf138">
<mml:math id="m183">
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>P</mml:mi>
<mml:msubsup>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mi>B</mml:mi>
<mml:mtext>i</mml:mtext>
<mml:mi>L</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>B</mml:mi>
<mml:mi>J</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>3.7406</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>; these values are better than those obtained using a single LSTM, proving the effectiveness of the GBiLSTM. The other two prediction components, ELM and CNN, have outstanding prediction performance in all single prediction models, so it is reasonable to choose them as the submodes of the EPS. In addition, we can see that for the traditional statistical model ARIMA, the average value of MAPE of the three stations is <inline-formula id="inf139">
<mml:math id="m184">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>MAPE</mml:mi>
</mml:mrow>
<mml:mo stretchy="true">&#xaf;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>R</mml:mi>
<mml:mi>I</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>11.0072</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> in one-step prediction; this accuracy is lower than that achieved using the other prediction models, indicating that the traditional linear statistical model is not suitable for predicting carbon price series with high volatility and complexity.</p>
</list-item>
<list-item>
<p>2) The prediction performance of each model varies across datasets. Under the EU ETS dataset, the neural network ELM performs best, yielding <inline-formula id="inf140">
<mml:math id="m185">
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>S</mml:mi>
<mml:msubsup>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>U</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.2148</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>; this is better than the RMSE values of the deep learning algorithms CNN and GBiLSTM, which are <inline-formula id="inf141">
<mml:math id="m186">
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>S</mml:mi>
<mml:msubsup>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>N</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>U</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.2218</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf142">
<mml:math id="m187">
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>S</mml:mi>
<mml:msubsup>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mi>B</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>U</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.2241</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, respectively. However, under the SZ and BJ datasets, the deep learning algorithms CNN and GBiLSTM achieve better prediction results than ELM. The same pattern appears in multistep forecasting, where the advantage of the deep learning framework is even more obvious. Nevertheless, the prediction accuracy of the EPS remains the highest under every tested dataset. The RMSE values in the one-step forecast are <inline-formula id="inf143">
<mml:math id="m188">
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>S</mml:mi>
<mml:msubsup>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>U</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.0904</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf144">
<mml:math id="m189">
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>S</mml:mi>
<mml:msubsup>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>Z</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1.6096</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf145">
<mml:math id="m190">
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>S</mml:mi>
<mml:msubsup>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>B</mml:mi>
<mml:mi>J</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1.2046</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>. The RMSE values in the two-step forecast are <inline-formula id="inf146">
<mml:math id="m191">
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>S</mml:mi>
<mml:msubsup>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>U</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.1829</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf147">
<mml:math id="m192">
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>S</mml:mi>
<mml:msubsup>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>Z</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>3.3244</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf148">
<mml:math id="m193">
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>S</mml:mi>
<mml:msubsup>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>B</mml:mi>
<mml:mi>J</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1.7567</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>. This shows that the combination strategy retains the forecasting advantages of the different forecasting components while compensating for each model&#x2019;s defects; as a result, the EPS has strong robustness and wide adaptability.</p>
</list-item>
</list>
</p>
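As a quick sanity check on the averages quoted above, the mean one-step ARIMA MAPE can be recomputed from the per-dataset values reported in Table 7 (an illustrative Python sketch, not code from the article):

```python
# One-step ARIMA MAPE values from Table 7 (EU ETS, SZ, BJ).
arima_mape = {"EU ETS": 5.8692, "SZ": 21.8403, "BJ": 5.3120}

# Average over the three datasets, as quoted in the text.
mean_mape = sum(arima_mape.values()) / len(arima_mape)
print(round(mean_mape, 4))  # 11.0072
```

The result reproduces the averaged MAPE of 11.0072 cited in the discussion of ARIMA.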
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption>
<p>Prediction results obtained using EPS and comparison models under different prediction&#x20;steps.</p>
</caption>
<graphic xlink:href="fenvs-09-740093-g003.tif"/>
</fig>
<table-wrap id="T7" position="float">
<label>TABLE 7</label>
<caption>
<p>Comparison of the prediction ability of the proposed system with those of some traditional benchmark models and classic hybrid models.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th rowspan="2" align="left">Dataset</th>
<th rowspan="2" align="center">Model</th>
<th colspan="4" align="center">ONE-STEP</th>
<th colspan="4" align="center">TWO-STEP</th>
</tr>
<tr>
<th align="center">MAPE</th>
<th align="center">RMSE</th>
<th align="center">MAE</th>
<th align="center">IA</th>
<th align="center">MAPE</th>
<th align="center">RMSE</th>
<th align="center">MAE</th>
<th align="center">IA</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td rowspan="12" align="left">
<bold>EU ETS</bold>
</td>
<td align="left">
<bold>ELM</bold>
</td>
<td align="char" char=".">3.1190</td>
<td align="char" char=".">0.2148</td>
<td align="char" char=".">0.1583</td>
<td align="char" char=".">0.9574</td>
<td align="char" char=".">4.2962</td>
<td align="char" char=".">0.2842</td>
<td align="char" char=".">0.2169</td>
<td align="char" char=".">0.9386</td>
</tr>
<tr>
<td align="left">
<bold>CNN</bold>
</td>
<td align="char" char=".">3.1581</td>
<td align="char" char=".">0.2218</td>
<td align="char" char=".">0.1659</td>
<td align="char" char=".">0.9544</td>
<td align="char" char=".">4.1304</td>
<td align="char" char=".">0.2805</td>
<td align="char" char=".">0.2112</td>
<td align="char" char=".">0.9392</td>
</tr>
<tr>
<td align="left">
<bold>GBiLSTM</bold>
</td>
<td align="char" char=".">3.3159</td>
<td align="char" char=".">0.2241</td>
<td align="char" char=".">0.1744</td>
<td align="char" char=".">0.9523</td>
<td align="char" char=".">4.1862</td>
<td align="char" char=".">0.2794</td>
<td align="char" char=".">0.2133</td>
<td align="char" char=".">0.9390</td>
</tr>
<tr>
<td align="left">
<bold>LSTM</bold>
</td>
<td align="char" char=".">3.5642</td>
<td align="char" char=".">0.2470</td>
<td align="char" char=".">0.1815</td>
<td align="char" char=".">0.9407</td>
<td align="char" char=".">4.2575</td>
<td align="char" char=".">0.2827</td>
<td align="char" char=".">0.2164</td>
<td align="char" char=".">0.9388</td>
</tr>
<tr>
<td align="left">
<bold>GRNN</bold>
</td>
<td align="char" char=".">3.5405</td>
<td align="char" char=".">0.2432</td>
<td align="char" char=".">0.1799</td>
<td align="char" char=".">0.9418</td>
<td align="char" char=".">4.7743</td>
<td align="char" char=".">0.3232</td>
<td align="char" char=".">0.2432</td>
<td align="char" char=".">0.9341</td>
</tr>
<tr>
<td align="left">
<bold>BP</bold>
</td>
<td align="char" char=".">3.2941</td>
<td align="char" char=".">0.2235</td>
<td align="char" char=".">0.1704</td>
<td align="char" char=".">0.9516</td>
<td align="char" char=".">4.3052</td>
<td align="char" char=".">0.2903</td>
<td align="char" char=".">0.2193</td>
<td align="char" char=".">0.9380</td>
</tr>
<tr>
<td align="left">
<bold>ARIMA</bold>
</td>
<td align="char" char=".">5.8692</td>
<td align="char" char=".">0.3789</td>
<td align="char" char=".">0.2947</td>
<td align="char" char=".">0.9327</td>
<td align="char" char=".">7.6365</td>
<td align="char" char=".">0.4814</td>
<td align="char" char=".">0.3011</td>
<td align="char" char=".">0.9036</td>
</tr>
<tr>
<td align="left">
<bold>ICEE-ARIMA</bold>
</td>
<td align="char" char=".">1.9758</td>
<td align="char" char=".">0.1323</td>
<td align="char" char=".">0.1000</td>
<td align="char" char=".">0.9862</td>
<td align="char" char=".">3.6109</td>
<td align="char" char=".">0.2401</td>
<td align="char" char=".">0.1839</td>
<td align="char" char=".">0.9403</td>
</tr>
<tr>
<td align="left">
<bold>GWO-BP</bold>
</td>
<td align="char" char=".">3.1281</td>
<td align="char" char=".">0.2149</td>
<td align="char" char=".">0.1584</td>
<td align="char" char=".">0.9571</td>
<td align="char" char=".">5.5972</td>
<td align="char" char=".">0.3500</td>
<td align="char" char=".">0.2828</td>
<td align="char" char=".">0.9322</td>
</tr>
<tr>
<td align="left">
<bold>SSA-GRNN</bold>
</td>
<td align="char" char=".">3.2039</td>
<td align="char" char=".">0.2170</td>
<td align="char" char=".">0.1621</td>
<td align="char" char=".">0.9548</td>
<td align="char" char=".">3.8876</td>
<td align="char" char=".">0.2632</td>
<td align="char" char=".">0.1986</td>
<td align="char" char=".">0.9385</td>
</tr>
<tr>
<td align="left">
<bold>ICEE-GWO-BP</bold>
</td>
<td align="char" char=".">1.2911</td>
<td align="char" char=".">
<bold>0.0891</bold>
</td>
<td align="char" char=".">0.0644</td>
<td align="char" char=".">0.9862</td>
<td align="char" char=".">2.9672</td>
<td align="char" char=".">0.1988</td>
<td align="char" char=".">0.1506</td>
<td align="char" char=".">0.9617</td>
</tr>
<tr>
<td align="left">
<bold>EPS</bold>
</td>
<td align="char" char=".">
<bold>1.2657</bold>
</td>
<td align="char" char=".">0.0904</td>
<td align="char" char=".">
<bold>0.0640</bold>
</td>
<td align="char" char=".">
<bold>0.9936</bold>
</td>
<td align="char" char=".">
<bold>2.6930</bold>
</td>
<td align="char" char=".">
<bold>0.1829</bold>
</td>
<td align="char" char=".">
<bold>0.1378</bold>
</td>
<td align="char" char=".">
<bold>0.9674</bold>
</td>
</tr>
<tr>
<td rowspan="12" align="left">
<bold>SZ</bold>
</td>
<td align="left">
<bold>ELM</bold>
</td>
<td align="char" char=".">13.3955</td>
<td align="char" char=".">4.4118</td>
<td align="char" char=".">3.5126</td>
<td align="char" char=".">0.7965</td>
<td align="char" char=".">22.4148</td>
<td align="char" char=".">6.6057</td>
<td align="char" char=".">6.6813</td>
<td align="char" char=".">0.4538</td>
</tr>
<tr>
<td align="left">
<bold>CNN</bold>
</td>
<td align="char" char=".">12.5333</td>
<td align="char" char=".">4.4030</td>
<td align="char" char=".">3.3666</td>
<td align="char" char=".">0.8007</td>
<td align="char" char=".">21.8746</td>
<td align="char" char=".">6.2643</td>
<td align="char" char=".">5.3190</td>
<td align="char" char=".">0.4686</td>
</tr>
<tr>
<td align="left">
<bold>GBiLSTM</bold>
</td>
<td align="char" char=".">13.1740</td>
<td align="char" char=".">4.3402</td>
<td align="char" char=".">3.4707</td>
<td align="char" char=".">0.7955</td>
<td align="char" char=".">22.0340</td>
<td align="char" char=".">6.5272</td>
<td align="char" char=".">5.5683</td>
<td align="char" char=".">0.4617</td>
</tr>
<tr>
<td align="left">
<bold>LSTM</bold>
</td>
<td align="char" char=".">15.1983</td>
<td align="char" char=".">6.8455</td>
<td align="char" char=".">4.5637</td>
<td align="char" char=".">0.7514</td>
<td align="char" char=".">25.9514</td>
<td align="char" char=".">7.3299</td>
<td align="char" char=".">6.4834</td>
<td align="char" char=".">0.4352</td>
</tr>
<tr>
<td align="left">
<bold>GRNN</bold>
</td>
<td align="char" char=".">13.7152</td>
<td align="char" char=".">5.1715</td>
<td align="char" char=".">3.8576</td>
<td align="char" char=".">0.7654</td>
<td align="char" char=".">33.0502</td>
<td align="char" char=".">9.8132</td>
<td align="char" char=".">8.2422</td>
<td align="char" char=".">0.2973</td>
</tr>
<tr>
<td align="left">
<bold>BP</bold>
</td>
<td align="char" char=".">14.1428</td>
<td align="char" char=".">5.8032</td>
<td align="char" char=".">4.2144</td>
<td align="char" char=".">0.7608</td>
<td align="char" char=".">22.8251</td>
<td align="char" char=".">6.5453</td>
<td align="char" char=".">5.7117</td>
<td align="char" char=".">0.5004</td>
</tr>
<tr>
<td align="left">
<bold>ARIMA</bold>
</td>
<td align="char" char=".">21.8403</td>
<td align="char" char=".">7.8476</td>
<td align="char" char=".">5.9355</td>
<td align="char" char=".">0.6595</td>
<td align="char" char=".">35.1742</td>
<td align="char" char=".">10.0714</td>
<td align="char" char=".">8.7415</td>
<td align="char" char=".">0.4211</td>
</tr>
<tr>
<td align="left">
<bold>ICEE-ARIMA</bold>
</td>
<td align="char" char=".">8.9814</td>
<td align="char" char=".">3.2795</td>
<td align="char" char=".">2.5983</td>
<td align="char" char=".">0.8943</td>
<td align="char" char=".">12.8818</td>
<td align="char" char=".">4.2559</td>
<td align="char" char=".">3.3362</td>
<td align="char" char=".">0.7351</td>
</tr>
<tr>
<td align="left">
<bold>GWO-BP</bold>
</td>
<td align="char" char=".">12.3461</td>
<td align="char" char=".">4.1508</td>
<td align="char" char=".">3.2907</td>
<td align="char" char=".">0.8017</td>
<td align="char" char=".">18.0382</td>
<td align="char" char=".">5.3858</td>
<td align="char" char=".">4.5591</td>
<td align="char" char=".">0.5993</td>
</tr>
<tr>
<td align="left">
<bold>SSA-GRNN</bold>
</td>
<td align="char" char=".">5.9875</td>
<td align="char" char=".">2.2372</td>
<td align="char" char=".">1.6791</td>
<td align="char" char=".">0.9494</td>
<td align="char" char=".">14.3684</td>
<td align="char" char=".">4.5285</td>
<td align="char" char=".">3.6886</td>
<td align="char" char=".">0.6773</td>
</tr>
<tr>
<td align="left">
<bold>ICEE-GWO-BP</bold>
</td>
<td align="char" char=".">4.6025</td>
<td align="char" char=".">1.8286</td>
<td align="char" char=".">1.3027</td>
<td align="char" char=".">0.9631</td>
<td align="char" char=".">11.1825</td>
<td align="char" char=".">3.7085</td>
<td align="char" char=".">2.9526</td>
<td align="char" char=".">0.8039</td>
</tr>
<tr>
<td align="left">
<bold>EPS</bold>
</td>
<td align="char" char=".">
<bold>4.0156</bold>
</td>
<td align="char" char=".">
<bold>1.6096</bold>
</td>
<td align="char" char=".">
<bold>1.1372</bold>
</td>
<td align="char" char=".">
<bold>0.9687</bold>
</td>
<td align="char" char=".">
<bold>9.7600</bold>
</td>
<td align="char" char=".">
<bold>3.3244</bold>
</td>
<td align="char" char=".">
<bold>2.5621</bold>
</td>
<td align="char" char=".">
<bold>0.8701</bold>
</td>
</tr>
<tr>
<td rowspan="12" align="left">
<bold>BJ</bold>
</td>
<td align="left">
<bold>ELM</bold>
</td>
<td align="char" char=".">3.9351</td>
<td align="char" char=".">4.0281</td>
<td align="char" char=".">2.5107</td>
<td align="char" char=".">0.7225</td>
<td align="char" char=".">4.8978</td>
<td align="char" char=".">4.4902</td>
<td align="char" char=".">3.5380</td>
<td align="char" char=".">0.4689</td>
</tr>
<tr>
<td align="left">
<bold>CNN</bold>
</td>
<td align="char" char=".">3.7483</td>
<td align="char" char=".">3.8029</td>
<td align="char" char=".">2.4362</td>
<td align="char" char=".">0.7236</td>
<td align="char" char=".">4.8724</td>
<td align="char" char=".">4.4065</td>
<td align="char" char=".">3.5240</td>
<td align="char" char=".">0.4410</td>
</tr>
<tr>
<td align="left">
<bold>GBiLSTM</bold>
</td>
<td align="char" char=".">3.7406</td>
<td align="char" char=".">3.7846</td>
<td align="char" char=".">2.4355</td>
<td align="char" char=".">0.7239</td>
<td align="char" char=".">4.6629</td>
<td align="char" char=".">4.3132</td>
<td align="char" char=".">3.2137</td>
<td align="char" char=".">0.4852</td>
</tr>
<tr>
<td align="left">
<bold>LSTM</bold>
</td>
<td align="char" char=".">3.8462</td>
<td align="char" char=".">3.9033</td>
<td align="char" char=".">2.4871</td>
<td align="char" char=".">0.7231</td>
<td align="char" char=".">4.8787</td>
<td align="char" char=".">4.4753</td>
<td align="char" char=".">3.5231</td>
<td align="char" char=".">0.4749</td>
</tr>
<tr>
<td align="left">
<bold>GRNN</bold>
</td>
<td align="char" char=".">4.1345</td>
<td align="char" char=".">4.3247</td>
<td align="char" char=".">2.9737</td>
<td align="char" char=".">0.6273</td>
<td align="char" char=".">5.0379</td>
<td align="char" char=".">4.5516</td>
<td align="char" char=".">3.6191</td>
<td align="char" char=".">0.4416</td>
</tr>
<tr>
<td align="left">
<bold>BP</bold>
</td>
<td align="char" char=".">4.0350</td>
<td align="char" char=".">4.2179</td>
<td align="char" char=".">2.8450</td>
<td align="char" char=".">0.6355</td>
<td align="char" char=".">5.9108</td>
<td align="char" char=".">4.9031</td>
<td align="char" char=".">3.8741</td>
<td align="char" char=".">0.4379</td>
</tr>
<tr>
<td align="left">
<bold>ARIMA</bold>
</td>
<td align="char" char=".">5.3120</td>
<td align="char" char=".">4.6312</td>
<td align="char" char=".">3.7397</td>
<td align="char" char=".">0.5627</td>
<td align="char" char=".">8.1714</td>
<td align="char" char=".">6.4412</td>
<td align="char" char=".">4.7140</td>
<td align="char" char=".">0.4058</td>
</tr>
<tr>
<td align="left">
<bold>ICEE-ARIMA</bold>
</td>
<td align="char" char=".">2.4705</td>
<td align="char" char=".">2.0481</td>
<td align="char" char=".">1.2635</td>
<td align="char" char=".">0.9045</td>
<td align="char" char=".">2.9320</td>
<td align="char" char=".">2.5669</td>
<td align="char" char=".">1.5213</td>
<td align="char" char=".">0.6763</td>
</tr>
<tr>
<td align="left">
<bold>GWO-BP</bold>
</td>
<td align="char" char=".">2.9202</td>
<td align="char" char=".">3.0563</td>
<td align="char" char=".">1.5062</td>
<td align="char" char=".">0.7113</td>
<td align="char" char=".">4.0738</td>
<td align="char" char=".">3.6183</td>
<td align="char" char=".">2.1437</td>
<td align="char" char=".">0.4440</td>
</tr>
<tr>
<td align="left">
<bold>SSA-GRNN</bold>
</td>
<td align="char" char=".">3.0901</td>
<td align="char" char=".">2.6153</td>
<td align="char" char=".">1.5929</td>
<td align="char" char=".">0.6346</td>
<td align="char" char=".">3.9989</td>
<td align="char" char=".">3.4553</td>
<td align="char" char=".">2.0979</td>
<td align="char" char=".">0.4599</td>
</tr>
<tr>
<td align="left">
<bold>ICEE-GWO-BP</bold>
</td>
<td align="char" char=".">1.1544</td>
<td align="char" char=".">1.2383</td>
<td align="char" char=".">0.5966</td>
<td align="char" char=".">0.9488</td>
<td align="char" char=".">2.4931</td>
<td align="char" char=".">2.1482</td>
<td align="char" char=".">1.3095</td>
<td align="char" char=".">0.8943</td>
</tr>
<tr>
<td align="left">
<bold>EPS</bold>
</td>
<td align="char" char=".">
<bold>1.0064</bold>
</td>
<td align="char" char=".">
<bold>1.2049</bold>
</td>
<td align="char" char=".">
<bold>0.5312</bold>
</td>
<td align="char" char=".">
<bold>0.9402</bold>
</td>
<td align="char" char=".">
<bold>2.1558</bold>
</td>
<td align="char" char=".">
<bold>1.7567</bold>
</td>
<td align="char" char=".">
<bold>1.1272</bold>
</td>
<td align="char" char=".">
<bold>0.9071</bold>
</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn>
<p>
<italic>Note</italic>: ICEE is an abbreviation for ICEEMDAN. The best indicator values are shown in bold&#x20;type.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>In comparison with the classic hybrid forecasting models ICEE-GWO-BP, SSA-GRNN, and GWO-BP, several of the hybrid methods achieve good forecasting performance; however, because their index values are very similar, it is difficult to present each model&#x2019;s predictive ability intuitively. We therefore measure the percentage of improvement in each evaluation index to make the analysis more intuitive. This percentage measures the degree of improvement achieved by the EPS relative to the index value of a comparison model; it can be expressed as follows:<disp-formula id="e46">
<mml:math id="m194">
<mml:mrow>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">P</mml:mi>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>n</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>l</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="bold-italic">Inde</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>l</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="bold-italic">Inde</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold-italic">Inde</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>l</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>&#x7c;</mml:mo>
</mml:mrow>
<mml:mo>&#xd7;</mml:mo>
<mml:mn>100</mml:mn>
<mml:mo>%</mml:mo>
</mml:mrow>
</mml:math>
<label>(46)</label>
</disp-formula>where <inline-formula id="inf149">
<mml:math id="m195">
<mml:mrow>
<mml:msubsup>
<mml:mtext>P</mml:mtext>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>n</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>l</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> is the percentage improvement of the indicator, <inline-formula id="inf150">
<mml:math id="m196">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mtext>Index</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>l</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> stands for the index value of the comparison model, and <inline-formula id="inf151">
<mml:math id="m197">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mtext>Index</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the index value of the&#x20;EPS.</p>
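Eq. 46 can be sketched directly in code; the snippet below (an illustrative Python sketch, not code from the article) reproduces two of the one-step EU ETS MAPE improvements reported in the next paragraph from the Table 7 values:

```python
def improvement_pct(index_model: float, index_eps: float) -> float:
    """Eq. 46: percentage improvement of the EPS over a comparison model."""
    return abs((index_model - index_eps) / index_model) * 100.0

# One-step MAPE on EU ETS (Table 7): SSA-GRNN = 3.2039, GWO-BP = 3.1281, EPS = 1.2657.
print(f"{improvement_pct(3.2039, 1.2657):.4f}")  # 60.4950
print(f"{improvement_pct(3.1281, 1.2657):.4f}")  # 59.5377
```

Both outputs match the improvement percentages quoted in the text.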
<p>In the EU ETS dataset, the improved MAPE values for one-step prediction of the three hybrid models are <inline-formula id="inf152">
<mml:math id="m198">
<mml:mrow>
<mml:msubsup>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>G</mml:mi>
<mml:mi>W</mml:mi>
<mml:mi>O</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>B</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1.9673</mml:mn>
<mml:mo>%</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf153">
<mml:math id="m199">
<mml:mrow>
<mml:msubsup>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>A</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>G</mml:mi>
<mml:mi>R</mml:mi>
<mml:mi>N</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>60.4950</mml:mn>
<mml:mo>%</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf154">
<mml:math id="m200">
<mml:mrow>
<mml:msubsup>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mi>W</mml:mi>
<mml:mi>O</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>B</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>59.5377</mml:mn>
<mml:mo>%</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>. The improved MAPE values of the two-step prediction are <inline-formula id="inf155">
<mml:math id="m201">
<mml:mrow>
<mml:msubsup>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>G</mml:mi>
<mml:mi>W</mml:mi>
<mml:mi>O</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>B</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>9.2410</mml:mn>
<mml:mo>%</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf156">
<mml:math id="m202">
<mml:mrow>
<mml:msubsup>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>A</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>G</mml:mi>
<mml:mi>R</mml:mi>
<mml:mi>N</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>30.7285</mml:mn>
<mml:mo>%</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf157">
<mml:math id="m203">
<mml:mrow>
<mml:msubsup>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mi>W</mml:mi>
<mml:mi>O</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>B</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>51.8867</mml:mn>
<mml:mo>%</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>. Under the BJ dataset, the improved IA values of the three hybrid models in one-step prediction are <inline-formula id="inf158">
<mml:math id="m204">
<mml:mrow>
<mml:msubsup>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>G</mml:mi>
<mml:mi>W</mml:mi>
<mml:mi>O</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>B</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.9064</mml:mn>
<mml:mo>%</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf159">
<mml:math id="m205">
<mml:mrow>
<mml:msubsup>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>A</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>G</mml:mi>
<mml:mi>R</mml:mi>
<mml:mi>N</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>48.1563</mml:mn>
<mml:mo>%</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf160">
<mml:math id="m206">
<mml:mrow>
<mml:msubsup>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mi>W</mml:mi>
<mml:mi>O</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>B</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>33.3896</mml:mn>
<mml:mo>%</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>. The improved IA values obtained by two-step prediction are <inline-formula id="inf161">
<mml:math id="m207">
<mml:mrow>
<mml:msubsup>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>G</mml:mi>
<mml:mi>W</mml:mi>
<mml:mi>O</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>B</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1.4313</mml:mn>
<mml:mo>%</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf162">
<mml:math id="m208">
<mml:mrow>
<mml:msubsup>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>A</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>G</mml:mi>
<mml:mi>R</mml:mi>
<mml:mi>N</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>97.2385</mml:mn>
<mml:mo>%</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf163">
<mml:math id="m209">
<mml:mrow>
<mml:msubsup>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mi>W</mml:mi>
<mml:mi>O</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>B</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>104.3018</mml:mn>
<mml:mo>%</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>. The index improvement percentages show more intuitively the gain in prediction performance obtained with the EPS. Compared with the classical hybrid prediction models, the EPS achieves significant improvements in both prediction accuracy and fitting consistency.</p>
<p>
<xref ref-type="fig" rid="F3">Figure&#x20;3</xref> shows a comparison of the prediction results obtained using EPS and the comparison model when different numbers of prediction steps are&#x20;used.</p>
<p>
<bold>Key finding:</bold> The difference in the prediction results between the EPS system and other prediction models is significant. Specifically, under each dataset and for each prediction step, the EPS has better prediction performance. Therefore, it is concluded that the advanced ensemble prediction system has better carbon price forecasting ability and potential than the traditional single models and classical hybrid models.</p>
</sec>
<sec id="s4-6">
<title>
<italic>Experiment 3</italic>: Interval Forecasting: Uncertainty Analysis of Carbon Price</title>
<p>In Experiment 2, the accuracy and stability of the prediction system were assessed through deterministic point forecasts. However, point prediction results do not reflect the uncertainty in the dataset. To further demonstrate that the EPS has a wider range of adaptability than other predictive models, this section uses interval prediction to quantify the uncertainty of carbon prices. Unlike point prediction, interval prediction provides upper and lower bounds on the observed value, making it possible to construct a prediction interval at a given significance level. It can provide additional information to carbon market policymakers and help them analyze the carbon price market.</p>
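The basic mechanics of turning point forecasts into bounds can be sketched as follows. This is a minimal Python illustration using empirical error quantiles; the function name `prediction_interval` is hypothetical, and the article itself derives the quantiles from a fitted error distribution rather than directly from the sample, as described in the next subsection.

```python
import statistics

def prediction_interval(point_forecast, errors, alpha=0.05):
    """Shift a point forecast by the alpha/2 and 1 - alpha/2 quantiles of
    historical forecast errors to obtain a (1 - alpha) prediction interval.
    (The article instead draws these quantiles from a fitted distribution.)"""
    n = int(round(2 / alpha))             # alpha=0.05 -> 40 cut points in 2.5% steps
    cuts = statistics.quantiles(errors, n=n)
    lower_q, upper_q = cuts[0], cuts[-1]  # 2.5% and 97.5% error quantiles
    return point_forecast + lower_q, point_forecast + upper_q
```

Given a series of past one-step errors and a new point forecast, the function returns the lower and upper bounds of the interval at significance level alpha.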
<sec id="s4-6-1">
<title>Distribution Function of Prediction Error</title>
<p>In previous studies, the prediction errors of forecasting models were mostly assumed by default to follow a normal distribution. However, the normal distribution does not always reflect the actual error distribution of a forecasting model. Therefore, this research fits five candidate distribution functions with the MLE method and investigates the prediction errors in depth to identify a distribution function (DF) with better fitting performance. The most suitable probability distribution is then selected for interval prediction.</p>
<p>In this section, five DFs, namely, extreme value, normal, logistic, stable, and t location-scale, are used to represent the distribution of carbon price prediction errors. <xref ref-type="table" rid="T1">Table&#x20;1</xref> shows the corresponding PDFs of these DFs. <xref ref-type="table" rid="T8">Table&#x20;8</xref> lists the parameters of the five DFs estimated by the MLE method; these parameters describe the scale and location of each DF. In addition, the coefficient of determination (0 &#x2264; R<sup>2</sup> &#x2264; 1) and the RMSE are used to assess the goodness of fit: the larger the R<sup>2</sup> and the lower the RMSE, the better the fit. The index values reflecting the fitting abilities of the five DFs are shown in <xref ref-type="table" rid="T9">Table&#x20;9</xref> and <xref ref-type="fig" rid="F4">Figure&#x20;4</xref>.</p>
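The fitting-and-ranking procedure can be illustrated for the simplest of the five candidates, the normal DF, whose MLE has a closed form. The sketch below (illustrative Python, not the article's code; the function names are hypothetical, and a library such as scipy.stats would be needed to fit the stable or t location-scale DFs the same way) computes the MLE parameters and then the R<sup>2</sup>/RMSE criteria by comparing the fitted PDF against the empirical error density:

```python
import math
import random
import statistics

def fit_normal_mle(errors):
    """MLE for a normal DF: sample mean and the square root of the
    mean squared deviation (the biased MLE scale estimate)."""
    mu = statistics.fmean(errors)
    sigma = math.sqrt(statistics.fmean((e - mu) ** 2 for e in errors))
    return mu, sigma

def fit_quality(errors, mu, sigma, bins=20):
    """R^2 and RMSE between the empirical density histogram and the fitted
    normal PDF, the two criteria used here to rank candidate DFs."""
    lo, hi = min(errors), max(errors)
    width = (hi - lo) / bins
    centers, density = [], []
    for b in range(bins):
        left = lo + b * width
        count = sum(1 for e in errors
                    if left <= e < left + width or (b == bins - 1 and e == hi))
        centers.append(left + width / 2)
        density.append(count / (len(errors) * width))
    pdf = [math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
           for x in centers]
    rmse = math.sqrt(statistics.fmean((d - p) ** 2 for d, p in zip(density, pdf)))
    mean_d = statistics.fmean(density)
    ss_res = sum((d - p) ** 2 for d, p in zip(density, pdf))
    ss_tot = sum((d - mean_d) ** 2 for d in density)
    return 1 - ss_res / ss_tot, rmse
```

Repeating this for each candidate DF and keeping the one with the largest R<sup>2</sup> and smallest RMSE mirrors the selection reported in Table 9.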
<table-wrap id="T8" position="float">
<label>TABLE 8</label>
<caption>
<p>Parameter values of the different distribution functions determined by MLE.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Datasets</th>
<th colspan="2" align="center">Extreme value</th>
<th colspan="2" align="center">Logistic</th>
<th colspan="2" align="center">Normal</th>
<th colspan="4" align="center">Stable</th>
<th colspan="3" align="center">T Location-scale</th>
</tr>
<tr>
<th align="left">&#x2014;</th>
<th align="center">
<inline-formula id="inf164">
<mml:math id="m210">
<mml:mi>&#x3bc;</mml:mi>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf165">
<mml:math id="m211">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf166">
<mml:math id="m212">
<mml:mi>&#x3bc;</mml:mi>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf167">
<mml:math id="m213">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf168">
<mml:math id="m214">
<mml:mi>&#x3bc;</mml:mi>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf169">
<mml:math id="m215">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf170">
<mml:math id="m216">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf171">
<mml:math id="m217">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf172">
<mml:math id="m218">
<mml:mi>&#x3b2;</mml:mi>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf173">
<mml:math id="m219">
<mml:mi>&#x3b4;</mml:mi>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf174">
<mml:math id="m220">
<mml:mi>&#x3bc;</mml:mi>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf175">
<mml:math id="m221">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf176">
<mml:math id="m222">
<mml:mi>&#x3c5;</mml:mi>
</mml:math>
</inline-formula>
</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">
<bold>
<italic>EU ETS</italic>
</bold>
</td>
<td align="char" char=".">0.0197</td>
<td align="char" char=".">0.0342</td>
<td align="char" char=".">0.0028</td>
<td align="char" char=".">0.0189</td>
<td align="char" char=".">0.0030</td>
<td align="char" char=".">0.0334</td>
<td align="char" char=".">1.9347</td>
<td align="char" char=".">&#x2212;0.0264</td>
<td align="char" char=".">0.0227</td>
<td align="char" char=".">0.0031</td>
<td align="char" char=".">0.0030</td>
<td align="char" char=".">0.0314</td>
<td align="char" char=".">19.0078</td>
</tr>
<tr>
<td align="left">
<bold>
<italic>SZ</italic>
</bold>
</td>
<td align="char" char=".">0.2878</td>
<td align="char" char=".">0.7713</td>
<td align="char" char=".">&#x2212;0.0660</td>
<td align="char" char=".">0.3496</td>
<td align="char" char=".">&#x2212;0.0542</td>
<td align="char" char=".">0.6631</td>
<td align="char" char=".">1.7226</td>
<td align="char" char=".">0.2460</td>
<td align="char" char=".">0.3768</td>
<td align="char" char=".">&#x2212;0.092</td>
<td align="char" char=".">&#x2212;0.0691</td>
<td align="char" char=".">0.5031</td>
<td align="char" char=".">4.7444</td>
</tr>
<tr>
<td align="left">
<bold>
<italic>BJ</italic>
</bold>
</td>
<td align="char" char=".">0.1150</td>
<td align="char" char=".">0.4266</td>
<td align="char" char=".">&#x2212;0.0642</td>
<td align="char" char=".">0.1978</td>
<td align="char" char=".">&#x2212;0.0839</td>
<td align="char" char=".">0.4106</td>
<td align="char" char=".">1.2249</td>
<td align="char" char=".">&#x2212;0.1238</td>
<td align="char" char=".">0.1469</td>
<td align="char" char=".">&#x2212;0.047</td>
<td align="char" char=".">&#x2212;0.0531</td>
<td align="char" char=".">0.1665</td>
<td align="char" char=".">1.5469</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="T9" position="float">
<label>TABLE 9</label>
<caption>
<p>R<sup>2</sup> and RMSE values of the five distribution functions by MLE.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Datasets</th>
<th colspan="2" align="center">EU ETS</th>
<th colspan="2" align="center">SZ</th>
<th colspan="2" align="center">BJ</th>
</tr>
<tr>
<th align="left">&#x2014;</th>
<th align="center">R<sup>2</sup>
</th>
<th align="center">RMSE</th>
<th align="center">R<sup>2</sup>
</th>
<th align="center">RMSE</th>
<th align="center">R<sup>2</sup>
</th>
<th align="center">RMSE</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">
<bold>Extreme value</bold>
</td>
<td align="char" char=".">0.9086</td>
<td align="char" char=".">1.1897</td>
<td align="char" char=".">0.8115</td>
<td align="char" char=".">0.0941</td>
<td align="char" char=".">0.6103</td>
<td align="char" char=".">0.2913</td>
</tr>
<tr>
<td align="left">
<bold>Logistic</bold>
</td>
<td align="char" char=".">0.9546</td>
<td align="char" char=".">0.8382</td>
<td align="char" char=".">0.9685</td>
<td align="char" char=".">0.0384</td>
<td align="char" char=".">0.8780</td>
<td align="char" char=".">0.1630</td>
</tr>
<tr>
<td align="left">
<bold>Normal</bold>
</td>
<td align="char" char=".">0.9760</td>
<td align="char" char=".">0.6095</td>
<td align="char" char=".">0.9512</td>
<td align="char" char=".">0.0479</td>
<td align="char" char=".">0.7207</td>
<td align="char" char=".">0.2466</td>
</tr>
<tr>
<td align="left">
<bold>Stable</bold>
</td>
<td align="char" char=".">0.9771</td>
<td align="char" char=".">0.5956</td>
<td align="char" char=".">0.9507</td>
<td align="char" char=".">0.0481</td>
<td align="char" char=".">0.9668</td>
<td align="char" char=".">0.0851</td>
</tr>
<tr>
<td align="left">
<bold>T location-scale</bold>
</td>
<td align="char" char=".">0.9877</td>
<td align="char" char=".">0.4355</td>
<td align="char" char=".">0.9791</td>
<td align="char" char=".">0.0313</td>
<td align="char" char=".">0.9671</td>
<td align="char" char=".">0.0846</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption>
<p>Five distribution functions fit the distribution of EPS&#x20;error.</p>
</caption>
<graphic xlink:href="fenvs-09-740093-g004.tif"/>
</fig>
<p>
<xref ref-type="table" rid="T9">Table&#x20;9</xref> shows that the t location-scale function fits the EPS prediction error best. Its R<sup>2</sup> value is higher than 0.96, and its RMSE value is the lowest, indicating that it can provide better estimates in all cases, followed by stable distribution, normal distribution, logistic distribution, and extreme value distribution. In addition, although the stable distribution has a slightly worse fitting effect than the t location-scale distribution, it is still better than the normal distribution that the previous prediction error hypothesis obeys; this further proves the necessity of fitting the distribution of the prediction error. In addition, the motivation for estimating the distribution function of the carbon price dataset in this section is to prepare for further research on the establishment of carbon price interval predictions, as discussed in Section <italic>Interval Prediction of Carbon Price</italic>.</p>
</sec>
<sec id="s4-6-2">
<title>Interval Prediction of Carbon Price</title>
<p>Unlike the deterministic information given by a point forecast, an interval forecast provides the forecast range, a confidence level, and other uncertainty information on future values; this information helps decision-makers analyze and supervise the reasonable operation of the carbon price market. Owing to the limited generalization ability of any forecasting model, the complex fluctuation patterns of carbon price series, and other factors, forecast errors are inevitable, so the ability to transform the uncertainty caused by forecast errors into measurable features is of great significance. Therefore, in this study, a new interval prediction scheme based on modeling the prediction error distribution in the model training phase is proposed.</p>
<p>Based on the point prediction results of the proposed EPS system, the t location-scale distribution function identified in Section <italic>Distribution Function&#x20;of Prediction Error</italic> as the best fit for the prediction error, and the interval prediction method introduced in Section <italic>Interval Prediction Theory</italic>, the prediction interval is constructed at a given significance level &#x3b1;. To verify that the prediction interval constructed with the t location-scale model fits best, it is compared with intervals based on the other four error distribution functions.</p>
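A minimal sketch of this construction, assuming that out-of-sample errors follow the t location-scale law fitted on the training residuals; the synthetic forecast series, sample sizes, and variable names are illustrative, not taken from the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
train_errors = 0.03 * rng.standard_t(df=5, size=300)       # training residuals
point_forecast = 25.0 + np.cumsum(rng.normal(0, 0.1, 50))  # EPS point forecasts

# MLE fit of the t location-scale distribution to the training errors.
df_, loc, scale = stats.t.fit(train_errors)

alpha = 0.05  # significance level -> a (1 - alpha) prediction interval
lo_q = stats.t.ppf(alpha / 2, df_, loc=loc, scale=scale)
hi_q = stats.t.ppf(1 - alpha / 2, df_, loc=loc, scale=scale)

# Shift the error quantiles by the point forecast to obtain the bounds.
lower = point_forecast + lo_q
upper = point_forecast + hi_q
```

Repeating this with the other four fitted distributions yields the competing intervals compared below.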
<p>In addition, three evaluation indicators, FINAW, PICP, and AWD, listed in <xref ref-type="table" rid="T4">Table&#x20;4</xref>, are introduced in this section to assess the interval prediction performance. The optimal interval prediction should satisfy the following condition: at a given significance level <inline-formula id="inf177">
<mml:math id="m223">
<mml:mi>&#x3b1;</mml:mi>
</mml:math>
</inline-formula>, the larger the PICP value (0 &#x2264; PICP &#x2264; 1) and the smaller the FINAW value, the better the interval prediction performance. However, there is clearly a trade-off between PICP and FINAW: as the PICP increases, the corresponding average bandwidth measured by FINAW necessarily increases as well. Therefore, the AWD index is introduced as a supplement to the evaluation index system. <xref ref-type="table" rid="T10">Table&#x20;10</xref> shows the prediction intervals of the EPS system as evaluated on the three carbon price markets using five different error distributions.</p>
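For reference, the three indicators can be computed as below. The normalization of FINAW by the target range R and the AWD penalty follow common definitions in the interval-forecasting literature and are assumptions where the paper's exact formulas may differ:

```python
import numpy as np

def interval_scores(y, lower, upper):
    """PICP, FINAW, and AWD for a prediction interval.

    PICP is the fraction of targets covered; FINAW is the average
    interval width normalized by the target range R; AWD measures how
    far uncovered targets fall outside the interval, relative to its
    width (common definitions, assumed here).
    """
    y, lower, upper = map(np.asarray, (y, lower, upper))
    covered = (y >= lower) & (y <= upper)
    picp = covered.mean()

    R = y.max() - y.min()
    finaw = np.mean(upper - lower) / R

    width = upper - lower
    below = np.clip(lower - y, 0, None)   # distance below the lower bound
    above = np.clip(y - upper, 0, None)   # distance above the upper bound
    awd = np.mean((below + above) / width)
    return picp, finaw, awd
```

Higher PICP with lower FINAW and AWD indicates a better interval, matching the trade-off discussed above.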
<table-wrap id="T10" position="float">
<label>TABLE 10</label>
<caption>
<p>Carbon price interval prediction results based on EU ETS, SZ, and BJ under different confidence levels.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Site</th>
<th align="center">PINC</th>
<th align="center">Distribution</th>
<th align="center">PICP</th>
<th align="center">FINAW</th>
<th align="center">AWD</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td rowspan="15" align="left">
<bold>EU ETS</bold>
</td>
<td rowspan="5" align="center">95%</td>
<td align="left">EPS<italic>-Extreme value</italic>
</td>
<td align="char" char=".">97.3822</td>
<td align="char" char=".">0.0701</td>
<td align="char" char=".">0.0027</td>
</tr>
<tr>
<td align="left">EPS<italic>-Logistic</italic>
</td>
<td align="char" char=".">83.2461</td>
<td align="char" char=".">0.0387</td>
<td align="char" char=".">0.0354</td>
</tr>
<tr>
<td align="left">EPS<italic>-Normal</italic>
</td>
<td align="char" char=".">95.2880</td>
<td align="char" char=".">0.0516</td>
<td align="char" char=".">0.0083</td>
</tr>
<tr>
<td align="left">EPS<italic>-Stable</italic>
</td>
<td align="char" char=".">95.8115</td>
<td align="char" char=".">0.05361</td>
<td align="char" char=".">0.0073</td>
</tr>
<tr>
<td align="left">EPS<italic>-TLS</italic>
</td>
<td align="char" char=".">95.8115</td>
<td align="char" char=".">0.0542</td>
<td align="char" char=".">0.0070</td>
</tr>
<tr>
<td rowspan="5" align="center">90%</td>
<td align="left">EPS<italic>-Extreme value</italic>
</td>
<td align="char" char=".">96.8586</td>
<td align="char" char=".">0.0573</td>
<td align="char" char=".">0.0062</td>
</tr>
<tr>
<td align="left">EPS<italic>-Logistic</italic>
</td>
<td align="char" char=".">74.8691</td>
<td align="char" char=".">0.0316</td>
<td align="char" char=".">0.0584</td>
</tr>
<tr>
<td align="left">EPS<italic>-Normal</italic>
</td>
<td align="char" char=".">90.0109</td>
<td align="char" char=".">0.0412</td>
<td align="char" char=".">0.0194</td>
</tr>
<tr>
<td align="left">EPS<italic>-Stable</italic>
</td>
<td align="char" char=".">90.5759</td>
<td align="char" char=".">0.0443</td>
<td align="char" char=".">0.0162</td>
</tr>
<tr>
<td align="left">EPS<italic>-TLS</italic>
</td>
<td align="char" char=".">90.0524</td>
<td align="char" char=".">0.0426</td>
<td align="char" char=".">0.0188</td>
</tr>
<tr>
<td rowspan="5" align="center">80%</td>
<td align="left">EPS<italic>-Extreme value</italic>
</td>
<td align="char" char=".">90.5759</td>
<td align="char" char=".">0.0434</td>
<td align="char" char=".">0.0155</td>
</tr>
<tr>
<td align="left">EPS<italic>-Logistic</italic>
</td>
<td align="char" char=".">62.3037</td>
<td align="char" char=".">0.0240</td>
<td align="char" char=".">0.1170</td>
</tr>
<tr>
<td align="left">EPS<italic>-Normal</italic>
</td>
<td align="char" char=".">80.5340</td>
<td align="char" char=".">0.0331</td>
<td align="char" char=".">0.0460</td>
</tr>
<tr>
<td align="left">EPS<italic>-Stable</italic>
</td>
<td align="char" char=".">79.5812</td>
<td align="char" char=".">0.0342</td>
<td align="char" char=".">0.0419</td>
</tr>
<tr>
<td align="left">EPS<italic>-TLS</italic>
</td>
<td align="char" char=".">81.675</td>
<td align="char" char=".">0.0352</td>
<td align="char" char=".">0.0378</td>
</tr>
<tr>
<td align="left">
<bold>Site</bold>
</td>
<td align="center">
<bold>PINC</bold>
</td>
<td align="left">
<bold>Distribution</bold>
</td>
<td align="center">
<bold>PICP</bold>
</td>
<td align="center">
<bold>FINAW</bold>
</td>
<td align="center">
<bold>AWD</bold>
</td>
</tr>
<tr>
<td rowspan="15" align="left">
<bold>SZ</bold>
</td>
<td rowspan="5" align="center">95%</td>
<td align="left">EPS<italic>-Extreme value</italic>
</td>
<td align="char" char=".">98.0132</td>
<td align="char" char=".">0.1811</td>
<td align="char" char=".">0.0029</td>
</tr>
<tr>
<td align="left">EPS<italic>-Logistic</italic>
</td>
<td align="char" char=".">94.7020</td>
<td align="char" char=".">0.1191</td>
<td align="char" char=".">0.0115</td>
</tr>
<tr>
<td align="left">EPS<italic>-Normal</italic>
</td>
<td align="char" char=".">95.3642</td>
<td align="char" char=".">0.1240</td>
<td align="char" char=".">0.0105</td>
</tr>
<tr>
<td align="left">EPS<italic>-Stable</italic>
</td>
<td align="char" char=".">95.3642</td>
<td align="char" char=".">0.1207</td>
<td align="char" char=".">0.0112</td>
</tr>
<tr>
<td align="left">EPS-<italic>TLS</italic>
</td>
<td align="char" char=".">95.3642</td>
<td align="char" char=".">0.1225</td>
<td align="char" char=".">0.0100</td>
</tr>
<tr>
<td rowspan="5" align="center">90%</td>
<td align="left">EPS<italic>-Extreme value</italic>
</td>
<td align="char" char=".">95.3642</td>
<td align="char" char=".">0.1477</td>
<td align="char" char=".">0.0051</td>
</tr>
<tr>
<td align="left">EPS<italic>-Logistic</italic>
</td>
<td align="char" char=".">92.0530</td>
<td align="char" char=".">0.0970</td>
<td align="char" char=".">0.0224</td>
</tr>
<tr>
<td align="left">EPS<italic>-Normal</italic>
</td>
<td align="char" char=".">90.0514</td>
<td align="char" char=".">0.1028</td>
<td align="char" char=".">0.0183</td>
</tr>
<tr>
<td align="left">EPS<italic>-Stable</italic>
</td>
<td align="char" char=".">92.0530</td>
<td align="char" char=".">0.0927</td>
<td align="char" char=".">0.0242</td>
</tr>
<tr>
<td align="left">EPS<italic>-TLS</italic>
</td>
<td align="char" char=".">92.0530</td>
<td align="char" char=".">0.0967</td>
<td align="char" char=".">0.0222</td>
</tr>
<tr>
<td rowspan="5" align="center">80%</td>
<td align="left">EPS<italic>-Extreme value</italic>
</td>
<td align="char" char=".">90.7285</td>
<td align="char" char=".">0.1121</td>
<td align="char" char=".">0.0158</td>
</tr>
<tr>
<td align="left">EPS<italic>-Logistic</italic>
</td>
<td align="char" char=".">81.4570</td>
<td align="char" char=".">0.0724</td>
<td align="char" char=".">0.0507</td>
</tr>
<tr>
<td align="left">EPS<italic>-Normal</italic>
</td>
<td align="char" char=".">81.4570</td>
<td align="char" char=".">0.0706</td>
<td align="char" char=".">0.0547</td>
</tr>
<tr>
<td align="left">EPS<italic>-Stable</italic>
</td>
<td align="char" char=".">81.4570</td>
<td align="char" char=".">0.0682</td>
<td align="char" char=".">0.0585</td>
</tr>
<tr>
<td align="left">EPS<italic>-TLS</italic>
</td>
<td align="char" char=".">86.7550</td>
<td align="char" char=".">0.0801</td>
<td align="char" char=".">0.0375</td>
</tr>
<tr>
<td align="left">
<bold>Site</bold>
</td>
<td align="center">
<bold>PINC</bold>
</td>
<td align="left">
<bold>Distribution</bold>
</td>
<td align="center">
<bold>PICP</bold>
</td>
<td align="center">
<bold>FINAW</bold>
</td>
<td align="center">
<bold>AWD</bold>
</td>
</tr>
<tr>
<td rowspan="15" align="left">
<bold>BJ</bold>
</td>
<td rowspan="5" align="center">95%</td>
<td align="left">EPS-<italic>Extreme value</italic>
</td>
<td align="char" char=".">92.0530</td>
<td align="char" char=".">0.1546</td>
<td align="char" char=".">0.0110</td>
</tr>
<tr>
<td align="left">EPS-<italic>Logistic</italic>
</td>
<td align="char" char=".">89.4040</td>
<td align="char" char=".">0.1054</td>
<td align="char" char=".">0.0199</td>
</tr>
<tr>
<td align="left">EPS-<italic>Normal</italic>
</td>
<td align="char" char=".">92.0530</td>
<td align="char" char=".">0.1177</td>
<td align="char" char=".">0.0137</td>
</tr>
<tr>
<td align="left">EPS-<italic>Stable</italic>
</td>
<td align="char" char=".">96.6887</td>
<td align="char" char=".">0.1550</td>
<td align="char" char=".">0.0054</td>
</tr>
<tr>
<td align="left">EPS-<italic>TLS</italic>
</td>
<td align="char" char=".">96.0265</td>
<td align="char" char=".">0.1397</td>
<td align="char" char=".">0.0065</td>
</tr>
<tr>
<td rowspan="5" align="center">90%</td>
<td align="left">EPS-<italic>Extreme value</italic>
</td>
<td align="char" char=".">92.0530</td>
<td align="char" char=".">0.1263</td>
<td align="char" char=".">0.0175</td>
</tr>
<tr>
<td align="left">EPS-<italic>Logistic</italic>
</td>
<td align="char" char=".">85.4305</td>
<td align="char" char=".">0.0848</td>
<td align="char" char=".">0.0401</td>
</tr>
<tr>
<td align="left">EPS-<italic>Normal</italic>
</td>
<td align="char" char=".">88.0795</td>
<td align="char" char=".">0.0987</td>
<td align="char" char=".">0.0258</td>
</tr>
<tr>
<td align="left">EPS-<italic>Stable</italic>
</td>
<td align="char" char=".">86.0927</td>
<td align="char" char=".">0.0900</td>
<td align="char" char=".">0.0354</td>
</tr>
<tr>
<td align="left">EPS-<italic>TLS</italic>
</td>
<td align="char" char=".">91.7951</td>
<td align="char" char=".">0.1017</td>
<td align="char" char=".">0.0224</td>
</tr>
<tr>
<td rowspan="5" align="center">80%</td>
<td align="left">EPS-<italic>Extreme value</italic>
</td>
<td align="char" char=".">88.7417</td>
<td align="char" char=".">0.0958</td>
<td align="char" char=".">0.0332</td>
</tr>
<tr>
<td align="left">EPS-<italic>Logistic</italic>
</td>
<td align="char" char=".">84.1060</td>
<td align="char" char=".">0.0632</td>
<td align="char" char=".">0.0793</td>
</tr>
<tr>
<td align="left">EPS-<italic>Normal</italic>
</td>
<td align="char" char=".">85.4305</td>
<td align="char" char=".">0.0769</td>
<td align="char" char=".">0.0524</td>
</tr>
<tr>
<td align="left">EPS-<italic>Stable</italic>
</td>
<td align="char" char=".">75.4967</td>
<td align="char" char=".">0.0520</td>
<td align="char" char=".">0.1179</td>
</tr>
<tr>
<td align="left">EPS-<italic>TLS</italic>
</td>
<td align="char" char=".">80.7947</td>
<td align="char" char=".">0.0522</td>
<td align="char" char=".">0.1134</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn>
<p>
<italic>Note</italic>: In the table above, PICP, FINAW, and AWD are selected to verify the prediction performance of different models, where <inline-formula id="inf178">
<mml:math id="m224">
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mi>I</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>P</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mstyle displaystyle="true">
<mml:msubsup>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#xd7;</mml:mo>
<mml:mn>100</mml:mn>
<mml:mo>%</mml:mo>
</mml:mrow>
</mml:mstyle>
<mml:mo>/</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf179">
<mml:math id="m225">
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mi>I</mml:mi>
<mml:mi>N</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>W</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mstyle displaystyle="true">
<mml:msubsup>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>U</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mstyle>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#xd7;</mml:mo>
<mml:mn>100</mml:mn>
<mml:mo>%</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf180">
<mml:math id="m226">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>W</mml:mi>
<mml:mi>D</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mstyle displaystyle="true">
<mml:msubsup>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>W</mml:mi>
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mstyle>
<mml:mo>/</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>In theory, when the PICP is greater than the nominal confidence level (PINC), the constructed prediction interval is valid. As seen from <xref ref-type="table" rid="T10">Table&#x20;10</xref>, the models satisfying this condition are EPS-TLS and EPS-Extreme value; these models are valid at the 95, 90, and 80% confidence levels. However, if a high PICP were the only goal, the result would become meaningless, as an increased PICP inevitably leads to a larger FINAW. Under different <inline-formula id="inf181">
<mml:math id="m227">
<mml:mi>&#x3b1;</mml:mi>
</mml:math>
</inline-formula> conditions, the value of <inline-formula id="inf182">
<mml:math id="m228">
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mi>I</mml:mi>
<mml:mi>N</mml:mi>
<mml:mi>A</mml:mi>
<mml:msub>
<mml:mi>W</mml:mi>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>S</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>E</mml:mi>
<mml:mi>x</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>e</mml:mi>
<mml:mtext>&#xa0;</mml:mtext>
<mml:mi>v</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>l</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is significantly higher than the FINAW values obtained by modeling the other distributions. At the same time, considering that the extreme value distribution fits the EPS prediction error poorly (Section <italic>Distribution Function of Prediction Error</italic>), the prediction interval constructed by EPS-Extreme value cannot be considered reasonable.</p>
<p>The PICP of EPS-TLS on the data from the three carbon&#x20;trading markets exceeds the nominal confidence level determined by the significance level <inline-formula id="inf183">
<mml:math id="m229">
<mml:mi>&#x3b1;</mml:mi>
</mml:math>
</inline-formula>. In addition, under different <inline-formula id="inf184">
<mml:math id="m230">
<mml:mi>&#x3b1;</mml:mi>
</mml:math>
</inline-formula>, the FINAW values of the three markets are&#x20;<inline-formula id="inf185">
<mml:math id="m231">
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mi>I</mml:mi>
<mml:mi>N</mml:mi>
<mml:mi>A</mml:mi>
<mml:msubsup>
<mml:mi>W</mml:mi>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>S</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>T</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>U</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.0542</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf186">
<mml:math id="m232">
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mi>I</mml:mi>
<mml:mi>N</mml:mi>
<mml:mi>A</mml:mi>
<mml:msubsup>
<mml:mi>W</mml:mi>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>S</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>T</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>Z</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.1225</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf187">
<mml:math id="m233">
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mi>I</mml:mi>
<mml:mi>N</mml:mi>
<mml:mi>A</mml:mi>
<mml:msubsup>
<mml:mi>W</mml:mi>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>S</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>T</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>B</mml:mi>
<mml:mi>J</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.1397</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>. The EPS-TLS FINAW value is not the smallest among the five interval prediction models, but in exchange for only a small increase in FINAW, the other index values achieve better results. All things considered, this trade-off is acceptable.</p>
<p>For the prediction interval constructed using a normal distribution, in the BJ dataset, the PICP values of EPS-Normal are <inline-formula id="inf188">
<mml:math id="m234">
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mi>I</mml:mi>
<mml:mi>C</mml:mi>
<mml:msubsup>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>S</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>N</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>l</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>B</mml:mi>
<mml:mi>J</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>92.0530</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf189">
<mml:math id="m235">
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mi>I</mml:mi>
<mml:mi>C</mml:mi>
<mml:msubsup>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>S</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>N</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>l</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>B</mml:mi>
<mml:mi>J</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>88.0795</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> at nominal confidence levels of 95 and 90%, respectively; they fail to meet the requirement that the PICP exceed the nominal confidence level <inline-formula id="inf190">
<mml:math id="m236">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. This also reflects the necessity of detailed research on the error distribution. For AWD indicators, although <inline-formula id="inf191">
<mml:math id="m237">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>W</mml:mi>
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>S</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>T</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is not uniformly better than that of the benchmark models, the differences are small. Considering the comprehensive performance of the three indicators, EPS-TLS still has obvious advantages over the four benchmark models in constructing prediction intervals.</p>
<p>In addition, the carbon price prediction intervals generated by the three proposed schemes for the three carbon trading markets are shown in <xref ref-type="fig" rid="F5">Figure&#x20;5</xref>. It can be observed that EPS-TLS has a narrower bandwidth and that most target values fall within its constructed prediction intervals. The constructed confidence intervals are thus highly effective.</p>
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption>
<p>Carbon price prediction intervals generated by the three proposed schemes.</p>
</caption>
<graphic xlink:href="fenvs-09-740093-g005.tif"/>
</fig>
</sec>
</sec>
</sec>
<sec sec-type="discussion" id="s5">
<title>Discussion</title>
<p>In this section, we will discuss the robustness, application, and further development of EPS in the carbon price market.</p>
<sec id="s5-1">
<title>Robustness Discussion</title>
<p>Because the results of both deep learning and metaheuristic optimization algorithms always involve randomness and probabilistic mechanisms, the results of each experiment will deviate even when the parameters are set exactly the same. At the same time, in actual forecasting, the future values of the carbon price cannot be known in advance; thus, it is impossible to verify future values with the evaluation indices beforehand. Therefore, the stability of the EPS is also an important factor that affects the prediction.</p>
<p>The standard deviation is an effective measure of system stability. It can be expressed as <inline-formula id="inf192">
<mml:math id="m238">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>SD</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>M</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:msubsup>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>M</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mo>-</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>M</mml:mi>
<mml:mo>&#xaf;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mo>/</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:math>
</inline-formula>, where <italic>n</italic> is the number of repeated training runs, <italic>M</italic>
<sub>
<italic>k</italic>
</sub> is the value of the evaluation metric in the <italic>k</italic>-th run, and <italic>M&#x304;</italic> is the average over the <italic>n</italic> runs (<xref ref-type="bibr" rid="B37">Xiao et&#x20;al., 2017</xref>). The smaller the value of <italic>SD</italic>, the higher the stability of the&#x20;model.</p>
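The stability measure above amounts to the population standard deviation of an evaluation metric over the repeated runs; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def run_stability(metric_values):
    """SD(M) = sqrt(sum_k (M_k - mean(M))^2 / n): population standard
    deviation of an evaluation metric over n repeated training runs."""
    m = np.asarray(metric_values, dtype=float)
    return float(np.sqrt(np.mean((m - m.mean()) ** 2)))
```

For example, the MAPE values collected over 30 runs of each model would be compared via this quantity; the smaller the value, the more stable the model.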
<p>To evaluate the stability of the different models, the <italic>SD</italic>
<sub>
<italic>(M)</italic>
</sub> values of four evaluation indices were calculated in 30 prediction experiments using three carbon price datasets.</p>
<p>
<xref ref-type="table" rid="T11">Table&#x20;11</xref> shows a comparison of the stability test results of different prediction systems based on ICEEMDAN processing. In the EU ETS dataset, ICEE-ELM has good stability (<inline-formula id="inf193">
<mml:math id="m239">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>E</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.0105</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>), but the stability is still slightly lower than that of the EPS prediction system (<inline-formula id="inf194">
<mml:math id="m240">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.0101</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>). In the BJ and SZ datasets, CNN achieved good prediction accuracy in the previous experiments, but its robustness is poor and its prediction results fluctuate considerably. In contrast, EPS obtains a small SD value regardless of the dataset used. This further shows that the robustness of a single prediction system varies across datasets and indicates that EPS, with its combination weighting strategy, can be considered the method that delivers the best overall prediction performance.</p>
<table-wrap id="T11" position="float">
<label>TABLE 11</label>
<caption>
<p>Stability of different forecasting methods.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Model</th>
<th align="center">ICEE-ELM</th>
<th align="center">ICEE-GBiLSTM</th>
<th align="center">ICEE-CNN</th>
<th align="center">EPS</th>
<th align="center">ICEE-LSTM</th>
<th align="center">ICEE-GWO-BP</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td colspan="7" align="left">
<bold>EU ETS</bold>
</td>
</tr>
<tr>
<td align="left">&#x2003;SD<sub>(MAPE)</sub>
</td>
<td align="char" char=".">0.0105</td>
<td align="char" char=".">0.0243</td>
<td align="char" char=".">0.0902</td>
<td align="char" char=".">
<bold>0.0101</bold>
</td>
<td align="char" char=".">0.3277</td>
<td align="char" char=".">0.0583</td>
</tr>
<tr>
<td align="left">&#x2003;SD<sub>(RMSE)</sub>
</td>
<td align="char" char=".">0.0004</td>
<td align="char" char=".">0.0007</td>
<td align="char" char=".">0.0041</td>
<td align="char" char=".">
<bold>0.0003</bold>
</td>
<td align="char" char=".">0.0042</td>
<td align="char" char=".">0.0007</td>
</tr>
<tr>
<td align="left">&#x2003;SD<sub>(MAE)</sub>
</td>
<td align="char" char=".">
<bold>0.0005</bold>
</td>
<td align="char" char=".">0.0008</td>
<td align="char" char=".">0.0044</td>
<td align="char" char=".">
<bold>0.0005</bold>
</td>
<td align="char" char=".">0.0033</td>
<td align="char" char=".">
<bold>0.0005</bold>
</td>
</tr>
<tr>
<td align="left">&#x2003;SD<sub>(IA)</sub>
</td>
<td align="char" char=".">
<bold>0.0001</bold>
</td>
<td align="char" char=".">
<bold>0.0001</bold>
</td>
<td align="char" char=".">0.0007</td>
<td align="char" char=".">
<bold>0.0001</bold>
</td>
<td align="char" char=".">0.0007</td>
<td align="char" char=".">
<bold>0.0001</bold>
</td>
</tr>
<tr>
<td colspan="7" align="left">
<bold>BJ</bold>
</td>
</tr>
<tr>
<td align="left">&#x2003;SD<sub>(MAPE)</sub>
</td>
<td align="char" char=".">0.0255</td>
<td align="char" char=".">0.0204</td>
<td align="char" char=".">0.0757</td>
<td align="char" char=".">
<bold>0.0147</bold>
</td>
<td align="char" char=".">0.0438</td>
<td align="char" char=".">0.0171</td>
</tr>
<tr>
<td align="left">&#x2003;SD<sub>(RMSE)</sub>
</td>
<td align="char" char=".">0.0091</td>
<td align="char" char=".">0.0117</td>
<td align="char" char=".">0.0224</td>
<td align="char" char=".">
<bold>0.0106</bold>
</td>
<td align="char" char=".">0.0212</td>
<td align="char" char=".">0.0107</td>
</tr>
<tr>
<td align="left">&#x2003;SD<sub>(MAE)</sub>
</td>
<td align="char" char=".">0.0129</td>
<td align="char" char=".">0.0123</td>
<td align="char" char=".">0.0379</td>
<td align="char" char=".">
<bold>0.0114</bold>
</td>
<td align="char" char=".">0.0321</td>
<td align="char" char=".">0.0819</td>
</tr>
<tr>
<td align="left">&#x2003;SD<sub>(IA)</sub>
</td>
<td align="char" char=".">
<bold>0.0011</bold>
</td>
<td align="char" char=".">
<bold>0.0011</bold>
</td>
<td align="char" char=".">0.0019</td>
<td align="char" char=".">0.0012</td>
<td align="char" char=".">0.0033</td>
<td align="char" char=".">0.0038</td>
</tr>
<tr>
<td colspan="7" align="left">
<bold>SZ</bold>
</td>
</tr>
<tr>
<td align="left">&#x2003;SD<sub>(MAPE)</sub>
</td>
<td align="char" char=".">0.1845</td>
<td align="char" char=".">0.1130</td>
<td align="char" char=".">0.7846</td>
<td align="char" char=".">
<bold>0.1124</bold>
</td>
<td align="char" char=".">0.3831</td>
<td align="char" char=".">0.1695</td>
</tr>
<tr>
<td align="left">&#x2003;SD<sub>(RMSE)</sub>
</td>
<td align="char" char=".">
<bold>0.0344</bold>
</td>
<td align="char" char=".">0.1363</td>
<td align="char" char=".">0.1745</td>
<td align="char" char=".">0.1352</td>
<td align="char" char=".">0.1443</td>
<td align="char" char=".">0.2029</td>
</tr>
<tr>
<td align="left">&#x2003;SD<sub>(MAE)</sub>
</td>
<td align="char" char=".">0.0448</td>
<td align="char" char=".">0.0062</td>
<td align="char" char=".">0.1934</td>
<td align="char" char=".">
<bold>0.0054</bold>
</td>
<td align="char" char=".">0.0179</td>
<td align="char" char=".">0.0348</td>
</tr>
<tr>
<td align="left">&#x2003;SD<sub>(IA)</sub>
</td>
<td align="char" char=".">0.0021</td>
<td align="char" char=".">0.0061</td>
<td align="char" char=".">0.0076</td>
<td align="char" char=".">0.0018</td>
<td align="char" char=".">0.0067</td>
<td align="char" char=".">
<bold>0.0015</bold>
</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn>
<p>
<italic>Note</italic>: ICEE is an abbreviation for ICEEMDAN. The best indicator values are shown in bold&#x20;type.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>It is worth mentioning that the average prediction stability of GBiLSTM across the three datasets is better than that of the traditional LSTM model; that is, <inline-formula id="inf195">
<mml:math id="m241">
<mml:mrow>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>GBiLSTM</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo stretchy="true">&#xaf;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.0496</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, but <inline-formula id="inf196">
<mml:math id="m242">
<mml:mrow>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo stretchy="true">&#xaf;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.0566</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>. The results show that the proposed GBiLSTM not only achieves better prediction accuracy than LSTM but also offers significantly improved robustness.</p>
</sec>
<sec id="s5-2">
<title>Application of the Proposed Ensemble Prediction System</title>
<p>
<list list-type="simple">
<list-item>
<p>1) A stable carbon price prediction system plays a prominent role in the initial allocation of the carbon quota, in transaction pricing and in effective monitoring of market risk. The proposed EPS system not only shows accurate point prediction performance but also reasonably analyzes and mines the potential uncertainty of carbon prices by constructing the carbon price prediction interval based on error distribution fitting.</p>
</list-item>
<list-item>
<p>2) The proposed EPS system combines a deep learning framework with a traditional neural network and thereby provides a new idea for carbon price prediction and an effective reference tool that policymakers can use to research the volatility of the carbon market.</p>
</list-item>
<list-item>
<p>3) Comparing EU ETS market data with the carbon price markets in Shenzhen and Beijing helps China analyze the price evolution of the mature EU carbon trading market; this will help the regulatory authorities adjust policy and ensure the steady development of China&#x2019;s carbon market.</p>
</list-item>
<list-item>
<p>4) EPS has high practical value and strong extensibility and can easily fit highly volatile nonlinear time series. It thus provides a new intelligent supervision scheme for building a sound global carbon trading market in the future. Moreover, a deep learning ensemble forecasting system with high accuracy and strong stability is expected to become a new research direction in energy and financial markets.</p>
</list-item>
</list>
</p>
</sec>
<sec id="s5-3">
<title>Suggestions on Further Improvement of Carbon Price Market</title>
<p>More accurate prediction of carbon prices can support effective suggestions through which governments and enterprises can build and improve the carbon market in the future. These are outlined&#x20;below.</p>
<sec id="s5-3-1">
<title>Improvement of the Initial Allocation Mechanism of Carbon Emission Rights</title>
<p>In the initial allocation of carbon quotas, we should pay attention to the fairness of allocation. First, the government should formulate incentive policies to encourage regional governments and local enterprises to reduce emissions and should give appropriate incentives or policy support to the regions and enterprises that use emission reduction technologies. Second, the initial allocation of carbon emission rights requires effective operation and an effective regulatory system; both of these components directly affect the efficiency and fairness of carbon quota allocation. Strengthening the construction of a carbon emission rights regulatory system will help achieve efficiency and fairness of resource allocation as China&#x2019;s total emission reduction target is being&#x20;met.</p>
</sec>
<sec id="s5-3-2">
<title>Rationalization of the Carbon Trading Pricing System</title>
<p>Owing to the imperfect development of the carbon trading market and the carbon trading pricing system, the carbon trading price is easily affected by monopoly enterprises. At present, there is a certain monopoly phenomenon in the carbon trading market in some regions of China. Some small buyer enterprises can only passively accept the carbon price, and this allows monopoly enterprises to disproportionately influence the supply and demand of the market and reduces total social welfare. The establishment of a reasonable pricing system that avoids monopoly price manipulation is conducive to the return of carbon prices to real value levels and to the optimal allocation of resources.</p>
</sec>
<sec id="s5-3-3">
<title>Improvement of the Carbon Market Risk Management and Control System</title>
<p>In the process of price fluctuation risk control, an accurate and effective price forecasting model can be used to monitor price fluctuations. Using the relevant data, such a model can be used to predict long-term and short-term carbon trading prices, predict future fluctuation trends, and establish an effective carbon trading price risk early warning index system to effectively monitor the volatility risk caused by market price fluctuations. Through prediction of the carbon trading market price, we can grasp the fluctuations in carbon prices and take measures in advance to exercise macro control and reduce the level of risk when large market price fluctuations&#x20;occur.</p>
</sec>
</sec>
</sec>
<sec sec-type="conclusion" id="s6">
<title>Conclusion</title>
<p>The availability of a reliable carbon price forecasting system is significant for the emissions trading market because it can help decision-makers evaluate climate policies and adjust the emission ceiling to maintain the reliable operation of the market. In this study, the EPS system, which adopts advanced data feature extraction and selection methods, combines the three optimal submodels through a multiobjective dragonfly optimization algorithm and explores both deterministic and probabilistic prediction of carbon price series. This study has several important implications: 1) ICEEMDAN extracts data features better than traditional signal decomposition methods, which improves the accuracy of the prediction system. 2) The deep learning algorithms forecast carbon price series better than the other algorithms, and the developed GBiLSTM model achieves better prediction performance and stability than the traditional LSTM. 3) Unlike previous studies that assumed the prediction error obeys a Gaussian distribution, this study examines five candidate distribution functions for the prediction error, identifies the most accurate one, and constructs a more reasonable carbon price prediction interval. The experimental results indicate that the EPS prediction system achieves the best prediction performance, with MAPE values of 1.2657, 4.0156, and 1.0064% for the three datasets. In addition, according to the optimal distribution fitting function of the EPS prediction error, the carbon price prediction interval is constructed to mine the uncertainty of carbon price fluctuations. At various significance levels, the constructed prediction interval contains most of the observations, showing that the interval prediction performs well. Therefore, the system is an effective supplement to the existing carbon price prediction research framework and can help the government reduce market risk and stabilize the market.</p>
<p>Although the combined prediction system proposed in this article achieves good prediction performance, it still has some limitations arising from practical constraints. Future research will analyze the carbon price trend from two perspectives, namely, historical carbon price series and external factors, to obtain more accurate and stable prediction results.</p>
</sec>
</body>
<back>
<sec id="s7">
<title>Data Availability Statement</title>
<p>The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.</p>
</sec>
<sec id="s8">
<title>Author Contributions</title>
<p>YY: Conceptualization, Methodology. HG: Writing-Reviewing and Editing, Data curation. YJ: Formal analysis, Supervision. AS: Software, Validation and Visualization, Investigation.</p>
</sec>
<sec id="s9">
<title>Funding</title>
<p>This work was supported by the Major Project of the National Social Science Fund (Grant No. 17ZDA093).</p>
</sec>
<sec sec-type="COI-statement" id="s10">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s11">
<title>Publisher&#x2019;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Arouri</surname>
<given-names>M. E. H.</given-names>
</name>
<name>
<surname>Jawadi</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Nguyen</surname>
<given-names>D. K.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>Nonlinearities in Carbon Spot-Futures price Relationships during Phase II of the EU ETS</article-title>. <source>Econ. Model.</source> <volume>29</volume> (<issue>3</issue>), <fpage>884</fpage>&#x2013;<lpage>892</lpage>. <pub-id pub-id-type="doi">10.1016/j.econmod.2011.11.003</pub-id> </citation>
</ref>
<ref id="B2">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Atsalakis</surname>
<given-names>G. S.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Using Computational Intelligence to Forecast Carbon Prices</article-title>. <source>Appl. Soft Comput.</source> <volume>43</volume>, <fpage>107</fpage>&#x2013;<lpage>116</lpage>. <pub-id pub-id-type="doi">10.1016/j.asoc.2016.02.029</pub-id> </citation>
</ref>
<ref id="B3">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Benz</surname>
<given-names>E. A.</given-names>
</name>
<name>
<surname>Tr&#xfc;ck</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2009</year>). <article-title>Modeling the price Dynamics of CO2 Emission Allowances</article-title>. <source>Energ. Econ.</source> <volume>31</volume>, <fpage>4</fpage>&#x2013;<lpage>15</lpage>. <pub-id pub-id-type="doi">10.1016/j.eneco.2008.07.003</pub-id> </citation>
</ref>
<ref id="B4">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Byun</surname>
<given-names>S. J.</given-names>
</name>
<name>
<surname>Cho</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2013</year>). <article-title>Forecasting Carbon Futures Volatility Using GARCH Models with Energy Volatilities</article-title>. <source>Energ. Econ.</source> <volume>40</volume>, <fpage>207</fpage>&#x2013;<lpage>221</lpage>. <pub-id pub-id-type="doi">10.1016/j.eneco.2013.06.017</pub-id> </citation>
</ref>
<ref id="B5">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Dynamic Ensemble Wind Speed Prediction Model Based on Hybrid Deep Reinforcement Learning</article-title>. <source>Adv. Eng. Inform.</source> <volume>48</volume>, <fpage>101290</fpage>. <pub-id pub-id-type="doi">10.1016/J.AEI.2021.101290</pub-id> </citation>
</ref>
<ref id="B6">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cheng</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>A New Combined Model Based on Multi-Objective Salp Swarm Optimization for Wind Speed Forecasting</article-title>. <source>Appl. Soft Comput.</source> <volume>92</volume>, <fpage>106294</fpage>. <pub-id pub-id-type="doi">10.1016/j.asoc.2020.106294</pub-id> </citation>
</ref>
<ref id="B7">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chevallier</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2009</year>). <article-title>Carbon Futures and Macroeconomic Risk Factors: A View from the EU ETS</article-title>. <source>Energ. Econ.</source> <volume>31</volume> (<issue>4</issue>), <fpage>614</fpage>&#x2013;<lpage>625</lpage>. <pub-id pub-id-type="doi">10.1016/j.eneco.2009.02.008</pub-id> </citation>
</ref>
<ref id="B8">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Chung</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Gulcehre</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Cho</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Bengio</surname>
<given-names>Y.</given-names>
</name>
</person-group> (<year>2014</year>). &#x201c;<article-title>Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling</article-title>,&#x201d; in <source>NIPS 2014 Workshop on Deep Learning</source>. <comment>ArXiv e-prints Available at: <ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/1412.3555">https://arxiv.org/abs/1412.3555</ext-link>
</comment>. </citation>
</ref>
<ref id="B9">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Colominas</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Schlotthauer</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Torres</surname>
<given-names>M. E.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>Improved Complete Ensemble EMD: a Suitable Tool for Biomedical Signal Processing</article-title>. <source>Biomed. Signal Process. Control.</source> <volume>14</volume>, <fpage>19</fpage>&#x2013;<lpage>29</lpage>. <pub-id pub-id-type="doi">10.1016/j.bspc.2014.06.009</pub-id> </citation>
</ref>
<ref id="B10">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Du</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Niu</surname>
<given-names>T.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Point and Interval Forecasting for Metal Prices Based on Variational Mode Decomposition and an Optimized Outlier-Robust Extreme Learning Machine</article-title>. <source>Resour. Pol.</source> <volume>69</volume>, <fpage>101881</fpage>. <pub-id pub-id-type="doi">10.1016/J.RESOURPOL.2020.101881</pub-id> </citation>
</ref>
<ref id="B11">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fan</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Tian</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>Chaotic Characteristic Identification for Carbon price and an Multi-Layer Perceptron Network Prediction Model</article-title>. <source>Expert Syst. Appl.</source> <volume>42</volume> (<issue>8</issue>), <fpage>3945</fpage>&#x2013;<lpage>3952</lpage>. <pub-id pub-id-type="doi">10.1016/j.eswa.2014.12.047</pub-id> </citation>
</ref>
<ref id="B12">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hochreiter</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Schmidhuber</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>1997</year>). <article-title>Long Short-Term Memory</article-title>. <source>Neural Comput.</source> <volume>9</volume>, <fpage>1735</fpage>&#x2013;<lpage>1780</lpage>. <pub-id pub-id-type="doi">10.1162/neco.1997.9.8.1735</pub-id> </citation>
</ref>
<ref id="B13">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Huang</surname>
<given-names>G-B.</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>Q-Y.</given-names>
</name>
<name>
<surname>Siew</surname>
<given-names>C-K.</given-names>
</name>
</person-group> (<year>2006</year>). <article-title>Extreme Learning Machine: Theory and Applications</article-title>. <source>Neurocomputing</source> <volume>70</volume>, <fpage>489</fpage>&#x2013;<lpage>501</lpage>. <pub-id pub-id-type="doi">10.1016/j.neucom.2005.12.126</pub-id> </citation>
</ref>
<ref id="B14">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jiang</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Tao</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Dong</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Xiong</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Robust Low-Rank Multiple Kernel Learning with Compound Regularization</article-title>. <source>Eur. J.&#x20;Oper. Res.</source> <volume>295</volume>, <fpage>634</fpage>&#x2013;<lpage>647</lpage>. <pub-id pub-id-type="doi">10.1016/J.EJOR.2020.12.024</pub-id> </citation>
</ref>
<ref id="B15">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jiang</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Luo</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Dong</surname>
<given-names>Y.</given-names>
</name>
</person-group> (<year>2021a</year>). <article-title>Simultaneous Feature Selection and Clustering Based on Square Root Optimization</article-title>. <source>Eur. J.&#x20;Oper. Res.</source> <volume>289</volume>, <fpage>214</fpage>&#x2013;<lpage>231</lpage>. <pub-id pub-id-type="doi">10.1016/j.ejor.2020.06.045</pub-id> </citation>
</ref>
<ref id="B16">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jiang</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Zheng</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Dong</surname>
<given-names>Y.</given-names>
</name>
</person-group> (<year>2021b</year>). <article-title>Sparse and Robust Estimation with ridge Minimax Concave Penalty</article-title>. <source>Inf. Sci.</source> <volume>571</volume>, <fpage>154</fpage>&#x2013;<lpage>174</lpage>. <pub-id pub-id-type="doi">10.1016/J.INS.2021.04.047</pub-id> </citation>
</ref>
<ref id="B17">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jiang</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Niu</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>A Combined Forecasting System Based on Statistical Method, Artificial Neural Networks, and Deep Learning Methods for Short-Term Wind Speed Forecasting</article-title>. <source>Energy</source> <volume>217</volume>, <fpage>119361</fpage>. <pub-id pub-id-type="doi">10.1016/J.ENERGY.2020.119361</pub-id> </citation>
</ref>
<ref id="B18">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jin</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Guo</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Song</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>A Hybrid System Based on LSTM for Short-Term Power Load Forecasting</article-title>. <source>Energies</source> <volume>13</volume> (<issue>23</issue>), <fpage>6241</fpage>. <pub-id pub-id-type="doi">10.3390/en13236241</pub-id> </citation>
</ref>
<ref id="B19">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Jiang</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Niu</surname>
<given-names>X.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>A Combined Forecasting Model for Time Series: Application to Short-Term Wind Speed Forecasting</article-title>. <source>Appl. Energ.</source> <volume>259</volume>, <fpage>114137</fpage>. <pub-id pub-id-type="doi">10.1016/j.apenergy.2019.114137</pub-id> </citation>
</ref>
<ref id="B20">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Jiang</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Ensemble Forecasting System for Short-Term Wind Speed Forecasting Based on Optimal Sub-model Selection and Multi-Objective Version of Mayfly Optimization Algorithm</article-title>. <source>Expert Syst. Appl.</source> <volume>177</volume>, <fpage>114974</fpage>. <pub-id pub-id-type="doi">10.1016/J.ESWA.2021.114974</pub-id> </citation>
</ref>
<ref id="B21">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lu</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Azimi</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Carbon Trading Volume and price Forecasting in China Using Multiple Machine Learning Models</article-title>. <source>J.&#x20;Clean. Prod.</source> <volume>249</volume>, <fpage>119386</fpage>. <pub-id pub-id-type="doi">10.1016/j.jclepro.2019.119386</pub-id> </citation>
</ref>
<ref id="B22">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mirjalili</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Dragonfly Algorithm: a New Meta-Heuristic Optimization Technique for Solving Single-Objective, Discrete, and Multi-Objective Problems</article-title>. <source>Neural Comput. Applic</source> <volume>27</volume>, <fpage>1053</fpage>&#x2013;<lpage>1073</lpage>. <pub-id pub-id-type="doi">10.1007/s00521-015-1920-1</pub-id> </citation>
</ref>
<ref id="B23">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Niu</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Lu</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Du</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Developing a Deep Learning Framework with Two-Stage Feature Selection for Multivariate Financial Time Series Forecasting</article-title>. <source>Expert Syst. Appl.</source> <volume>148</volume>, <fpage>113237</fpage>. <pub-id pub-id-type="doi">10.1016/j.eswa.2020.113237</pub-id> </citation>
</ref>
<ref id="B24">
<citation citation-type="web">
<person-group person-group-type="author">
<name>
<surname>Shi</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Yeung</surname>
<given-names>D. Y.</given-names>
</name>
<name>
<surname>Wong</surname>
<given-names>W. K.</given-names>
</name>
<name>
<surname>Woo</surname>
<given-names>W. C.</given-names>
</name>
</person-group> (<year>2015</year>). &#x201c;<article-title>Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting</article-title>,&#x201d; in <conf-name>Advances in Neural Information Processing Systems (NIPS 2015)</conf-name>. <comment>ArXiv e-prints Available at: <ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/1506.04214">https://arxiv.org/abs/1506.04214</ext-link>
</comment>. </citation>
</ref>
<ref id="B25">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Song</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>Elite Opposition Learning and Exponential Function Steps-Based Dragonfly Algorithm for Global Optimization</article-title>. <source>IEEE Int. Conf. Inf. Autom. ICIA</source>. <pub-id pub-id-type="doi">10.1109/ICInfA.2017.8079080</pub-id> </citation>
</ref>
<ref id="B26">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Song</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Qin</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Qu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>F.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>The Forecasting Research of Early Warning Systems for Atmospheric Pollutants: A Case in Yangtze River Delta Region</article-title>. <source>Atmos. Environ.</source> <volume>118</volume>, <fpage>58</fpage>&#x2013;<lpage>69</lpage>. <pub-id pub-id-type="doi">10.1016/j.atmosenv.2015.06.032</pub-id> </citation>
</ref>
<ref id="B27">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sun</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Z.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>A Modified Whale Optimization Algorithm for Large-Scale Global Optimization Problems</article-title>. <source>Expert Syst. Appl.</source> <pub-id pub-id-type="doi">10.1016/j.eswa.2018.08.027</pub-id> </citation>
</ref>
<ref id="B28">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tian</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Hao</surname>
<given-names>Y.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Point and Interval Forecasting for Carbon Price Based on an Improved Analysis-Forecast System</article-title>. <source>Appl. Math. Model.</source> <pub-id pub-id-type="doi">10.1016/j.apm.2019.10.022</pub-id> </citation>
</ref>
<ref id="B29">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Torres</surname>
<given-names>M. E.</given-names>
</name>
<name>
<surname>Colominas</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Schlotthauer</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Flandrin</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2011</year>). &#x201c;<article-title>A Complete Ensemble Empirical Mode Decomposition with Adaptive Noise</article-title>,&#x201d; in <conf-name>ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings</conf-name>, <fpage>4144</fpage>&#x2013;<lpage>4147</lpage>. <pub-id pub-id-type="doi">10.1109/ICASSP.2011.5947265</pub-id> </citation>
</ref>
<ref id="B30">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Niu</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Du</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>W.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Ensemble Probabilistic Prediction Approach for Modeling Uncertainty in Crude Oil price</article-title>. <source>Appl. Soft Comput.</source> <volume>95</volume>, <fpage>106509</fpage>. <pub-id pub-id-type="doi">10.1016/j.asoc.2020.106509</pub-id> </citation>
</ref>
<ref id="B31">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Niu</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Lv</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Point and Interval Prediction for Non-ferrous Metals Based on a Hybrid Prediction Framework</article-title>. <source>Resour. Pol.</source> <volume>73</volume>, <fpage>102222</fpage>. <pub-id pub-id-type="doi">10.1016/J.RESOURPOL.2021.102222</pub-id> </citation>
</ref>
<ref id="B32">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Gao</surname>
<given-names>C.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Research and Application of a Hybrid Wind Energy Forecasting System Based on Data Processing and an Optimized Extreme Learning Machine</article-title>. <source>Energies</source> <volume>11</volume>, <fpage>1712</fpage>. <pub-id pub-id-type="doi">10.3390/en11071712</pub-id> </citation>
</ref>
<ref id="B33">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Tan</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Hong</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Kirschen</surname>
<given-names>D. S.</given-names>
</name>
<name>
<surname>Kang</surname>
<given-names>C.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Combining Probabilistic Load Forecasts</article-title>. <source>IEEE Trans. Smart Grid</source> <volume>10</volume>, <fpage>3664</fpage>&#x2013;<lpage>3674</lpage>. <pub-id pub-id-type="doi">10.1109/TSG.2018.2833869</pub-id> </citation>
</ref>
<ref id="B34">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>G.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Design of Damage Identification Algorithm for Mechanical Structures Based on Convolutional Neural Network</article-title>. <source>Concurrency Computat. Pract. Exper.</source> <volume>30</volume>, <fpage>e4891</fpage>. <pub-id pub-id-type="doi">10.1002/cpe.4891</pub-id> </citation>
</ref>
<ref id="B35">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>OCT Image Recognition of Cardiovascular Vulnerable Plaque Based on CNN</article-title>. <source>IEEE Access</source> <volume>8</volume>, <fpage>140767</fpage>&#x2013;<lpage>140776</lpage>. <pub-id pub-id-type="doi">10.1109/access.2020.3007599</pub-id> </citation>
</ref>
<ref id="B36">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wei</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Chongchong</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Cuiping</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Carbon Pricing Prediction Based on Wavelet Transform and K-ELM Optimized by Bat Optimization Algorithm in China ETS: the Case of Shanghai and Hubei Carbon Markets</article-title>. <source>Carbon Manage.</source> <volume>9</volume> (<issue>6</issue>), <fpage>605</fpage>&#x2013;<lpage>617</lpage>. <pub-id pub-id-type="doi">10.1080/17583004.2018.1522095</pub-id> </citation>
</ref>
<ref id="B37">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Xiao</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Qian</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Shao</surname>
<given-names>W.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>Multi-step Wind Speed Forecasting Based on a Hybrid Forecasting Architecture and an Improved Bat Algorithm</article-title>. <source>Energ. Convers. Manage.</source> <volume>143</volume>, <fpage>410</fpage>&#x2013;<lpage>430</lpage>. <pub-id pub-id-type="doi">10.1016/j.enconman.2017.04.012</pub-id> </citation>
</ref>
<ref id="B38">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>Short-term Electric Load Forecasting Based on Singular Spectrum Analysis and Support Vector Machine Optimized by Cuckoo Search Algorithm</article-title>. <source>Electric Power Syst. Res.</source> <volume>146</volume>, <fpage>270</fpage>&#x2013;<lpage>285</lpage>. <pub-id pub-id-type="doi">10.1016/j.epsr.2017.01.035</pub-id> </citation>
</ref>
<ref id="B39">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>A Novel Decomposition&#x2010;Ensemble Model for Forecasting Short&#x2010;Term Load&#x2010;Time Series with Multiple Seasonal Patterns</article-title>. <source>Appl. Soft Comput.</source> <volume>65</volume>, <fpage>478</fpage>&#x2013;<lpage>494</lpage>. <pub-id pub-id-type="doi">10.1016/j.asoc.2018.01.017</pub-id> </citation>
</ref>
<ref id="B40">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhu</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Wei</surname>
<given-names>Y.</given-names>
</name>
</person-group> (<year>2013</year>). <article-title>Carbon Price Forecasting with a Novel Hybrid ARIMA and Least Squares Support Vector Machines Methodology</article-title>. <source>Omega</source> <volume>41</volume>, <fpage>517</fpage>&#x2013;<lpage>524</lpage>. <pub-id pub-id-type="doi">10.1016/j.omega.2012.06.005</pub-id> </citation>
</ref>
<ref id="B41">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhu</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Yuan</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Ye</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Examining the Multi-Timescales of European Carbon Market with Grey Relational Analysis and Empirical Mode Decomposition</article-title>. <source>Physica A: Stat. Mech. its Appl.</source> <volume>517</volume>, <fpage>392</fpage>&#x2013;<lpage>399</lpage>. <pub-id pub-id-type="doi">10.1016/j.physa.2018.11.016</pub-id> </citation>
</ref>
<ref id="B42">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Carbon Price Forecasting with Variational Mode Decomposition and Optimal Combined Model</article-title>. <source>Physica A: Stat. Mech. its Appl.</source> <volume>519</volume>, <fpage>140</fpage>&#x2013;<lpage>158</lpage>. <pub-id pub-id-type="doi">10.1016/j.physa.2018.12.017</pub-id> </citation>
</ref>
</ref-list>
<app-group>
<app>
<title>Appendix</title>
<table-wrap id="app1" position="float">
<label>APPENDIX A1</label>
<caption>
<p>The list of abbreviations in this study.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th colspan="4" align="left">List of terminologies (methods and indices)</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">
<bold>EMD</bold>
</td>
<td align="left">Empirical mode decomposition</td>
<td align="left">
<bold>EEMD</bold>
</td>
<td align="left">Ensemble empirical mode decomposition</td>
</tr>
<tr>
<td align="left">
<bold>ICEEMDAN</bold>
</td>
<td colspan="3" align="left">Improved complete ensemble empirical mode decomposition with adaptive noise</td>
</tr>
<tr>
<td align="left">
<bold>GRU</bold>
</td>
<td align="left">Gated recurrent unit</td>
<td align="left">
<bold>CNN</bold>
</td>
<td align="left">Convolutional neural networks</td>
</tr>
<tr>
<td align="left">
<bold>ELM</bold>
</td>
<td align="left">Extreme learning machine</td>
<td align="left">
<bold>BP</bold>
</td>
<td align="left">Back propagation neural network</td>
</tr>
<tr>
<td align="left">
<bold>CDF</bold>
</td>
<td align="left">Cumulative distribution function</td>
<td align="left">
<bold>ARIMA</bold>
</td>
<td align="left">Autoregressive integrated moving average model</td>
</tr>
<tr>
<td align="left">
<bold>GWO</bold>
</td>
<td align="left">Grey wolf optimization algorithm</td>
<td align="left">
<bold>MODA</bold>
</td>
<td align="left">Multiobjective dragonfly optimization algorithm</td>
</tr>
<tr>
<td align="left">
<bold>GWO-BP</bold>
</td>
<td align="left">BP neural network optimized by the GWO algorithm</td>
<td align="left">
<bold>GBiLSTM</bold>
</td>
<td align="left">Bidirectional long short-term memory-gated recurrent unit</td>
</tr>
<tr>
<td align="left">
<bold>BiLSTM</bold>
</td>
<td align="left">Bidirectional long short-term memory</td>
<td align="left">
<bold>SSA</bold>
</td>
<td align="left">Singular spectrum analysis</td>
</tr>
<tr>
<td align="left">
<bold>FINAW</bold>
</td>
<td align="left">Forecast interval normalized average width</td>
<td align="left">
<bold>FICP</bold>
</td>
<td align="left">Forecast interval coverage probability</td>
</tr>
<tr>
<td align="left">
<bold>AWD</bold>
</td>
<td align="left">Accumulated width deviation of testing dataset</td>
<td align="left">
<bold>IMFs</bold>
</td>
<td align="left">Intrinsic mode functions</td>
</tr>
<tr>
<td align="left">
<bold>DF</bold>
</td>
<td align="left">Distribution function</td>
<td align="left">
<bold>CDF</bold>
</td>
<td align="left">Cumulative distribution function</td>
</tr>
<tr>
<td align="left">
<bold>MAPE</bold>
</td>
<td align="left">Mean absolute percentage error</td>
<td align="left">
<bold>MAE</bold>
</td>
<td align="left">Mean absolute error</td>
</tr>
<tr>
<td align="left">
<bold>RMSE</bold>
</td>
<td align="left">Root mean square error</td>
<td align="left">
<bold>IA</bold>
</td>
<td align="left">Index of agreement</td>
</tr>
<tr>
<td align="left">
<bold>TLS</bold>
</td>
<td align="left">t location-scale distribution</td>
<td align="left">
<bold>GRNN</bold>
</td>
<td align="left">Generalized regression neural network</td>
</tr>
<tr>
<td align="left">
<bold>R2</bold>
</td>
<td align="left">Coefficient of determination</td>
<td align="left">
<bold>DL</bold>
</td>
<td align="left">Deep learning</td>
</tr>
<tr>
<td align="left">
<bold>PDF</bold>
</td>
<td colspan="3" align="left">Probability density function</td>
</tr>
<tr>
<td align="left">
<bold>ANNs</bold>
</td>
<td align="left">Artificial neural networks</td>
<td align="left">
<bold>LSTM</bold>
</td>
<td align="left">Long short-term memory</td>
</tr>
</tbody>
</table>
</table-wrap>
</app>
</app-group>
</back>
</article>