
Data Report, provisionally accepted

Front. Appl. Math. Stat. | doi: 10.3389/fams.2019.00057

RISK MEASURES AND INEQUALITY

 ALEXANDRA LIVADA1* and MARIA CHRISTINA ANAGNOSTOPOULOU2
  • 1Athens University of Economics and Business, Greece
  • 2Deloitte, United Kingdom


The purpose of this paper is to study the evolution of income inequality in European countries and the USA along with the evolution of financial volatility. Do risk measures based on financial volatility help to forecast severe changes in inequality? Which volatility models are more accurate for the estimation of risk measures, and how can this be used for income inequality forecasting?
Income inequality measurement is based on top income share estimates for countries such as France, Greece, the UK and the USA. Most of the top income share data are taken from the World Inequality Database. The financial risk measures are presented through the 95% VaR (Value-at-Risk) and ES (Expected Shortfall) of three stock exchange indices (S&P500, FTSE100, DAX30), which are based on ARCH/GARCH models. Due to the shortage of inequality data for recent years, the period covered is from 2000 to 2015.
We study and compare the stylized facts of the stock exchange and inequality indices. We then compare the performance of six alternative univariate ARCH/GARCH specifications under two distributional assumptions for the estimation of one-day-ahead 95% Value-at-Risk (VaR) and Expected Shortfall (ES) measures for three equity indices (DAX30, FTSE100, S&P500). The six specifications of the ARCH-GARCH family of volatility models are GARCH(1,1), GARCH(1,2), GARCH(2,1), EGARCH(1,1), TGARCH(1,1) and GJR-GARCH(1,1), estimated under two error distributions, the Normal and the Student-t. The alternative distributions allow us to select a more flexible model for the return tails. The 95% VaR out-of-sample forecasts, for all models, are evaluated according to Kupiec's and Christoffersen's tests. ES is evaluated through the estimation of a Loss Function.



*Associate Professor, Athens University of Economics and Business
** Consultant, Deloitte London, UK (Part of this paper is based on my MSc thesis submitted at LSE)


1. INTRODUCTION
Financial volatility, risk measures and inequality are issues that concern not only finance and economics; they form a rather broad topic related to politics, education, health and many other areas of the social sciences. Globalization and the economic crisis of 2008 have made volatility modeling one of the most attractive research topics. At the same time, income inequality is a central issue for the economic growth of any country. How do inequality, financial volatility and risk measures behave during periods of economic crisis?
Academics and financial institutions develop sophisticated models in order to estimate and forecast volatility and market risk. Volatility, which is the main characteristic of any financial asset and its return, plays a very important role in many financial applications. Its primary role is to estimate the value of market risk. Well-known measures for the estimation and evaluation of market risk are Value-at-Risk (VaR) and Expected Shortfall (ES). Value-at-Risk refers to a "portfolio's worst outcome which is likely to occur at a given confidence level", while Expected Shortfall is "the average of all losses which are greater or equal than Value-at-Risk". The family of ARCH-GARCH models is the main representative of the parametric models used for the calculation of Value-at-Risk and Expected Shortfall measures.
Though volatility in financial data is not directly observable, it has some characteristics commonly seen in asset returns. These characteristics, known as the "stylized facts" of financial return series, are mainly summarized as volatility clustering (Mandelbrot, 1963) and the leverage effect (Black, 1976), along with a heavy-tailed, downward-skewed distribution and strong autocorrelations in squared returns.
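To make the volatility-clustering stylized fact concrete, the short base-R simulation below generates returns from a simple GARCH(1,1)-type process; the parameter values are purely illustrative and are not estimates from this study.

# Minimal simulation of a GARCH(1,1) process, illustrating volatility clustering.
# The parameter values (omega, alpha, beta) are illustrative only.
set.seed(123)
n      <- 2000
omega  <- 0.05; alpha <- 0.10; beta <- 0.85    # alpha + beta < 1 ensures covariance stationarity
z      <- rnorm(n)                             # i.i.d. standard normal innovations
sigma2 <- numeric(n); eps <- numeric(n)
sigma2[1] <- omega / (1 - alpha - beta)        # start at the unconditional variance
eps[1]    <- sqrt(sigma2[1]) * z[1]
for (t in 2:n) {
  sigma2[t] <- omega + alpha * eps[t - 1]^2 + beta * sigma2[t - 1]
  eps[t]    <- sqrt(sigma2[t]) * z[t]
}
plot(eps, type = "l", xlab = "t", ylab = "simulated return")   # large moves cluster in time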
The Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model and its extensions are able to capture volatility clustering very efficiently. This is known in the literature as univariate ARCH-GARCH modeling. On the other hand, there are many ways to observe and measure income inequality. One of the most popular measures of income inequality is the examination of top income shares, which gives important indications of the structure of long-term economic development (Kuznets, 1953).
Moreover, there is an important and voluminous recent literature, developed especially after the 2008 crisis (Ozan Eksi, 2017; Stiglitz, 2013; Piketty and Saez, 2010), on how much volatility contributes to inequality.
The target of this paper is to present the evolution of inequality through the evolution of the top 1% income share and, at the same time, to compare the performance of alternative univariate ARCH/GARCH models for the estimation of 95% Value-at-Risk (VaR) and 95% Expected Shortfall (ES) measures for three equity indices. We draw conclusions about which methods fit better according to certain statistical criteria. We implement six alternative specifications of the ARCH-GARCH family of volatility models, GARCH(1,1), GARCH(1,2), GARCH(2,1), EGARCH(1,1), TGARCH(1,1) and GJR-GARCH(1,1), under two distributional assumptions, the Normal and the Student-t error distributions. The alternative distributions allow us to select a more flexible model for the return tails. For this purpose, daily data of three stock exchange indices have been used: the US S&P500, the British FTSE100 and the German DAX. The period studied is from January 2000 up to May 2015. The criterion for the period chosen is the availability of inequality data, and the countries were chosen mainly for the availability of inequality data from the same data bank over the same period of time. Specific R packages (rugarch, rmgarch, PerformanceAnalytics) have been used for the estimation. The VaR one-day-ahead out-of-sample forecasts are evaluated according to Kupiec's and Christoffersen's tests. ES is evaluated through the estimation of a Loss Function.
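As a rough illustration of this workflow, the sketch below shows how a rolling one-day-ahead 95% VaR forecast can be obtained with the rugarch package; the refit frequency, window type and the name of the input series 'returns' are illustrative assumptions, not the authors' exact settings.

# Sketch (not the authors' exact code): rolling one-day-ahead 95% VaR with rugarch.
library(rugarch)
spec <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                   mean.model     = list(armaOrder = c(0, 0), include.mean = TRUE),
                   distribution.model = "std")            # Student-t errors; use "norm" for Normal
roll <- ugarchroll(spec, data = returns,                  # 'returns': daily ln-returns of one index
                   n.ahead = 1, forecast.length = 500,    # last 500 trading days out of sample
                   refit.every = 25, refit.window = "moving",
                   calculate.VaR = TRUE, VaR.alpha = 0.05)
report(roll, type = "VaR", VaR.alpha = 0.05)              # Kupiec/Christoffersen backtest summary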
This paper is structured as follows. In Section 2 we briefly discuss the financial data employed, as well as the inequality data. In Section 3 we present the estimated coefficients of the univariate GARCH models, namely GARCH, EGARCH, TGARCH and GJR-GARCH, under two alternative error distributions. In Section 4, the volatility forecasts and the 95% VaR and 95% ES estimates produced by the alternative GARCH models are shown; the evaluation of the models' average 95% VaR and 95% ES estimates based on Kupiec's and Christoffersen's criteria and the respective Loss Function is also discussed. Finally, conclusions and possible future research extensions are presented in Section 5.
2.1 Data and Stylized facts
For our study we use three daily stock index series: the US Standard and Poor's 500 (S&P500), the London Financial Times Stock Exchange 100 (FTSE100) and the German stock index (DAX30). The data cover the period January 2000 to May 2015. For all three indices, the evolution of the adjusted closing prices and of the continuously compounded daily returns is illustrated in Figures 2.1 and 2.2, respectively.

Figure 2.1: Adjusted closing prices of DAX, FTSE100 and S&P500, January 2000 to May 2015.






The most important financial events behind the main financial crises depicted in Figure 2.1 are the following. In 2000, the bursting of the dot-com bubble produced a massive fall in equity markets after over-speculation in tech stocks. In 2001, another junk bond crisis took place, and the crisis in Argentina resulted in the government defaulting on its payment obligations; the September 11th attacks of the same year also disrupted critical communication hubs, creating great risk. The next year, 2002, a serious bond market crisis took place in Brazil. Five years later, in 2007, the collapse of the US real estate market led to the failure of large international banks and financial institutions. In September 2008 the Lehman Brothers bankruptcy took place, triggering a sharp fall, and on the 29th of the same month the Dow Jones fell by 778 points. In February 2009 the Financial Stability Plan was announced and the Recovery Act was signed. This global crisis did not start at the same time in every country; for instance, the UK officially entered recession in January 2009.
Visual inspection of the return series in Figure 2.2 shows clearly that the mean is constant but the variance keeps changing over time, so the return series does not appear to be a sequence of i.i.d. random variables. The stylized fact noticeable from the figures is volatility clustering, in line with Mandelbrot (1963) and Fama (1970). Table 2.1 provides summary statistics as well as the Jarque-Bera statistic for testing normality. In all cases the null hypothesis of normality is rejected and there is evidence of excess kurtosis and asymmetry.
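The returns and the statistics of Table 2.1 can be reproduced with a few lines of base R; the sketch below assumes 'prices' is a numeric vector of adjusted closing prices for one index and computes the Jarque-Bera statistic by hand.

# Sketch: continuously compounded returns and a hand-rolled Jarque-Bera normality test.
ret  <- diff(log(prices)) * 100                  # daily ln-returns in percent
n    <- length(ret)
m    <- mean(ret); s <- sd(ret)
skew <- mean((ret - m)^3) / s^3                  # sample skewness
kurt <- mean((ret - m)^4) / s^4                  # sample kurtosis (3 under normality)
JB   <- n / 6 * (skew^2 + (kurt - 3)^2 / 4)      # Jarque-Bera statistic
p_JB <- 1 - pchisq(JB, df = 2)                   # asymptotically chi-squared with 2 d.f. under H0
c(mean = m, sd = s, skewness = skew, kurtosis = kurt, JB = JB, p.value = p_JB)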
Figure 2.2: Daily returns of the adjusted closing prices of DAX, FTSE100 and S&P500, January 2000 to May 2015.








In Figure 2.3 we may observe the leverage phenomenon by plotting the daily market prices together with their volatility (standard deviations of the continuously compounded returns). Periods of market drops are followed by a large increase in volatility. The leverage effect is captured by the asymmetric conditional volatility specifications presented in Section 3.


2.2 Income Inequality Data
One of the most popular measures of income inequality is the examination of top income shares, which gives important indications of the structure of long-term economic development (Kuznets, 1953). In this section we model the 1% top income share using a set of macroeconomic factors for the period 1971-2014. We test for the existence of a long-run relationship between the 1% top income share (tis) and the other macroeconomic factors for France, the UK and the USA, using the bounds test for cointegration proposed by Pesaran et al. (2001).
In line with the current literature on income inequality, we utilize tax data for the period 1972-2014 to study top income shares, following the methodology of Piketty (2001).
More specifically, we investigate how and to what extent the main macroeconomic factors may affect income inequality as measured by top income shares. The macroeconomic indicators of most interest are economic development and the openness of the economy, as well as education, financial development and inflation.
The Autoregressive Distributed Lag (ARDL) cointegration procedure is employed to analyse empirically the long-run relationships and dynamic interactions among the variables of interest. The findings refer to the period before and after the economic crisis for the USA, the UK and France. If the results are favorable, we can proceed to form the long-run relationship, calculating the long-run multipliers, the short-run dynamics, the speed of adjustment to the long-run equilibrium, and so on. If the results show no sign of cointegration among the variables, forming the long-run relationship and computing the quantities described above would be meaningless, because the results would be spurious.


In Table 2.2 we present summary statistics for the tis01 series (the 1% top income share) for each country. The results in the table are calculated using data for the period 1971 to 2014. The evolution of income inequality, as captured by the 1% top income shares, and the respective yearly changes are presented in Figures 2.4 and 2.5. We notice a sharp upward movement of the 1% top income share after the mid-1980s for all three countries. The 2008-09 economic crisis is captured by a sharp decrease in France and the UK but not in the USA. Thus, the financial implications of the 2008 economic crisis affected some countries more heavily than others.


If we study Figures 2.2 and 2.5 carefully, we notice that the deep recession in the stock market in 2008 did not immediately affect the 1% top income share. The inequality effect is not the same in all four countries with respect to timing, duration and sharpness. In Greece the consequences came almost immediately, with sharper changes in the inequality index than in the other countries, and there is a long-lasting period of continuously rising inequality. In the USA, the UK and France, on the other hand, the effect of the crisis shows up as a decline in the 1% top income share which appears later (after 2010), is not as sharp as in Greece (in mean levels) and shows mixed year-to-year changes. The trend in the USA seems to be unaffected, while in the UK and France there is a slight trend change which signals trend ambiguity.
But what if we try to connect these findings with the risk measures? Which model should we use in order to connect volatility with risk measures and then to inequality?
In the following section, an econometric analysis is applied in order to choose the model that best fits the volatility data.









3. Estimation of GARCH models

In this section, we present and discuss, for all three equity indices, the estimation of three symmetric GARCH(p,q) models and three asymmetric GARCH(1,1) models under two error distributions: the Normal and the Student-t. The estimated specifications are GARCH(1,1), GARCH(1,2), GARCH(2,1) and EGARCH(1,1), TGARCH(1,1), GJR-GARCH(1,1). For the estimation of these models we have used the fGARCH package, as well as PerformanceAnalytics and rugarch. For all models the AIC and SBC criteria are presented in Table 2.2. For each equity index the best symmetric GARCH(p,q) is selected according to the SBC criterion and presented in Table 2.3. All three asymmetric models for each index are also presented analytically in Tables 2.4 to 2.6 under the two error distributions. For all considered models, the residuals are tested for autocorrelation (Ljung and Box's Q test) and for remaining heteroskedasticity (ARCH LM tests); almost all models do not violate the homoskedasticity hypothesis (a sketch of these diagnostic checks is given at the end of this section). According to Table 3.1 the findings are:
For the symmetric GARCH models for all equity indices for both error distributions, the best model according to both AIC and SBC criteria is GARCH(2,1) with the only exception of FTSE, where GARCH(1,1) is suggested by SBC.
For the asymmetric GARCH models, under the Normal distribution for the indices S&P500, DAX and FTSE both AIC and SBC criteria suggest that TGARCH(1,1) is the best.
Under the Student-t: For both AIC and SBC the TGARCH(1,1) is the best for FTSE and DAX indices. EGARCH(1,1) is the best for S&P500.
In Table 3.2, where the estimated parameters of the best symmetric GARCH models are presented, we notice that the conditional variance parameters are highly significant and that the distribution of z_t has significantly thicker tails than the normal. In Tables 3.3 and 3.4, where the estimated parameters of the asymmetric EGARCH(1,1) and TGARCH(1,1) specifications are presented, we notice that all coefficients are statistically significant under both distributions. The leverage effect is present in all EGARCH(1,1) and TGARCH(1,1) models for all indices, since the respective coefficients are significantly different from zero. The GJR-GARCH presented in Table 2.6 for both distributions does not capture the overall volatility as well as EGARCH(1,1) and TGARCH(1,1) do for all indices. In all tables the reported t-statistics are based on robust standard errors.
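The residual diagnostics mentioned above can be reproduced with base R; the sketch below assumes 'z' holds the standardized residuals of a fitted model, uses the built-in Ljung-Box test and implements a simple Engle ARCH-LM test as an n*R^2 statistic. The lag choices are illustrative.

# Sketch: residual diagnostics for a fitted GARCH model (base R only).
Box.test(z, lag = 10, type = "Ljung-Box")                   # H0: no autocorrelation in residuals

arch_lm <- function(z, q = 5) {                             # Engle's ARCH-LM test
  z2 <- z^2
  n  <- length(z2)
  X  <- sapply(1:q, function(k) z2[(q - k + 1):(n - k)])    # lagged squared residuals
  y  <- z2[(q + 1):n]
  fit <- lm(y ~ X)                                          # auxiliary regression
  LM  <- length(y) * summary(fit)$r.squared                 # n * R^2 ~ chi-squared(q) under H0
  c(LM = LM, p.value = 1 - pchisq(LM, df = q))
}
arch_lm(z, q = 5)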




4. VALUE-AT-RISK AND EXPECTED SHORTFALL
Value-at-Risk (VaR) is a statistic which reports, for a given portfolio, the financial risk in terms of the worst outcome that may occur over a certain period at a certain confidence level. In 1922, the New York Stock Exchange indirectly introduced a VaR-like requirement when it asked its member firms to hold capital equal to 10% of their assets. The widely known VaR measure was suggested by Baumol (1963), who adjusted the standard deviation by a confidence level parameter reflecting the user's attitude to risk. The VaR measure has been used by bank regulators in order to determine bank capital requirements against the risk faced by financial institutions. In 1994, JP Morgan adopted VaR, releasing its RiskMetrics system on the Internet. An extended discussion of VaR can be found, among others, in Best (1999) and Dowd (2005), as well as in Bauwens et al. (2012). There are also many references on VaR applications (see, among others, Huang and Tseng (2009), Alexander (2008), Angelidis and Degiannakis (2008)).
However, VaR has been criticized on the grounds that its underlying statistical assumptions are violated (Taleb, 1997a, 1997b; Hoppe, 1998, 1999), while Beder (1995) argued that alternative risk management techniques produce different VaR forecasts, which might affect the accuracy of the risk estimate. Moreover, Marshall and Siegel (1997) showed that different implementations may produce statistically significantly different VaR estimates for the same portfolio, while according to Artzner et al. (1997, 1999) VaR is not necessarily sub-additive when there is more than one instrument. In order to overcome these shortcomings of VaR, Artzner, Delbaen et al. (1997) introduced the Expected Shortfall (ES) risk measure, which estimates the expected value of the loss in the case of a VaR violation. In this paper both VaR and ES are estimated.

VaR, at a given probability level $p$, is defined (Xekalaki & Degiannakis, 2010) as the predicted amount of financial loss of a portfolio over a given time horizon. If $P_t$ is the observed value of a portfolio at time $t$ and $y_t=\ln(P_t/P_{t-1})$ is the ln-return for the period from $t-1$ to $t$, then for a long trading position and under the assumption of standard normally distributed ln-returns, VaR is defined as the value satisfying the condition below:

$P\big(y_t \le VaR_t^{(1-p)}\big)=p.$   (2.1)

This implies that

$VaR_t^{(1-p)}=z_p,$   (2.2)

where $z_p$ is the $p$-th percentile of the standard normal distribution.
So, under the assumption that $y_t \sim N(0,1)$, the probability of a loss less than $z_{0.05}=-1.645$ is equal to 5%. "The value -1.645 is the value of VaR at a 95% level of confidence, or, in other words, for a capital of €10 million, the 95% VaR is equal to €164500. Thus, if a risk manager states that the daily VaR of a portfolio is €164500 at a 95% confidence level, it means that there are five chances in a 100 for a loss greater than €164500 to incur" (Xekalaki & Degiannakis, 2010).
Value-at-Risk can be estimated via parametric, non-parametric and semi-parametric techniques. In this study we apply the parametric approach, where VaR is computed through the ARCH-GARCH framework below.
Suppose that the ln-returns $y_t$ can be expressed as

$y_t=\mu_t+\varepsilon_t,$   (2.3)

where $\mu_t$ is the expected return of the portfolio for the period from $t-1$ to $t$ and $\varepsilon_t$ is the error term. The unpredictable part of the ln-returns is expressed with an ARCH process, that is:

$\varepsilon_t=\sigma_t z_t,$
$\sigma_t=g(\theta \mid I_{t-1}),$   (2.4)
$z_t \sim f(0,1;w),$

where, as usual, $z_t$ denotes a random variable with density function $f(0,1;w)$, with mean equal to zero and variance equal to one, and $w$ is a parameter vector to be estimated. The VaR value in this case is given by

$VaR_t^{(1-p)}=\mu_t+f_a(z_t;w)\,\sigma_t,$   (2.5)

where $f_a(z_t;w)$ is the $a$-quantile of the assumed distribution, computed on the basis of the information set available at time $t$, and $\mu_{t+1|t}$ and $\sigma_{t+1|t}$ are, respectively, the conditional mean and conditional standard deviation forecasts for time $t+1$ given the information at time $t$. The one-step-ahead VaR forecast, based on the ARCH model, is given by

$VaR_{t+1|t}^{(1-p)}=\mu_{t+1|t}+f_a(z_t;w^{(t)})\,\sigma_{t+1|t}.$   (2.6)
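As a small numerical illustration of (2.6), the base-R lines below compute the one-day-ahead 95% VaR under Normal and (unit-variance) Student-t errors; the values of the conditional mean, conditional standard deviation and degrees of freedom are illustrative, not estimates from the paper.

# Sketch of equation (2.6): one-step-ahead 95% VaR under Normal and Student-t errors.
mu_f    <- 0.02                                # assumed conditional mean forecast (in %)
sigma_f <- 1.30                                # assumed conditional st.dev. forecast (in %)
nu      <- 8                                   # illustrative Student-t degrees of freedom
VaR_norm <- mu_f + qnorm(0.05) * sigma_f
# For a unit-variance Student-t, rescale the quantile by sqrt((nu - 2) / nu).
VaR_t    <- mu_f + qt(0.05, df = nu) * sqrt((nu - 2) / nu) * sigma_f
c(Normal = VaR_norm, Student_t = VaR_t)        # compare the two parametric forecasts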
Expected Shortfall (ES) provides information about the expected loss in the case of an extreme event, summarizing the risk in just one number. It is a better risk measure since, according to Yamai and Yoshiba (2005), it is more reliable during market turmoil. ES is a probability-weighted average of tail loss, and the one-step-ahead ES forecast, according to Degiannakis and Xekalaki (2010), is defined mathematically as below:

$ES_{t+1|t}^{(1-p)}=E\big(y_{t+1}\mid y_{t+1}\le VaR_{t+1|t}^{(1-p)}\big).$   (2.7)

In order to calculate ES, we may follow Dowd (2005), who estimated it by slicing the tail into a large number $\tilde{k}$ of segments of equal probability mass. A VaR is then associated with each segment and ES is calculated as the average of these VaR estimates:

$ES_{t+1|t}^{(1-p)}=\tilde{k}^{-1}\sum_{i=1}^{\tilde{k}} VaR_{t+1|t}^{\left(1-p+ip(\tilde{k}+1)^{-1}\right)}.$   (2.8)
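A minimal base-R implementation of the tail-slicing average in (2.8) is sketched below for the parametric Normal case; the forecasts and the number of slices are illustrative assumptions.

# Sketch of equation (2.8): ES as the average of VaRs computed over the sliced tail.
es_sliced <- function(mu_f, sigma_f, p = 0.05, k = 5000, quantile_fun = qnorm) {
  i    <- 1:k
  a_i  <- p - i * p / (k + 1)                  # tail probabilities of the k slices
  VaRs <- mu_f + quantile_fun(a_i) * sigma_f   # VaR associated with each slice
  mean(VaRs)                                   # ES = average of the sliced VaRs
}
es_sliced(mu_f = 0.02, sigma_f = 1.30)         # lies below (is more negative than) the 95% VaR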


4.1 Backtesting VaR
Financial institutions pay great attention to the accurate estimation of the 95% or 99% VaR. In practice, if VaR is overestimated, regulators require a larger amount of capital than is really needed, which has a negative impact on the institutions' performance; if VaR is underestimated, the risk that financial institutions face may not be covered by the regulatory capital set aside. The simplest method for evaluating the accuracy of VaR forecasts is to count how many times the losses exceed the value of VaR. If this count is not substantially different from what is expected, then the VaR forecasts are sufficiently accurate.
The best-known statistical methods for evaluating VaR models are the back-testing measures of Kupiec (1995) and Christoffersen (1998, 2003). Inference in these methods focuses on hypothesis tests about the percentage of times a portfolio loss has exceeded the estimated VaR.
Kupiec (1995) introduced a back-testing method in the form of a hypothesis test for the expected percentage of VaR violations, $p=P\big(y_{t+1}\le VaR_{t+1|t}^{(1-p)}\big)$, and used the observed rate at which portfolio losses exceed the estimated VaR in order to test whether the true coverage percentage $p^*$ is statistically significantly different from the desired level $p$.
The tested hypotheses are:
H0 : p* = p (2.9)
H1 : p*≠ p
The likelihood ratio test statistic under the null hypothesis is:

$LR_{un}=2\log\Big[\big(1-N/\tilde{T}\big)^{\tilde{T}-N}\big(N/\tilde{T}\big)^{N}\Big]-2\log\Big[(1-p)^{\tilde{T}-N}p^{N}\Big],$   (2.10)

where $\tilde{T}$ is the number of out-of-sample trading days and $N$ is the number of days on which the loss exceeds the estimated VaR; under $H_0$, $N$ follows a binomial distribution with parameters $\tilde{T}$ and $p$. $LR_{un}$ is asymptotically $\chi^2$ distributed with one degree of freedom. This test is known as the unconditional coverage test. The null hypothesis is rejected for either a statistically significantly low or a statistically significantly high failure rate.
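A compact base-R version of Kupiec's test is sketched below; 'viol' is assumed to be a 0/1 vector equal to 1 on days when the realized loss exceeds the VaR forecast, and the function name is illustrative.

# Sketch of Kupiec's (1995) unconditional coverage test, equation (2.10).
kupiec_test <- function(viol, p = 0.05) {
  T_ <- length(viol)                                     # number of out-of-sample trading days
  N  <- sum(viol)                                        # number of VaR violations
  pi_hat <- N / T_                                       # observed failure rate
  LRun <- -2 * (log((1 - p)^(T_ - N) * p^N) -
                log((1 - pi_hat)^(T_ - N) * pi_hat^N))
  c(LRun = LRun, p.value = 1 - pchisq(LRun, df = 1))     # chi-squared with 1 d.f.
}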

Christoffersen (1998) introduced a testing method based on the probability of a VaR violation conditional on whether a violation occurred in the previous period, in a first-order Markov set-up, and tested whether VaR violation events are independent of each other. That is, for the sequence of trading days $\{\tilde{I}_t\}_{t=1}^{\infty}$ he tested (2.11):

$\pi_{ij}=P\big(\tilde{I}_t=i\mid \tilde{I}_{t-1}=j\big)=P\big(\tilde{I}_t=i\big),\quad i,j=0,1,$   (2.11)

where

$\tilde{I}_{t+1}=\begin{cases}1, & \text{if } y_{t+1}<VaR_{t+1|t}^{(1-p)},\\ 0, & \text{if } y_{t+1}\ge VaR_{t+1|t}^{(1-p)}.\end{cases}$   (2.12)
Therefore, the probability of observing a VaR violation must not depend on whether a violation occurred in the previous period, and must be equal to the desired value $p$. The hypotheses tested are:
H0: π01 = π11=p (2.13)
H1: π01 ≠ π11
The hypotheses were tested using the likelihood ratio statistic $LR_{in}$, which asymptotically follows the $\chi^2$ distribution with one degree of freedom:

$LR_{in}=2\Big[\log\Big((1-\hat{\pi}_{01})^{n_{00}}\hat{\pi}_{01}^{\,n_{01}}(1-\hat{\pi}_{11})^{n_{10}}\hat{\pi}_{11}^{\,n_{11}}\Big)-\log\Big(\big(1-N/\tilde{T}\big)^{n_{00}+n_{10}}\big(N/\tilde{T}\big)^{n_{01}+n_{11}}\Big)\Big]\sim\chi^2_1,$   (2.14)

where $\hat{\pi}_{ij}=n_{ij}/\sum_j n_{ij}$ is the sample estimate of $\pi_{ij}$ and $n_{ij}$ is the number of trading days with value $i$ followed by value $j$, for $i,j=0,1$.
Christoffersen (1998) also suggested testing simultaneously whether the true percentage of failures $p^*$ is equal to the desired percentage and whether the VaR failure process is independently distributed. The following hypotheses were therefore tested:

$H_0:\ p^*=p \text{ and } \pi_{01}=\pi_{11}=p$   (2.15)
$H_1:\ p^*\ne p \text{ or } \pi_{01}\ne\pi_{11}$

The statistic $LR_{cc}$ follows the $\chi^2$ distribution with two degrees of freedom:

$LR_{cc}=-2\log\Big[(1-p)^{\tilde{T}-N}p^{N}\Big]+2\log\Big[(1-\hat{\pi}_{01})^{n_{00}}\hat{\pi}_{01}^{\,n_{01}}(1-\hat{\pi}_{11})^{n_{10}}\hat{\pi}_{11}^{\,n_{11}}\Big]\sim\chi^2_2.$   (2.16)
The conditional coverage test rejects a model that produces either too many or too few violations, or violations that are clustered in time.
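Under the same assumptions as the Kupiec sketch above (a 0/1 violation vector 'viol'), the independence and conditional coverage statistics of (2.14) and (2.16) can be computed in base R roughly as follows; the function name is illustrative and zero counts are handled by treating 0*log(0) as 0.

# Sketch of Christoffersen's (1998) independence and conditional coverage tests.
christoffersen_test <- function(viol, p = 0.05) {
  xlogy <- function(x, y) ifelse(x == 0, 0, x * log(y))         # convention: 0 * log(0) = 0
  prev <- viol[-length(viol)]; curr <- viol[-1]
  n00 <- sum(prev == 0 & curr == 0); n01 <- sum(prev == 0 & curr == 1)
  n10 <- sum(prev == 1 & curr == 0); n11 <- sum(prev == 1 & curr == 1)
  pi01 <- n01 / (n00 + n01); pi11 <- n11 / (n10 + n11)
  pi_hat <- (n01 + n11) / (n00 + n01 + n10 + n11)               # pooled violation rate over transitions
  logL_markov <- xlogy(n00, 1 - pi01) + xlogy(n01, pi01) +
                 xlogy(n10, 1 - pi11) + xlogy(n11, pi11)
  logL_pooled <- xlogy(n00 + n10, 1 - pi_hat) + xlogy(n01 + n11, pi_hat)
  LRin <- 2 * (logL_markov - logL_pooled)                       # independence, chi-squared(1)
  T_ <- length(viol); N <- sum(viol)
  LRun <- -2 * (log((1 - p)^(T_ - N) * p^N) -
                log((1 - N / T_)^(T_ - N) * (N / T_)^N))        # Kupiec statistic, as before
  LRcc <- LRun + LRin                                           # conditional coverage, chi-squared(2)
  c(LRin = LRin, p.ind = 1 - pchisq(LRin, 1),
    LRcc = LRcc, p.cc = 1 - pchisq(LRcc, 2))
}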
Because VaR does not indicate the size of the expected loss, Degiannakis and Angelidis (2007) proposed the use of a loss function based on ES:

$\Psi_{(ES),t+1}=\begin{cases}\big(y_{t+1}-ES_{t+1|t}^{(1-p)}\big)^2, & \text{if } y_{t+1}<VaR_{t+1|t}^{(1-p)},\\ 0, & \text{if } y_{t+1}\ge VaR_{t+1|t}^{(1-p)}.\end{cases}$   (2.17)

When we compare alternative models according to the ES loss function, we prefer the model with the lowest total loss value, which is equal to the sum of the penalty scores, $\sum_{t=1}^{T}\Psi_{(ES),t+1}$. In this study we report (2.17) augmented by one.
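A sketch of this criterion in base R follows; 'y', 'VaR_f' and 'ES_f' are assumed to be aligned vectors of realized returns and of the corresponding one-day-ahead forecasts, and the reported value is the average penalty augmented by one, as in the tables below.

# Sketch of the ES-based loss function, equation (2.17), augmented by one.
es_loss <- function(y, VaR_f, ES_f) {
  psi <- ifelse(y < VaR_f, (y - ES_f)^2, 0)    # penalty only on days with a VaR violation
  mean(psi) + 1                                # average penalty, augmented by one
}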

4.2 One-day-ahead Value-at-Risk and Expected Shortfall Forecasting

In this section we estimate the one-day-ahead 95% VaR and 95% ES values for all three indices. For each equity index we apply a model with a constant mean, ARMA(0,0), and with the conditional variance given by the GARCH(1,1), GARCH(1,2), GARCH(2,1), EGARCH(1,1), TGARCH(1,1) and GJR-GARCH(1,1) volatility specifications presented before, for daily ln-returns under two distributional assumptions (Normal and Student-t). For all equity indices and all of the above models we have estimated 95% VaR and 95% ES forecasts for 500 trading days based on a rolling sample. Tables 4.1 to 4.5 present, for each equity index and the 12 volatility models, the average values of the one-day-ahead 95% VaR and 95% ES forecasts as defined in the previous section. We also report the percentage of violations, expressed as the number of violations over the 500 out-of-sample forecasts (N/T), Kupiec's and Christoffersen's p-values, and the average value of the Loss Function based on the Expected Shortfall augmented by the value of one. A high p-value is preferred, since this means that the model neither overestimates nor underestimates the true VaR; this is important because a high VaR estimate implies an obligation to allocate more capital than is actually necessary.
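For one index and one model, the rows of such a table could be assembled along the following lines, reusing the hypothetical helpers sketched earlier (kupiec_test, christoffersen_test, es_loss); 'y', 'VaR_f' and 'ES_f' are assumed to hold the 500 realized returns and the corresponding one-day-ahead forecasts.

# Sketch: one backtest summary row per model, using the helper functions above.
viol <- as.numeric(y < VaR_f)                                  # violation indicator series
summary_row <- c(meanVaR   = mean(VaR_f),
                 meanES    = mean(ES_f),
                 viol_rate = mean(viol),                       # N/T
                 kupiec_p  = kupiec_test(viol)[["p.value"]],
                 christ_p  = christoffersen_test(viol)[["p.cc"]],
                 ES_loss   = es_loss(y, VaR_f, ES_f))
round(summary_row, 4)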
For all equity indices and all model specifications we notice that the percentage of violations is greater under the assumption of the Normal distribution than under the Student-t; thus, the Student-t gives fewer violations. However, we have to take into account, first, that the forecast period is not very volatile and, second, that the number of trading days we forecast may be considered small. In almost all cases, according to the Kupiec and Christoffersen p-values reported in Tables 4.1 to 4.5, we fail to reject the null hypothesis.
For FTSE, under the Normal distribution the observed average 95% VaR takes values between (-1.31, -1.25) and under the Student-t distribution between (-1.42, -1.38). The observed average 95% ES takes values between (-1.6, -1.55) for the Normal distribution, which are larger (algebraically) than the values for the Student-t distribution (-1.84, -1.82). The ES Loss Function under the Normal distribution has its lowest value for the TGARCH(1,1) model (1.91), while under the Student-t it has its lowest value for the three symmetric GARCH models (2.12). According to Kupiec's LRun p-value, the best model under the Normal distribution is the GJR-GARCH(1,1), with a p-value equal to 0.8384. Under the Student-t distribution all models have very low Kupiec LRun p-values. For the cases where Christoffersen's p-values are available, these are in agreement with Kupiec's.
Overall, our average 95% VaR one-day-ahead estimates based on univariate models suggest that all indices have smaller (larger in absolute value) values under the Student-t distribution.
According to Kupiec's LRun p-value, some models and indices share the highest p-values; for instance, under the Normal distribution FTSE, HIS and DAX have a p-value equal to 0.838 for the GJR-GARCH(1,1). For the Student-t case there is no common behavior regarding the unconditional and conditional coverage tests, with lower p-values than under the Normal distribution for the majority of indices and respective models.
Because a greater p-value does not by itself indicate superiority, in order to evaluate the models we also computed the Loss Function based on ES. The score of the ES Loss Function suggests the asymmetric TGARCH(1,1) as the best model for almost all indices under the Normal distribution, except HIS, for which the GJR-GARCH is selected. For the Student-t case, the ES Loss Function suggests the GARCH(1,1) for FTSE and S&P500 and the TGARCH(1,1) for DAX.
So, we may conclude that in almost all cases the null hypothesis is not rejected. One of the reasons is the sample size, which in our case is small. This is supported by Brooks and Persand (2002) and Angelidis, Benos and Degiannakis (2004), who argue that the effect of the sample size on the performance of the models is not clear.






5. Conclusions
This paper is an attempt to combine financial volatility with income inequality and risk measures. It is an applied study in which the evolution of income inequality is based on the top 1% income share for two European countries (France, Greece), the UK and the USA. These are countries with different levels of development, but all of them have homogeneous time series inequality data.
We study the evolution of income inequality in France, Greece, the UK and the USA along with the evolution of financial volatility. We would like to know whether risk measures based on financial volatility help to forecast severe changes in inequality and which models are more accurate. Income inequality measurement is based on top income share estimates; most of the top income share data are taken from the World Inequality Database. The financial risk measures are presented through the 95% VaR (Value-at-Risk) and ES (Expected Shortfall) of three stock exchange indices (S&P500, FTSE100, DAX30), which are based on ARCH/GARCH models. The financial volatility period covered is from 2000 to 2015.
We first study and compare the stylized facts of the stock exchange and inequality indices. Then, we compare the performance of a class of alternative univariate ARCH/GARCH models for the estimation of one-day-ahead 95% Value-at-Risk (VaR) and Expected Shortfall (ES) measures for three equity indices. We implement six alternative specifications of the ARCH-GARCH family of volatility models: GARCH(1,1), GARCH(1,2), GARCH(2,1), EGARCH(1,1), TGARCH(1,1) and GJR-GARCH(1,1), under two distributional assumptions, the Normal and the Student-t error distributions. The alternative distributions allow us to select a more flexible model for the return tails. The 95% VaR out-of-sample forecasts, for all models, are evaluated according to Kupiec's and Christoffersen's tests. ES is evaluated through the estimation of a Loss Function.

Regarding income inequality, we notice a sharp upward movement of the 1% top income share after the mid-1980s for all four countries. The 2008-09 economic crisis is captured by a sharp decrease in France and the UK but not in the USA; thus, the financial implications of the 2008 economic crisis affected all countries, some more heavily than others. However, the deep recession in the stock market in 2008 did not immediately affect the 1% top income share. The inequality effect is not the same in all four countries with respect to timing, duration and sharpness. In Greece the consequences came almost immediately, with sharper changes in the inequality index than in the other countries, and there is a long-lasting period of continuously rising inequality. In the USA, the UK and France, on the other hand, the effect of the crisis shows up as a decline in the 1% top income share which appears later (after 2010), is not as sharp as in Greece (in mean levels) and shows mixed year-to-year changes. The trend in the USA seems to be unaffected, while in the UK and France there is a slight trend change which signals trend ambiguity.
But what if we try to connect these findings with the risk measures? Which model should we use in order to connect volatility with risk measures and then to inequality?
The applied stochastic time series analysis for the financial indices FTSE100, DAX30 and S&P500 shows that the best models are those estimated under the Student-t distribution. More specifically, the TGARCH(1,1) model is the best for the FTSE and DAX indices, while for the S&P500 the EGARCH(1,1) model fits better.



In order to find out which is the best model for the risk measures, we have explored, for three equity indices, the performance of six univariate and two multivariate GARCH models under two alternative error distributions, the Normal and the Student-t. The indices considered are FTSE100, DAX30 and S&P500. The models we study are the univariate GARCH(1,1), GARCH(1,2), GARCH(2,1), EGARCH(1,1), TGARCH(1,1) and GJR-GARCH(1,1), and the multivariate GO-GARCH(1,1) and DCC-GARCH(1,1). For each equity index, 12 univariate models were estimated and evaluated. For each volatility specification we estimated the one-day-ahead 95% Value-at-Risk (VaR) and 95% Expected Shortfall (ES) measures. In the univariate approach, we have estimated and reported the average 95% VaR and the average 95% ES out-of-sample forecasts for all indices and all models, for the last 500 trading days of the studied period, based on a rolling sample. This means that the model parameters are re-estimated each trading day, incorporating the most recent information.

The evaluation of our models is based on the AIC and SBC criteria. For the evaluation of VaR, Kupiec's and Christoffersen's tests are used, as well as the ES Loss Function. Regarding the symmetric GARCH models, for all equity indices and both error distributions we conclude that the best model is GARCH(2,1), with the only exception of FTSE. For the asymmetric GARCH models, under the Normal distribution the criteria suggest that TGARCH(1,1) is the best for the S&P500, DAX and FTSE indices. Under the Student-t distribution, the TGARCH(1,1) model is the best for the FTSE and DAX indices, while for the S&P500 the EGARCH(1,1) model fits better.

The estimated parameters of the best symmetric GARCH models show highly significant conditional variance parameters, while the distribution of z_t has significantly thicker tails than the normal. The asymmetric EGARCH(1,1) and TGARCH(1,1) models have statistically significant coefficients under both distributions. The leverage effect is present in all EGARCH and TGARCH models for all indices, since the respective coefficients are significantly different from zero. The GJR-GARCH(1,1), for both distributions, does not capture volatility as well as EGARCH(1,1) and TGARCH(1,1) do for all indices.
The 95% VaR and 95% ES forecasts show that, for all model specifications and all equity indices, the percentage of violations is greater under the assumption of the Normal distribution than under the Student-t. We have to take into account that the forecast period is not extremely volatile and that the number of trading days we forecast may be considered small. In almost all cases, Kupiec's and Christoffersen's tests fail to reject the null hypothesis.

Overall, our average 95% VaR one-day-ahead estimates based on univariate models suggest that all indices have smaller (larger in absolute value) values under the Student-t distribution.

According to Kupiec's LRun p-value, some models and indices share the highest p-values. More specifically, under the Normal distribution, for FTSE, HIS and DAX the GJR-GARCH(1,1) has a p-value equal to 0.838. For the Student-t case there is no common behavior regarding the unconditional and conditional coverage tests, with lower p-values than under the Normal distribution for the majority of indices and respective models. The score of the ES Loss Function suggests the asymmetric TGARCH(1,1) as the best model for almost all indices under the Normal distribution. For the Student-t case, the ES Loss Function suggests the GARCH(1,1) model as best for FTSE and S&P500 and the TGARCH(1,1) model for DAX.

One-day-ahead VaR forecasts (95% and 99%) for 500 trading days are visualized for all three indices studied and, as expected, there are more violations for the 95% VaR than for the 99% VaR. The risk measures used in this study capture volatility adequately. We notice that business cycles are present in the risk measures, as in the inequality measures. At the beginning of 2014 there are low values of VaR, and at the same time the top 1% share increases in 3 out of the 4 countries we study; however, the opposite holds for the period just before 2014. This could be an indication that risk measures are not closely related to income inequality. For a more accurate result we need more data and a business cycle study connecting risk measures with volatility and income inequality.


Keywords: Financial estimation, Income inequality, Risk measures, GARCH, Time series

Received: 15 Feb 2019; Accepted: 30 Oct 2019.

Copyright: © 2019 LIVADA and ANAGNOSTOPOULOU. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Prof. ALEXANDRA LIVADA, Athens University of Economics and Business, Athens, Greece, livada@aueb.gr